WorldWideScience

Sample records for thermocouple evaluation model

  1. Thermocouple modeling

    International Nuclear Information System (INIS)

    Fryer, M.O.

    1984-01-01

    The temperature measurements provided by thermocouples (TCs) are important for the operation of pressurized water reactors. During severe inadequate core cooling incidents, extreme temperatures may cause the type K TCs used for core exit temperature monitoring to perform poorly. A model of TC electrical behavior has been developed to determine how TCs react under extreme temperatures. The model predicts the voltage output of the TC and its impedance. A series of experiments was conducted on a length of type K thermocouple to validate the model. Impedance was measured at several temperatures between 22 °C and 1100 °C and at frequencies between dc and 10 MHz. The model was able to accurately predict impedance over this wide range of conditions. The average difference between experimental data and the model was less than 6.5%; experimental accuracy was ±2.5%. There is a striking difference between impedance-versus-frequency plots at 300 °C and at higher temperatures, which may be useful in validating TC data during accident conditions.
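
    As a first-order illustration of the electrical behaviour being modelled, the sketch below computes a type K thermocouple's open-circuit EMF from a nominal constant Seebeck coefficient (about 41 µV/°C, a textbook value); the actual response is nonlinear, and the paper's model additionally covers impedance, which this sketch omits.

```python
# Hedged sketch: first-order model of a type K thermocouple's open-circuit
# output. S_TYPE_K is a nominal room-temperature value, not the full
# nonlinear ITS-90 characteristic.
S_TYPE_K = 41e-6  # V/°C, nominal Seebeck coefficient for type K

def tc_voltage(t_hot_c, t_cold_c, seebeck=S_TYPE_K):
    """Approximate open-circuit EMF for a hot/cold junction pair."""
    return seebeck * (t_hot_c - t_cold_c)

# Over the 22 °C to 1100 °C span used in the experiments:
print(tc_voltage(1100.0, 22.0))  # roughly 0.044 V
```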

  2. Thermocouple evaluation model and evaluation of chromel--alumel thermocouples for High-Temperature Gas-Cooled Reactor applications

    International Nuclear Information System (INIS)

    Washburn, B.W.

    1977-03-01

    Factors affecting the performance and reliability of thermocouples for temperature measurements in High-Temperature Gas-Cooled Reactors are investigated. A model of an inhomogeneous thermocouple, associated experimental technique, and a method of predicting measurement errors are described. Error drifts for Type K materials are predicted and compared with published stability measurements. 60 references

  3. Thermocouple

    International Nuclear Information System (INIS)

    Charlesworth, F.D.W.

    1983-01-01

    A thermocouple is provided by a cable of coaxial form with inner and outer conductors of thermocouple forming materials and with the conductors electrically joined together at one end of the cable to form the thermocouple junction. The inner and outer conductors are preferably of chromel and stainless steel respectively. (author)

  4. Evaluation of RTD and thermocouple for PID temperature control in ...

    African Journals Online (AJOL)

    Evaluation of RTD and thermocouple for PID temperature control in distributed control system laboratory. D. A. A. Nazarudin, M. K. Nordin, A. Ahmad, M. Masrie, M. F. Saaid, N. M. Thamrin, M. S. A. M. Ali ...

  5. Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory

    Science.gov (United States)

    Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.

    2013-01-01

    Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.

  6. Evaluation of thermocouple fin effect in cladding surface temperature measurement during film boiling

    International Nuclear Information System (INIS)

    Tsuruta, Takaharu; Fujishiro, Toshio

    1984-01-01

    Thermocouple fin effect on the surface temperature measurement of a fuel rod has been studied at elevated wall temperatures under film boiling conditions in a reactivity initiated accident (RIA) situation. This paper presents an analytical equation to evaluate the temperature drops caused by thermocouple wires attached to the cladding surface. The equation yields the local temperature drop at the measuring point as a function of thermocouple diameter, cladding temperature, coolant flow condition, and vapor film thickness. Temperature drops obtained from the evaluating equation are shown for free and forced convection conditions. The analytical results were compared with measured data for various thermocouple sizes, and also with the maximum cladding temperature estimated from the oxidation layer thickness on the cladding outer surface. It was concluded that, at cladding temperatures above 1,000 °C, the temperature drops were around 120 and 150 °C for 0.2 and 0.3 mm diameter Pt/Pt-Rh thermocouples, respectively, under a stagnant coolant condition. The fin effect increases as the vapor film becomes thinner, such as under forced flow cooling or near the quenching point. (author)
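
    The diameter dependence of the fin effect can be illustrated with a generic infinite-fin model; this is not the authors' evaluating equation, and the conductivity, heat transfer coefficient, and temperature difference below are assumed values.

```python
import math

# Hedged sketch: a thermocouple wire attached to hot cladding modelled as an
# infinitely long fin. Heat drawn scales as d**1.5, which is why thicker
# wires cause larger local temperature drops. All parameter values are
# illustrative assumptions.
def fin_heat_draw(d_m, k_wire, h_coeff, delta_t):
    """Heat conducted away by a wire of diameter d_m (infinite-fin model)."""
    perimeter = math.pi * d_m
    area = math.pi * d_m ** 2 / 4.0
    return math.sqrt(h_coeff * perimeter * k_wire * area) * delta_t  # W

q_02 = fin_heat_draw(0.2e-3, 70.0, 200.0, 900.0)  # 0.2 mm wire
q_03 = fin_heat_draw(0.3e-3, 70.0, 200.0, 900.0)  # 0.3 mm wire
print(q_03 / q_02)  # (0.3/0.2)**1.5, about 1.84
```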

  7. Composite thermocouples

    International Nuclear Information System (INIS)

    Debeir, R.P.

    1975-01-01

    As a rule, a composite thermocouple is one in which one or more components (wires, sheath, insulation) differ in kind between the hot-junction measurement point and the cold termination, with ordinary cables continuing on to the measurement instrumentation. Three categories of such thermocouples are discussed: the first has thermoelement wires that are continuous over the complete length, with different sheaths and insulation for the high-temperature and intermediate-temperature parts; the second has different thermoelement wires, sheaths, and insulators for the high- and intermediate-temperature parts; the third comprises high-temperature thermoelements insulated by Al2O3 or BeO and sheathed with a refractory metal, with the intermediate-temperature part made of Cr-Al couples, MgO insulated, and sheathed in stainless steel or Inconel

  8. An Explicit Approach Toward Modeling Thermo-Coupled Deformation Behaviors of SMPs

    Directory of Open Access Journals (Sweden)

    Hao Li

    2017-03-01

    A new elastoplastic J2-flow model with thermal effects is proposed for simulating thermo-coupled finite deformation behaviors of shape memory polymers. In this new model, an elastic potential evolving with the development of plastic flow is incorporated to characterize the stress-softening effect at unloading and, moreover, thermo-induced plastic flow is introduced to represent the strain recovery effect at heating. It is shown that any given test data for both effects may be accurately simulated by means of direct and explicit procedures. Numerical examples of model predictions compare well with test data in the literature.
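
    As background for readers unfamiliar with the model class, the snippet below computes the J2 invariant that gives J2-flow plasticity its name; it is a generic helper, not code from the paper.

```python
import numpy as np

# Illustrative helper (not from the paper): the second invariant J2 of the
# deviatoric stress, the quantity that drives yielding in J2-flow models.
def j2_invariant(sigma):
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)  # deviatoric part
    return 0.5 * np.sum(s * s)

# Check: for a uniaxial stress state, the von Mises stress sqrt(3*J2)
# recovers the applied stress.
sigma = np.diag([300.0e6, 0.0, 0.0])  # 300 MPa uniaxial
print(np.sqrt(3.0 * j2_invariant(sigma)))  # about 3.0e8 Pa, i.e. 300 MPa
```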

  9. Structural evaluation of thermocouple probes for 241-AZ-101 waste tank

    International Nuclear Information System (INIS)

    Kanjilal, S.K.

    1994-01-01

    This document reports on the structural analysis of the thermocouple probe to be installed in the 241-AZ-101 waste tank. The thermocouple probe is analyzed for normal pump mixing operation and for the potential earthquake-induced loads required by the Hanford Site Design Criteria SDC-4.1.

  10. Structural evaluation of thermocouple probes for 241-AZ-101 waste tank

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, S.K.

    1994-12-06

    This document reports on the structural analysis of the thermocouple probe to be installed in the 241-AZ-101 waste tank. The thermocouple probe is analyzed for normal pump mixing operation and for the potential earthquake-induced loads required by the Hanford Site Design Criteria SDC-4.1.

  11. A Simple Test to Evaluate the Calibration Stability and Accuracy of Infrared Thermocouple Sensors

    OpenAIRE

    Pinnock, Derek R.; Bugbee, Bruce

    2002-01-01

    Accurately measuring surface temperature is not difficult when the surface, the sensor, and the air are at similar temperatures, but it is challenging when the surface temperature differs significantly from the air and sensor temperatures. We tested three infrared thermocouple sensors (IRTs) that had been used for two years in a greenhouse environment. The importance of the correction for sensor body temperature was also examined.
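
    A minimal sketch of the sensor-body correction examined in the study, assuming a simple net-radiance model; the emissivity and the radiometric form are illustrative assumptions, not the calibration used by the authors.

```python
SIGMA_SB = 5.670e-8  # W/m^2/K^4, Stefan-Boltzmann constant

# Hedged sketch: an IRT's detector responds to the net radiative exchange
# between target and sensor body, so recovering the target temperature
# requires knowing the body temperature. The T^4 exchange model and the
# emissivity are assumptions for illustration.
def target_temp_k(net_signal, t_body_k, emissivity=0.95):
    """Invert S = emissivity * sigma * (T_target**4 - T_body**4)."""
    return (net_signal / (emissivity * SIGMA_SB) + t_body_k ** 4) ** 0.25

# Zero net signal means target and sensor body share one temperature.
print(target_temp_k(0.0, 300.0))  # ~300 K
```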

  12. Comparative evaluation of corrosion behaviour of type K thin film thermocouple and its bulk counterpart

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Barhai, P.K.; Srikanth, S.

    2011-01-01

    Highlights: Anodic vacuum arc deposited chromel and alumel films are more 'noble' in 5% NaCl solution than their respective wires. Chromel undergoes localised corrosion while alumel shows uniform corrosion. Virgin samples of chromel-alumel TFTCs exhibit good thermoelectric response. Their thermoelectric outputs remain largely unaffected when shelved under normal atmospheric conditions. After 288 h of exposure in a salt spray environment, their thermoelectric outputs show noticeable change due to size effects. Abstract: This paper investigates the corrosion behaviour of type K thermoelements and their thin films, and compares the performance of a chromel-alumel thin film thermocouple with its wire counterpart before and after exposure to 5% NaCl medium. Potentiodynamic polarisation tests reveal that chromel and alumel films are more 'noble' than their respective wires. Alumel corrodes faster when coupled with chromel in films than as wires. Secondary electron micrographs and electrochemical impedance spectroscopy measurements suggest that chromel shows localised corrosion while alumel undergoes uniform corrosion. Corrosion adversely affects the thermocouple output and introduces an uncertainty in the measurement.

  13. Thin film ceramic thermocouples

    Science.gov (United States)

    Gregory, Otto (Inventor); Fralick, Gustave (Inventor); Wrbanek, John (Inventor); You, Tao (Inventor)

    2011-01-01

    A thin film ceramic thermocouple (10) having two ceramic thermocouple elements (12, 14) that are in contact with each other at at least one point to form a junction, wherein each element is prepared in a different oxygen/nitrogen/argon plasma. Since each element is prepared under different plasma conditions, the elements have different electrical conductivities and different charge carrier concentrations. The thin film thermocouple (10) can be transparent. A versatile ceramic sensor system having an RTD heat flux sensor can be combined with a thermocouple and a strain sensor to yield a multifunctional ceramic sensor array. The transparent ceramic temperature sensor could ultimately be used for calibration of optical sensors.

  14. Travelling gradient thermocouple calibration

    International Nuclear Information System (INIS)

    Broomfield, G.H.

    1975-01-01

    A short discussion of the origins of the thermocouple EMF is used to re-introduce the idea that the Peltier and Thomson effects are indistinguishable from one another. Thermocouples may be viewed either as devices which generate an EMF at junctions or as integrators of EMFs developed in thermal gradients. The thermal-gradient view is considered the more appropriate: it accords better with theory and observed behaviour, and it makes the correct approach to calibration, and to the investigation of service effects, immediately obvious. Inhomogeneities arise in thermocouples during manufacture and in service. The results of travelling gradient measurements are used to show that such effects are revealed with a resolution which depends on the length of the gradient, although they may be masked during simple immersion calibration. Proposed tests on thermocouples irradiated in a nuclear reactor are discussed
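
    The "integrator of EMFs" view lends itself to a short numerical sketch: the loop EMF is the discrete integral of the local Seebeck coefficient over the temperature profile, and an inhomogeneous (degraded) segment changes the reading only when a gradient lies across it. All numbers below are illustrative assumptions.

```python
import numpy as np

# Discrete form of EMF = integral of S(x) dT along the wire.
def loop_emf(seebeck, temperature):
    s_mid = 0.5 * (seebeck[:-1] + seebeck[1:])
    return float(np.sum(s_mid * np.diff(temperature)))

x = np.linspace(0.0, 1.0, 101)
seebeck = np.full_like(x, 40e-6)                 # healthy wire, V/°C
seebeck[(x > 0.405) & (x < 0.595)] = 35e-6       # locally degraded segment

temp_tip = np.where(x < 0.985, 500.0, 20.0)      # gradient only at the tip
temp_ramp = 500.0 - 480.0 * x                    # gradient across the segment

# Same end temperatures, different EMFs: the degraded segment is revealed
# only by the profile whose gradient crosses it.
print(loop_emf(seebeck, temp_tip), loop_emf(seebeck, temp_ramp))
```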

  15. Simulation of a welding process in polyduct pipelines resolved with a finite elements computational model. Comparison with analytical solutions and tests with thermocouples

    International Nuclear Information System (INIS)

    Sanzi, H; Elvira, G; Kloster, M; Asta, E; Zalazar, M

    2006-01-01

    All welding processes induce deformations and thermal stresses, which must be evaluated correctly since they can influence a component's structural integrity. This work determines the distribution of temperatures that develops during a manual welding process with shielded electrodes (SMAW) on the circumferential seam of a pipe for use in polyducts. A simplified finite element (FEA) model using three-dimensional solids is proposed for the study. The analysis considers that while the welding process is underway no heat is lost to the environment, that is, adiabatic conditions are assumed, and that the transformations produced in the material due to phase changes do not modify the properties of the supporting or base materials. The results of the simulation are compared with those obtained in recent analytical studies developed by different investigators, such as Nguyen, Ohta, Matsuoka, Suzuki and Taeda, where a continuously moving three-dimensional double ellipsoidal source was used. The results are then compared with experimental measurements made with thermocouples. This study reveals the sensitivity and validity of the proposed computational model and, in a second stage, optimizes the engineering time needed to solve a problem like the one presented in order to design the corresponding welding procedure (CW)
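
    The moving double-ellipsoidal source referred to above is commonly written in the Goldak form; the sketch below evaluates one lobe's power density with illustrative parameter values (not the weld parameters from the paper).

```python
import math

# Hedged sketch: power density of one lobe of a Goldak-type double
# ellipsoidal heat source. q_power is the absorbed power, (a, b, c) the
# ellipsoid semi-axes, f the fraction assigned to this lobe; all values
# below are illustrative.
def goldak_q(x, y, z, q_power, a, b, c, f=1.0):
    coeff = (6.0 * math.sqrt(3.0) * f * q_power
             / (a * b * c * math.pi * math.sqrt(math.pi)))
    return coeff * math.exp(-3.0 * (x / a) ** 2
                            - 3.0 * (y / b) ** 2
                            - 3.0 * (z / c) ** 2)

peak = goldak_q(0.0, 0.0, 0.0, 1500.0, 0.004, 0.004, 0.006)
off = goldak_q(0.002, 0.0, 0.0, 1500.0, 0.004, 0.004, 0.006)
print(peak > off > 0.0)  # density peaks at the source centre
```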

  16. Temperature measurements by thermocouples

    International Nuclear Information System (INIS)

    Liermann, J.

    1975-01-01

    The measurement of a temperature (whatever the type of transducer used) raises three problems: the choice of transducer; where it should be placed; and how it should be fixed and protected. These are the three main points examined, after a brief description of the most commonly used thermocouples.

  17. Construction and evaluation of an aspirated thermocouple psychrometer

    Directory of Open Access Journals (Sweden)

    Fábio Ricardo Marin

    2001-12-01

    The construction of a low-cost aspirated thermocouple psychrometer made of PVC tubes is described. The instrument can easily be connected to dataloggers. Aspiration is provided by fans of the type used in microcomputers, and temperatures are measured with copper-constantan thermocouple junctions. A cotton string was used to make the wet junction. Its performance was evaluated against an Assmann aspirated psychrometer and a Vaisala Inc. capacitive hygrometer, in both natural and controlled environments. The results show good agreement between the measurements, with very good precision and accuracy, so the psychrometer described here can be used to determine actual vapour pressure and relative humidity without loss of data quality, as well as in studies involving gradients of temperature and specific humidity.

  18. Accounting for the inertia of the thermocouples' measurements by modelling of an NPP Kalinin-3 transient with the coupled system code ATHLET-BIPR-VVER

    International Nuclear Information System (INIS)

    Nikonov, S.; Velkov, K.

    2008-01-01

    The ATHLET-BIPR-VVER coupled system code is applied to safety analyses for different WWER reactors. During the last years its validation matrix has been continuously enlarged. The measurements performed during the commissioning phase of NPP Kalinin Unit 3 for the transient 'Switching-off of one Main Circulation Pump at nominal power' are very well documented and include a variety of recorded integral and local thermal-hydraulic and neutron-physics parameters, together with the measurement errors. These data are being used for further validation of the coupled code system ATHLET-BIPR-VVER. The paper discusses the problems, and our solutions, in correctly interpreting the measured thermocouple records at NPP Kalinin-3, and compares them with the results predicted by the coupled thermal-hydraulic/neutron-kinetic code ATHLET-BIPR-VVER. Of primary importance in such comparisons is correct accounting of the fluid mixing that takes place around the measuring sensors, as well as consideration of the time delay (inertia term) of the measuring devices. On the basis of previous experience and many simulations of the defined transient, a method is discussed and proposed to correctly account for the inertia term of the thermocouple measurements. The new modelling is implemented in the coupled system code ATHLET-BIPR-VVER for further validation. (Author)
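
    The inertia term can be pictured as a first-order lag between fluid temperature and sensor reading; the sketch below uses an assumed time constant, not the value identified for the Kalinin-3 thermocouples.

```python
# Hedged sketch: thermocouple reading as a first-order lag,
#   dT_m/dt = (T_fluid - T_m) / tau,
# integrated with explicit Euler. tau and the step history are assumptions.
def lagged_reading(fluid_temps, dt, tau, t0):
    t_m = t0
    out = []
    for t_f in fluid_temps:
        t_m += dt * (t_f - t_m) / tau
        out.append(t_m)
    return out

# Step from 0 to 100 °C: after one time constant (t = tau = 2 s) the sensor
# has recovered roughly 63 % of the step.
readings = lagged_reading([100.0] * 1000, dt=0.01, tau=2.0, t0=0.0)
print(readings[199])  # about 63
```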

  19. Noncontacting Measurement With a Thermocouple

    Science.gov (United States)

    Weatherill, W. T.; Schoreder, C. J.; Freitag, H. J.

    1986-01-01

    Tentlike covering brings thermocouple to within few degrees of surface temperature. Technique originally developed for measuring surface temperature of quartz fabric under radiant heating requires no direct contact with heated surface. Technique particularly useful when measuring surface temperatures of materials damaged if thermocouple or other temperature sensor attached.

  20. Measurement errors for thermocouples attached to thin plates

    International Nuclear Information System (INIS)

    Sobolik, K.B.; Keltner, N.R.; Beck, J.V.

    1989-01-01

    This paper discusses Unsteady Surface Element (USE) methods, which are applied to a model of a thermocouple wire attached to a thin disk. Green's functions are used to develop the integral equations for the wire and the disk. The model can be used to evaluate transient and steady-state responses for many types of heat flux measurement devices, including thin-skin calorimeters and circular-foil (Gardon) heat flux gauges. The model can accommodate either surface or volumetric heating of the disk. The boundary condition at the outer radius of the disk can be either insulated or constant temperature. The effect of geometrical and thermal factors on the errors can be assessed. Examples are given

  1. Development of a micro-thermal flow sensor with thin-film thermocouples

    Science.gov (United States)

    Kim, Tae Hoon; Kim, Sung Jin

    2006-11-01

    A micro-thermal flow sensor is developed using thin-film thermocouples as temperature sensors. The micro-thermal flow sensor consists of a heater and thin-film thermocouples, which are deposited on a quartz wafer using stainless steel masks. The thin-film thermocouples are made of standard K-type thermocouple materials. The mass flow rate is measured by detecting the temperature difference between the thin-film thermocouples located in the upstream and downstream sections relative to the heater. The performance of the micro-thermal flow sensor is experimentally evaluated. In addition, a numerical model is presented and verified against the experimental results. The effects of mass flow rate, input power, and the position of the temperature sensors on the performance of the micro-thermal flow sensor are experimentally investigated. At low flow rates, the mass flow rate varies linearly with the temperature difference. The linearity of the micro-thermal flow sensor is shown to be independent of the input power. Finally, the position of the temperature sensors is shown to affect both the sensitivity and the linearity of the micro-thermal flow sensor.
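
    The calorimetric principle described above reduces, at low flow, to a linear calibration; the constant below is an assumed value, not the sensor's measured sensitivity.

```python
# Hedged sketch: calorimetric flow sensing in its low-flow linear regime,
# mdot = k_cal * (T_downstream - T_upstream). k_cal is an assumed
# calibration constant, not a value from the paper.
def mass_flow_from_dt(delta_t_k, k_cal=0.5):
    """Mass flow rate in mg/s from the up/downstream temperature difference."""
    return k_cal * delta_t_k

print(mass_flow_from_dt(4.0))  # → 2.0 mg/s
```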

  2. PWR thermocouple mechanical sealing structure

    International Nuclear Information System (INIS)

    Shen Qiuping; He Youguang

    1991-08-01

    The PWR in-core temperature detection device, one of the measures ensuring safe reactor operation, monitors and diagnoses reactor thermal power output and in-core power distribution. The temperature detection system uses thermocouples as measuring elements, with stainless steel protecting sleeves. A thermocouple has a limited service life and must be replaced when that life is reached. A new sealing device for the thermocouples of the reactor in-core temperature detection system has been developed to facilitate replacement. The structure is completely tight under high temperature and pressure, with no leakage or seepage, and is easy to assemble or disassemble in a radioactive environment. The device is designed so that thermocouples can be replaced one by one if necessary. This is a new, simple and practical structure

  3. AGR-1 Thermocouple Data Analysis

    International Nuclear Information System (INIS)

    Einerson, Jeff

    2012-01-01

    This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R and D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, the AGR-1 test configuration and test procedure, an overview of AGR-1 measured data, and an overview of the physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of the statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within the NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulting from the combined analysis is identified. This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in presentation of AGR-1 measured data (Chapter 2) and interpretation of

  4. AGR-1 Thermocouple Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jeff Einerson

    2012-05-01

    This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, the AGR-1 test configuration and test procedure, an overview of AGR-1 measured data, and an overview of the physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of the statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within the NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulting from the combined analysis is identified. This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in presentation of AGR-1 measured data (Chapter 2) and interpretation of

  5. Long duration performance of high temperature irradiation resistant thermocouples

    International Nuclear Information System (INIS)

    Rempe, J.; Knudson, D.; Condie, K.; Cole, J.; Wilkins, S.C.

    2007-01-01

    Many advanced nuclear reactor designs require new fuel, cladding, and structural materials. Data are needed to characterize the performance of these new materials under high-temperature, high-radiation conditions. However, traditional methods for measuring temperature in-pile degrade at temperatures above 1100 °C. To address this instrumentation need, the Idaho National Laboratory (INL) developed and evaluated the performance of a high-temperature irradiation-resistant thermocouple that contains alloys of molybdenum and niobium. To verify the performance of INL's recommended thermocouple design, a series of high-temperature (1200 to 1800 °C), long-duration (up to six months) tests has been initiated. This paper summarizes results from the tests that have been completed. Data are presented from 4000-hour tests conducted at 1200 and 1400 °C that demonstrate the stability of this thermocouple (less than 2% drift). In addition, post-test metallographic examinations are discussed which confirm the compatibility of the thermocouple materials throughout these long-duration, high-temperature tests. (authors)

  6. Temperature monitoring device and thermocouple assembly therefor

    Science.gov (United States)

    Grimm, Noel P.; Bauer, Frank I.; Bengel, Thomas G.; Kothmann, Richard E.; Mavretish, Robert S.; Miller, Phillip E.; Nath, Raymond J.; Salton, Robert B.

    1991-01-01

    A temperature monitoring device for measuring the temperature at a surface of a body, composed of: at least one first thermocouple and a second thermocouple; support members supporting the thermocouples for placing the first thermocouple in contact with the body surface and for maintaining the second thermocouple at a defined spacing from the body surface; and a calculating circuit connected to the thermocouples for receiving individual signals each representative of the temperature reading produced by a respective one of the first and second thermocouples and for producing a corrected temperature signal having a value which represents the temperature of the body surface and is a function of the difference between the temperature reading produced by the first thermocouple and a selected fraction of the temperature reading provided by the second thermocouple.
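
    The abstract specifies only that the corrected temperature is a function of the difference between the contact reading and a selected fraction of the standoff reading; one plausible linear form consistent with that description is sketched below, with an assumed fraction.

```python
# Hedged sketch of the correction described in the abstract:
# T_corrected = f(T1 - k*T2), where T1 is the contact thermocouple, T2 the
# spaced thermocouple, and k a selected fraction. The linear form and
# k = 0.1 are assumptions; the actual function is not given in the abstract.
def corrected_surface_temp(t1, t2, k=0.1):
    """Linear extrapolation that returns T1 unchanged when T1 == T2."""
    return (t1 - k * t2) / (1.0 - k)

# The contact probe reads low relative to the true surface temperature; the
# spaced probe's reading pulls the estimate back up.
print(corrected_surface_temp(95.0, 80.0))  # above the 95 contact reading
```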

  7. Spatial filtering and thermocouple spatial filter

    International Nuclear Information System (INIS)

    Han Bing; Tong Yunxian

    1989-12-01

    The design and study of a thermocouple spatial filter have been conducted for the flow measurement of integrated-reactor coolant. The fundamental principle of spatial filtering, along with mathematical descriptions and analyses of the thermocouple spatial filter, is given

  8. Thermo-coupled Surface Cauchy-Born Theory: An Engineering Finite Element Approach to Modeling of Nanowire Thermomechanical Response

    DEFF Research Database (Denmark)

    Esfahania, M. Nasr; Sonne, Mads Rostgaard; Hattel, J. Henri

    2016-01-01

    of thermal and mechanical stresses is achieved by eliminating the diagonalization matrix of entropy in the quasiharmonic system. This leads to a reduction in the degrees of freedom by more than 99% in comparison with equivalent Molecular Dynamics models. For the purpose of validation, results obtained

  9. A method to eliminate the effect of radiation on thermocouple performance

    International Nuclear Information System (INIS)

    Ali, Fawaz; Lu Lixuan

    2007-01-01

    In-core temperature measurements are pivotal to maintaining nuclear reactors in a safe state of operation, and thermocouples serve as the liaison in ensuring this safe state. The realization of the thermocouple's full potential is hindered by the fact that thermocouples cannot be situated in areas with high radiation fields: radiation can generate voltages in the thermocouple wires, producing an error in the temperature transmitter output. In this paper, a mathematical model is developed to quantify the effect that radiation in Canada Deuterium Uranium (CANDU) Nuclear Power Plants (NPPs) has on the thermocouple temperature reading. Subsequently, a method to offset the effect of radiation on the thermocouple is proposed. Simulation is performed to verify the effectiveness of the proposed system
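
    A minimal sketch of the offsetting idea: model the measured voltage as the true thermoelectric EMF plus a radiation-induced error, then subtract the modelled error. The linear dose-rate model and all constants are assumptions, not the paper's CANDU-specific model.

```python
S_NOMINAL = 41e-6  # V/°C, nominal (linearized) type K sensitivity

# Hedged sketch: subtract a modelled radiation-induced EMF before converting
# voltage to temperature. k_rad (V per unit dose rate) is an assumed
# constant for illustration only.
def true_temperature(v_measured, dose_rate, k_rad=1e-9):
    v_rad = k_rad * dose_rate            # modelled radiation-induced EMF
    return (v_measured - v_rad) / S_NOMINAL

# A 350 °C reading corrupted by a radiation-induced offset is recovered.
v = S_NOMINAL * 350.0 + 1e-9 * 2.0e4
print(true_temperature(v, 2.0e4))  # about 350
```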

  10. Relative humidity measurements with thermocouple psychrometer and capacitance sensors

    International Nuclear Information System (INIS)

    Mao, Naihsien.

    1991-01-01

    Relative humidity is one of the important hydrological parameters affecting waste package performance. The water potential of a system is defined as the amount of work required to reversibly and isothermally move an infinitesimal quantity of water from a pool of pure water to that system at the same elevation. The thermocouple psychrometer, which acts as a wet-dry bulb instrument based on the Peltier effect, is used to measure water potential. The thermocouple psychrometer works only for relative humidities greater than 94 percent; other sensors must be used for drier conditions. Hence, the author also uses a Vaisala Humicap, which measures the capacitance change due to relative humidity change. The operating range of the Humicap (Model HMP 135Y) is from 0 to 100 percent relative humidity and up to 160 °C (320 °F). A psychrometer has three thermocouple junctions: two copper-constantan junctions serve as reference temperature junctions, and the constantan-chromel junction is the sensing junction. Current is passed through the thermocouple, cooling the sensing junction by the Peltier effect. When the temperature of the junction falls below the dew point, water condenses on the junction from the air. The Peltier current is then discontinued and the thermocouple output is recorded as the temperature of the thermocouple returns to ambient. The temperature changes rapidly toward the ambient temperature until it reaches the wet bulb depression temperature. At this point, evaporation of water from the junction produces a cooling effect that offsets the heat absorbed from the ambient surroundings. This continues until the water is depleted and the thermocouple temperature returns to ambient (Briscoe, 1984). The datalogger starts to take data roughly at the wet bulb depression temperature
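
    The wet-/dry-bulb arithmetic behind the psychrometer can be sketched with the standard psychrometric equation and a Magnus-type saturation curve; the psychrometric constant below is a nominal sea-level value.

```python
import math

GAMMA = 0.066  # kPa/°C, approximate psychrometric constant at sea level

def sat_vapour_kpa(t_c):
    """Magnus-type approximation to saturation vapour pressure."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def relative_humidity(t_dry_c, t_wet_c):
    """RH from the wet-bulb depression: e = es(Tw) - gamma * (Td - Tw)."""
    e_actual = sat_vapour_kpa(t_wet_c) - GAMMA * (t_dry_c - t_wet_c)
    return 100.0 * e_actual / sat_vapour_kpa(t_dry_c)

print(relative_humidity(25.0, 25.0))  # no depression → 100 %
print(relative_humidity(25.0, 20.0))  # a 5 °C depression → drier air
```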

  11. LOFT small break test thermocouple installation

    International Nuclear Information System (INIS)

    Fors, R.M.

    1980-01-01

The subject thermocouple design has been analyzed for maximum expected hydraulic loading and found to be adequate. The natural frequency of the thermocouple was found to lie between the vortex shedding frequencies for the gas and liquid phases, so a tendency for resonance will exist. However, since the thermocouple support will have a restricted displacement, the stresses found are below the endurance limit and are thus acceptable with respect to fatigue life as well as primary stress due to pressure loading

  12. Novel thermocouples for automotive applications

    Directory of Open Access Journals (Sweden)

    P. Gierth

    2018-02-01

Measurement of temperatures in engine and exhaust systems in automotive applications is necessary for thermal protection of the parts and for optimizing the combustion process. State-of-the-art temperature sensors are very limited in their response characteristics and installation space requirements. Miniaturized sensor concepts with a customizable geometry are needed. The basic idea of this novel sensor concept is to use thick-film technology on component surfaces. Different standardized and especially nonstandard material combinations of thermocouples have been produced for the validation of this technology concept. Application-oriented measurements took place in the exhaust system of a test vehicle and were compared to standard laboratory conditions.

  13. Attaching Thermocouples by Peening or Crimping

    Science.gov (United States)

    Murtland, Kevin; Cox, Robert; Immer, Christopher

    2006-01-01

Two simple, effective techniques for attaching thermocouples to metal substrates have been devised for high-temperature applications in which attachment by such conventional means as welding, screws, epoxy, or tape would not be effective. The techniques have been used successfully to attach 0.005-in. (0.127-mm)-diameter type-S thermocouples to substrates of niobium alloy C-103 and stainless steel 416 for measuring temperatures up to 2,600 °F (1,427 °C). The techniques are equally applicable to other thermocouple and substrate materials. In the first technique, illustrated in the upper part of the figure, a hole slightly wider than twice the diameter of one thermocouple wire is drilled in the substrate. The thermocouple is placed in the hole, then the edge of the hole is peened in one or more places by use of a punch (see figure). The deformed material at the edge secures the thermocouple in the hole. In the second technique, a hole is drilled as in the first technique, then an annular relief area is machined around the hole, resulting in a structure reminiscent of a volcano in a crater. The thermocouple is placed in the hole as in the first technique, then the "volcano" material is either peened by use of a punch or crimped by use of sidecutters to secure the thermocouple in place. This second technique is preferable for very thin thermocouples [wire diameter ≤0.005 in. (≤0.127 mm)] because standard peening poses a greater risk of clipping one or both of the thermocouple wires. These techniques offer the following advantages over prior thermocouple-attachment techniques: Because they involve drilling of very small holes, they are minimally invasive, an important advantage in that, to a first approximation, the thermal properties of surrounding areas are not appreciably affected. These techniques do not involve introduction of any material, other than the substrate and thermocouple materials, that could cause contamination, decompose, or oxidize

  14. Thermocouple pressure bushing in suspension rod

    International Nuclear Information System (INIS)

    Pasek, J.; Ondreicka, K.

    1975-01-01

The seal is described of jacket thermocouples located in the pressure reducer in the fuel element suspension rod. The thermocouples are sealed in the pressure reducer with a silicon sealing compound. The sealing compound is compressed between the two reducers with a Belleville spring and a pressure washer secured in position with a spring. The axial pressure of the inner parts of the reducer on the compound is adjustable by means of a thrust screw. The tightness and alignment of the thermocouples in the pressure reducer are achieved by tightening the thrust screw to the stop of the top reducer and the subsequent setting of the sealing compound. (J.B.)

  15. The transient response for different types of erodable surface thermocouples using finite element analysis

    Directory of Open Access Journals (Sweden)

    Mohammed Hussein

    2007-01-01

The transient response of erodable surface thermocouples has been numerically assessed by using a two-dimensional finite element analysis. Four types of base-metal erodable surface thermocouples have been examined in this study, including type-K (alumel-chromel), type-E (chromel-constantan), type-T (copper-constantan), and type-J (iron-constantan), with 50 mm thickness for each. The practical importance of these types of thermocouples is their use in internal combustion engine studies and aerodynamics experiments. A step heat flux was applied at the surface of the thermocouple model. The heat flux can commonly be identified from measurements of the surface temperature by assuming that the heat transfer within these devices is one-dimensional. The surface temperature histories at different positions along the thermocouple are presented. The normalized surface temperature histories at the center of the thermocouple for the different types at different response times are also depicted. The thermocouple response to different heat flux variations was considered by using a square heat flux of 2 ms width, a sinusoidal surface heat flux variation of 10 ms period, and a repeated heat flux variation of 2 ms width. The present results demonstrate that two-dimensional transient heat conduction effects have a significant influence on the surface temperature history measurements made with these devices. It was observed that the surface temperature history and the transient response for thermocouple type-E are higher than those for the other types due to the thermal properties of this thermocouple. It was concluded that the thermal properties of the surrounding material do have an impact, but the properties of the thermocouple and the insulation materials also make an important contribution to the net response.
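The step-flux response the record analyzes can be sketched with a minimal one-dimensional explicit finite-difference model; this is an illustrative 1-D sketch, not the paper's 2-D finite element analysis, and the material numbers are assumed (roughly constantan-like):

```python
import numpy as np

# 1-D transient conduction in a slab: step heat flux q0 at x = 0,
# insulated back face. All material values are assumed for illustration.
alpha = 6.0e-6      # m^2/s, thermal diffusivity (assumed)
k = 21.0            # W/(m*K), thermal conductivity (assumed)
q0 = 1.0e5          # W/m^2, step surface heat flux
L, nx = 1.0e-3, 50  # 1 mm thick domain, 50 nodes
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha   # satisfies explicit stability limit dt <= dx^2/(2*alpha)

T = np.zeros(nx)             # temperature rise above ambient, K
for _ in range(2000):
    Tn = T.copy()
    # interior nodes: dT/dt = alpha * d2T/dx2
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # heated surface: ghost-node treatment of the flux condition -k dT/dx = q0
    T[0] = Tn[0] + alpha * dt / dx**2 * (2 * Tn[1] - 2 * Tn[0] + 2 * q0 * dx / k)
    T[-1] = T[-2]            # insulated back face

print(T[0])  # surface temperature rise after 2000 steps
```

The surface temperature history produced this way is the quantity from which the paper's inverse heat-flux identification proceeds.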

  16. Characteristics of metal sheathed thermocouples in thermowell

    International Nuclear Information System (INIS)

    Okuda, Takehiro; Nakase, Tsuyoshi; Tanabe, Yutaka; Yamada, Kunitaka; Yoshizaki, Akio; Roko, Kiyokazu

    1987-01-01

Static and dynamic characteristics of thermowell-type thermocouples planned to be used for the High-Temperature engineering Test Reactor (HTTR) have been investigated. A mock-up test section was installed in Kawasaki's Helium Test Loop (KH-200), and thermal characteristics tests were carried out under 600-1000 °C temperature conditions. The test section was equipped with four types of sheathed thermocouples: the well type and the non-well type, each with and without a thermal radiation shielding plate. The temperature measured by the well-type thermocouple with the shielding plate was only about 1.3 °C higher than that without the shielding plate at a gas temperature of 990 °C. The measured time constant of the well-type thermocouples was about 7 seconds at a heat transfer coefficient of 1600 kcal/m²·h·°C on the well surface, and coincided with the one calculated by the ''TRUMP'' code. (author)
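The ~7 s time constant can be cross-checked with a lumped-capacitance estimate; this is an assumed back-of-envelope model, not the TRUMP calculation, and the well dimensions and properties are illustrative:

```python
# Lumped-capacitance time constant of a cylindrical well (assumed model):
#   tau = rho * c * V / (h * A) = rho * c * d / (4 * h)
h_kcal = 1600.0                 # kcal/(m^2*h*degC), from the record
h = h_kcal * 1.163              # W/(m^2*K); 1 kcal/h = 1.163 W
rho, c = 7900.0, 500.0          # kg/m^3, J/(kg*K): assumed stainless-steel-like well
d = 0.015                       # m, assumed well outer diameter

tau = rho * c * d / (4.0 * h)
print(round(tau, 1), "s")       # same order as the ~7 s measured
```

With these assumed numbers the estimate lands near 8 s, consistent in order of magnitude with the measured value.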

  17. Operating problems of the thermocouples in VVER

    International Nuclear Information System (INIS)

    Timonin, A.S.

    1997-01-01

    In WWER reactors, the coolant temperature at the outlet of the majority of fuel assemblies is measured with chromel-alumel cable thermocouples. The components of systematic errors in temperature measurements are discussed. Errors due to calibration drift can be avoided by periodical calibrations performed during the heating and hot test runs after reactor refueling. Errors due to radiation heating and response time can be estimated and thus eliminated. Errors due to flow stratification of the coolant can also be eliminated by an estimation of correction factors. The effects of the aging of the thermocouples are also discussed. The removal of thermocouples from their coverings for replacement presents some difficulties, which thus determine the service life of the thermocouples. (A.K.)

  18. Estimation of radiation losses from sheathed thermocouples

    International Nuclear Information System (INIS)

    Roberts, I.L.; Coney, J.E.R.; Gibbs, B.M.

    2011-01-01

Thermocouples are often used for temperature measurements in heat exchangers. However, if the radiation losses from a thermocouple in a high-temperature gas flow to colder surroundings are ignored, significant errors can occur, even at moderate temperature differences. Prediction of radiation losses from theory can be problematic, especially in situations where there are large variations in the measured temperatures, as the emissivity and radiative heat transfer coefficient of the thermocouple are not constant. The following approach combines experimental results with established empirical relationships to estimate losses due to radiation in an annular heat exchanger at temperatures up to 950 °C. - Highlights: → Sheathed thermocouples are often used to measure temperatures in heat exchangers. → Errors are introduced if radiation losses are ignored. → Radiation losses are environment specific and may be significant. → Experimental and theoretical methods are used to estimate losses. → Hot side maximum temperature 950 °C.
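The size of such radiation errors can be illustrated with the standard steady-state energy balance on the junction; this is a textbook sketch with assumed values, not the paper's combined experimental-empirical method:

```python
# Steady-state balance (assumed textbook model): convective gain equals
# net radiative loss,
#   h * (T_gas - T_tc) = eps * sigma * (T_tc**4 - T_wall**4)
# so the true gas temperature can be recovered from the indicated reading.

SIGMA = 5.670e-8   # W/(m^2*K^4), Stefan-Boltzmann constant

def gas_temperature(t_tc_k, t_wall_k, eps, h):
    """True gas temperature (K) from an indicated thermocouple temperature."""
    return t_tc_k + eps * SIGMA * (t_tc_k**4 - t_wall_k**4) / h

# Illustrative values: 1223 K (950 degC) indicated, 800 K walls,
# eps = 0.3 (polished sheath, assumed), h = 500 W/(m^2*K) (assumed).
t_gas = gas_temperature(1223.0, 800.0, 0.3, 500.0)
print(t_gas - 1223.0)  # radiation error, K
```

Even with a fairly low assumed emissivity, the error at 950 °C with cold surroundings is tens of kelvin, which is why the paper treats these losses explicitly.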

  19. Self-adapted thermocouple-diagnostic complex

    International Nuclear Information System (INIS)

    Alekseev, S.V.; Grankovskij, K.Eh.; Olejnikov, P.P.; Prijmak, S.V.; Shikalov, V.F.

    2003-01-01

A self-adapted thermocouple-diagnostic complex (STDC) for obtaining reliable data on the coolant temperature in NPP reactors is described. The STDC is based on thermal pulse monitoring of a thermocouple in the measuring channel of a reactor. The measurement method and STDC composition are substantiated. It is shown that introduction of the developed STDC ensures precise and reliable temperature monitoring in reactors of all types

  20. Embedded cladding surface thermocouples on Zircaloy-sheathed heater rods

    International Nuclear Information System (INIS)

    Wilkins, S.C.

    1977-06-01

Titanium-sheathed Type K thermocouples embedded in the cladding wall of zircaloy-sheathed heater rods are described. These thermocouples constitute part of a program intended to characterize the uncertainty of measurements made by surface-mounted cladding thermocouples on nuclear fuel rods. Fabrication and installation details and laboratory testing of sample thermocouple installations are included

  1. Adhesive-Bonded Tab Attaches Thermocouples to Titanium

    Science.gov (United States)

    Cook, C. F.

    1982-01-01

    Mechanical strength of titanium-alloy structures that support thermocouples is preserved by first spotwelding thermocouples to titanium tabs and then attaching tabs to titanium with a thermosetting adhesive. In contrast to spot welding, a technique previously used for thermocouples, fatigue strength of the titanium is unaffected by adhesive bonding. Technique is also gentler than soldering or attaching thermocouples with a tap screw.

  2. Study on thermocouple attachment in reflood experiments

    International Nuclear Information System (INIS)

    Sugimoto, Jun

    1977-03-01

The method of thermocouple attachment to heater rods has been studied for surface temperature measurement in reflood experiments. The method used so far in JAERI's reflood experiments had some possibility of not estimating the quench times exactly. Various attachment methods have been tested and some proved to be effective in this respect. (auth.)

  3. A thermocouple thermometry system for ultrasound hyperthermia

    International Nuclear Information System (INIS)

    Ozarka, M.; Gharakhani, A.; Magin, R.; Cain, C.

    1984-01-01

A thermometry system designed to be used in the treatment of cancer by ultrasound hyperthermia is described. The system monitors tumor temperatures using 16 type T (copper-constantan) thermocouples and is controlled by a 12 MHz Intel 8031 microcomputer. An analog circuit board contains the thermocouple amplifiers, an analog multiplexer, scaling circuitry, and an analog-to-digital converter. A digital board contains the Intel 8031, program memory, and data memory, as well as circuitry for control and data communications. Communication with the hyperthermia system control computer is serial via RS-232 with selectable baud rate. Since the thermocouple amplifiers may have slight differences in gain and offset, a calibrated offset is added to a lookup table value to obtain the proper display temperature to within ±0.1 °C. The calibration routine, implemented in software, loads a nonvolatile random access memory chip with the proper offset values based on the outputs of each thermocouple channel at known temperatures which bracket a range of interest
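The per-channel correction described above can be sketched as a two-point calibration; the function names and bath temperatures below are illustrative, not taken from the instrument's firmware:

```python
# Two-point calibration sketch (assumed scheme): each amplifier's slight
# gain/offset error is removed with constants derived from readings at
# two known bath temperatures bracketing the range of interest.

def calibration_constants(read_lo, read_hi, true_lo, true_hi):
    """Gain and offset mapping raw channel readings onto true temperatures."""
    gain = (true_hi - true_lo) / (read_hi - read_lo)
    offset = true_lo - gain * read_lo
    return gain, offset

def corrected(reading, gain, offset):
    return gain * reading + offset

# A channel reads 36.8 in a true 37.0 degC bath and 44.7 in a true 45.0 bath:
g, o = calibration_constants(36.8, 44.7, 37.0, 45.0)
print(round(corrected(40.0, g, o), 2))  # -> 40.24
```

Storing `gain` and `offset` per channel in nonvolatile memory, as the abstract describes for offsets, is what lets the display stay within ±0.1 °C across 16 slightly mismatched amplifiers.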

  4. Evaluation models and evaluation use

    Science.gov (United States)

    Contandriopoulos, Damien; Brousselle, Astrid

    2012-01-01

    The use of evaluation results is at the core of evaluation theory and practice. Major debates in the field have emphasized the importance of both the evaluator’s role and the evaluation process itself in fostering evaluation use. A recent systematic review of interventions aimed at influencing policy-making or organizational behavior through knowledge exchange offers a new perspective on evaluation use. We propose here a framework for better understanding the embedded relations between evaluation context, choice of an evaluation model and use of results. The article argues that the evaluation context presents conditions that affect both the appropriateness of the evaluation model implemented and the use of results. PMID:23526460

  5. Technological improvements to high temperature thermocouples for nuclear reactor applications

    International Nuclear Information System (INIS)

    Schley, R.; Leveque, J.P.

    1980-07-01

The specific operating conditions of thermocouples in nuclear reactors have provided an incentive for further advances in high-temperature thermocouple applications and performance. This work covers the manufacture and improvement of existing alloys, the technology of clad thermocouples, calibration drift during heat treatment, resistance to thermal shock, and the compatibility of insulating materials with thermoelectric alloys. The results lead to specifying improved operating conditions for thermocouples in nuclear reactor media (pressurized water, sodium, uranium oxide)

  6. Thermocouples for conditions of aggressive environments

    International Nuclear Information System (INIS)

    Blanc, J.Y.

    1988-01-01

Two new kinds of thermocouples have been chosen for temperature measurements in the in-pile safety program for light water reactors performed in France. They must give fuel centerline or rod cladding temperatures and withstand steam oxidation between 1000 °C and 1800 °C or higher under severe fuel damage conditions. We describe both types briefly, then emphasize improvements under way concerning the tungsten-rhenium legs, the hafnia insulation, and the sheath materials. Oxidation resistance is achieved mainly by silicide layers, but other possibilities are considered, such as iridium coatings. Some details of insulator manufacturing and sensor assembly are given, as well as other high-temperature applications for these thermocouples

  7. Core exit thermocouple upgrade at Zion station

    International Nuclear Information System (INIS)

    Ulinski, T.M.; Ferg, D.A.

    1989-01-01

    Following the Three Mile Island accident, the ability of the core exit thermocouple (CET) system to monitor reactor core conditions and core cooling status became a requirement of the U.S. Nuclear Regulatory Commission (NRC). Since the thermocouple system at Zion station was not originally required for postaccident monitoring, Commonwealth Edison Company (CECo) committed to upgrading the CET system and to installing a subcooling margin monitoring (SMM) system. The significance of this commitment was that CECo proposed to accomplish the upgrade effort using internal resources and by developing the required in-house expertise instead of procuring integrated packages from several nuclear steam supply system vendors. The result was that CECo was able to demonstrate a number of new capabilities and unique design features with a significant cost savings. These included a qualified connector with an integral thermocouple cold-reference junction temperature compensation; the design, assembly, testing, and installation of a seismically qualified class 1E microprocessor; a commercial-grade dedication/upgrade process for safety-related hardware; a human factors review capability, and a verification and validation program for safety-related software. A discussion of these new capabilities and details of the design features is presented in this paper

  8. Experience from replacement and check of thermocouples during reconstruction of in-reactor temperature measurements at Bohunice V-1 units 1 and 2

    International Nuclear Information System (INIS)

    Slanina, M.; Stanc, J.

    2001-01-01

Replacement of thermocouples in the protection tube blocks was a key phase of the reconstruction of in-reactor temperature measurements at Bohunice V-1 with regard to the success, reliability, and impact on safety of unit operation. The replacement consisted of the reliable and safe withdrawal of 216 old thermocouples, their disposal, and the installation of new thermocouples into dry channels. This phase of the reconstruction is described in detail, with focus on the evaluation of replacement quality and the checks carried out on the newly installed thermocouples. (Authors)

  9. Analysis of heat transfer from fuel rods with externally attached thermocouples

    International Nuclear Information System (INIS)

    Gill, C.R.; Coddington, P.

    1988-05-01

This paper describes the development of two- and three-dimensional finite element heat conduction models to simulate the behaviour of the external thermocouples attached to the LOFT fuel rods during the blowdown phase of a large-break loss-of-coolant accident. To establish the model and determine the thermal coupling between the thermocouple and the fuel rod, extensive use was made of two series of experiments performed at INEL in the LOFT Test Support Facility (LTSF). These experiments were high-pressure reflood experiments with fluid conditions 'typical' of those seen during the bottom-up flow period of the LOFT experiments. (author)

  10. Thermocouple Errors when Mounted on Cylindrical Surfaces in Abnormal Thermal Environments.

    Energy Technology Data Exchange (ETDEWEB)

    Nakos, James T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Suo-Anttila, Jill M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zepper, Ethan T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Koenig, Jerry J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Valdez, Vincent A. [ECI Inc., Albuquerque, NM (United States)

    2017-05-01

Mineral-insulated, metal-sheathed, Type-K thermocouples are used to measure the temperature of various items in high-temperature environments, often exceeding 1000 °C (1273 K). The thermocouple wires (chromel and alumel) are protected from the harsh environments by an Inconel sheath and magnesium oxide (MgO) insulation. The sheath and insulation are required for reliable measurements. Due to the sheath and MgO insulation, the temperature registered by the thermocouple is not the temperature of the surface of interest. In some cases, the error incurred is large enough to be of concern because these data are used for model validation, and thus the uncertainties of the data need to be well documented. This report documents the error using 0.062" and 0.040" diameter Inconel-sheathed, Type-K thermocouples mounted on cylindrical surfaces (inside of a shroud, outside and inside of a mock test unit). After an initial transient, the thermocouple bias errors typically range only about ±1-2% of the reading in K. After all of the uncertainty sources have been included, the total uncertainty to 95% confidence, for shroud or test unit TCs in abnormal thermal environments, is about ±2% of the reading in K, lower than the ±3% typically used for flat shrouds. Recommendations are provided in Section 6 to facilitate interpretation and use of the results.
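Applying a percent-of-reading-in-kelvin bound like the one reported can be sketched as follows; this is an assumed usage of the published number, not Sandia's analysis code:

```python
# Sketch: a +/-2%-of-reading-in-kelvin uncertainty band around a reading.
# The 2% figure is the report's; the function is illustrative.

def uncertainty_band_k(reading_c, frac=0.02):
    """Return (low, high) bounds in degC for a reading with +/-frac of the K value."""
    reading_k = reading_c + 273.15
    delta = frac * reading_k
    return reading_c - delta, reading_c + delta

lo, hi = uncertainty_band_k(1000.0)       # 1000 degC = 1273.15 K
print(round(hi - lo, 1))                  # total band width in K
```

Note the pitfall the "in K" phrasing guards against: 2% of 1000 °C read as Celsius would be 20 K, but 2% of the absolute reading is about 25.5 K each way.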

  11. Determination of the availability of core exit thermocouples during severe accident situations

    International Nuclear Information System (INIS)

    Edson, J.L.

    1985-04-01

This report presents the findings and recommendations of the Nuclear Power Plant Instrumentation Evaluation (NPPIE) program concerning signal validation methods to determine the on-line availability of core exit thermocouples during accident situations. Methods of selecting appropriate signal validation techniques are discussed and sources of error identified. The report shows that, through the use of these techniques, errors caused by high temperatures may be detected as they occur. Specific recommendations for application of selected signal validation techniques to core exit thermocouples and other measurement systems are made. 23 refs., 22 figs., 3 tabs

  12. Zircaloy sheathed thermocouples for PWR fuel rod temperature measurements

    International Nuclear Information System (INIS)

    Anderson, J.V.; Wesley, R.D.; Wilkins, S.C.

    1979-01-01

Small diameter zircaloy-sheathed thermocouples have been developed by EG&G Idaho, Inc., at the Idaho National Engineering Laboratory. Surface-mounted thermocouples were developed to measure the temperature of zircaloy-clad fuel rods used in the Thermal Fuels Behavior Program (TFBP), and embedded thermocouples were developed for use by the Loss-of-Fluid Test (LOFT) Program for support tests using zircaloy-clad, electrically heated nuclear fuel rod simulators. The first objective of this development effort was to produce zircaloy-sheathed thermocouples to replace titanium-sheathed thermocouples and thereby eliminate the long-term corrosion of the titanium-to-zircaloy attachment weld. The second objective was to reduce the sheath diameter to obtain faster thermal response and minimize cladding temperature disturbance due to thermocouple attachment

  13. Error analysis of thermocouple measurements in the Radiant Heat Facility

    International Nuclear Information System (INIS)

    Nakos, J.T.; Strait, B.G.

    1980-12-01

    The measurement most frequently made in the Radiant Heat Facility is temperature, and the transducer which is used almost exclusively is the thermocouple. Other methods, such as resistance thermometers and thermistors, are used but very rarely. Since a majority of the information gathered at Radiant Heat is from thermocouples, a reasonable measure of the quality of the measurements made at the facility is the accuracy of the thermocouple temperature data

  14. Transmutation of Thermocouples in Thermal and Fast Nuclear Reactors

    International Nuclear Information System (INIS)

    Scervini, M.; Rae, C.; Lindley, B.

    2013-06-01

Thermocouples are the most commonly used sensors for temperature measurement in nuclear reactors. Their role is fundamental for the control of current nuclear reactors and for the development of the nuclear technology needed for the implementation of GEN IV nuclear reactors. When used for in-core measurements, thermocouples are strongly affected not only by high temperatures, but also by intense neutron fluxes. As a result of the interaction with neutrons, the thermoelements of the thermocouples undergo transmutation, which produces a time-dependent change in composition in the thermoelements and, as a consequence, a time-dependent drift in the thermocouple signal. Thermocouple drift can be very significant for in-pile temperature measurements and may render the temperature sensors unreliable after exposure to nuclear radiation for relatively short times compared to the life required for temperature sensors in nuclear applications. In this work, undertaken as part of the European project METROFISSION, the change in composition occurring in irradiated thermocouples has been calculated using the software ORIGEN 2.2. Several thermocouples have been considered, including Nickel-based thermocouples (type K and type N), Tungsten-based thermocouples (W-5%Re vs W-26%Re and W-3%Re vs W-25%Re), Platinum-based thermocouples (type S and Platinum vs Palladium), and Molybdenum vs Niobium thermocouples. The transmutation induced by both thermal and fast fluxes has been calculated. Thermocouples undergo more pronounced transmutation in thermal fluxes than in fast fluxes, as the neutron cross-section of an element is higher at thermal energies. Nickel-based thermocouples show a minimal change in composition, while Platinum-based and Tungsten-based thermocouples experience very significant transmutation. The use of coatings deposited on the sheath of a thermocouple has been considered as a means of reducing the neutron flux seen by the thermoelements inside the thermocouple sheath
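The scale of transmutation-driven composition change can be illustrated with a single-capture burn-up term; this is a drastic simplification of what ORIGEN computes (no decay chains or daughter products), and the cross-section and flux values are assumed:

```python
import math

# Single-capture burn-up (assumed model):
#   N(t) = N0 * exp(-sigma * phi * t)
# sigma: thermal capture cross-section, phi: neutron flux, t: time.

BARN = 1.0e-24  # cm^2

def fraction_remaining(sigma_barn, phi, t_seconds):
    """Fraction of the original nuclide left after irradiation."""
    return math.exp(-sigma_barn * BARN * phi * t_seconds)

# Illustrative numbers (assumed): sigma = 100 b, typical of some Re and Rh
# isotopes, thermal flux 1e14 n/cm^2/s, one year of irradiation.
year = 3.156e7
print(1.0 - fraction_remaining(100.0, 1.0e14, year))  # fraction transmuted
```

Even this crude estimate shows why high-cross-section thermoelements (Pt- and W-Re-based) drift badly in thermal fluxes: a substantial fraction of the wire can transmute within a year, while low-cross-section Ni-based alloys change far less.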

  15. Fabrication and use of zircaloy/tantalum-sheathed cladding thermocouples and molybdenum/rhenium-sheathed fuel centerline thermocouples

    International Nuclear Information System (INIS)

    Wilkins, S.C.; Sepold, L.K.

    1985-01-01

The thermocouples described in this report are zircaloy/tantalum-sheathed and molybdenum/rhenium alloy-sheathed instruments intended for fuel rod cladding and fuel centerline temperature measurements, respectively. Both types incorporate beryllium oxide insulation and tungsten/rhenium alloy thermoelements. These thermocouples, operated at temperatures of 2000 °C and above, were developed for use in the internationally sponsored Severe Fuel Damage test series in the Power Burst Facility. The fabrication steps for both thermocouple types are described in detail. A laser-welding attachment technique for the cladding-type thermocouple is presented, and experience with alternate materials for cladding and fuel thermocouples is discussed

  16. Thermocouple Rakes for Measuring Boundary Layer Flows Extremely Close to Surface

    Science.gov (United States)

    Hwang, Danny P.; Fralick, Gustave C.; Martin, Lisa C.; Blaha, Charles A.

    2001-01-01

Of vital interest to aerodynamic researchers is precise knowledge of the flow velocity profile next to the surface. This information is needed for turbulence model development and the calculation of viscous shear force. Though many instruments can determine the flow velocity profile near the surface, none of them can make measurements closer than approximately 0.01 in. from the surface. The thermocouple boundary-layer rake can measure much closer to the surface than conventional instruments such as a total pressure boundary-layer rake, hot wire, or hot film. By embedding the sensors (thermocouples) in the region where the velocity is equivalent to the velocity ahead of a constant-thickness strut, the boundary-layer flow profile can be obtained. The present device, fabricated in the NASA Glenn Research Center microsystem clean room, has a heater made of platinum and thermocouples made of platinum and gold. Equal numbers of thermocouples are placed both upstream and downstream of the heater, so that the voltage generated by each pair at the same distance from the surface is indicative of the difference in temperature between the upstream and downstream thermocouple locations. This voltage differential is a function of the flow velocity, and like the conventional total pressure rake, it can provide the velocity profile. In order to measure flow extremely close to the surface, the strut is made of fused quartz, which has extremely low heat conductivity. A large thermocouple boundary-layer rake is shown in the following photo. The latest medium-size sensors already provide smooth velocity profiles well into the boundary layer, as close as 0.0025 in. from the surface. This is about 4 times closer to the surface than the previously used total pressure rakes. This device also has the advantage of providing the flow profile of separated flow, and it is possible to measure simultaneous turbulence levels within the boundary layer.

  17. Low Drift Type N Thermocouples for Nuclear Applications

    International Nuclear Information System (INIS)

    Scervini, M.; Rae, C.

    2013-06-01

Thermocouples are the most commonly used sensors for temperature measurement in nuclear reactors. They are crucial for the control of current nuclear reactors and for the development of GEN IV reactors. In nuclear applications thermocouples are strongly affected by intense neutron fluxes. As a result of the interaction with neutrons, the thermoelements of the thermocouples undergo transmutation, which produces a time-dependent change in composition and, as a consequence, a time-dependent drift of the thermocouple signal. Thermocouple drift can be very significant for in-pile temperature measurements and may render the temperature sensors unreliable after exposure to nuclear radiation for relatively short times compared to the life required for temperature sensors in nuclear applications. Previous experience with type K thermocouples in nuclear reactors has shown that they are affected by neutron irradiation only to a limited extent. Similarly, type N thermocouples are expected to be only slightly affected by neutron fluxes. Currently the use of Nickel-based thermocouples is limited to temperatures lower than 1000 °C due to drift related to phenomena other than nuclear irradiation. In this work, undertaken as part of the European project METROFISSION, the drift of type N thermocouples has been investigated in the temperature range 600-1300 °C. The approach of this study is based on the attempt to separate the contributions of each thermo-element to drift. In order to identify the dominant thermo-element for drift, the contributions of both the positive (NP) and negative (NN) thermo-elements to the total drift of 3.2 mm diameter MIMS thermocouples have been measured in each drift test, using a pure Pt thermo-element as a reference. Conventional Inconel-600 sheathed type N thermocouples have been compared with type N thermocouples sheathed in a new alloy. At temperatures higher than 1000 °C, conventional Inconel-600 sheathed type N thermocouples can experience a

  18. A Study of the Behavior Characteristics for K-type Thermocouple

    International Nuclear Information System (INIS)

    Ye, Songhae; Kim, Yongsik; Lee, Sooill; Kim, Sungjin; Lyou, Jooon

    2014-01-01

K-type thermocouples are widely used in nuclear power plants (NPPs) and provide reliable service. Generally, the thermocouple assembly is the finished product and usually only nondestructive tests are performed on the assembly, whereas destructive tests are confined to selected bulk cable specimens. This K-type thermocouple has been used representatively in the In-Core Instrument (ICI) assembly in nuclear power plants. The ICI consists of five rhodium emitter detectors that provide information on the thermal power of the core and one K-type thermocouple, made with two cables (Chromel-Alumel), that provides the core exit temperature (CET). Generally, the number of ICIs differs according to the number of fuel assemblies in the NPP. In the case of SKN 3 and 4, 61 ICIs were designed to provide information on core cooling to the inadequate core cooling monitoring system (ICCMS). This measured temperature can also be used to check the entry condition of severe accidents. The technology of TFDR is a generic technique to detect the fault position of a cable. In-core Instruments (ICIs) were used to detect the Core Exit Temperature (CET) in a reactor. This measured temperature was also used to check the entry condition of severe accidents. However, if a serious accident occurs, the upper portion of the core is damaged and this instrument is no longer available. This paper illustrates the possibility of estimating the status of the molten core through high-temperature characteristics tests of the K-type thermocouple. It turns out that it is possible to measure with the K-type thermocouple up to 1350 °C before melting during insertion into the melting furnace. Additionally, in order to measure high temperatures of 2000 °C or more, the possibility of replacing the K-type thermocouple was evaluated. However, the tungsten-rhenium thermocouple is impossible to use for temperature detection in the core because of the

  19. The IIR evaluation model

    DEFF Research Database (Denmark)

    Borlund, Pia

    2003-01-01

    An alternative approach to evaluation of interactive information retrieval (IIR) systems, referred to as the IIR evaluation model, is proposed. The model provides a framework for the collection and analysis of IR interaction data. The aim of the model is two-fold: 1) to facilitate the evaluation ...

  20. Sputtered type s thermocouples on quartz glass substrates

    International Nuclear Information System (INIS)

    Sopko, B.; Vlk, J.; Chren, D.; Sopko, V.; Dammer, J.; Mengler, J.; Hynek, V.

    2011-01-01

    The work deals with the development of thin-film thermocouples and their practical use. The measuring principle of planar thin-film thermocouples is the same as for conventional thermocouples and is based on the thermoelectric effect, named after its discoverer, Seebeck. The Seebeck effect is the direct conversion of a temperature difference into an electric voltage. Various applications require temperature sensors with high spatial resolution (with several measuring points along a segment 1 mm long) and short response time. Planar thermocouples are currently used for such applications, with important advantages in production cost and reproducible manufacture. The innovative potential of thin-film thermocouples lies mainly in: (1) the thin-layer technology itself, which, unlike the already mature technologies applied in the production of conventional thermocouple probes, is capable of further improvement through new substrate materials, modified methods for creating electrical contacts, new thermocouple configurations, and adhesive and protective layers; (2) savings in precious and rare metals; and (3) decreasing the thickness of the layers and reducing the overall size of the thermo probe. Measuring the temperature of molten steel leads to a general loss of strength and the subsequent destruction of the probe; here the quartz plates used as substrates for the thin-film thermocouples exhibited the highest resistance. (authors)
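    For small temperature differences, the Seebeck effect described above reduces to a one-line model: the open-circuit voltage is roughly the pair's Seebeck coefficient times the junction temperature difference. The coefficient below is a commonly quoted room-temperature value for type K, used purely for illustration; real thermocouple tables use polynomial reference functions, not a constant S.

```python
S_TYPE_K = 41e-6  # V/K; approximate type-K Seebeck coefficient (assumed constant)

def seebeck_voltage(t_hot, t_cold, seebeck=S_TYPE_K):
    """Open-circuit thermoelectric voltage for a hot/cold junction pair."""
    return seebeck * (t_hot - t_cold)

v = seebeck_voltage(100.0, 25.0)  # 75 K difference
print(f"{v * 1e3:.2f} mV")        # roughly 3.1 mV
```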

  1. Thermocouple design for measuring temperatures of small insects

    Science.gov (United States)

    A.A. Hanson; R.C. Venette

    2013-01-01

    Contact thermocouples often are used to measure surface body temperature changes of insects during cold exposure. However, small temperature changes of minute insects can be difficult to detect, particularly during the measurement of supercooling points. We developed two thermocouple designs, which use 0.51 mm diameter or 0.127 mm diameter copper-constantan wires, to...

  2. Thermocouple correlation transit time flowmeter tests at WCL

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1976-11-01

    Scoping tests indicate the feasibility of using transit-time flowmeters with thermocouple sensors in steady-state steam-water flow. Conclusive results were not obtained. More conclusive results are expected from tests to be conducted in the Semiscale facility with a redesigned transit-time thermocouple sensor

  3. Base metal thermocouples drift rate dependence from thermoelement diameter

    International Nuclear Information System (INIS)

    Pavlasek, P; Duris, S; Palencar, R

    2015-01-01

    Temperature measurements are one of the key factors in many industrial applications that directly affect the quality, effectiveness and safety of manufacturing processes. In many industrial applications these temperature measurements are realized by thermocouples. The accuracy of thermocouples directly affects the quality of the final product of manufacturing, and their durability determines the safety margins required. One of the significant effects that affect the precision of thermocouples is the short- and long-term stability of their voltage output. This stability issue occurs in every type of thermocouple and is caused by multiple factors. In general these factors affect the Seebeck coefficient, a material property that determines the level of generated voltage when the material is exposed to a temperature gradient. Changes of this coefficient result in a change of the thermocouple's voltage output and thus of the indicated temperature, which can lead to production quality issues and safety and health hazards. These alterations can be caused by physical and chemical changes within the thermocouple lead material, and the modification of the coefficient can be temporary or permanent. This paper concentrates on the permanent, or irreversible, changes of the Seebeck coefficient that occur in commonly used swaged MIMS Type N thermocouples. These permanent changes can be seen as a systematic change of the EMF of the thermocouple when it is exposed to a high temperature over a period of time. This change of EMF over time is commonly known as the drift of the thermocouple. This work deals with the time instability of the thermocouple EMF at temperatures above 1200 °C. The instability of the output voltage was related to the lead diameter of the tested thermocouples. This paper concentrates in detail on the change of voltage output of thermocouples of different diameters which were tested at high temperatures for an overall period of more than 210 hours.
The gathered data from this
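    A drift rate of the kind studied above can be quantified by fitting a line to the EMF error recorded during a long isothermal soak and converting the slope to a temperature equivalent via the thermocouple's sensitivity. The soak data and the sensitivity value below are hypothetical, not the paper's measurements.

```python
import numpy as np

# Hypothetical soak record: EMF error (µV) versus time at temperature (h).
hours = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0, 210.0])
emf_error_uV = np.array([0.0, 4.5, 9.2, 13.0, 18.1, 22.4, 26.8, 31.5])

# Linear fit: the slope is the drift rate in µV/h.
slope_uV_per_h, intercept = np.polyfit(hours, emf_error_uV, 1)

S_uV_per_C = 36.0  # assumed type-N sensitivity near 1200 °C, µV/°C
drift_C_per_h = slope_uV_per_h / S_uV_per_C
print(f"drift: {slope_uV_per_h:.3f} µV/h ≈ {drift_C_per_h:.4f} °C/h")
```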

  4. Base metal thermocouples drift rate dependence from thermoelement diameter

    Science.gov (United States)

    Pavlasek, P.; Duris, S.; Palencar, R.

    2015-02-01

    Temperature measurements are one of the key factors in many industrial applications that directly affect the quality, effectiveness and safety of manufacturing processes. In many industrial applications these temperature measurements are realized by thermocouples. The accuracy of thermocouples directly affects the quality of the final product of manufacturing, and their durability determines the safety margins required. One of the significant effects that affect the precision of thermocouples is the short- and long-term stability of their voltage output. This stability issue occurs in every type of thermocouple and is caused by multiple factors. In general these factors affect the Seebeck coefficient, a material property that determines the level of generated voltage when the material is exposed to a temperature gradient. Changes of this coefficient result in a change of the thermocouple's voltage output and thus of the indicated temperature, which can lead to production quality issues and safety and health hazards. These alterations can be caused by physical and chemical changes within the thermocouple lead material, and the modification of the coefficient can be temporary or permanent. This paper concentrates on the permanent, or irreversible, changes of the Seebeck coefficient that occur in commonly used swaged MIMS Type N thermocouples. These permanent changes can be seen as a systematic change of the EMF of the thermocouple when it is exposed to a high temperature over a period of time. This change of EMF over time is commonly known as the drift of the thermocouple. This work deals with the time instability of the thermocouple EMF at temperatures above 1200 °C. The instability of the output voltage was related to the lead diameter of the tested thermocouples. This paper concentrates in detail on the change of voltage output of thermocouples of different diameters which were tested at high temperatures for an overall period of more than 210 hours.
The gathered data from this

  5. Heated junction thermocouple level measurement apparatus

    International Nuclear Information System (INIS)

    Bevilacqua, F.; Burger, J.M.

    1984-01-01

    A liquid level sensing apparatus senses the level of liquid surrounding the apparatus. A plurality of axially spaced sensors are enclosed in a separator tube. The separator tube tends to collapse the level of a two-phase fluid within it into essentially a liquid phase and a gaseous phase, where the collapsed level bears a relationship to the coolant inventory outside the separator tube. The level of the liquid phase is sensed by the level sensing apparatus. The separator tube contains inlet-outlet ports near its top and bottom to equalize the liquid level inside and outside the separator tube when the level fluctuates or the water within the separator tube flashes to steam. Each sensor is comprised of a heater, a heated thermocouple junction and an unheated thermocouple junction within an elongated heat-conductive housing. The heated portion of the housing is enclosed in a splash guard with inlet-outlet ports near the top and bottom to equalize the liquid level inside and outside the splash guard and to eliminate the spurious indications of liquid level change which may arise if water droplets contact the housing in the region of the heater. To prevent steam bubbles entrained in a two-phase fluid cross flow from entering the lateral inlet-outlet ports of the separator tube, the separator tube is enclosed in a support tube, which may in turn be enclosed in an otherwise unused control element assembly shroud. The lateral inlet-outlet ports of the separator tube are axially offset from those of the support tube, at least where the support tube is subjected to cross flow. The shroud is open on the bottom and has lateral inlet-outlet ports to allow the liquid level to equalize inside and outside the shroud
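    The sensing principle above lends itself to a simple sketch: a submerged sensor conducts the heater's energy away, so its heated/unheated junction temperature difference stays small, while an uncovered sensor shows a large difference; scanning the axially spaced sensors from the top down locates the level. The threshold and readings below are hypothetical, not from the patent.

```python
UNCOVERED_DELTA_C = 20.0  # assumed heated-minus-unheated threshold, °C

def liquid_level(deltas_top_down, elevations_m):
    """Elevation of the highest submerged sensor, or None if all are uncovered."""
    for delta, z in zip(deltas_top_down, elevations_m):
        if delta < UNCOVERED_DELTA_C:  # small delta-T: sensor is submerged
            return z
    return None

deltas = [45.0, 38.0, 6.0, 4.0]  # °C, hypothetical readings, top sensor first
elevs = [3.0, 2.0, 1.0, 0.5]     # m, sensor elevations
print(liquid_level(deltas, elevs))
```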

  6. Study of thermocouples for control of high temperatures

    International Nuclear Information System (INIS)

    Villamayor, M.

    1966-12-01

    Previous work has shown that tungsten-rhenium alloy thermocouples are a good instrument for the control of high temperatures. Building on this, the author has studied the French-manufactured W/W-26% Re and W-5% Re/W-26% Re thermocouples intended for the control of temperatures in nuclear reactors up to 2300 °C. In an 'out-pile' study he determines the general characteristics of these thermocouples: average calibration curves, influence of thermal shocks, response times, and alloys allowing cold-junction compensation. The evolution of these thermocouples under thermal neutron flux has been determined by an 'in-pile' study. The observations have led the author to propose a new type of thermocouple made of molybdenum-niobium alloys. (author) [fr

  7. A novel method for in-situ estimation of time constant for core temperature monitoring thermocouples of operating reactors

    International Nuclear Information System (INIS)

    Sylvia, J.I.; Chandar, S. Clement Ravi; Velusamy, K.

    2014-01-01

    Highlights: • Core temperature sensor was mathematically modeled. • Ramp signal generated during reactor operating condition is used. • Procedure and methodology has been demonstrated by applying it to FBTR. • Same technique will be implemented for all fast reactors. - Abstract: Core temperature monitoring system is an important component of reactor protection system in the current generation fast reactors. In this system, multiple thermocouples are housed inside a thermowell of fuel subassemblies. Response time of the thermocouple assembly forms an important input for safety analysis of fast reactor and hence frequent calibration/time constant estimation is essential. In fast reactors the central fuel subassembly is provided with bare fast response thermocouples to detect under cooling events in reactor and take proper safety action. On the other hand, thermocouples in thermowell are mainly used for blockage detection in individual fuel subassemblies. The time constant of thermocouples in thermowell can drift due to creep, vibration and thermal fatigue of the thermowell assembly. A novel method for in-situ estimation of time constant is proposed. This method uses the Safety Control Rod Accelerated Mechanism (SCRAM) or lowering of control Rod (LOR) signals of the reactor along with response of the central subassembly thermocouples as reference data. Validation of the procedure has been demonstrated by applying it to FBTR
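    The in-situ estimation described above can be illustrated with a first-order lag model: during a SCRAM or LOR transient, the thermowell thermocouple's approach to the new temperature is treated as exponential, and the time constant falls out of a log-linear fit. The transient below is synthetic; the actual method uses the fast bare thermocouples of the central subassembly as the reference signal.

```python
import numpy as np

tau_true = 8.0                   # s; the "unknown" time constant
t = np.linspace(0.0, 60.0, 601)  # s
T0, Tf = 550.0, 400.0            # °C before and after the transient (assumed)

# Synthetic thermowell-thermocouple response to a step-like SCRAM transient.
response = Tf + (T0 - Tf) * np.exp(-t / tau_true)

# Log-linear fit: ln(T - Tf) = ln(T0 - Tf) - t / tau
y = np.log(response - Tf)
slope, intercept = np.polyfit(t, y, 1)
tau_est = -1.0 / slope
print(f"estimated time constant: {tau_est:.2f} s")
```

    With noisy plant data, a least-squares fit of the full lag model (or matching against the reference thermocouple's response) would replace this idealized log-linear step.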

  8. Study of thermocouples for control of high temperatures; Etude de thermocouples pour le reperage des hautes temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Villamayor, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires; Faculte des Sciences de l' Universite de Lyon - 69 (France)

    1967-07-01

    Previous work has shown that tungsten-rhenium alloy thermocouples are a good instrument for the control of high temperatures. Building on this, the author has studied the French-manufactured W/W-26% Re and W-5% Re/W-26% Re thermocouples intended for the control of temperatures in nuclear reactors up to 2300 °C. In an 'out-pile' study he determines the general characteristics of these thermocouples: average calibration curves, influence of thermal shocks, response times, and alloys allowing cold-junction compensation. The evolution of these thermocouples under thermal neutron flux has been determined by an 'in-pile' study. The observations have led the author to propose a new type of thermocouple made of molybdenum-niobium alloys. (author) [French] Des travaux anterieurs ont montre que les thermocouples des alliages tungstene-rhenium etaient susceptibles de reperer avec precision des hautes temperatures. A partir de la, l'auteur a etudie les thermocouples W/W 26 pour cent Re et W 5 pour cent Re/W 26 pour cent Re de fabrication francaise et destines au controle des temperatures dans les reacteurs nucleaires, jusqu'a 2300 deg. C. Dans l'etude 'hors-pile' il a determine les caracteristiques generales de ces thermocouples: courbes d'etalonnage moyen, influence des chocs thermiques, temps de reponse, et alliages assurant la compensation de soudure froide. L'etude 'en-pile' a permis de rendre compte de l'evolution de ces thermocouples sous flux neutroniques. Les phenomenes observes ont conduit l'auteur a proposer un nouveau type de thermocouples constitues d'alliages molybdene-niobium. (auteur)

  9. Metallic and Ceramic Thin Film Thermocouples for Gas Turbine Engines

    Directory of Open Access Journals (Sweden)

    Otto J. Gregory

    2013-11-01

    Full Text Available Temperatures of hot section components in today’s gas turbine engines reach as high as 1,500 °C, making in situ monitoring of the severe temperature gradients within the engine rather difficult. Therefore, there is a need to develop instrumentation (i.e., thermocouples and strain gauges for these turbine engines that can survive these harsh environments. Refractory metal and ceramic thin film thermocouples are well suited for this task since they have excellent chemical and electrical stability at high temperatures in oxidizing atmospheres, they are compatible with thermal barrier coatings commonly employed in today’s engines, they have greater sensitivity than conventional wire thermocouples, and they are non-invasive to combustion aerodynamics in the engine. Thin film thermocouples based on platinum:palladium and indium oxynitride:indium tin oxynitride as well as their oxide counterparts have been developed for this purpose and have proven to be more stable than conventional type-S and type-K thin film thermocouples. The metallic and ceramic thin film thermocouples described within this paper exhibited remarkable stability and drift rates similar to bulk (wire thermocouples.

  10. The EMEFS model evaluation

    International Nuclear Information System (INIS)

    Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.; Karamchandani, P.; Venkatram, A.; Fung, C.; Misra, P.K.; Hansen, D.A.; Chang, J.S.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs
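    The difference statistics and correlations mentioned above can be sketched as a small scoring function over paired predicted and observed values. The numbers below are invented; the EMEFS protocol defines the actual paired quantities and comparison methods.

```python
import numpy as np

def evaluate(pred, obs):
    """Bias, RMSE and Pearson correlation of predicted vs. observed values."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = float(np.mean(pred - obs))
    rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
    r = float(np.corrcoef(pred, obs)[0, 1])
    return bias, rmse, r

obs = [1.2, 0.8, 2.5, 3.1, 1.9]   # invented observed deposition values
pred = [1.0, 1.1, 2.2, 3.5, 1.7]  # invented model predictions
bias, rmse, r = evaluate(pred, obs)
print(f"bias={bias:+.3f}  rmse={rmse:.3f}  r={r:.3f}")
```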

  11. The EMEFS model evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Barchet, W.R. (Pacific Northwest Lab., Richland, WA (United States)); Dennis, R.L. (Environmental Protection Agency, Research Triangle Park, NC (United States)); Seilkop, S.K. (Analytical Sciences, Inc., Durham, NC (United States)); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. (Atmospheric Environment Service, Downsview, ON (Canada)); Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.

  12. Introducing Program Evaluation Models

    Directory of Open Access Journals (Sweden)

    Raluca GÂRBOAN

    2008-02-01

    Full Text Available Program and project evaluation models can be extremely useful in project planning and management. The aim is to ask the right questions as soon as possible in order to see in time and deal with the unwanted program effects, as well as to encourage the positive elements of the project impact. In short, different evaluation models are used in order to minimize losses and maximize the benefits of the interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.

  13. Data modeling and evaluation

    International Nuclear Information System (INIS)

    Bauge, E.; Hilaire, S.

    2006-01-01

    This lecture is devoted to the nuclear data evaluation process, during which the current knowledge (experimental or theoretical) of nuclear reactions is condensed and synthesised into a computer file (the evaluated data file) that application codes can process and use for simulation calculations. After an overview of the content of evaluated nuclear data files, we describe the different methods used for evaluating nuclear data. We specifically focus on the model based approach which we use to evaluate data in the continuum region. A few examples, coming from the day to day practice of data evaluation will illustrate this lecture. Finally, we will discuss the most likely perspectives for improvement of the evaluation process in the next decade. (author)

  14. Flow measurements using noise signals of axially displaced thermocouples

    Energy Technology Data Exchange (ETDEWEB)

    Kozma, R.; Hoogenboom, J.E. (Interuniversitair Reactor Inst., Delft (Netherlands))

    1990-01-01

    Determination of the flow rate of the coolant in the cooling channels of nuclear reactors is an important aspect of core monitoring. It is usually impossible to measure the flow by flowmeters in the individual channels due to the lack of space and safety reasons. An alternative method is based on the analysis of noise signals of the available in-core detectors. In such a noise method, a transit time which characterises the propagation of thermohydraulic fluctuations (density or temperature fluctuations) in the coolant is determined from the correlation between the noise signals of axially displaced detectors. In this paper, the results of flow measurements using axially displaced thermocouples in the channel wall will be presented. The experiments have been performed in a simulated MRT-type fuel assembly located in the research reactor HOR of the Interfaculty Reactor Institute, Delft. It was found that the velocities obtained via temperature noise correlation methods are significantly larger than the area-averaged velocity in the single-phase coolant flow. Model calculations show that the observed phenomenon can be explained by effects due to the radial velocity distribution in the channel. (author).
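    The transit-time principle described above can be sketched directly: temperature fluctuations seen by the upstream thermocouple reappear, delayed, at the downstream one; the lag of the cross-correlation peak gives the transit time, and velocity is sensor spacing divided by transit time. The spacing, sampling rate and delay below are assumed for illustration, and the sketch ignores the radial-velocity-profile bias the paper reports.

```python
import numpy as np

fs = 100.0           # sampling rate, Hz (assumed)
spacing = 0.30       # axial distance between thermocouples, m (assumed)
delay_samples = 25   # "unknown" transport delay: 0.25 s

rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)
upstream = np.convolve(noise, np.ones(5) / 5, mode="same")  # band-limited noise
downstream = np.roll(upstream, delay_samples)               # delayed replica

# The cross-correlation peak sits at the transit time.
corr = np.correlate(downstream, upstream, mode="full")
lags = np.arange(-len(upstream) + 1, len(upstream))
transit = lags[np.argmax(corr)] / fs

velocity = spacing / transit
print(f"transit time {transit:.2f} s -> velocity {velocity:.2f} m/s")
```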

  15. Integrated Assessment Model Evaluation

    Science.gov (United States)

    Smith, S. J.; Clarke, L.; Edmonds, J. A.; Weyant, J. P.

    2012-12-01

    Integrated assessment models of climate change (IAMs) are widely used to provide insights into the dynamics of the coupled human and socio-economic system, including emission mitigation analysis and the generation of future emission scenarios. Similar to the climate modeling community, the integrated assessment community has a two-decade history of model inter-comparison, which has served as one of the primary venues for model evaluation and confirmation. While analysis of historical trends in the socio-economic system has long played a key role in diagnostics of future scenarios from IAMs, formal hindcast experiments are just now being contemplated as evaluation exercises. Some initial thoughts on setting up such IAM evaluation experiments are discussed. Socio-economic systems do not follow strict physical laws, which means that evaluation needs to take place in a context, unlike that of physical system models, in which there are few fixed, unchanging relationships. Of course strict validation of even earth system models is not possible (Oreskes et al. 2004), a fact borne out by the inability of models to constrain the climate sensitivity. Energy-system models have also been grappling with some of the same questions over the last quarter century. For example, one of "the many questions in the energy field that are waiting for answers in the next 20 years" identified by Hans Landsberg in 1985 was "Will the price of oil resume its upward movement?" Of course we are still asking this question today. While, arguably, even fewer constraints apply to socio-economic systems, numerous historical trends and patterns have been identified, although often only in broad terms, that are used to guide the development of model components, parameter ranges, and scenario assumptions. IAM evaluation exercises are expected to provide useful information for interpreting model results and improving model behavior. 
A key step is the recognition of model boundaries, that is, what is inside

  16. Training effectiveness evaluation model

    International Nuclear Information System (INIS)

    Penrose, J.B.

    1993-01-01

    NAESCO's Training Effectiveness Evaluation Model (TEEM) integrates existing evaluation procedures with new procedures. The new procedures are designed to measure training impact on organizational productivity. TEEM seeks to enhance organizational productivity through proactive training focused on operation results. These results can be identified and measured by establishing and tracking performance indicators. Relating training to organizational productivity is not easy. TEEM is a team process. It offers strategies to assess more effectively organizational costs and benefits of training. TEEM is one organization's attempt to refine, manage and extend its training evaluation program

  17. Mineral insulated thermocouples - installation in steam generating plant

    International Nuclear Information System (INIS)

    Bridges, W.J.; Brown, J.F.

    1980-01-01

    The main areas of interest considered are Central Station fossil-fuel-fired boilers of around 500 MW capacity, AGR boilers, and industrial and research development projects. While the requirement for temperature measurement in each of these areas may vary, the techniques adopted to overcome installation and protection problems created by thermal, chemical and mechanical hazards remain basically the same. The reasons for temperature measurement are described, together with methods of attachment development and procedures for protection of the thermocouple along its route length until its exit from the hazardous environment. The relative accuracies of the different attachments are discussed, along with factors influencing the life of the thermocouple. In many instances thermocouple installation is a once-only opportunity and/or an expensive exercise. It is therefore essential to develop and apply an effective quality control system during the installation phase. An effective system is described. Finally, a brief outline of possible future trends is given. (author)

  18. Developmental Education Evaluation Model.

    Science.gov (United States)

    Perry-Miller, Mitzi; And Others

    A developmental education evaluation model designed to be used at a multi-unit urban community college is described. The purpose of the design was to determine the cost effectiveness/worth of programs in order to initiate self-improvement. A needs assessment was conducted by interviewing and taping the responses of students, faculty, staff, and…

  19. CMAQ Model Evaluation Framework

    Science.gov (United States)

    CMAQ is tested to establish the modeling system’s credibility in predicting pollutants such as ozone and particulate matter. Evaluation of CMAQ has been designed to assess the model’s performance for specific time periods and for specific uses.

  20. An Innovative Flow-Measuring Device: Thermocouple Boundary Layer Rake

    Science.gov (United States)

    Hwang, Danny P.; Fralick, Gustave C.; Martin, Lisa C.; Wrbanek, John D.; Blaha, Charles A.

    2001-01-01

    An innovative flow-measuring device, a thermocouple boundary layer rake, was developed. The sensor detects the flow by using a thin-film thermocouple (TC) array to measure the temperature difference across a heater strip. The heater and TC arrays are microfabricated on a constant-thickness quartz strut with low heat conductivity. The device can measure the velocity profile well into the boundary layer, about 65 μm from the surface, which is almost four times closer to the surface than has been possible with the previously used total pressure tube.

  1. Magnetic tunnel junction thermocouple for thermoelectric power harvesting

    Science.gov (United States)

    Böhnert, T.; Paz, E.; Ferreira, R.; Freitas, P. P.

    2018-05-01

    The thermoelectric power generated in magnetic tunnel junctions (MTJs) is determined as a function of the tunnel barrier thickness for a matched electric circuit. This study suggests that a lower resistance-area product and higher tunnel magnetoresistance will maximize the thermoelectric power output of the MTJ structures. Further, the thermoelectric behavior of a series of two MTJs, an MTJ thermocouple, is investigated as a function of its magnetic configurations. In alternating magnetic configurations the thermovoltages cancel each other, while the magnetic contribution remains. A large array of MTJ thermocouples could amplify the magnetic thermovoltage signal significantly.

  2. Failure of sheathed thermocouples due to thermal cycling

    International Nuclear Information System (INIS)

    Anderson, R.L.; Ludwig, R.L.

    1982-03-01

    Open circuit failures (up to 100%) in small-diameter thermocouples used in electrically heated nuclear fuel rod simulator prototypes during thermal cycling tests were investigated to determine the cause(s) of the failures. The experiments conducted to determine the relative effects of differential thermal expansion, wire size, grain size, and manufacturing technology are described. It was concluded that the large grain size and embrittlement which result from certain common manufacturing annealing and drawing procedures were a major contributing factor in the breakage of the thermocouple wires

  3. Heat penetration and thermocouple location in home canning.

    Science.gov (United States)

    Etzel, Mark R; Willmore, Paola; Ingham, Barbara H

    2015-01-01

    We processed applesauce, tomato juice, and cranberries in pint jars in a boiling water canner to test thermal processing theories against home canning of high-acid foods. For each product, thermocouples were placed at various heights in the jar. Values for f h (heating), f cl (cooling), and F 82.2°C (lethality) were determined for each thermocouple location, and did not depend substantially on thermocouple location, in accordance with heat transfer theory. There was a cold spot in the jar, but the cold spot during heating became the hot spot during cooling. During heating, the geometric center was the last to heat, and remained coldest the longest, but during cooling, it was also the last to cool, and remained hottest the longest. The net effect was that calculated lethality in home canning was not affected by thermocouple location. Most of the lethality during home canning occurred during air cooling, making cooling of home canned foods of great importance. Calculated lethality was far greater than the required 5-log reduction of spores in tomato juice and vegetative cells in cranberries, suggesting a wide margin of safety for approved home-canning processes for high-acid foods.
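    Lethality values like the F 82.2°C reported above follow from integrating the lethal rate over the time-temperature history, F = ∫ 10^((T(t) − T_ref)/z) dt. In the sketch below the reference temperature matches the abstract's notation (82.2 °C), but the z-value and the temperature profile are assumed for illustration, not taken from the study.

```python
import numpy as np

def lethality(times_min, temps_c, t_ref=82.2, z=8.3):
    """Trapezoidal integral of the lethal rate 10**((T - t_ref)/z), in minutes."""
    times = np.asarray(times_min, dtype=float)
    rates = 10.0 ** ((np.asarray(temps_c, dtype=float) - t_ref) / z)
    return float(np.sum((rates[1:] + rates[:-1]) / 2.0 * np.diff(times)))

t = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0]    # min, synthetic profile
T = [60.0, 75.0, 88.0, 95.0, 95.0, 80.0, 65.0]  # °C at the measured location
F = lethality(t, T)
print(f"F = {F:.1f} min")
```

    Because the lethal rate grows exponentially with temperature, the time spent near the peak (and during slow air cooling) dominates the integral, consistent with the paper's observation that most lethality accrued during cooling.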

  4. Realization of Copper Melting Point for Thermocouple Calibrations

    Directory of Open Access Journals (Sweden)

    Y. A. ABDELAZIZ

    2011-08-01

    Full Text Available Although the temperature stability and uncertainty of the freezing plateau are better than those of the melting plateau for most thermometry fixed points, the realization of melting plateaus is easier than that of freezing plateaus for metal fixed points. It would be convenient if melting points could be used instead of freezing points in the calibration of standard noble metal thermocouples, because of the easier realization and longer plateau duration of melting plateaus. In this work a comparison between the melting and freezing points of copper (Cu) was carried out using standard noble metal thermocouples. Platinum - platinum 10 % rhodium (type S), platinum 30 % rhodium / platinum 6 % rhodium (type B) and platinum - palladium (Pt/Pd) thermocouples are used in this study. An uncertainty budget analysis of the melting points and freezing points is presented. The experimental results show that it is possible to replace the freezing point with the melting point of the copper cell in the calibration of standard noble metal thermocouples in secondary-level laboratories if the optimal methods of realization of melting points are used.

  5. Pragmatic geometric model evaluation

    Science.gov (United States)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can only be assessed subjectively. For many practical applications in industry or risk assessment (e. g. geothermal drilling), a quantitative estimate of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data such as geological maps, borehole data and conceptually driven construction of subsurface elements (e. g. fault networks). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, partly because data rights, data policies and modelling software differ between the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive, hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. 
Instead of calculating a multitude of different models by varying some input data or parameters as it is done by Monte-Carlo-simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  6. The development of a fast response thermocouple for use in liquid metals

    International Nuclear Information System (INIS)

    Morss, A.G.; Vincent, B.

    1987-03-01

    Work carried out at Berkeley Nuclear Laboratories to develop a fast-response thermocouple for use in liquid metals is described. This thermocouple, because of its unique construction, has a junction mass approaching zero, and hence its frequency response should be very high. Some of the problems of manufacture are discussed, in particular the high quality of seal required to avoid ingress of liquid metal. A comparison of results obtained with the fast-response thermocouple and with conventional stainless-steel-sheathed thermocouples is made. The improved response of the new thermocouple is clearly visible, confirming that measurements made with sheathed thermocouples suffer attenuation. It is concluded that results obtained with the fast-response thermocouple are close to the real magnitude of temperature fluctuations present in turbulent flow. It is also demonstrated that, with suitable corrections, results obtained with sheathed thermocouples can be used to estimate the real signals present in the flow. (author)

  7. Zircaloy-sheathed element rods fitted with thermo-couples

    International Nuclear Information System (INIS)

    Bernardy de Sigoyer, B.; Jacques, F.; Thome, P.

    1963-01-01

    In order to carry out thermal conductivity measurements on UO2 in conditions similar to those under which fuel rods are used, it was necessary to measure the temperature at the interior of a fuel element sheathed in zircaloy. The temperatures are taken with Thermocoax type thermocouples, that is to say fitted with a very thin sheath of stainless steel or Inconel. It is also known that fusion welding of zircaloy onto stainless steel is impossible, and that high-temperature brazed joints are very difficult because the brazes are highly aggressive. The technique used consists in brazing the thermocouples to relatively large stainless steel parts and then joining these plugs by electron-bombardment welding to diffused stainless steel-zircaloy couplings. The properties of these diffused couplings and of the brazed joints were studied; the various stages in the fabrication of the containers are also described. (authors) [fr

  8. Classification of Unknown Thermocouple Types Using Similarity Factor Measurement

    Directory of Open Access Journals (Sweden)

    Seshu K. DAMARLA

    2011-01-01

    Full Text Available In contrast to classification using the PCA method, a new methodology is proposed for type identification of unknown thermocouples. The new methodology is based on calculating the degree of similarity between two multivariate datasets using two types of similarity factors. One similarity factor is based on principal component analysis and the angles between the principal component subspaces, while the other is based on the Mahalanobis distance between the datasets. Datasets containing thermo-emfs over given temperature ranges, formed by experimentation for each type of thermocouple (e.g. J, K, S, T, R, E, B and N, are considered as reference datasets. Datasets corresponding to the unknown type are captured. Similarity factors between pairs of datasets, one being the unknown type and the other being each known type in turn, are compared. The unknown thermocouple is assigned the class of the known type with which its similarity factor is maximal.
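The PCA-based similarity factor described above can be sketched as the average squared cosine of the angles between the leading principal-component subspaces of two datasets (Krzanowski's S_PCA); identical datasets score exactly 1. The function names, the choice of k = 2 components, and the synthetic data below are illustrative assumptions, not taken from the paper, and the Mahalanobis-distance factor is omitted for brevity.

```python
import numpy as np

# Sketch of the PCA similarity factor: the mean squared cosine of the
# angles between the top-k principal-component subspaces of two datasets.
# k and the synthetic data are illustrative assumptions.

def pca_subspace(data, k):
    """Top-k right singular vectors (PC loading vectors) of centered data."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T  # columns are orthonormal loading vectors

def similarity_factor(x, y, k=2):
    l, m = pca_subspace(x, k), pca_subspace(y, k)
    # trace(L^T M M^T L) / k = mean of cos^2 of the subspace angles
    return float(np.trace(l.T @ m @ m.T @ l)) / k

rng = np.random.default_rng(0)
a = rng.normal(size=(100, 3))                      # stand-in for an emf dataset
print(similarity_factor(a, a))                     # identical data -> 1.0 (to rounding)
print(similarity_factor(a, a @ np.diag([1.0, 1.0, 5.0])))  # rescaled axes -> lower
```

Classification then reduces to computing this factor between the unknown dataset and each reference dataset and taking the maximum.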

  9. Recent improvements on micro-thermocouple based SThM

    OpenAIRE

    Nguyen, T. P.; Thiery, L.; Teyssieux, D.; Briand, Danick; Vairac, P.

    2017-01-01

    The scanning thermal microscope (SThM) has become a versatile tool for local surface temperature mapping or measuring thermal properties of solid materials. In this article, we present recent improvements in a SThM system, based on a micro-wire thermocouple probe associated with a quartz tuning fork for contact strength detection. Some results obtained on an electrothermal micro-hotplate device, operated in active and passive modes, allow demonstrating its performance as a coupled force detec...

  10. Thermocouple placement and hot spots in radioactive waste tanks

    International Nuclear Information System (INIS)

    Barker, J.J.

    1991-06-01

    Analytical solutions available in Carslaw and Jaeger's Conduction of Heat in Solids for continuous point sources and for continuous finite sources are used to demonstrate that placement of thermocouples on a fine enough grid to detect a hot spot is impracticable for existing waste tanks but fortunately not necessary. Graphs covering ranges of diffusivities, times, temperatures and heat generation rates are included. 2 refs., 8 figs., 5 tabs
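The continuous point-source solution from Carslaw and Jaeger used in the analysis above has a closed form: the temperature rise at radius r and time t in an infinite solid is ΔT = q/(4πkr) · erfc(r/(2√(αt))), approaching the steady value q/(4πkr). A small sketch follows; the property values are illustrative assumptions, not figures from the report.

```python
from math import pi, erfc, sqrt

# Temperature rise around a continuous point heat source in an infinite
# solid (Carslaw & Jaeger): dT = q/(4*pi*k*r) * erfc(r / (2*sqrt(alpha*t))).
# The heat rate and property values below are illustrative assumptions.

def point_source_rise(q_w, k_w_mk, alpha_m2s, r_m, t_s):
    """Temperature rise (K) at radius r_m after time t_s."""
    return q_w / (4.0 * pi * k_w_mk * r_m) * erfc(r_m / (2.0 * sqrt(alpha_m2s * t_s)))

q, k, alpha = 100.0, 1.0, 1e-6            # W, W/(m K), m^2/s
print(point_source_rise(q, k, alpha, 0.5, 1e9))  # close to the steady value q/(4*pi*k*r)
```

Evaluating this expression over a grid of radii shows how quickly the rise falls off with distance from a hot spot, which is the basis for the argument that a thermocouple grid fine enough to detect one is impracticable.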

  11. Recommendations for the specification of thermocouples for nuclear applications

    International Nuclear Information System (INIS)

    1977-05-01

    This Code of Practice is a guide for use in the preparation of individual specifications to cover, as fully as possible, the conditions governing the supply of raw materials and the ordering, manufacture, testing, inspection, handling and installation of thermocouples for use in nuclear environments, in order that reliable, consistent and generally acceptable results can be obtained. The insulation resistance values quoted in this document apply to magnesium oxide. If other insulants are called for, appropriate values must be specified. (author)

  12. Effects of thermocouple installation and location on fuel rod temperature measurements

    International Nuclear Information System (INIS)

    McCormick, R.D.

    1983-01-01

    This paper describes the results of analyses of nuclear fuel rod cladding temperature data obtained during in-reactor experiments under steady state and transient (simulated loss-of-coolant accident) operating conditions. The objective of the analyses was to determine the effect of thermocouple attachment method and location on measured thermal response. The use of external thermocouples increased the time to critical heat flux (CHF), reduced the blowdown peak temperature, and enhanced rod quench. A comparison of laser welded and resistance welded external thermocouple responses showed that the laser welding technique reduced the indicated cladding steady state temperatures and provided shorter time-to-CHF. A comparison of internal welded and embedded thermocouples indicated that the welded technique gave generally unsatisfactory cladding temperature measurements. The embedded thermocouple gave good, consistent results, but was possibly more fragile than the welded thermocouples. Detailed descriptions of the thermocouple designs, attachment methods and locations, and test conditions are provided

  13. Calibration Technique of the Irradiated Thermocouple using Artificial Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jin Tae; Joung, Chang Young; Ahn, Sung Ho; Yang, Tae Ho; Heo, Sung Ho; Jang, Seo Yoon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    To correct the signals, the degradation rate of the sensors needs to be analyzed, and re-calibration of the sensors should follow periodically. In particular, because thermocouples instrumented in a nuclear fuel rod are degraded owing to the high neutron fluence generated by the nuclear fuel, a periodic re-calibration process is necessary. However, despite the re-calibration of the thermocouple, the measurement error will increase until the next re-calibration. In this study, based on periodically calibrated temperature-voltage data, an interpolation technique using an artificial neural network is introduced to minimize the calibration error of the C-type thermocouple under the irradiation test. The test results show that the calculated voltages derived from the interpolation function are in good agreement with the experimental sampling data, and that they also accurately interpolate the voltages at arbitrary temperature and neutron fluence. That is, once the reference data are obtained by experiment, it is possible to accurately calibrate the voltage signal at a given neutron fluence and temperature using an artificial neural network.
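The interpolation idea above, fitting emf as a smooth function of temperature and fluence from periodic calibration samples and then querying it at arbitrary points, can be sketched with a tiny neural network. Everything below (the synthetic drift surface, the network size, and the training settings) is an illustrative assumption; the paper's actual architecture and data are not given in the abstract.

```python
import numpy as np

# Minimal sketch of ANN interpolation of emf = f(temperature, fluence).
# The synthetic data, network size, and training settings are all
# illustrative assumptions, not taken from the study.

rng = np.random.default_rng(1)
T = rng.uniform(0.0, 1.0, size=(200, 1))     # normalized temperature
phi = rng.uniform(0.0, 1.0, size=(200, 1))   # normalized neutron fluence
emf = T + 0.1 * T * phi                      # pseudo-emf whose drift grows with fluence
x = np.hstack([T, phi])

# One tanh hidden layer, trained by full-batch gradient descent.
w1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
w2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(inp):
    h = np.tanh(inp @ w1 + b1)
    return h, h @ w2 + b2

_, y0 = forward(x)
loss0 = float(np.mean((y0 - emf) ** 2))
for _ in range(5000):
    h, y = forward(x)
    g = 2.0 * (y - emf) / len(x)             # dLoss/dOutput
    gh = (g @ w2.T) * (1.0 - h ** 2)         # backprop through tanh (old w2)
    w2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    w1 -= lr * x.T @ gh; b1 -= lr * gh.sum(0)

_, y1 = forward(x)
loss1 = float(np.mean((y1 - emf) ** 2))
print(loss1 < loss0)                         # training reduces the fit error
```

Once trained, `forward` can be evaluated at any (temperature, fluence) pair, which is the role the interpolation function plays between scheduled re-calibrations.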

  14. Zircaloy-sheathed element rods fitted with thermo-couples; Barre combustible a thermocouple gainee de zircaloy

    Energy Technology Data Exchange (ETDEWEB)

    Bernardy de Sigoyer, B; Jacques, F; Thome, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-07-01

    In order to carry out thermal conductivity measurements on UO{sub 2} in conditions similar to those under which fuel rods are used, it was necessary to measure the temperature at the interior of a fuel element sheathed in zircaloy. The temperatures are taken with Thermocoax type thermocouples, that is to say fitted with a very thin sheath of stainless steel or Inconel. It is also known that fusion welding of zircaloy onto stainless steel is impossible, and that high-temperature brazed joints are very difficult because the brazes are highly aggressive. The technique used consists in brazing the thermocouples to relatively large stainless steel parts and then joining these plugs by electron-bombardment welding to diffused stainless steel-zircaloy couplings. The properties of these diffused couplings and of the brazed joints were studied; the various stages in the fabrication of the containers are also described. (authors)

  15. Evaluating topic models with stability

    CSIR Research Space (South Africa)

    De Waal, A

    2008-11-01

    Full Text Available Topic models are unsupervised techniques that extract likely topics from text corpora, by creating probabilistic word-topic and topic-document associations. Evaluation of topic models is a challenge because (a) topic models are often employed...

  16. A comprehensive survey of thermoelectric homogeneity of commonly used thermocouple types

    Science.gov (United States)

    Machin, Jonathan; Tucker, Declan; Pearce, Jonathan V.

    2018-06-01

    Thermocouples are widely used as temperature sensors in industry. The electromotive force generated by a thermocouple is produced in a temperature gradient and not at the thermocouple tip. This means that the thermoelectric inhomogeneity represents one of the most important contributions to the overall measurement uncertainty associated with thermocouples. To characterise this effect, and to provide some general recommendations concerning the magnitude of this contribution to use when formulating uncertainty analyses, a comprehensive literature survey has been performed. Significant information was found for Types K, N, R, S, B, Pt/Pd, Au/Pt and various other Pt/Rh thermocouples. In the case of Type K and N thermocouples, the survey has been augmented by a substantial amount of data based on calibrations of mineral-insulated, metal-sheathed thermocouple cable reels from thermocouple manufacturers. Some general conclusions are drawn and outline recommendations given concerning typical values for the uncertainty arising from thermoelectric inhomogeneity for the most widely used thermocouple types in the as-new state. It is stressed that these recommendations should only be heeded when individual homogeneity measurements are not possible. It is also stressed that the homogeneity can deteriorate rapidly during use, particularly for base metal thermocouples.

  17. Characterization of a Method for Inverse Heat Conduction Using Real and Simulated Thermocouple Data

    Science.gov (United States)

    Pizzo, Michelle E.; Glass, David E.

    2017-01-01

    It is often impractical to instrument the external surface of high-speed vehicles due to the aerothermodynamic heating. Temperatures can instead be measured internal to the structure using embedded thermocouples, and direct and inverse methods can then be used to estimate temperature and heat flux on the external surface. Two thermocouples embedded at different depths are required to solve the direct and inverse problems, and filtering schemes are used to reduce noise in the measured data. Accuracy in the estimated surface temperature and heat flux depends on several factors: the thermocouple location through the thickness of the material, the sensitivity of the surface solution to error in the specified location of the embedded thermocouples, and the sensitivity to error in the thermocouple data. The effect of these factors on solution accuracy is studied using the methodology discussed in the work of Pizzo et al. [1]. A numerical study is performed to determine whether there is an optimal depth at which to embed one thermocouple through the thickness of a material, assuming that a second thermocouple is installed on the back face. Solution accuracy is discussed for a range of embedded thermocouple depths. Moreover, the sensitivity of the surface solution to (a) the error in the specified location of the embedded thermocouple and (b) the error in the thermocouple data is quantified using numerical simulation, and the results are discussed.
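A back-of-envelope version of the two-thermocouple idea helps fix intuition: under (quasi-)steady one-dimensional conduction, two sensors at depths x1 < x2 give the heat flux from Fourier's law and the surface temperature by linear extrapolation to x = 0. This steady sketch is only an illustration of why two depths are needed; the material values are assumptions, and the paper's actual method solves the much harder transient inverse problem.

```python
# Steady 1-D sketch: two embedded thermocouples give the conductive heat
# flux (Fourier's law) and an extrapolated surface temperature. The
# depths, temperatures, and conductivity below are illustrative.

def surface_estimate(t1_c, x1_m, t2_c, x2_m, k_w_mk):
    """Return (surface temperature in C, heat flux in W/m^2, positive inward)."""
    q = k_w_mk * (t1_c - t2_c) / (x2_m - x1_m)  # Fourier's law between the sensors
    t_surface = t1_c + q * x1_m / k_w_mk        # linear extrapolation to x = 0
    return t_surface, q

ts, q = surface_estimate(400.0, 0.005, 350.0, 0.015, 20.0)
print(ts, q)  # -> 425.0 100000.0
```

In the transient case the interior temperatures lag and attenuate the surface history, which is why the inverse methods and filtering schemes discussed in the paper are required.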

  18. Evaluation of the autoregression time-series model for analysis of a noisy signal

    International Nuclear Information System (INIS)

    Allen, J.W.

    1977-01-01

    The autoregression (AR) time-series model of a continuous noisy signal was statistically evaluated to determine quantitatively the uncertainties of the model order, the model parameters, and the model's power spectral density (PSD). The result of such a statistical evaluation enables an experimenter to decide whether an AR model can adequately represent a continuous noisy signal and be consistent with the signal's frequency spectrum, and whether it can be used for on-line monitoring. Although evaluations of other types of signals have been reported in the literature, no direct reference has been found to AR model's uncertainties for continuous noisy signals; yet the evaluation is necessary to decide the usefulness of AR models of typical reactor signals (e.g., neutron detector output or thermocouple output) and the potential of AR models for on-line monitoring applications. AR and other time-series models for noisy data representation are being investigated by others since such models require fewer parameters than the traditional PSD model. For this study, the AR model was selected for its simplicity and conduciveness to uncertainty analysis, and controlled laboratory bench signals were used for continuous noisy data. (author)
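The AR parameter estimation being evaluated above is commonly done through the Yule-Walker equations, which relate the model coefficients to the signal's autocovariances. A minimal sketch follows; feeding it the exact autocovariances of a known AR(1) process recovers the coefficient exactly, the kind of consistency check an uncertainty evaluation builds on. The order and coefficient values are illustrative assumptions.

```python
import numpy as np

# Sketch of AR model fitting via the Yule-Walker equations:
# Toeplitz(r[0..p-1]) a = r[1..p]. The example process is illustrative.

def yule_walker(r):
    """Solve the Yule-Walker system for AR coefficients from autocovariances r[0..p]."""
    p = len(r) - 1
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, np.array(r[1:]))

# Exact autocovariances of x_t = 0.6 x_{t-1} + e_t (unit-variance noise):
# r(k) = 0.6**k * r(0) with r(0) = 1 / (1 - 0.6**2).
r0 = 1.0 / (1.0 - 0.36)
r = [r0, 0.6 * r0, 0.36 * r0]
print(yule_walker(r))  # recovers [0.6, 0.0] (to rounding)
```

With estimated (rather than exact) autocovariances from a finite noisy record, the solved coefficients carry sampling uncertainty, which is precisely what the study quantifies for the model order, parameters, and PSD.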

  19. Stability Studies of a New Design Au/Pt Thermocouple Without a Strain Relieving Coil

    Science.gov (United States)

    Jahan, Ferdouse; Ballico, Mark

    2007-12-01

    The performance of a simple, new design Au/Pt thermocouple developed by NMIA is assessed. This thermocouple is proposed as a more accurate replacement, over the temperature range from 0 to 1,000°C, for the commonly used Type R and S industrial transfer standards, in a robust form familiar to industrial calibration laboratories. Due to the significantly different thermal expansions of the Au and Pt thermoelements, reported designs of the Au/Pt thermocouple incorporate a strain-relieving coil or bridge at the thermocouple junction. As the strain relieving coil is mechanically delicate, these thermocouples are usually mounted in a protective quartz tube assembly, like a standard platinum resistance thermometer (SPRT). Although providing uncertainties at the mK level, they are more delicate than the commonly used Type R and S thermocouples. A new and simple design of the Au/Pt thermocouple was developed in which the differential thermal expansion between Au and Pt is accommodated in the thermocouple leads, facilitated by a special head design. The resulting thermocouple has the appearance and robustness of the traditional Type R and S thermocouples, while retaining stability better than 10 mK up to 961°C. Three thermocouples of this design were calibrated at fixed points and by comparison to SPRTs in a stirred salt bath. In order to assess possible impurity migration, strain effects, and mechanical robustness, sequences of heat treatment up to a total of 500 h together with over 50 thermal cycles from 900°C to ambient were performed. The effect of these treatments on the calibration was assessed, demonstrating the sensors to be robust and stable to better than 10 mK. The effects on the measured inhomogeneity of the thermocouple were assessed using the NMIA thermocouple scanning bath.

  20. Summary of thermocouple performance during advanced gas reactor fuel irradiation experiments in the advanced test reactor and out-of-pile thermocouple testing in support of such experiments

    Energy Technology Data Exchange (ETDEWEB)

    Palmer, A. J.; Haggard, DC; Herter, J. W.; Swank, W. D.; Knudson, D. L.; Cherry, R. S. [Idaho National Laboratory, P.O. Box 1625, MS 4112, Idaho Falls, ID, (United States); Scervini, M. [University of Cambridge, Department of Material Science and Metallurgy, 27 Charles Babbage Road, CB3 0FS, Cambridge, (United Kingdom)

    2015-07-01

    High temperature gas reactor experiments create unique challenges for thermocouple-based temperature measurements. As a result of the interaction with neutrons, the thermoelements of the thermocouples undergo transmutation, which produces a time-dependent change in composition and, as a consequence, a time-dependent drift of the thermocouple signal. This drift is particularly severe for high temperature platinum-rhodium thermocouples (Types S, R, and B) and tungsten-rhenium thermocouples (Type C). For lower temperature applications, previous experiences with Type K thermocouples in nuclear reactors have shown that they are affected by neutron irradiation only to a limited extent. Similarly, Type N thermocouples are expected to be only slightly affected by neutron fluence. Currently, the use of these nickel-based thermocouples is limited when the temperature exceeds 1000 deg. C due to drift related to phenomena other than nuclear irradiation. High rates of open-circuit failure are also typical. Over the past 10 years, three long-term Advanced Gas Reactor experiments have been conducted with measured temperatures ranging from 700 deg. C - 1200 deg. C. A variety of standard Type N and specialty thermocouple designs have been used in these experiments with mixed results. A brief summary of thermocouple performance in these experiments is provided. Most recently, out-of-pile testing has been conducted on a variety of Type N thermocouple designs at the following (nominal) temperatures and durations: 1150 deg. C and 1200 deg. C for 2,000 hours at each temperature, followed by 200 hours at 1250 deg. C and 200 hours at 1300 deg. C. The standard Type N design utilizes high purity, crushed MgO insulation and an Inconel 600 sheath. Several variations on the standard Type N design were tested, including a Haynes 214 alloy sheath, spinel (MgAl{sub 2}O{sub 4}) insulation instead of MgO, a customized sheath developed at the University of Cambridge, and finally a loose assembly

  1. Summary of Thermocouple Performance During Advanced Gas Reactor Fuel Irradiation Experiments in the Advanced Test Reactor and Out-of-Pile Thermocouple Testing in Support of Such Experiments

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Palmer; DC Haggard; J. W. Herter; M. Scervini; W. D. Swank; D. L. Knudson; R. S. Cherry

    2011-07-01

    High temperature gas reactor experiments create unique challenges for thermocouple based temperature measurements. As a result of the interaction with neutrons, the thermoelements of the thermocouples undergo transmutation, which produces a time dependent change in composition and, as a consequence, a time dependent drift of the thermocouple signal. This drift is particularly severe for high temperature platinum-rhodium thermocouples (Types S, R, and B) and tungsten-rhenium thermocouples (Types C and W). For lower temperature applications, previous experiences with type K thermocouples in nuclear reactors have shown that they are affected by neutron irradiation only to a limited extent. Similarly type N thermocouples are expected to be only slightly affected by neutron fluxes. Currently the use of these Nickel based thermocouples is limited when the temperature exceeds 1000°C due to drift related to phenomena other than nuclear irradiation. High rates of open-circuit failure are also typical. Over the past ten years, three long-term Advanced Gas Reactor (AGR) experiments have been conducted with measured temperatures ranging from 700°C – 1200°C. A variety of standard Type N and specialty thermocouple designs have been used in these experiments with mixed results. A brief summary of thermocouple performance in these experiments is provided. Most recently, out of pile testing has been conducted on a variety of Type N thermocouple designs at the following (nominal) temperatures and durations: 1150°C and 1200°C for 2000 hours at each temperature, followed by 200 hours at 1250°C, and 200 hours at 1300°C. The standard Type N design utilizes high purity crushed MgO insulation and an Inconel 600 sheath. Several variations on the standard Type N design were tested, including Haynes 214 alloy sheath, spinel (MgAl2O4) insulation instead of MgO, a customized sheath developed at the University of Cambridge, and finally a loose assembly thermocouple with hard fired alumina

  2. The disposition of can thermocouples in a nuclear reactor

    International Nuclear Information System (INIS)

    Wilkie, D.

    1978-01-01

    A philosophy is presented for deciding the distribution of can thermocouples within channels and of instrumented channels throughout the core of a reactor with cluster-type fuel elements when only a few thermocouples can be located in any one channel. The arrangement is made according to a 'factorial' design in which all fuel element positions of interest are covered in a group of channels. Two types of factorial design can be applied: the unconfounded design, by which the thermocouples in each channel are chosen at random from the possible positions available, with the result that the temperatures have attached to them an uncertainty determined by the differences among channels; and the confounded design, by which the positions are chosen so as to give temperatures whose uncertainty is determined only by the random variations within channels. It is also necessary to estimate standard deviations in order to predict the number of cans likely to reach a given temperature. The standard deviation can be expected to vary with channel position, and since there will also be systematic variations in temperature with channel position it is necessary to arrange channels into groups having similar mean fluxes and flux distributions. Each group is instrumented according to the pattern of a confounded design. The information that such an arrangement provides is an estimate of the systematic temperature variations within channels, estimates of within-channel variation of can temperature, of between-channel variation of can temperature, and of the variation of these quantities among groups of channels grouped according to similarity of mean flux and flux profile. (author)

  3. Recent improvements on micro-thermocouple based SThM

    Science.gov (United States)

    Nguyen, TP; Thiery, L.; Teyssieux, D.; Briand, D.; Vairac, P.

    2017-01-01

    The scanning thermal microscope (SThM) has become a versatile tool for local surface temperature mapping or measuring thermal properties of solid materials. In this article, we present recent improvements in a SThM system, based on a micro-wire thermocouple probe associated with a quartz tuning fork for contact strength detection. Some results obtained on an electrothermal micro-hotplate device, operated in active and passive modes, allow demonstrating its performance as a coupled force detection and thermal measurement system.

  4. Temperature Control System for Chromel-Alumel Thermocouple

    International Nuclear Information System (INIS)

    Piping Supriatna; Nurhanan; Riswan DJ; Heru K, B.; Edi Karyanta

    2003-01-01

    Nuclear power plant operation safety requires careful attention to temperature measurement and control. This report describes the manufacture of a temperature control system for a chromel-alumel thermocouple, in accordance with the materials, equipment, and human resources available in the laboratory. The basic component of the temperature control system is an LM-741 operational amplifier, used as a summer for voltage comparison. A function test of the control system demonstrated its ability to settle to the temperature reference. The temperature control system will be implemented on a PCB processing machine. (author)
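The comparator-based control described above amounts to on/off regulation about a reference: the op-amp output switches the heater according to which side of the setpoint the thermocouple voltage falls on. A minimal software sketch of such a comparator with hysteresis follows; the setpoint, hysteresis band, and first-order plant model are all illustrative assumptions, not details from the report.

```python
# Sketch of an on/off (comparator-style) temperature controller with a
# hysteresis band, in the spirit of the op-amp comparator described in
# the report. Setpoint, band, and plant dynamics are illustrative.

def comparator_step(measured_c, setpoint_c, heater_on, band_c=2.0):
    """Heater on below (setpoint - band), off above (setpoint + band)."""
    if measured_c < setpoint_c - band_c:
        return True
    if measured_c > setpoint_c + band_c:
        return False
    return heater_on  # inside the band: keep the previous state

# Crude first-order plant: heats toward 200 C when on, cools toward 25 C when off.
temp, heater = 25.0, False
for _ in range(300):
    heater = comparator_step(temp, 150.0, heater)
    target = 200.0 if heater else 25.0
    temp += 0.05 * (target - temp)
print(140.0 < temp < 160.0)  # oscillates in a narrow band around the 150 C setpoint
```

The hysteresis band plays the same role as the damping behaviour noted in the function test: it prevents the heater from chattering when the measurement sits exactly at the reference.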

  5. Calibration of the Dodewaard downcomer thermocouple cross-correlation flow-rate measurements

    Energy Technology Data Exchange (ETDEWEB)

    Stekelenburg, A J.C. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.; Hagen, T.H.J.J. van der [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.; Akker, H.E.A. van den [Technische Univ. Delft (Netherlands). Lab. voor Fysische Technologie

    1992-12-01

    The cross-correlation flow measurement technique, applied for measuring the coolant flow rate in a nuclear reactor, was calibrated with the use of numerical simulations of turbulent flow. The three-dimensional domain was collapsed into two dimensions. With a two-dimensional calculation of steady-state flow with transient thermal characteristics, the response of thermocouples to a temperature variation was calculated. By cross-correlating the calculated thermocouple responses, the link between total flow rate and measured transit times was made. Three calibration points were taken in the range of 579 kg/s to 1477 kg/s. In this range, the product of the calculated transit time and the mass flow rate is constant to within +3.5% and -2.4%. The reliability of the calibration was estimated at ±4.6%. The influence of the inlet boundary conditions, and of the modelling of the flow in the upper part of the downcomer channel, on the calibration result is shown to be small. A measured velocity-profile effect was successfully predicted. (orig.).
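The transit-time measurement underlying this calibration can be sketched directly: the two thermocouple signals are (ideally) the same temperature fluctuation shifted by the coolant transit time, and the lag at which their cross-correlation peaks recovers that delay; velocity then follows from the sensor spacing. The sampling rate, spacing, and synthetic signals below are illustrative assumptions.

```python
import numpy as np

# Sketch of the cross-correlation transit-time technique: the argmax of
# the cross-correlation of two delayed copies of a temperature-noise
# signal recovers the transit time. Spacing and sampling are illustrative.

rng = np.random.default_rng(2)
fs = 100.0                  # sampling rate, Hz
spacing_m = 0.5             # distance between the two thermocouples
true_delay_samples = 25     # 0.25 s transit time

upstream = rng.normal(size=2000)
# smooth the "temperature noise" a little so it resembles turbulent fluctuations
upstream = np.convolve(upstream, np.ones(10) / 10.0, mode="same")
downstream = np.roll(upstream, true_delay_samples)

lags = np.arange(-100, 101)
xcorr = [np.dot(upstream[100:-100], np.roll(downstream, -l)[100:-100]) for l in lags]
delay = lags[int(np.argmax(xcorr))] / fs  # seconds
velocity = spacing_m / delay              # m/s
print(delay, velocity)                    # -> 0.25 2.0
```

In the reactor case the downstream signal is also attenuated and distorted by mixing, which is why the paper calibrates the transit-time-to-flow-rate link against turbulent-flow simulations rather than relying on spacing/delay alone.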

  6. Nuclear models relevant to evaluation

    International Nuclear Information System (INIS)

    Arthur, E.D.; Chadwick, M.B.; Hale, G.M.; Young, P.G.

    1991-01-01

    The widespread use of nuclear models continues in the creation of data evaluations. The reasons include extension of data evaluations to higher energies, creation of data libraries for isotopic components of natural materials, and production of evaluations for radioactive target species. In these cases, experimental data are often sparse or nonexistent. As this trend continues, the nuclear models employed in evaluation work move towards more microscopically-based theoretical methods, prompted in part by the availability of increasingly powerful computational resources. Advances in nuclear models applicable to evaluation will be reviewed. These include advances in optical model theory, microscopic and phenomenological state and level density theory, unified models that consistently describe both equilibrium and nonequilibrium reaction mechanisms, and improved methodologies for calculation of prompt radiation from fission. 84 refs., 8 figs

  7. A preliminary study of oxidation-resistant coatings on refractory-metal thermocouple sheaths

    International Nuclear Information System (INIS)

    Wilkins, S.C.

    1985-01-01

    The need to make reliable temperature measurements up to 2200°C or higher in steam environments during in-pile nuclear fuel damage tests led to a search for oxidation-resistant coatings for the refractory-metal sheaths used to enclose and protect thermocouples used for such measurements. Iridium, thoria, and thoria-over-iridium coatings were separately sputter-deposited on molybdenum-rhenium alloy protection tubes for evaluation. The coated samples were individually heated in flowing steam in an induction furnace. An extension tube welded to each sample was connected to a vacuum pump and gauge; failure of the sample was detected by noting the degradation of the vacuum maintained in the sample. Relatively heavy coatings of iridium provided a modest degree of oxidation protection at the temperatures of interest. Thoria coatings provided no significant protection at those temperatures, compared to uncoated control samples

  8. 78 FR 56174 - In-Core Thermocouples at Different Elevations and Radial Positions in Reactor Core

    Science.gov (United States)

    2013-09-12

    ... 52 [Docket No. PRM-50-105; NRC-2012-0056] In-Core Thermocouples at Different Elevations and Radial Positions in Reactor Core AGENCY: Nuclear Regulatory Commission. ACTION: Petition for rulemaking; denial...-core thermocouples at different elevations and radial positions throughout the reactor core to enable...

  9. 77 FR 30435 - In-core Thermocouples at Different Elevations and Radial Positions in Reactor Core

    Science.gov (United States)

    2012-05-23

    ... NUCLEAR REGULATORY COMMISSION 10 CFR Part 50 [Docket No. PRM-50-105; NRC-2012-0056] In-core Thermocouples at Different Elevations and Radial Positions in Reactor Core AGENCY: Nuclear Regulatory Commission... of operating licenses for nuclear power plants (``NPP'') to operate NPPs with in-core thermocouples...

  10. Detection of thermocouple malfunction in the Beacon system

    International Nuclear Information System (INIS)

    Morita, T.; Heibel, M.D.; Congedo, T.V.

    1992-01-01

    The BEACON system uses Core Exit Thermocouples (T/C) extensively for continuous radial power distribution monitoring. The T/C's are used to adjust the reference power distribution generated by the BEACON system to match the current radial power distribution. T/C reliability, repeatability, and relative accuracy have been very satisfactory. However, it is very important to detect any T/C malfunctions during operation, since a T/C signal change caused by an undetected malfunction can lead to serious errors in the radial power distribution developed by BEACON. A simple procedure has been developed which is capable of discriminating between changes in T/C signals caused by actual changes in reactor conditions and signal changes caused by T/C malfunctions
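
The abstract does not describe the discrimination procedure itself; as a purely hypothetical sketch of the idea, a consistency check can exploit the fact that a genuine change in reactor conditions shifts many core-exit T/C readings coherently, while a malfunction shows up as an isolated outlier (the function name, thresholds, and data below are illustrative assumptions, not the BEACON algorithm):

```python
import numpy as np

def flag_malfunctions(prev_scan, new_scan, k=5.0, noise_floor=1.0):
    """Flag T/Cs whose signal change is inconsistent with the core-wide trend.

    Rationale: a genuine change in reactor conditions shifts many T/C readings
    coherently, while an isolated outlier suggests an instrument malfunction.
    (Hypothetical consistency check; not the procedure from the paper.)
    """
    delta = np.asarray(new_scan, dtype=float) - np.asarray(prev_scan, dtype=float)
    resid = delta - np.median(delta)                 # remove the common trend
    mad = np.median(np.abs(resid))                   # robust spread estimate
    threshold = max(k * 1.4826 * mad, noise_floor)   # floor guards a tiny MAD
    return np.where(np.abs(resid) > threshold)[0]

prev = [312.0, 315.5, 310.2, 318.1, 314.0, 316.3]   # previous scan, degC
new  = [313.1, 316.4, 311.3, 319.2, 302.0, 317.4]   # T/C 4 drops alone
print(flag_malfunctions(prev, new))                 # -> [4]
```

In practice the noise floor and multiplier would be tuned against the plant's observed T/C repeatability.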

  11. Design and research of seal structure for thermocouple column assembly

    International Nuclear Information System (INIS)

    Rao Qiqi; Li Na; Zhao Wei; Ma Zhigang

    2015-01-01

    A new seal structure was designed to satisfy the functional requirements of the thermocouple column assembly and the reactor structure. The seal uses a packed graphite ring and adopts the self-sealing principle. A cone angle is introduced on the sealing face, which makes assembly and disassembly convenient. Sealing-principle analysis and stress calculation of the graphite ring show that the cone angle increases the radial force of the seal structure and improves the sealing effect. The stress analysis results show that the strength of the seal structure satisfies the regulatory requirements. Cold and hot function tests show that the sealing effect is good and the design requirements are satisfied. (authors)

  12. LOFT ECC Pitot Tube and Thermocouple Rake Penetration thermal analysis

    International Nuclear Information System (INIS)

    Tolan, B.J.

    1977-01-01

    A thermal analysis of the LOFT ECC Pitot Tube and Thermocouple Rake Penetration was performed using COUPLE, a two-dimensional finite element computer code. Four transients which conservatively cover all transients the rake will be exposed to were included in this analysis in order to comply with the ASME Code Section III requirements. The transients conservatively cover hot and cold leg operation, and nuclear and nonnuclear operation. The four transients include the LOCE with ECC injection transient, the single control rod drop transient, the scram transient, and the heatup with 0 to 100% load change transient. Temperature distributions in the rake were obtained for each of the four transients and several plots of node temperatures vs. time are given

  13. Lifetime improvement of sheathed thermocouples for use in high-temperature and thermal transient operations

    International Nuclear Information System (INIS)

    McCulloch, R.W.; Clift, J.H.

    1982-01-01

    Premature failure of small-diameter, magnesium-oxide-insulated sheathed thermocouples occurred when they were placed within nuclear fuel rod simulators (FRSs) to measure high temperatures and to follow severe thermal transients encountered during simulation of nuclear reactor accidents in Oak Ridge National Laboratory (ORNL) thermal-hydraulic test facilities. Investigation of thermally cycled thermocouples yielded three criteria for improvement of thermocouple lifetime: (1) reduction of oxygen impurities prior to and during their fabrication, (2) refinement of thermoelement grain size during their fabrication, and (3) elimination of prestrain prior to use above their recrystallization temperature. The first and third criteria were satisfied by improved techniques of thermocouple assembly and by a recovery anneal prior to thermocouple use

  14. Blind system identification of two-thermocouple sensor based on cross-relation method

    Science.gov (United States)

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of their thermal inertia, a dynamic error arises in dynamic temperature measurement. To eliminate this error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in a constant-velocity flow environment. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and was compared with a grid-based search method. The method was validated on experimental equipment built around a high-temperature furnace, and the input dynamic temperature was reconstructed from the output of the thermocouple with the smaller time constant.
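
A minimal sketch of the cross-relation idea, assuming each thermocouple behaves as a first-order lag (the signal, sampling rate, and time constants below are made up for illustration, and this sketch uses the grid search the paper compares against rather than particle swarm optimization): because LTI systems commute, filtering the fast sensor's output with the slow sensor's model must reproduce the result of filtering the slow sensor's output with the fast sensor's model, so the true pair of time constants minimizes the cross-relation residual without the input ever being known.

```python
import numpy as np

def first_order(x, tau, dt):
    """Discrete first-order lag: y[k] = y[k-1] + (dt/tau) * (x[k] - y[k-1])."""
    y = np.zeros_like(x)
    a = dt / tau
    for k in range(1, len(x)):
        y[k] = y[k - 1] + a * (x[k] - y[k - 1])
    return y

# Synthetic gas temperature seen by two thermocouples with "unknown" time
# constants (all numbers illustrative, not from the paper).
dt = 2e-3
t = np.arange(0.0, 2.0, dt)
gas = 300.0 + 50.0 * np.sin(2 * np.pi * 3 * t) + 20.0 * np.sin(2 * np.pi * 7 * t)
y1 = first_order(gas, 0.020, dt)   # fast thermocouple, tau1 = 20 ms
y2 = first_order(gas, 0.080, dt)   # slow thermocouple, tau2 = 80 ms

# Cross-relation: H(tau2) applied to y1 equals H(tau1) applied to y2, because
# LTI filters commute. Grid-search the residual over candidate pairs.
taus = np.arange(0.005, 0.150, 0.005)
_, tau1_est, tau2_est = min(
    (np.sum((first_order(y1, tb, dt) - first_order(y2, ta, dt)) ** 2), ta, tb)
    for ta in taus for tb in taus if ta < tb)
print(round(tau1_est, 3), round(tau2_est, 3))   # -> 0.02 0.08
```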

  15. An experimental study of the effect of external thermocouples on rewetting during reflood

    International Nuclear Information System (INIS)

    Shires, G.L.; Butcher, A.A.; Carpenter, B.G.; McCune, D.S.; Pearson, K.G.

    1980-04-01

    The validation of computer codes used for PWR safety assessment often depends upon experiments carried out with either real fuel pins or electrically heated fuel pin simulators. In some cases, and this applies particularly to in-pile tests, temperatures are measured by means of sheathed thermocouples attached externally to the pins and this raises the question of the possible effect of such thermocouples on the two phase hydraulics and heat transfer which are being studied. This paper describes the experiments which subjected two realistic fuel pin simulators, one with and one without external thermocouples, to identical bottom flooding conditions. They demonstrate very clearly that external thermocouples act as preferential rewetting sites and thereby increase the rate of propagation of the quench front. In the view of the authors of this paper the facts described raise serious doubts about the validity of rewetting data obtained from experiments employing external thermocouples. (U.K.)

  16. A Modified Design of a Thermocouple Based Digital Temperature Indicator with Opto-Isolation

    Directory of Open Access Journals (Sweden)

    S. C. BERA

    2008-01-01

    In a conventional thermocouple-based digital temperature indicator, the millivolt signal obtained from the thermocouple is first amplified and then converted into a digital signal using an analog-to-digital converter (ADC). This digital signal is then displayed as a temperature using a digital counter circuit or microprocessor/microcontroller-based circuitry. In the present paper, a modified A/D conversion technique with opto-isolation is used to indicate the temperature digitally without any conventional analog-to-digital converter. The theory and design of the measuring technique are described in the paper. The non-linearity of the thermocouple is eliminated by using a look-up table within the software. The performance of the circuit has been tested experimentally using both a millivolt input signal in place of a thermocouple and a K-type thermocouple. The experimental results are reported in the paper.
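
The look-up-table linearization step can be sketched as follows, using a few rounded type K reference-table values (cold junction at 0 °C; a real instrument would store the full ITS-90 table, and the values here are approximate):

```python
import numpy as np

# Type K thermocouple: approximate emf (mV) at reference-table points,
# rounded from standard tables.
temps_C = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
emf_mV  = np.array([0.0, 4.096, 8.138, 12.209, 16.397, 20.644])

def mv_to_temp(mv):
    """Linearize a type K reading by interpolating the look-up table."""
    return np.interp(mv, emf_mV, temps_C)

print(mv_to_temp(4.096))          # -> 100.0
print(round(mv_to_temp(10.0), 1)) # -> 245.7
```

Piecewise-linear interpolation between table points keeps the residual non-linearity well below typical thermocouple tolerances when the table is reasonably dense.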

  17. Rock mechanics models evaluation report

    International Nuclear Information System (INIS)

    1987-08-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The primary recommendations of the analysis are that the DOT code be used for two-dimensional thermal analysis and that the STEALTH and HEATING 5/6 codes be used for three-dimensional and complicated two-dimensional thermal analysis. STEALTH and SPECTROM 32 are recommended for thermomechanical analyses. The other evaluated codes should be considered for use in certain applications. A separate review of salt creep models indicates that the commonly used exponential time law model is appropriate for use in repository design studies. 38 refs., 1 fig., 7 tabs

  18. Specific features of thermocouple calorimeter application for measurements of pulsed X-ray emission from plasma

    International Nuclear Information System (INIS)

    Gavrilov, V. V.; Fasakhov, I. K.

    2012-01-01

    It is shown that the accuracy of time-integrated measurements of pulsed X-ray emission from hot plasma with calibrated thermocouple calorimeters is mainly determined by two factors. The first and the most important factor is heating of the filter by the absorbed X-rays; as a result, the calorimeter measures the thermal radiation of the filter, which causes appreciable distortion of the temporal profile and amplitude of the recorded signal. The second factor is the dependence of the effective depth of X-ray absorption in the dielectric that covers the entrance window of the calorimeter on the energy of X-ray photons, i.e., on the recorded radiation spectrum. The results of model calculations of the calorimeter signal are compared with the experimental data.

  19. The EU model evaluation group

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1999-01-01

    The model evaluation group (MEG) was launched in 1992, growing out of the Major Technological Hazards Programme with EU/DG XII. The goal of MEG was to improve the culture in which models were developed, particularly by encouraging voluntary model evaluation procedures based on a formalised consensus protocol. The evaluation was intended to assess the fitness-for-purpose of the models being used as a measure of their quality. The approach adopted focused on developing a generic model evaluation protocol and subsequently targeting it at specific areas of application. Five such developments have been initiated: on heavy gas dispersion, liquid pool fires, gas explosions, human factors, and momentum fires. The quality of models is an important element when complying with the 'Seveso Directive', which requires that the safety reports submitted to the authorities include an assessment of the extent and severity of the consequences of identified major accidents. Further, model quality becomes important in the land use planning process, where the proximity of industrial sites to vulnerable areas may be critical. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  20. Temperature measurement error due to the effects of time varying magnetic fields on thermocouples with ferromagnetic thermoelements

    International Nuclear Information System (INIS)

    McDonald, D.W.

    1977-01-01

    Thermocouples with ferromagnetic thermoelements (iron, Alumel, Nisil) are used extensively in industry. We have observed the generation of voltage spikes within ferromagnetic wires when the wires are placed in an alternating magnetic field. This effect has implications for thermocouple thermometry, where it was first observed. For example, the voltage generated by this phenomenon will contaminate the thermocouple thermal emf, resulting in temperature measurement error

  1. Mobility Models for Systems Evaluation

    Science.gov (United States)

    Musolesi, Mirco; Mascolo, Cecilia

    Mobility models are used to simulate and evaluate the performance of mobile wireless systems and of the algorithms and protocols underlying them. The definition of realistic mobility models is one of the most critical and, at the same time, most difficult aspects of simulating applications and systems designed for mobile environments. There are essentially two types of mobility patterns that can be used to evaluate mobile network protocols and algorithms by means of simulation: traces and synthetic models [130]. Traces are obtained from measurements of deployed systems and usually consist of logs of connectivity or location information, whereas synthetic models are mathematical models, such as sets of equations, that try to capture the movement of the devices.
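
As an illustration of a synthetic model, the classic random-waypoint pattern (a standard example, not necessarily one discussed in this excerpt) can be generated in a few lines; the function name, area, and speed range below are arbitrary choices:

```python
import random

def random_waypoint(n_steps, area=(1000.0, 1000.0), vmin=1.0, vmax=10.0, seed=0):
    """Random-waypoint mobility: pick a destination and speed, walk there, repeat.

    Classic synthetic model; real evaluations usually add pause times and must
    account for the model's known speed-decay artifacts.
    """
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    path = [(x, y)]
    while len(path) < n_steps:
        tx, ty = rng.uniform(0, area[0]), rng.uniform(0, area[1])  # waypoint
        speed = rng.uniform(vmin, vmax)
        dist = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        steps = max(1, int(dist / speed))        # one position per unit time
        for i in range(1, steps + 1):
            if len(path) >= n_steps:
                break
            path.append((x + (tx - x) * i / steps, y + (ty - y) * i / steps))
        x, y = path[-1]
    return path

trace = random_waypoint(500)
print(len(trace), trace[0])
```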

  2. Thin film platinum–palladium thermocouples for gas turbine engine applications

    Energy Technology Data Exchange (ETDEWEB)

    Tougas, Ian M.; Gregory, Otto J., E-mail: gregory@egr.uri.edu

    2013-07-31

    Thin film platinum:palladium thermocouples were fabricated on alumina and mullite surfaces using radio frequency sputtering and characterized after high temperature exposure to oxidizing environments. The thermoelectric output, hysteresis, and drift of these sensors were measured at temperatures up to 1100 °C. Auger electron spectroscopy was used to follow the extent of oxidation in each thermocouple leg and interdiffusion at the metallurgical junction. Minimal oxidation of the platinum and palladium thermoelements was observed after high temperature exposure, but considerable dewetting and faceting of the films were observed in scanning electron microscopy. An Arrhenius temperature dependence of the drift rate was observed and attributed to microstructural changes during thermal cycling. The thin film thermocouples, however, did exhibit excellent stability at 1000 °C, with drift rates comparable to commercial type-K wire thermocouples. Based on these results, platinum:palladium thin film thermocouples have considerable potential for use in the hot sections of gas turbine engines. - Highlights: • Stable thin film platinum:palladium thermocouples for gas turbine engines • Little oxidation but significant microstructural changes from thermal cycling • Minimal hysteresis during repeated thermal cycling • Drift comparable to commercial wire thermocouples.
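
An Arrhenius dependence means the drift rate follows rate = A·exp(−Ea/(kB·T)), so plotting ln(rate) against 1/T gives a straight line whose slope yields the activation energy. A sketch of the fit with synthetic numbers (not the paper's data):

```python
import numpy as np

k_B = 8.617e-5   # Boltzmann constant, eV/K

# Hypothetical drift rates at several exposure temperatures (K); generated
# here with a known activation energy of 1.5 eV purely for illustration.
T = np.array([1073.0, 1173.0, 1273.0, 1373.0])
rate = 1e3 * np.exp(-1.5 / (k_B * T))

# Arrhenius fit: ln(rate) = ln(A) - Ea/(k_B * T), linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea = -slope * k_B
print(round(Ea, 3))   # -> 1.5
```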

  3. Thermoelectric properties of currently available Au/Pt thermocouples related to the valid reference function

    Directory of Open Access Journals (Sweden)

    Edler F.

    2015-01-01

    Au/Pt thermocouples are considered an alternative to High Temperature Standard Platinum Resistance Thermometers (HTSPRTs) for realizing temperatures according to the International Temperature Scale of 1990 (ITS-90) in the range between the freezing points of aluminium (660.323 °C) and silver (961.78 °C). The original aim of this work was to develop and validate a new reference function for Au/Pt thermocouples which reflects the properties of presently commercially available Au and Pt wires. The thermoelectric properties of 16 Au/Pt thermocouples constructed at different National Metrology Institutes using wires from different suppliers, together with 4 commercially available Au/Pt thermocouples, were investigated. Most of them exhibit significant deviations from the current reference function of Au/Pt thermocouples, caused by the poor performance of the available Au wires. Thermoelectric homogeneity was investigated by measuring immersion profiles during freezes at the freezing point of silver and in liquid baths. The thermoelectric inhomogeneities were found to be one order of magnitude larger than those of Au/Pt thermocouples of the Standard Reference Material® (SRM®) 1749. Improving the annealing procedure of the gold wires is a key step towards thermoelectric homogeneities of only about (2–3) mK, sufficient to replace the impracticable HTSPRTs as interpolation instruments of the ITS-90. Comparison measurements of some of the Au/Pt thermocouples against an HTSPRT and an absolutely calibrated radiation thermometer were performed and show agreement within the expanded measurement uncertainties. It was found that the current reference function adequately reflects the thermoelectric properties of currently available Au/Pt thermocouples.

  4. Studies of Behavior Melting Temperature Characteristics for Multi Thermocouple In-Core Instrument Assembly

    International Nuclear Information System (INIS)

    Shin, Donghyup; Chae, Myoungeun; Kim, Sungjin; Lee, Kyulim

    2015-01-01

    Bottom-up type in-core instruments (ICIs) are used in the OPR-1000 and APR-1400 pressurized water reactors to measure neutron flux and temperature in the reactor. This is a well-known technique with a design proven over years of use in the nuclear field. An ICI consists of one pair of K-type thermocouples, five self-powered neutron detectors (SPNDs), and one background detector. The K-type thermocouple measures the core exit temperature (CET) in the reactor. The CET is a very important parameter for operating nuclear power plants (NPPs); during normal operation of an OPR-1000 it is 327 °C. If the CET exceeds 650 °C, operators in the main control room should treat the situation as an accident in accordance with the severe accident management guidance (SAMG). The Multi Thermocouple ICI is a newly designed ICI intended for severe accident conditions. It contains four more thermocouples than the existing design, for a total of five K-type thermocouples; the thermocouple measuring the CET is located at the same elevation as in the existing ICI, and each of the others can be placed at any required location. The Multi Thermocouple ICI thus helps to measure the temperature distribution of the entire reactor. In addition, in a more serious accident it can indicate the point reached by molten core material, because of the in-vessel debris of nuclear fuel. In this paper, a circumstance simulating a reactor severe accident was examined. The K-type thermocouples of the Multi Thermocouple ICI were confirmed experimentally to measure up to 1370 °C before melting, and after the thermocouples were melted by debris, the EMF signal was observed to go to an infinite voltage value. From these test results it can be assumed that if any EMF reading from the Multi Thermocouple ICI goes to an infinite value, the corresponding thermocouple has melted

  6. Field installed brazed thermocouple feedthroughs for high vacuum experiments

    International Nuclear Information System (INIS)

    Anderson, P.; Messick, C.

    1983-01-01

    In order to reduce the occurrence of vacuum leaks and to increase the availability of the DIII vacuum vessel for experimental operation, effort was applied to developing a vacuum-tight brazed feedthrough system for sheathed thermocouples, stainless steel sheathed conductor cables and tubes for cooling fluids. This brazed technique is a replacement for elastomer "O" ring sealed feedthroughs that have proven vulnerable to leaks caused by thermal cycling, etc. To date, about 200 feedthroughs have been used. Up to 91 were grouped on a single conflat flange mounted in a bulkhead connector configuration which facilitates installation and removal. Investigation was required to select a suitable braze alloy, flux and installation procedure. Braze alloy selection was challenging since the alloy was required to have: 1) Melting temperature in excess of the 250 °C (482 °F) bakeout temperature. 2) No high vapor pressure elements. 3) Good wetting properties when used in air with acceptable flux. 4) Good wettability to 300 series stainless steel and inconel

  7. Temperature measurement: Development work on noise thermometry and improvement of conventional thermocouples for applications in nuclear process heat (PNP)

    International Nuclear Information System (INIS)

    Brixy, H.; Hecker, R.; Oehmen, J.; Barbonus, P.; Hans, R.

    1982-06-01

    The behaviour of NiCr-Ni sheathed thermocouples (sheath Inconel 600 or Incoloy 800, insulation MgO) was studied in helium and carbon atmospheres at temperatures of 950–1150 °C. All the thermocouples used retained their functional performance. The insulation resistance tended towards a limit value dependent on the temperature and on the quality of the thermocouple. Temperature measurements carried large uncertainties in the range 950–1150 °C; recalibrations at 950 °C showed errors of up to 6%. Measuring sensors were developed that consist of a sheathed double thermocouple with a noise resistor positioned between the two hot junctions. Using the noise thermometer, the thermocouple can be recalibrated in situ at any time. A helium system with a high-temperature experimental area was developed to test the thermocouples and the combined thermocouple-noise thermometer sensors under realistic experimental conditions
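
Noise thermometry rests on the Johnson–Nyquist relation ⟨V²⟩ = 4·kB·T·R·Δf, which ties the measured thermal-noise power of a resistor directly to absolute temperature and therefore provides a drift-free in-situ recalibration reference. A sketch with illustrative numbers (the resistor value and bandwidth below are assumptions, not the paper's hardware):

```python
# Johnson-Nyquist relation: mean-square thermal-noise voltage across a
# resistor R over bandwidth df is <V^2> = 4 * k_B * T * R * df, so T follows
# directly from a noise-power measurement.
k_B = 1.380649e-23   # Boltzmann constant, J/K

def noise_temperature(v_rms, R, df):
    """Absolute temperature from the measured rms noise voltage."""
    return v_rms ** 2 / (4.0 * k_B * R * df)

# Example: 100 ohm sensing resistor, 100 kHz bandwidth, at ~1223 K (950 degC).
R, df, T_true = 100.0, 1.0e5, 1223.0
v_rms = (4.0 * k_B * T_true * R * df) ** 0.5   # expected rms noise, ~0.8 uV
print(round(noise_temperature(v_rms, R, df), 1))   # -> 1223.0
```

The tiny signal level (sub-microvolt) is why the method needs careful low-noise electronics, but the relation itself involves only fundamental constants.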

  8. Thermocouples calibration and analysis of the influence of the length of the sensor coating

    International Nuclear Information System (INIS)

    Noriega, M; Ramírez, R; López, R; Vaca, M; Morales, J; Terres, H; Lizardi, A; Chávez, S

    2015-01-01

    This paper presents the design and construction of a lab prototype, with a much lower cost than commercially available units, enabling the manufacture of thermocouples, which are then calibrated to verify their functionality and acceptance. We also analyze the influence of the external insulation over the wires to determine whether it affects the temperature measurement. The tested insulation lengths ranged from 0.00 m up to 0.030 m. The manufactured thermocouple was compared against the behavior of a thermocouple of the same type purchased from a commercial supplier. The obtained measurements showed less than 1 °C difference at some points. This makes the built thermocouple reliable, since the standard allows a difference of up to 2.2 °C

  9. The use of thermocouples which transmute during service in nuclear reactors

    International Nuclear Information System (INIS)

    Martin, R.E.

    1980-06-01

    Some current nuclear fuel experiments at CRNL require the use of thermocouples to measure temperatures of up to 2200 °C under reactor operating conditions. A literature search has shown that transient electrical effects and transmutation of the thermocouple alloys can cause temperature measurement errors of up to ±1% and ±30%, respectively. However, the error due to transient electrical effects can be corrected by making temperature measurements immediately following reactor shutdown. Furthermore it has been shown that transmutation effects can be corrected for by calibrating the high temperature tungsten-rhenium thermocouples against a chromel-alumel thermocouple in a cooler part of the experiment. The use of these techniques is expected to reduce temperature measurement errors to ±2% in the best case. (auth)

  10. Thermocouple module halt failure acceptance test procedure for Tank 241-SY-101 DACS-1

    International Nuclear Information System (INIS)

    Ermi, A.M.

    1997-01-01

    The readiness of the Tank 241-SY-101 Data Acquisition and Control System (DACS-1) to provide monitoring and alarms for a halt failure of any thermocouple module will be tested during the performance of this procedure. Updated DACS-1 "I/O MODULE HEALTH STATUS", "MININ1", and "MININ2" screens, which now provide indication of thermocouple module failure, will also be tested as part of this procedure

  11. Thermocouple calibration facility for 2900 deg C high temperature and its applications

    International Nuclear Information System (INIS)

    Chen Daolong

    1991-01-01

    The construction and the performance characteristics of a 2900 °C high-temperature thermocouple calibration facility are described, and a calibration error analysis is made. The test results of the calibration characteristics of the high-temperature thermocouples Mo/Nb, W-3Re/W-25Re, and W-1Mo/W-25Mo are given. The test result of the temperature-dependent resistivity of BeO obtained with this facility is also given

  12. Attachment of Free Filament Thermocouples for Temperature Measurements on CMC

    Science.gov (United States)

    Lei, Jih-Fen; Cuy, Michael D.; Wnuk, Stephen P.

    1997-01-01

    Ceramic Matrix Composites (CMC) are being developed for use as enabling materials for advanced aeropropulsion engine and high speed civil transport applications. The characterization and testing of these advanced materials in hostile, high-temperature environments require accurate measurement of the material temperatures. Commonly used wire thermocouples (TCs) cannot be attached to this ceramic based material via conventional spot-welding techniques. Attachment of wire TCs with commercially available ceramic cements fails to provide sufficient adhesion at high temperatures. While advanced thin film TC technology provides minimally intrusive surface temperature measurement and has good adhesion on the CMC, its fabrication requires sophisticated and expensive facilities and is very time consuming. In addition, the durability of lead wire attachments to both thin film TCs and the substrate materials requires further improvement. This paper presents a newly developed attachment technique for installation of free filament wire TCs with a unique convoluted design on ceramic based materials such as CMCs. Three CMCs (SiC/SiC CMC and alumina/alumina CMC) instrumented with type K, R or S wire TCs were tested in a Mach 0.3 burner rig. The CMC temperatures measured from these wire TCs were compared to those from the facility pyrometer and thin film TCs. There was no sign of TC delamination even after several hours of exposure to 1200 °C. The test results proved that this new technique can successfully attach wire TCs on CMCs and provide temperature data in hostile environments. The sensor fabrication process is less expensive and requires very little time compared to that of the thin film TCs. The same installation technique/process can also be applied to attach lead wires for thin film sensor systems.

  13. Experimental measurement of the interfacial heat transfer coefficients of subcooled flow boiling using micro-thermocouple and double directional images

    International Nuclear Information System (INIS)

    Seong-Jin Kim; Goon-Cherl Park

    2005-01-01

    Models or correlations for the phase interface are needed to analyze multi-phase flow; in particular, interfacial heat transfer coefficients are required to constitute the energy equations of multi-phase flow. In subcooled boiling flow, bubble condensation at the bubble-liquid interface is a major mechanism of heat transfer to the bulk subcooled liquid. Bubble collapse rates and the temperatures of each phase are needed to determine the interfacial heat transfer coefficient for bubble condensation. Bubble collapse rates have generally been calculated through image processing in a single direction, and the liquid bulk temperature has been obtained with a general-purpose temperature sensor such as a thermocouple. However, multi-directional images are needed because of the limitations of single-directional image processing, and a temperature sensor with a fast response time must be used to obtain a more accurate interfacial heat transfer coefficient. Low-pressure subcooled water flow experiments using a micro-thermocouple and double-directional image processing with mirrors were therefore conducted to investigate bubble condensation phenomena and to modify the interfacial heat transfer correlation. The experiments were performed in a vertical subcooled boiling flow in a rectangular channel. Bubble condensing traces were recorded with respect to time by a high speed camera in two directions, and bubble collapse rates were calculated by processing the recorded digital images. Temperatures were measured by a micro-thermocouple, a K-type with a 12.7 μm diameter. The liquid temperature was estimated by a developed algorithm that discriminates phases and finds each phase's temperature within the measured signal, which includes both liquid and bubble temperatures. The interfacial heat transfer coefficient for bubble condensation was calculated from the bubble collapse rates and the estimated liquid temperature, and its correlation was modified
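
The interfacial coefficient can be backed out of a measured bubble-diameter history with a simple energy balance on a condensing spherical bubble, h·(πD²)·ΔTsub = −ρv·hfg·dV/dt, which with V = πD³/6 reduces to h = −ρv·hfg·(dD/dt)/(2·ΔTsub). A sketch with made-up diameter samples (illustrative numbers only, not the paper's data or its modified correlation):

```python
import numpy as np

# Energy balance on a spherical condensing bubble:
#   h * (pi D^2) * dT_sub = -rho_v * h_fg * dV/dt,  with V = pi D^3 / 6,
# which reduces to  h = -rho_v * h_fg * (dD/dt) / (2 * dT_sub).
rho_v = 0.60      # vapor density near atmospheric pressure, kg/m^3
h_fg = 2.26e6     # latent heat of vaporization of water, J/kg
dT_sub = 20.0     # liquid subcooling, K

t = np.array([0.0, 1.0, 2.0, 3.0]) * 1e-3   # time samples, s
D = np.array([2.0, 1.7, 1.3, 0.8]) * 1e-3   # shrinking bubble diameter, m

dDdt = np.gradient(D, t)                      # collapse rate, m/s (negative)
h_i = -rho_v * h_fg * dDdt / (2.0 * dT_sub)   # interfacial HTC, W/(m^2 K)
print(np.round(h_i, 0))   # order 1e4 W/(m^2 K), typical of condensation
```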

  14. A novel approach for fault detection and classification of the thermocouple sensor in Nuclear Power Plant using Singular Value Decomposition and Symbolic Dynamic Filter

    International Nuclear Information System (INIS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-01-01

    Highlights: • A novel approach to classifying fault patterns using data-driven methods. • Application of a robust reconstruction method (SVD) to identify the faulty sensor. • Analysis of fault patterns for many sensors using SDF with low time complexity. • An efficient data-driven model is designed to reduce false and missed alarms. - Abstract: A mathematical model with two layers is developed using data-driven methods for thermocouple sensor fault detection and classification in Nuclear Power Plants (NPP). In the first layer, a Singular Value Decomposition (SVD) based method is applied to detect the faulty sensor from the data set of all sensors. In the second layer, a Symbolic Dynamic Filter (SDF) is employed to classify the fault pattern. If the SVD detects a false fault, it is re-evaluated by the SDF, i.e., the model has two layers of checking to limit false alarms. The proposed fault detection and classification method is compared with Principal Component Analysis. Two case studies are taken from the Fast Breeder Test Reactor (FBTR) to demonstrate the efficiency of the proposed method.
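
The first-layer SVD idea can be sketched as follows: learn the principal subspace of healthy, correlated sensor data, then flag the sensor with the largest reconstruction residual on a new scan. The data, subspace rank, and threshold below are synthetic assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Healthy training data: 8 correlated thermocouples driven by one common
# process trend plus small independent noise (synthetic, for illustration).
n_sensors, n_train = 8, 400
trend = rng.standard_normal(n_train)
gains = rng.uniform(0.8, 1.2, n_sensors)
X = np.outer(trend, gains) + 0.02 * rng.standard_normal((n_train, n_sensors))

# SVD of the mean-centered training matrix gives the principal subspace.
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:1].T   # rank-1 subspace captures the common trend

def faulty_sensor(scan, threshold=0.2):
    """Return the index of the most suspect sensor, or None if residuals are small."""
    r = (scan - mu) - P @ (P.T @ (scan - mu))   # reconstruction residual
    i = int(np.argmax(np.abs(r)))
    return i if abs(r[i]) > threshold else None

scan = gains * 1.3          # consistent scan: common trend value of 1.3
scan_bad = scan.copy()
scan_bad[5] += 2.0          # bias fault injected on sensor 5
print(faulty_sensor(scan), faulty_sensor(scan_bad))   # -> None 5
```

A residual that survives projection onto the healthy subspace cannot be explained by normal process variation, which is what localizes it to an instrument fault.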

  15. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  16. Degradation by radiation of the response of a thermocouple of a fuel element

    International Nuclear Information System (INIS)

    Rodriguez V, A.

    1994-01-01

    In the TRIGA Mark III reactor of the National Institute of Nuclear Research, an instrumented fuel element is needed to measure fuel temperature during power pulses. This fuel element is exposed to daily temperature gradients on the order of 390 °C under normal reactor operation at 1 MW. Experience with these instrumented fuel elements shows that the useful life of the thermocouples is shorter than that of the fuel: they show important changes in their chemical composition and electrical characteristics, to the point that they give no response at all. It is therefore necessary to know the factors that shorten thermocouple life. How a composition change affects the thermocouple calibration depends on where the change takes place relative to the temperature gradient. The composition change depends on the neutron flux, so the neutron flux can be used as a measure of it. If there is no neutron flux within the temperature gradient, there will be no composition change and the calibration will not change. If the neutron flux varies within the region where the temperature gradient exists, the composition of the thermocouple will vary and the calibration will change. The maximum change in calibration occurs when the neutron flux is high and constant within the region of the temperature gradient: the composition change is then uniform throughout the gradient, and the emf output can be expected to change. In this reactor, the thermocouples are in the second case. The relative position of the thermal and neutron flux gradients is therefore the most important factor explaining the composition change after some 2,500 exposures of the thermocouples to temperature gradients on the order of 390 °C. (Author)
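
The geometric argument above follows from the thermocouple emf integral E = ∫ S(x) (dT/dx) dx: a local drift of the Seebeck coefficient S contributes nothing where dT/dx is zero. A small numerical illustration (idealized single-thermoelement values, not TRIGA data):

```python
def emf(S_profile, T_profile):
    """Discrete emf of one thermoelement: sum of S * dT over segments.
    S_profile[i]: Seebeck coefficient (V/K) on segment i;
    T_profile: node temperatures (K), len = len(S_profile) + 1."""
    return sum(S * (T_profile[i + 1] - T_profile[i])
               for i, S in enumerate(S_profile))

# 10 wire segments; the temperature gradient lives in segments 0-4 only.
T = [300 + 39 * min(i, 5) for i in range(11)]   # 300 K -> 495 K, then flat
S_nominal = [40e-6] * 10                        # 40 uV/K everywhere

# Case 1: irradiation drifts S only in the flat-temperature region (5-9).
S_case1 = S_nominal[:5] + [35e-6] * 5
# Case 2: the same drift inside the gradient region (0-4).
S_case2 = [35e-6] * 5 + S_nominal[5:]

e0 = emf(S_nominal, T)   # nominal calibration
e1 = emf(S_case1, T)     # unchanged: drift where dT = 0
e2 = emf(S_case2, T)     # shifted: drift overlaps the gradient
```

Case 1 reproduces the "no flux within the gradient" situation (no calibration change); case 2 reproduces the situation of these thermocouples, where the emf shifts.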

  17. Evaluation Model for Sentient Cities

    Directory of Open Access Journals (Sweden)

    Mª Florencia Fergnani Brion

    2016-11-01

    Full Text Available In this article we present research on Sentient Cities and propose an assessment model for analysing whether a city is, or could potentially be considered, one. It can be used to evaluate the current situation of a city before introducing urban policies based on citizen participation in hybrid (physical and digital) environments. To that effect, we have developed evaluation grids with the main elements that form a Sentient City and their measurement values. The Sentient City is a variation of the Smart City, also based on technological progress and innovation, but one in which the citizens are the principal agent. In this model, governments aim for a participatory and sustainable system for achieving the Knowledge Society, the development of Collective Intelligence, and the city's efficiency. They also increase communication channels between the administration and citizens. In this new context, citizens are empowered because they have the opportunity to create a local identity and transform their surroundings through open and horizontal initiatives.

  18. The Spiral-Interactive Program Evaluation Model.

    Science.gov (United States)

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  19. Investigation of pool boiling dynamics on a rectangular heater using nano-thermocouples: is it chaotic or stochastic?

    Energy Technology Data Exchange (ETDEWEB)

    Sathyamurthi, Vijaykumar; Banerjee, Debjyoti [Texas A and M University, College Station, TX (United States). Dept. of Mechanical Engineering], e-mail: dbanerjee@tamu.edu

    2009-07-01

    The non-linear dynamical model of pool boiling on a horizontal rectangular heater is assessed from experimental results in this study. Pool boiling experiments are conducted over a horizontal rectangular silicon substrate measuring 63 mm x 35 mm with PF-5060 as the test fluid. Novel nano-thermocouples, micro-machined in situ on the silicon substrate, are used to measure the surface temperature fluctuations for steady-state pool boiling. The acquisition frequency for temperature data from the nano-thermocouples is 1 kHz. The surface temperature fluctuations are analyzed using the TISEAN package. A time-delay embedding is employed to generate higher-dimensional phase-space vectors from the temperature time series record. The optimal delay is determined from the first minimum of the mutual information function. Techniques such as recurrence plots and false nearest neighbors tests are employed to assess the presence of deterministic chaotic dynamics. Chaos quantifiers such as correlation dimensions are found for various pool boiling regimes using the raw data as well as noise-reduced data. Additionally, pseudo-phase spaces are used to reconstruct the 'attractors'. The results after non-linear noise reduction show the definitive presence of low-dimensional (d ≤ 7) chaos in fully developed nucleate boiling, at critical heat flux and in film boiling. (author)

  20. Investigation of pool boiling dynamics on a rectangular heater using nano-thermocouples: is it chaotic or stochastic?

    International Nuclear Information System (INIS)

    Sathyamurthi, Vijaykumar; Banerjee, Debjyoti

    2009-01-01

    The non-linear dynamical model of pool boiling on a horizontal rectangular heater is assessed from experimental results in this study. Pool boiling experiments are conducted over a horizontal rectangular silicon substrate measuring 63 mm x 35 mm with PF-5060 as the test fluid. Novel nano-thermocouples, micro-machined in situ on the silicon substrate, are used to measure the surface temperature fluctuations for steady-state pool boiling. The acquisition frequency for temperature data from the nano-thermocouples is 1 kHz. The surface temperature fluctuations are analyzed using the TISEAN package. A time-delay embedding is employed to generate higher-dimensional phase-space vectors from the temperature time series record. The optimal delay is determined from the first minimum of the mutual information function. Techniques such as recurrence plots and false nearest neighbors tests are employed to assess the presence of deterministic chaotic dynamics. Chaos quantifiers such as correlation dimensions are found for various pool boiling regimes using the raw data as well as noise-reduced data. Additionally, pseudo-phase spaces are used to reconstruct the 'attractors'. The results after non-linear noise reduction show the definitive presence of low-dimensional (d ≤ 7) chaos in fully developed nucleate boiling, at critical heat flux and in film boiling. (author)
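
The embedding procedure common to both records, estimating the mutual information of the series against a lagged copy of itself, taking its first minimum as the delay, and stacking lagged samples into phase-space vectors, can be sketched as below. This is a toy histogram-based estimate, not the TISEAN implementation:

```python
from math import log, sin, pi

def mutual_information(x, lag, bins=16):
    """Histogram estimate of I(x_t; x_{t+lag}) in nats."""
    a, b = x[:-lag], x[lag:]
    lo, hi = min(x), max(x)
    idx = lambda v: min(int((v - lo) / (hi - lo) * bins), bins - 1)
    n = len(a)
    pxy, px, py = {}, {}, {}
    for u, v in zip(a, b):
        i, j = idx(u), idx(v)
        pxy[(i, j)] = pxy.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    return sum(c / n * log(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())

def first_minimum_delay(x, max_lag=50):
    """Lag of the first local minimum of the mutual information."""
    mi = [mutual_information(x, lag) for lag in range(1, max_lag + 1)]
    for k in range(1, len(mi) - 1):
        if mi[k] < mi[k - 1] and mi[k] <= mi[k + 1]:
            return k + 1                     # lags are 1-based
    return max_lag

def embed(x, dim, delay):
    """Time-delay embedding into dim-dimensional phase-space vectors."""
    return [tuple(x[i + k * delay] for k in range(dim))
            for i in range(len(x) - (dim - 1) * delay)]

series = [sin(2 * pi * t / 40) for t in range(2000)]
tau = first_minimum_delay(series)
vectors = embed(series, dim=3, delay=tau)
```

Quantifiers such as the correlation dimension are then computed on `vectors`; for real 1 kHz temperature records one would also apply the noise reduction the abstracts describe before embedding.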

  1. Cladding temperature measurement by thermocouples at preirradiated LWR fuel rod samples

    International Nuclear Information System (INIS)

    Leiling, W.

    1981-12-01

    This report describes the technique used to measure cladding temperatures of test fuel rod samples during the in-pile tests on fuel rod failure in the steam loop of the FR2 reactor. NiCr/Ni thermocouples with stainless steel and Inconel sheaths, respectively, of 1 mm diameter were resistance spot-welded to the outside of the fuel rod cladding. For the pre-irradiated test specimens, welding had to be done under hot-cell conditions, i.e. with remote handling. In order to prevent the formation of eutectics between zirconium and the chemical elements of the thermocouple sheath at elevated temperatures, the thermocouples were covered with a platinum jacket of 1.4 mm outside diameter swaged onto the sheath in the area of the measuring junction. This thermocouple design worked satisfactorily in the in-pile experiments performed in a steam atmosphere. Even in the heatup phase, in which cladding temperatures up to 1050 °C were reached, only very few failures occurred. This good performance is in great part due to careful control and thorough inspection of the thermocouples. (orig.) [de

  2. Establishment of the Co-C Eutectic Fixed-Point Cell for Thermocouple Calibrations at NIMT

    Science.gov (United States)

    Ongrai, O.; Elliott, C. J.

    2017-08-01

    In 2015, NIMT first established a Co-C eutectic temperature reference (fixed-point) cell measurement capability for thermocouple calibration to support the requirements of Thailand's heavy industries and secondary laboratories. The Co-C eutectic fixed-point cell is a facility transferred from NPL, where the design was developed through European and UK national measurement system projects. In this paper, we describe the establishment of a Co-C eutectic fixed-point cell for thermocouple calibration at NIMT. The paper demonstrates achievement of the required furnace uniformity, realization of the Co-C plateau, and comparison data between the NIMT and NPL Co-C cells obtained using the same standard Pt/Pd thermocouple, demonstrating traceability. The NIMT measurement capability for noble metal thermocouples at the new Co-C eutectic fixed point (1324.06 °C) is estimated to be within ± 0.60 K (k=2). This meets the needs of Thailand's high-temperature thermocouple users, for whom there has previously been no traceable calibration facility.
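
A capability statement such as ±0.60 K (k = 2) is typically assembled by combining the component standard uncertainties of the budget in quadrature and applying the coverage factor. A generic sketch (the component values are invented for illustration and are not NIMT's budget):

```python
def expanded_uncertainty(components, k=2.0):
    """Root-sum-square of standard uncertainties, times coverage factor k."""
    return k * sum(u * u for u in components) ** 0.5

# Hypothetical budget [K]: plateau repeatability, furnace uniformity,
# reference thermocouple, voltmeter, cold-junction compensation.
budget = [0.15, 0.18, 0.12, 0.05, 0.04]
U = expanded_uncertainty(budget)   # expanded uncertainty at k = 2
```

With these invented components the expanded uncertainty comes to about 0.54 K, i.e. a budget of this shape is consistent with a stated capability of ±0.60 K (k = 2).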

  3. R and D advances in high temperature thermocouples for nuclear utilization in severe environment

    International Nuclear Information System (INIS)

    Schley, R.; Blanc, J.Y.

    1984-09-01

    Safety experiments for water reactors in Cadarache have made necessary a research program to develop special thermocouples for use in severe fuel damage conditions (superheated steam). Standard cladding thermocouples (type K, alumina insulated, zircaloy sheathed, O.D. 0.7 mm) must be replaced by others with W3Re versus W25Re legs, a Ta sheath protected by a zircaloy outer sheath, and hafnia or thoria insulation. The zircaloy outer sheath is sufficient to protect the tantalum adequately. Fuel centerline thermocouples have W5Re versus W26Re or W3Re versus W25Re legs, hard-fired thoria insulation and a rhenium CVD sheath (O.D. 1.1 mm). A protective ReSi2 coating is applied; this protection withstands at least 45 minutes at 1600 °C in steam. Tests are being done concerning: a) materials compatibilities in helium between 1400 °C and 2000 °C, b) qualification of prototypes (in Saclay or Grenoble), c) determination of errors due to degradation of the insulation resistance of thermocouple cables (with magnesia, hafnia, alumina), d) Ir or Re protective coatings by the CVD process, other coatings by ion bombardment, etc. A completely new type of hot junction has been patented. Future work will include completion of these tests, fabrication of Mo-Nb alloy thermocouple legs able to withstand heavy neutron fluence, and the use of ceramic glues

  4. Relocation work of temporary thermocouples for measuring the vessel cooling system in the safety demonstration test

    International Nuclear Information System (INIS)

    Shimazaki, Yosuke; Shinohara, Masanori; Ono, Masato; Yanagi, Shunki; Tochio, Daisuke; Iigaki, Kazuhiko

    2012-05-01

    It is necessary to confirm that the temperature of the water cooling panel of the vessel cooling system (VCS) stays below the allowable working temperature during the safety demonstration test, because the panel temperature rises when the cooling water circulation pumps stop. Therefore, several temporary thermocouples were relocated to the water cooling panel near the stabilizers of the RPV and to the side cooling panel outlet ring header of the VCS in order to observe the temperature change of the VCS. The relocated thermocouples can measure the temperature change upon starting of the VCS cooling water circulation pumps. It was thus confirmed that the relocated thermocouples can observe the VCS temperature change in the safety demonstration test. (author)

  5. Difference equation approach to two-thermocouple sensor characterization in constant velocity flow environments

    International Nuclear Information System (INIS)

    Hung, P.C.; Irwin, G.; Kee, R.; McLoone, S.

    2005-01-01

    Thermocouples are one of the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in certain harsh environments, for example, in combustion systems and engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This article discusses a software compensation technique to address the loss of high frequency fluctuations based on measurements from two thermocouples. In particular, a difference equation (DE) approach is proposed and compared with existing methods both in simulation and on experimental test rig data with constant flow velocity. It is found that the DE algorithm, combined with the use of generalized total least squares for parameter identification, provides better performance in terms of time constant estimation without any a priori assumption on the time constant ratios of the thermocouples
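
The underlying sensor model in such two-thermocouple schemes is first order: tau * dy/dt + y = T_gas. Each sensor then yields a reconstruction T = y + tau * dy/dt of the common gas temperature, and agreement between the two reconstructions is what identification methods such as the proposed DE approach exploit. A sketch with the time constants assumed known (the paper's contribution is precisely estimating them from noisy data, which this toy skips):

```python
from math import sin, pi

def simulate_sensor(T, tau, dt):
    """First-order lag response (explicit Euler) of a thermocouple."""
    y, out = T[0], []
    for Tg in T:
        y += dt / tau * (Tg - y)
        out.append(y)
    return out

def reconstruct(y, tau, dt):
    """Invert the lag: T = y + tau * dy/dt (forward difference).
    Under this discretization it recovers the input one sample ahead."""
    return [y[i] + tau * (y[i + 1] - y[i]) / dt for i in range(len(y) - 1)]

dt = 1e-3
T_gas = [300 + 20 * sin(2 * pi * 5 * i * dt) for i in range(4000)]
y1 = simulate_sensor(T_gas, tau=0.040, dt=dt)   # thin wire, fast
y2 = simulate_sensor(T_gas, tau=0.120, dt=dt)   # thick wire, slow
r1 = reconstruct(y1, 0.040, dt)
r2 = reconstruct(y2, 0.120, dt)
```

Both attenuated sensor outputs reconstruct the same fluctuating input; with noisy measurements the derivative term amplifies noise, which is why robust identification (e.g. generalized total least squares, as in the article) is needed in practice.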

  6. Evaluation of Models of the Reading Process.

    Science.gov (United States)

    Balajthy, Ernest

    A variety of reading process models have been proposed and evaluated in reading research. Traditional approaches to model evaluation specify the workings of a system in a simplified fashion to enable organized, systematic study of the system's components. Following are several statistical methods of model evaluation: (1) empirical research on…

  7. Reply to ''Comment on 'Thermocouple temperature measurements in shock-compressed solids' ''

    International Nuclear Information System (INIS)

    Bloomquist, D.D.; Sheffield, S.A.

    1982-01-01

    We disagree with the interpretation offered in the above comment. The suggestion was made that the anomalously fast response of thin-foil thermocouples reported previously is the result of strain dependence of the thermocouple response and not shock enhanced thermal equilibration. Although the emplacement geometry has a profound effect on the response of embedded thin-foil temperature gauges as noted in the above comment, the evidence presented, along with recent results discussed in this reply, do not support the conclusions presented in the above comment

  8. Accuracy of small diameter sheathed thermocouples for the core flow test loop

    International Nuclear Information System (INIS)

    Anderson, R.L.; Kollie, T.G.

    1979-04-01

    This report summarizes the research and development on 0.5-mm-diameter, compacted, metal sheathed thermocouples. The objectives of this research effort have been: to identify and analyze the sources of temperature measurement errors in the use of 0.5-mm-diameter sheathed thermocouples to measure the surface temperature of the cladding of fuel-rod simulators in the Core Flow Test Loop (CFTL) at ORNL; to devise methods for reducing or correcting for these temperature measurement errors; to estimate the overall temperature measurement uncertainties; and to recommend modifications in the manufacture, installation, or materials used to minimize temperature measurement uncertainties in the CFTL experiments

  9. Development of thermocouple re-instrumentation technique for irradiated fuel rod. Techniques for making center hole into UO2 pellets and thermocouple re-instrumentation to fuel rod

    International Nuclear Information System (INIS)

    Shimizu, Michio; Saito, Junichi; Oshima, Kunio

    1995-07-01

    The information on FP gas pressure and the centerline temperature of fuel pellets during power transients is important for studying the pellet-clad interaction (PCI) mechanism of high burnup LWR fuel rods. At the Department of JMTR, a re-instrumentation technique for fitting an FP gas pressure gage to an irradiated fuel rod was developed in 1990. A thermocouple re-instrumentation technique was then successfully developed in 1994. Two steps were taken to carry out the development program of the thermocouple re-instrumentation technique. In the first step, a drilling technique was developed for making a center hole in the irradiated fuel pellets. Various drilling tests were carried out using dummy fuel rods consisting of Ba2FeO3 pellets and Zry-2 cladding. In this work it was important to keep the pellets in the cracked state produced at a power reactor. In these tests, a technique of fixing the pellets with frozen CO2 was used during the drilling work, and diamond drills were used to make the center hole. These tests were completed successfully: a center hole, 54 mm deep and 2.5 mm in diameter, was achieved by these methods. The second step of the program was an in-pile demonstration test on an irradiated fuel rod instrumented with both a thermocouple and an FP gas pressure gage. The demonstration test was carried out at the JMTR in 1995. (author)

  10. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM)
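
Tools of this kind score a model's predicted time series against observations with summary statistics; two of the most common for hydrologic models are the root-mean-square error and the Nash-Sutcliffe efficiency. A sketch of both (generic metrics, not necessarily MPESA's exact suite):

```python
def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    n = len(obs)
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / n) ** 0.5

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE/SST; 1 is a perfect fit, <= 0 means the model is
    no better than predicting the observed mean."""
    mean = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean) ** 2 for o in obs)
    return 1 - sse / sst

# Illustrative observed vs simulated flows:
obs = [2.0, 3.5, 5.0, 4.0, 3.0]
sim = [2.2, 3.4, 4.7, 4.1, 3.2]
score = nash_sutcliffe(obs, sim)
error = rmse(obs, sim)
```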

  11. Home-made temperature monitoring system from four-channel K-type thermocouples via internet of thing technology platform

    Science.gov (United States)

    Detmod, Thitaporn; Özmen, Yiğiter; Songkaitiwong, Kittiphot; Saenyot, Khanuengchat; Locharoenrat, Kitsakorn; Lekchaum, Sarai

    2018-06-01

    This paper aims to design and construct a home-made temperature monitoring system from four-channel K-type thermocouples in order to improve temperature measurement in line with standard measurement-evaluation guidance. The temperature monitoring system is capable of recording the temperature to an SD card and displaying the real-time temperature on an Internet of Things technology platform. The system was tested in terms of temperature measurement accuracy and response delay. The standard deviation was found to be acceptable compared with Instrument Society of America requirements. The response time of the microcontroller to the SD card was 2 s faster than that of the microcontroller to ThingSpeak.
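
The IoT leg of such a logger typically pushes each reading to ThingSpeak's REST update endpoint, which takes an API key and numbered field parameters. A sketch of building that request for four thermocouple channels (the endpoint follows ThingSpeak's public API; the key and readings are placeholders):

```python
# Build the HTTP GET request a four-channel logger would issue to push
# one sample to ThingSpeak. (Endpoint per ThingSpeak's public REST API;
# the write API key below is a placeholder.)

def thingspeak_update_url(api_key, temps_c):
    base = "https://api.thingspeak.com/update"
    fields = "&".join(f"field{i + 1}={t:.2f}" for i, t in enumerate(temps_c))
    return f"{base}?api_key={api_key}&{fields}"

url = thingspeak_update_url("XXXXXXXX", [25.31, 25.40, 180.02, 179.88])
```

The microcontroller issues one such request per sample interval; the observed 2 s difference between SD-card and ThingSpeak paths is the network round trip this request adds.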

  12. Experiences with W3Re/W25Re thermocouples in fuel pins of NS Otto Hahn's two cores

    International Nuclear Information System (INIS)

    Kolb, M.

    1975-01-01

    Applications and performance of thermocouples in the Otto Hahn reactor are presented. The measurement of effective thermocouple time constants and of fuel rod heat transfer time constants using reactor noise and the resulting small temperature fluctuations, which has become practical with the advent of modern noise analysis systems, is dealt with

  13. Low drift type N thermocouples in out-of-pile advanced gas reactor mock-up test: metallurgical analysis

    International Nuclear Information System (INIS)

    Scervini, M.; Palmer, J.; Haggard, D.C.; Swank, W.D.

    2015-01-01

    Thermocouples are the most commonly used sensors for temperature measurement in nuclear reactors. They are crucial for the control of current nuclear reactors and for the development of GEN IV reactors. In nuclear applications thermocouples are strongly affected by intense neutron fluxes. As a result of the interaction with neutrons, the thermoelements of the thermocouples undergo transmutation, which produces a time-dependent change in composition and, as a consequence, a time-dependent drift of the thermocouple signal. Thermocouple drift can be very significant for in-pile temperature measurements and may render the temperature sensors unreliable after exposure to nuclear radiation for relatively short times compared to the life required of temperature sensors in nuclear applications. Previous experience with type K thermocouples in nuclear reactors has shown that they are affected by neutron irradiation only to a limited extent. Similarly, type N thermocouples are expected to be only slightly affected by neutron fluxes. Currently the use of nickel-based thermocouples is limited to temperatures lower than 1000 deg. C due to drift related to phenomena other than nuclear irradiation. As part of a collaboration between Idaho National Laboratory (INL) and the University of Cambridge, a variety of type N thermocouples have been exposed at INL in an Advanced Gas Reactor mock-up test at 1150 deg. C for 2000 h, 1200 deg. C for 2000 h, 125 deg. C for 200 h and 1300 deg. C for 200 h, and later analysed metallurgically at the University of Cambridge. The use of electron microscopy makes it possible to identify the metallurgical changes occurring in the thermocouples during high-temperature exposure and to correlate the time-dependent thermocouple drift with the microscopic changes experienced by the thermoelements of different thermocouple designs.
In this paper conventional Inconel 600 sheathed type N thermocouples and a type N using a customized sheath developed at the University of

  14. Low drift type N thermocouples in out-of-pile advanced gas reactor mock-up test: metallurgical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Scervini, M. [University of Cambridge, Department of Materials Science and Metallurgy, 27 Charles Babbage Road, CB30FS Cambridge, (United Kingdom); Palmer, J.; Haggard, D.C.; Swank, W.D. [Idaho National Laboratory, Idaho Falls, ID 83415-3840, (United States)

    2015-07-01

    Thermocouples are the most commonly used sensors for temperature measurement in nuclear reactors. They are crucial for the control of current nuclear reactors and for the development of GEN IV reactors. In nuclear applications thermocouples are strongly affected by intense neutron fluxes. As a result of the interaction with neutrons, the thermoelements of the thermocouples undergo transmutation, which produces a time-dependent change in composition and, as a consequence, a time-dependent drift of the thermocouple signal. Thermocouple drift can be very significant for in-pile temperature measurements and may render the temperature sensors unreliable after exposure to nuclear radiation for relatively short times compared to the life required of temperature sensors in nuclear applications. Previous experience with type K thermocouples in nuclear reactors has shown that they are affected by neutron irradiation only to a limited extent. Similarly, type N thermocouples are expected to be only slightly affected by neutron fluxes. Currently the use of nickel-based thermocouples is limited to temperatures lower than 1000 deg. C due to drift related to phenomena other than nuclear irradiation. As part of a collaboration between Idaho National Laboratory (INL) and the University of Cambridge, a variety of type N thermocouples have been exposed at INL in an Advanced Gas Reactor mock-up test at 1150 deg. C for 2000 h, 1200 deg. C for 2000 h, 125 deg. C for 200 h and 1300 deg. C for 200 h, and later analysed metallurgically at the University of Cambridge. The use of electron microscopy makes it possible to identify the metallurgical changes occurring in the thermocouples during high-temperature exposure and to correlate the time-dependent thermocouple drift with the microscopic changes experienced by the thermoelements of different thermocouple designs.
In this paper conventional Inconel 600 sheathed type N thermocouples and a type N using a customized sheath developed at the University of

  15. Computer subroutines to aid analysis of experimental data from thermocouples and pressure transducers

    International Nuclear Information System (INIS)

    Durham, M.E.

    1976-08-01

    Three subroutines (CALSET, CALBR8 and PTRCAL) have been written to provide a convenient system for converting experimental measurements obtained from thermocouples and pressure transducers to temperatures and pressures. The method of operation and the application of the subroutines are described. (author)
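
The conversion such subroutines perform is evaluation of an inverse polynomial t = Σ d_i E^i for the thermocouple type at hand. A sketch for type K, using the ITS-90 inverse coefficients for the 0 mV to 20.644 mV (0 °C to 500 °C) range as published by NIST (transcribed here from the standard tables; verify against the official source before relying on them):

```python
# Type K emf (mV, reference junction at 0 C) -> temperature (C),
# valid only in the 0 to 20.644 mV range of this coefficient set.
D = [0.0, 2.508355e1, 7.860106e-2, -2.503131e-1, 8.315270e-2,
     -1.228034e-2, 9.804036e-4, -4.413030e-5, 1.057734e-6, -1.052755e-8]

def type_k_temperature(emf_mv):
    """Evaluate the inverse polynomial t = sum(D[i] * E**i)."""
    t, p = 0.0, 1.0
    for d in D:
        t += d * p
        p *= emf_mv
    return t
```

A subroutine like CALBR8 would additionally select the coefficient set for the sensor type and range, and apply any calibration corrections stored by CALSET.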

  16. Measuring skin temperature before, during and after exercise: a comparison of thermocouples and infrared thermography

    International Nuclear Information System (INIS)

    Fernandes, Alex de Andrade; Amorim, Paulo Roberto dos Santos; De Moura, Anselmo Gomes; Moreira, Danilo Gomes; Costa, Carlos Magno Amaral; Marins, João Carlos Bouzas; Brito, Ciro José; Sillero-Quintana, Manuel

    2014-01-01

    Measuring skin temperature (TSK) provides important information about the complex thermal control system and can be of interest when carrying out studies on thermoregulation. The most common method of recording TSK involves thermocouples at specific locations; however, the use of infrared thermal imaging (IRT) has increased. The two methods use different physical processes to measure TSK, and each has advantages and disadvantages. Therefore, the objective of this study was to compare mean skin temperature (MTSK) measurements using thermocouples and IRT in three different situations: pre-exercise, exercise and post-exercise. Analysis of the residual scores in Bland-Altman plots showed poor agreement between the MTSK obtained using thermocouples and those using IRT. The averaged error was −0.75 °C pre-exercise, 1.22 °C during exercise and −1.16 °C post-exercise, and the reliability between the methods was low in the pre-exercise (ICC = 0.75 [0.12 to 0.93]), exercise (ICC = 0.49 [−0.80 to 0.85]) and post-exercise (ICC = 0.35 [−1.22 to 0.81]) conditions. Thus, there is poor correlation between the values of MTSK measured by thermocouples and IRT pre-exercise, during exercise and post-exercise, and low reliability between the two forms of measurement. (paper)
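
The Bland-Altman analysis used in this comparison reduces to computing the mean difference (bias) between paired measurements and the 95% limits of agreement at bias ± 1.96 standard deviations of the differences. A sketch (the paired values below are made up, not the study's data):

```python
def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Paired mean skin temperatures (C), thermocouples vs IRT (illustrative):
tc  = [32.1, 32.8, 33.0, 31.6, 32.4]
irt = [32.9, 33.4, 33.8, 32.5, 33.1]
bias, lo, hi = bland_altman(tc, irt)
```

A systematic bias like the negative pre-exercise offset reported above shows up directly in `bias`, while wide limits of agreement indicate the poor method-to-method agreement the study found.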

  17. Direct Measurement of Neutral/Ion Beam Power using Thermocouple Analysis

    International Nuclear Information System (INIS)

    Day, I.; Gee, S.

    2006-01-01

    Modern Neutral Beam Injection systems such as those used on JET and MAST routinely use thermocouples embedded close to the surface of beam stopping elements, such as calorimeters and ion dumps, coupled to high-speed data acquisition systems to determine beam profile and position from temperature rise data. With the availability of low-cost data acquisition and storage systems it is now possible to record data from all thermocouples in a fully instrumented calorimeter or ion dump on 20 ms timescales or better. This sample rate is sufficiently fast to enable the thermocouple data to be used to calculate the incident power density from 1-D heat transfer theory. This power density data, coupled with appropriate Gaussian fits, enables determination of the 2-D beam profile and thus allows an instantaneous and direct measurement of beam power. The theory and methodology required to analyse the fast thermocouple data from the MAST calorimeter and residual ion dump thermocouples are presented, and direct measurements of beam power density are demonstrated. The power of desktop computers allows such analysis to be carried out virtually instantaneously. The methods used to automate this analysis are discussed in detail. A code utilising the theory and methodology has been developed to allow immediate measurements of beam power on a pulse-by-pulse basis. The uncertainty in determining the beam power density is shown to be less than 10%. The power density data are then fitted to a 2-D Gaussian beam profile and integrated to establish the total beam power. Results of this automated analysis for the neutral beam and residual ion power of the MAST duopigatron and PINI NBI systems are presented. This technology could be applied to a beam power safety interlock system. The application to a beam shine-through protection system for the inner wall of the JET tokamak is discussed as an example. (author)
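
The 1-D inversion step, turning a near-surface temperature history into incident power density for a semi-infinite solid, is commonly done with a discrete formula of the Cook-Felderman type. A sketch (a generic formula, not the MAST code; copper-like properties are assumed in the check):

```python
from math import pi, sqrt

def cook_felderman(T, dt, rho, c, k):
    """Surface heat flux history [W/m^2] from surface temperature samples
    T [K] at spacing dt [s], for a semi-infinite solid of density rho
    [kg/m^3], specific heat c [J/kgK], conductivity k [W/mK]:
      q(t_n) = 2*sqrt(rho*c*k/pi) *
               sum_i (T_i - T_{i-1}) / (sqrt(t_n - t_{i-1}) + sqrt(t_n - t_i))
    """
    coef = 2.0 * sqrt(rho * c * k / pi)
    q = []
    for n in range(1, len(T)):
        s = sum((T[i] - T[i - 1]) /
                (sqrt((n - i + 1) * dt) + sqrt((n - i) * dt))
                for i in range(1, n + 1))
        q.append(coef * s)
    return q

rho, c, k = 8960.0, 385.0, 400.0      # copper-like dump material
q0, dt = 1.0e6, 1e-3                  # 1 MW/m^2 step, 1 kHz-like sampling
# Analytic surface response to a constant flux: T_s = 2*q0*sqrt(t/(pi*rho*c*k))
T_hist = [300.0 + 2 * q0 * sqrt(i * dt / (pi * rho * c * k)) for i in range(200)]
q_rec = cook_felderman(T_hist, dt, rho, c, k)
q_zero = cook_felderman([300.0] * 50, dt, rho, c, k)
```

Feeding the analytic constant-flux temperature response back through the inversion recovers the imposed flux, which is the consistency check such codes typically run before fitting the 2-D Gaussian profile.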

  18. Thermal Recovery from Cold-Working in Type K Bare-Wire Thermocouples

    Science.gov (United States)

    Greenen, A. D.; Webster, E. S.

    2017-12-01

    Cold-working of most thermocouples has a significant, direct impact on the Seebeck coefficient, which can lead to regions of thermoelectric inhomogeneity and accelerated drift. Cold-working can occur during the wire swaging process, when winding the wire onto a bobbin, or during handling by the end user, either accidentally or deliberately. Swaging-induced cold-work in thermocouples, if uniformly applied, may result in a high level of homogeneity. However, on exposure to elevated temperatures, the subsequent recovery from the cold-working can result in significant drift, and this can in turn lead to erroneous temperature measurements, often in excess of the specified manufacturer tolerances. Several studies have investigated the effects of cold-work in Type K thermocouples, usually by bending or swaging. However, the amount of cold-work applied to the thermocouple is often difficult to quantify, as the mechanisms for applying the strains are typically nonlinear when applied in this fashion. Here, a repeatable level of cold-working is applied to the different wires using a tensional loading apparatus to apply a known yield displacement to the thermoelements. The effects of thermal recovery from cold-working can then be accurately quantified as a function of temperature, using a linear gradient furnace and a high-resolution homogeneity scanner. Variation in these effects due to differing alloy compositions in Type K wire is also explored by sourcing wire from a selection of manufacturers. The information gathered in this way will inform users of Type K thermocouples about the potential consequences of varying levels of cold-working and its impact on the Seebeck coefficient at temperatures between ~70 °C and 600 °C. This study will also guide users on the temperatures required to rapidly alleviate the effects of cold-working using thermal annealing treatments.

  19. Long Hole Film Cooling Dataset for CFD Development. Part 1; Infrared Thermography and Thermocouple Surveys

    Science.gov (United States)

    Shyam, Vikram; Thurman, Douglas; Poinsatte, Phillip; Ameri, Ali; Eichele, Peter; Knight, James

    2013-01-01

    An experiment investigating flow and heat transfer of long (length-to-diameter ratio of 18) cylindrical film cooling holes has been completed. In this paper, the thermal field in the flow and on the surface of the film-cooled flat plate is presented for nominal freestream turbulence intensities of 1.5 and 8 percent. The holes are inclined at 30° above the downstream direction, injecting chilled air of density ratio 1.0 onto the surface of a flat plate. The diameter of the hole is 0.75 in. (0.01905 m) with center-to-center spacing (pitch) of 3 hole diameters. Coolant was injected into the mainstream flow at nominal blowing ratios of 0.5, 1.0, 1.5, and 2.0. The Reynolds number of the freestream was approximately 11,000 based on hole diameter. Thermocouple surveys were used to characterize the thermal field. Infrared thermography was used to determine the adiabatic film effectiveness on the plate. Hotwire anemometry was used to provide flowfield physics and turbulence measurements. The results are compared to existing data in the literature. The aim of this work is to produce a benchmark dataset for Computational Fluid Dynamics (CFD) development that eliminates the effects of hole length-to-diameter ratio and improves resolution in the near-hole region. In this report, a Time-Filtered Navier-Stokes (TFNS) method, also known as Partially Resolved Navier-Stokes (PRNS), implemented in the Glenn-HT code is used to model coolant-mainstream interaction. This method is a high-fidelity unsteady method that aims to represent large-scale flow features and mixing more accurately.
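
The two parameters organizing this dataset are the blowing ratio M = (rho_c U_c)/(rho_inf U_inf) and the adiabatic film effectiveness eta = (T_inf - T_aw)/(T_inf - T_c). A sketch of the standard definitions (the numbers are illustrative, not values from the experiment):

```python
def blowing_ratio(rho_c, U_c, rho_inf, U_inf):
    """M: coolant mass flux over freestream mass flux."""
    return (rho_c * U_c) / (rho_inf * U_inf)

def film_effectiveness(T_inf, T_aw, T_c):
    """eta: 0 = adiabatic wall at freestream temperature (no cooling),
    1 = adiabatic wall at coolant temperature (fully protected)."""
    return (T_inf - T_aw) / (T_inf - T_c)

M = blowing_ratio(1.2, 10.0, 1.2, 20.0)        # density ratio 1.0, as in the test
eta = film_effectiveness(300.0, 285.0, 270.0)  # from an IRT-measured T_aw
```

In the experiment, eta is mapped over the plate from the infrared adiabatic wall temperature at each nominal M.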

  20. Thermocouple psychrometer measurements of in situ water potential changes in heated welded tuff

    International Nuclear Information System (INIS)

    Mao, Nai-hsien; Wang, H.F.

    1991-05-01

    Ten thermocouple psychrometers (TCPs) to measure water potential (WP) were installed in three holes in G-Tunnel at the Nevada Test Site as part of the Prototype Engineered Barrier System Field Tests. These integrated tests measured several parameters as a function of location and time within a few meters of a heater emplaced in welded tuff. The primary goal of the TCP experiment was to find out whether the combination of laboratory calibration and field use of the TCP can provide useful data for determining the change of moisture condition in the field. We calibrated the TCPs in NaCl solutions up to 80 °C (176 °F) in the laboratory. In two holes, we used rubber sleeves and packers to house the TCPs, and in the third hole, we used foam. All three holes were grouted behind the TCP assemblages. Field results of the heater test showed that small temperature gradients were present for all measurements. Nevertheless, the WP calibration made the necessary correction for the nonisothermal condition. A drying and re-wetting cycle peaked at about day 140 with a WP of -65 bar in borehole P3, located below the heater. A similar cycle, but reduced in scale, was found at about day 175 with a WP of -45 bar in borehole P2, above the heater. This difference in drying behavior above and below the heater was also observed from neutron data and was explained as a gravity effect. As temperatures increased, the evaporation rate of pore water increased. In unfractured rock, the gas-phase flow was primarily outward. Water condensed above the heater would drain back to keep the boiling region wet, but water condensed below the heater would drain away from the boiling region. This conceptual model explained both the time and magnitude differences for data from holes above and below the heater. 7 refs., 14 figs., 2 tabs

  1. Educational game models: conceptualization and evaluation ...

    African Journals Online (AJOL)

    Educational game models: conceptualization and evaluation. ... The Game Object Model (GOM), which marries educational theory and game design, forms the basis for the development of the Persona Outlining ...

  2. The EMEFS model evaluation. An interim report

    Energy Technology Data Exchange (ETDEWEB)

    Barchet, W.R. [Pacific Northwest Lab., Richland, WA (United States); Dennis, R.L. [Environmental Protection Agency, Research Triangle Park, NC (United States); Seilkop, S.K. [Analytical Sciences, Inc., Durham, NC (United States); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. [Atmospheric Environment Service, Downsview, ON (Canada); Byun, D.; McHenry, J.N. [Computer Sciences Corp., Research Triangle Park, NC (United States); Karamchandani, P.; Venkatram, A. [ENSR Consulting and Engineering, Camarillo, CA (United States); Fung, C.; Misra, P.K. [Ontario Ministry of the Environment, Toronto, ON (Canada); Hansen, D.A. [Electric Power Research Inst., Palo Alto, CA (United States); Chang, J.S. [State Univ. of New York, Albany, NY (United States). Atmospheric Sciences Research Center

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
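    The difference statistics and correlations the protocol uses to quantify model performance can be sketched as below. The predicted/observed values are invented for illustration and have no connection to the EMEFS data.

```python
import math

def difference_stats(predicted, observed):
    """Bias, RMSE, and Pearson correlation of predicted vs. observed values."""
    n = len(predicted)
    diffs = [p - o for p, o in zip(predicted, observed)]
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mp, mo = sum(predicted) / n, sum(observed) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    return bias, rmse, cov / (sp * so)

bias, rmse, r = difference_stats([1.1, 2.0, 2.9], [1.0, 2.0, 3.0])
print(bias, rmse, r)
```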

  3. Evaluation of a decontamination model

    International Nuclear Information System (INIS)

    Rippin, D.W.T.; Hanulik, J.; Schenker, E.; Ullrich, G.

    1981-02-01

    In the scale-up of a laboratory decontamination process difficulties arise due to the limited understanding of the mechanisms controlling the process. This paper contains some initial proposals which may contribute to the quantitative understanding of the chemical and physical factors which influence decontamination operations. General features required in a mathematical model to describe a fluid-solid reaction are discussed, and initial work is presented with a simple model which has had some success in describing the observed laboratory behaviour. (Auth.)

  4. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
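    The consistency metric described, model uncertainty ranges overlapping study confidence intervals, reduces to a simple interval-overlap check. The intervals below are the 30-year cumulative-risk figures quoted in the abstract.

```python
def ranges_overlap(model_range, study_ci):
    """True if the model uncertainty range and the study CI share any values."""
    (m_lo, m_hi), (s_lo, s_hi) = model_range, study_ci
    return m_lo <= s_hi and s_lo <= m_hi

# 30-year cumulative cancer risk (%), inadequately treated women:
print(ranges_overlap((30.9, 49.7), (28.4, 48.3)))  # True -> consistent
# 30-year cumulative cancer risk (%), appropriately treated women:
print(ranges_overlap((0.7, 1.3), (0.4, 3.3)))      # True -> consistent
```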

  5. Site descriptive modelling - strategy for integrated evaluation

    International Nuclear Information System (INIS)

    Andersson, Johan

    2003-02-01

    The current document establishes the strategy to be used for achieving sufficient integration between disciplines in producing Site Descriptive Models during the Site Investigation stage. The Site Descriptive Model should be a multidisciplinary interpretation of geology, rock mechanics, thermal properties, hydrogeology, hydrogeochemistry, transport properties and ecosystems, using site investigation data from deep boreholes and from the surface as input. The modelling comprises the following iterative steps: evaluation of primary data, descriptive and quantitative modelling (in 3D), and overall confidence evaluation. Data are first evaluated within each discipline and then the evaluations are checked between the disciplines. Three-dimensional modelling (i.e. estimating the distribution of parameter values in space and its uncertainty) is made in a sequence, where the geometrical framework is taken from the geological model and in turn used by the rock mechanics, thermal and hydrogeological modelling, etc. The three-dimensional description should present the parameters with their spatial variability over a relevant and specified scale, with the uncertainty included in this description. Different alternative descriptions may be required. After the individual discipline modelling and uncertainty assessment, a phase of overall confidence evaluation follows. Relevant parts of the different modelling teams assess the suggested uncertainties and evaluate the feedback. These discussions should assess overall confidence by: checking that all relevant data are used; checking that information in past model versions is considered; checking that the different kinds of uncertainty are addressed; checking whether suggested alternatives make sense and if there is potential for additional alternatives; and by discussing, if appropriate, how additional measurements (i.e. more data) would affect confidence.
The findings as well as the modelling results are to be documented in a Site Description

  6. Rock mechanics models evaluation report: Draft report

    International Nuclear Information System (INIS)

    1985-10-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The end result of the KT analysis is a balanced, documented recommendation of the codes and models which are best suited to conceptual subsurface design for the salt repository. The various laws for modeling the creep of rock salt are also reviewed in this report. 37 refs., 1 fig., 7 tabs

  7. External attachment of titanium sheathed thermocouples to zirconium nuclear fuel rods for the LOFT reactor

    International Nuclear Information System (INIS)

    Welty, R.K.

    1980-01-01

    The Exxon Nuclear Company, Inc., acting as a subcontractor to EG&G Idaho, Inc., Idaho National Engineering Laboratory, Idaho Falls, Idaho, has developed a welding process to attach titanium-sheathed thermocouples to the outside of zircaloy-clad fuel rods. The fuel rods and thermocouples are used to test simulated loss-of-coolant accident (LOCA) conditions in a pressurized water reactor (LOFT Reactor, Idaho National Laboratory). A laser beam was selected as the optimum welding process because of the extremely high energy input per unit volume that can be achieved, allowing local fusion of a small area irrespective of the difference in thickness of the materials to be joined. A commercial pulsed laser and energy control system was installed along with specialized welding fixtures. Laser room facility requirements and tolerances were established. Performance qualifications and detailed welding procedures were also developed. Product performance tests were conducted to assure that engineering design requirements could be met on a production basis

  8. Proposed algorithm for determining the delta intercept of a thermocouple psychrometer curve

    International Nuclear Information System (INIS)

    Kurzmack, M.A.

    1993-01-01

    The USGS Hydrologic Investigations Program is currently developing instrumentation to study the unsaturated zone at Yucca Mountain in Nevada. Surface-based boreholes up to 2,500 feet in depth will be drilled, and then instrumented in order to define the water potential field within the unsaturated zone. Thermocouple psychrometers will be used to monitor the in-situ water potential. An algorithm is proposed for simply and efficiently reducing a six-wire thermocouple psychrometer voltage output curve to a single value, the delta intercept. The algorithm identifies a plateau region in the psychrometer curve and extrapolates a linear regression back to the initial start of relaxation. When properly conditioned for the measurements being made, the algorithm yields reasonable results even with incomplete or noisy psychrometer curves over a 1 to 60 bar range
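    The reduction described, locating a plateau region and extrapolating a linear regression back to the start of relaxation, can be sketched as follows. The slope threshold, plateau-detection rule, and synthetic curve are illustrative assumptions, not part of the USGS algorithm.

```python
def delta_intercept(times, volts, slope_tol=0.05):
    """Reduce a psychrometer output curve to a single delta-intercept value."""
    # locate the plateau: the longest run of consecutive points whose
    # local slope magnitude stays below slope_tol
    best = (0, 0)
    i = 0
    while i < len(times) - 1:
        j = i
        while j < len(times) - 1 and abs(
            (volts[j + 1] - volts[j]) / (times[j + 1] - times[j])
        ) < slope_tol:
            j += 1
        if j - i > best[1] - best[0]:
            best = (i, j)
        i = j + 1
    lo, hi = best
    ts, vs = times[lo:hi + 1], volts[lo:hi + 1]
    # least-squares line through the plateau, extrapolated back to t = 0
    # (the start of relaxation)
    n = len(ts)
    mt, mv = sum(ts) / n, sum(vs) / n
    slope = sum((t - mt) * (v - mv) for t, v in zip(ts, vs)) / sum(
        (t - mt) ** 2 for t in ts
    )
    return mv - slope * mt

# synthetic curve: rapid relaxation, a gently sloping plateau, then decay
times = [0, 1, 2, 3, 4, 5, 6, 7]                       # time (illustrative)
volts = [10.0, 6.0, 5.0, 4.99, 4.98, 4.97, 4.0, 3.0]   # output (illustrative)
print(round(delta_intercept(times, volts), 3))  # ~5.02
```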

  9. Attachment of Free Filament Thermocouples for Temperature Measurements on Ceramic Matrix Composites

    Science.gov (United States)

    Lei, Jih-Fen; Cuy, Michael D.; Wnuk, Stephen P.

    1998-01-01

    At the NASA Lewis Research Center, a new installation technique utilizing convoluted wire thermocouples (TC's) was developed and proven to produce very good adhesion on CMC's, even in a burner rig environment. Because of their unique convoluted design, such TC's of various types and sizes adhere to flat or curved CMC specimens with no sign of delamination, open circuits, or interactions, even after testing in a Mach 0.3 burner rig to 1200 °C (2200 °F) for several thermal cycles and for several hours at high temperature. Large differences in thermal expansion between metal thermocouples and low-expansion materials, such as CMC's, normally generate large stresses in the wires. These stresses cause straight wires to detach, but convoluted wires that are bonded with strips of coating allow bending in the unbonded portion to relieve these expansion stresses.

  10. Evaluation of animal models of neurobehavioral disorders

    Directory of Open Access Journals (Sweden)

    Nordquist Rebecca E

    2009-02-01

    Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria, which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of systematic and extended replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considering whether action is indicated to reduce the discomfort must accompany the scientific evaluation at any stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia. In a manner congruent to

  11. R and D advances in high temperature thermocouples for nuclear utilization in severe environment

    International Nuclear Information System (INIS)

    Schley, R.; Blanc, J.Y.

    1985-01-01

    Safety experiments for water reactors in Cadarache have made necessary a research program for developing special thermocouples for use in severe fuel damage conditions (superheated steam). Standard cladding thermocouples (type K, alumina insulated, zircaloy sheathed, O.D. 0.7 mm) must be replaced by others with W3Re versus W25Re legs, a Ta sheath protected by a zircaloy outer sheath, and hafnia or thoria insulation. The zircaloy sheath will be sufficient to protect the tantalum correctly. Fuel centerline thermocouples have W5Re versus W26Re or W3Re versus W25Re legs, hard-fired thoria insulation and a rhenium CVD sheath (O.D. 1.1 mm). A protective ReSi2 coating is applied. This protection withstands at least 1600 °C for 45 minutes in steam. Tests are being done concerning: (a) materials compatibilities in helium between 1400 °C and 2000 °C, (b) prototype qualification (in Saclay or Grenoble), (c) determination of errors due to degradation of the insulation resistance of thermocouple cables (with magnesia, hafnia, alumina), and (d) Ir or Re protective coatings by the CVD process, other coatings by ionic bombardment, etc. A completely new type of hot junction has been patented

  12. A Highly Thermostable In2O3/ITO Thin Film Thermocouple Prepared via Screen Printing for High Temperature Measurements

    Directory of Open Access Journals (Sweden)

    Yantao Liu

    2018-03-01

    An In2O3/ITO thin film thermocouple was prepared via screen printing. Glass additives were added to improve the sintering process and to increase the density of the In2O3/ITO films. The surface and cross-sectional images indicate that both the grain size and densification of the ITO and In2O3 films increased with increasing annealing time. The thermoelectric voltage of the In2O3/ITO thermocouple was 53.5 mV with the hot junction at 1270 °C. The average Seebeck coefficient of the thermocouple was calculated as 44.5 μV/°C. The drift rate of the In2O3/ITO thermocouple was 5.44 °C/h over a 10 h measurement at 1270 °C.
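    The quoted average Seebeck coefficient can be checked from its definition, mean S = EMF / (T_hot − T_ref). The reference-junction temperature used below is back-solved from the abstract's own figures and is an inference, not a value stated in the paper.

```python
def avg_seebeck_uV_per_C(emf_mV, t_hot_C, t_ref_C):
    """Mean Seebeck coefficient in uV/deg C from total EMF and junction temps."""
    return emf_mV * 1000.0 / (t_hot_C - t_ref_C)

# 53.5 mV with the hot junction at 1270 deg C; a reference junction near
# 68 deg C is implied by the quoted 44.5 uV/deg C (inferred, not stated):
print(round(avg_seebeck_uV_per_C(53.5, 1270.0, 68.0), 1))  # 44.5
```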

  13. Thermocouples used in emission systems of internal combustion engines; Thermoelemente fuer den Einsatz in Abgassystemen von Verbrennungsmotoren

    Energy Technology Data Exchange (ETDEWEB)

    Augustin, Silke; Froehlich, Thomas; Mammen, Helge [Technische Univ. Illmenau (Germany). Inst. fuer Prozessmess- und Sensortechnik; Ament, Christoph; Guether, Thomas [Technische Univ. Illmenau (Germany). Inst. fuer Automatisierungs- und Systemtechnik

    2012-11-01

    Thermocouples used in exhaust systems of combustion engines are exposed to high temperature gradients and temperature steps (ΔT > 900 K), high flow velocities and high pressures. When constructing these thermocouples, a compromise is needed between the resulting high demands on mechanical-thermal stability and accuracy, and the fast response time demanded by the closed-loop control of the engine. Additionally, a numerical correction of the measured signal may contribute to improved sensor dynamics. (orig.)

  14. Individual model evaluation and probabilistic weighting of models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-01-01

    This note stresses the importance of trying to assess the accuracy of each model individually. Putting a Bayesian probability distribution on a population of models faces conceptual and practical complications, and apparently can come only after the work of evaluating the individual models. Moreover, the primary issue is "How good is this model?" Therefore, the individual evaluations are first in both chronology and importance. They are not easy, but some ideas are given here on how to perform them

  15. Evaluation of green house gas emissions models.

    Science.gov (United States)

    2014-11-01

    The objective of the project is to evaluate the GHG emissions models used by transportation agencies and industry leaders. Factors in the vehicle operating environment that may affect modal emissions, such as external conditions, vehicle fleet c...

  16. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico; Kryshtafovych, Andriy; Tramontano, Anna

    2009-01-01

    established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic

  17. A preliminary study of factors affecting the calibration stability of the iridium versus iridium-40 percent rhodium thermocouple

    Science.gov (United States)

    Ahmed, Shaffiq; Germain, Edward F.; Daryabeigi, Kamran; Alderfer, David W.; Wright, Robert E.

    1987-01-01

    An iridium versus iridium-40% rhodium thermocouple was studied. Problems associated with the use of this thermocouple for high temperature applications (up to 2000 °C) were investigated. The metallurgical studies included X-ray, macroscopic, resistance, and metallographic studies. The thermocouples in the as-received condition from the manufacturer revealed large amounts of internal stress caused by cold working during manufacturing. The thermocouples also contained a large amount of inhomogeneities and segregations. No phase transformations were observed in the alloy up to 1100 °C. It was found that annealing the thermocouple at 1800 °C for two hours, and then at 1400 °C for 2 to 3 hours, yielded a fine grain structure, relieving some of the strains and making the wire more ductile. It was also found that the above annealing procedure stabilized the thermal emf behavior of the thermocouple for applications below 1800 °C (an improvement from ±1% to ±0.02% within the range of the test parameters used).

  18. Metrics for Evaluation of Student Models

    Science.gov (United States)

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  19. An evaluation of BPMN modeling tools

    NARCIS (Netherlands)

    Yan, Z.; Reijers, H.A.; Dijkman, R.M.; Mendling, J.; Weidlich, M.

    2010-01-01

    Various BPMN modeling tools are available and it is close to impossible to understand their functional differences without simply trying them out. This paper presents an evaluation framework and presents the outcomes of its application to a set of five BPMN modeling tools. We report on various

  20. Evaluation of constitutive models for crushed salt

    International Nuclear Information System (INIS)

    Callahan, G.D.; Loken, M.C.; Hurtado, L.D.; Hansen, F.D.

    1996-01-01

    Three constitutive models are recommended as candidates for describing the deformation of crushed salt. These models are generalized to three-dimensional states of stress to include the effects of mean and deviatoric stress and modified to include effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant (WIPP) and southeastern New Mexico salt is used to determine material parameters for the models. To evaluate the capability of the models, parameter values obtained from fitting the complete database are used to predict the individual tests. Finite element calculations of a WIPP shaft with emplaced crushed salt demonstrate the model predictions

  1. Modeling for Green Supply Chain Evaluation

    Directory of Open Access Journals (Sweden)

    Elham Falatoonitoosi

    2013-01-01

    Green supply chain management (GSCM) has become a practical approach to improve environmental performance. Under strict regulations and stakeholder pressures, enterprises need to enhance and improve GSCM practices, which are influenced by both traditional and green factors. This study developed a causal evaluation model to guide selection of qualified suppliers by prioritizing various criteria and mapping causal relationships to find effective criteria for improving the green supply chain. The aim of the case study was to model and examine the influential and important main GSCM practices, namely, green logistics, organizational performance, green organizational activities, environmental protection, and green supplier evaluation. In the case study, the decision-making trial and evaluation laboratory (DEMATEL) technique is applied to test the developed model. The result of the case study shows that only the “green supplier evaluation” and “green organizational activities” criteria of the model are in the cause group, and the other criteria are in the effect group.

  2. Conceptual modelling of human resource evaluation process

    Directory of Open Access Journals (Sweden)

    Negoiţă Doina Olivia

    2017-01-01

    Taking into account the highly diverse tasks which employees have to fulfil due to the complex requirements of today's consumers, the human resource within an enterprise has become a strategic element for developing and exploiting products which meet market expectations. Therefore, organizations encounter difficulties when approaching the human resource evaluation process. Hence, the aim of the current paper is to design a conceptual model of the aforementioned process, which allows enterprises to develop a specific methodology. In order to design the conceptual model, Business Process Modelling instruments were employed: the ADONIS Community Edition Business Process Management Toolkit, using the ADONIS BPMS notation. The conceptual model was developed based on in-depth secondary research regarding the human resource evaluation process. The proposed conceptual model represents a generic workflow (sequential and/or simultaneous activities) which can be extended considering the enterprise's needs and requirements when conducting a human resource evaluation process. Enterprises can benefit from using software instruments for business process modelling, as they enable process analysis and evaluation (predefined/specific queries) and also model optimization (simulations).

  3. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    With the growth of e-commerce, websites play an essential role in business success, and many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models make it difficult to integrate them into a single comprehensive model. In this paper a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment has no role in the integration of models, and the new model takes its validity from the 93 previous models and the systematic quantitative approach.

  4. Evaluating Extensions to Coherent Mortality Forecasting Models

    Directory of Open Access Journals (Sweden)

    Syazreen Shair

    2017-03-01

    Coherent models were developed recently to forecast the mortality of two or more sub-populations simultaneously and to ensure long-term non-divergent mortality forecasts of sub-populations. This paper evaluates the forecast accuracy of two recently-published coherent mortality models, the Poisson common factor and the product-ratio functional models. These models are compared to each other and the corresponding independent models, as well as the original Lee–Carter model. All models are applied to age-gender-specific mortality data for Australia and Malaysia and age-gender-ethnicity-specific data for Malaysia. The out-of-sample forecast error of log death rates, male-to-female death rate ratios and life expectancy at birth from each model are compared and examined across groups. The results show that, in terms of overall accuracy, the forecasts of both coherent models are consistently more accurate than those of the independent models for Australia and for Malaysia, but the relative performance differs by forecast horizon. Although the product-ratio functional model outperforms the Poisson common factor model for Australia, the Poisson common factor is more accurate for Malaysia. For the ethnic groups application, ethnic-coherence gives better results than gender-coherence. The results provide evidence that coherent models are preferable to independent models for forecasting sub-populations’ mortality.
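    The out-of-sample evaluation described here, fitting on a training window and scoring forecasts of the holdout, can be sketched with toy data and two deliberately naive forecasting rules. Everything below is invented for illustration and is far simpler than the Lee-Carter family of models the paper compares.

```python
def mae(forecast, actual):
    """Mean absolute error over the holdout period."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

# toy log death rates declining linearly over time (invented data)
log_rates = [-4.0, -4.1, -4.2, -4.3, -4.4, -4.5]
train, test = log_rates[:4], log_rates[4:]

# two naive forecasting rules for the holdout period:
drift = (train[-1] - train[0]) / (len(train) - 1)        # average step
drift_fc = [train[-1] + drift * (h + 1) for h in range(len(test))]
flat_fc = [train[-1]] * len(test)                        # no-change forecast

print(mae(drift_fc, test) < mae(flat_fc, test))  # True: drift model wins here
```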

  5. Multi-criteria evaluation of hydrological models

    Science.gov (United States)

    Rakovec, Oldrich; Clark, Martyn; Weerts, Albrecht; Hill, Mary; Teuling, Ryan; Uijlenhoet, Remko

    2013-04-01

    Over the last few years, there has been a tendency in the hydrological community to move from simple conceptual models towards more complex, physically/process-based hydrological models. This is because conceptual models often fail to simulate the dynamics of the observations. However, there is little agreement on how much complexity needs to be considered within the complex process-based models. One way to proceed is to improve understanding of what is important and unimportant in the models considered. The aim of this ongoing study is to evaluate structural model adequacy using alternative conceptual and process-based models of hydrological systems, with an emphasis on understanding how model complexity relates to observed hydrological processes. Some of the models require considerable execution time, and computationally frugal sensitivity analysis, model calibration and uncertainty quantification methods are well-suited to providing important insights for models with lengthy execution times. The current experiment evaluates two versions of the Framework for Understanding Structural Errors (FUSE), both of which enable model inter-comparison experiments. One supports computationally efficient conceptual models, and the second supports more process-based models that tend to have longer execution times. The conceptual FUSE combines components of 4 existing conceptual hydrological models. The process-based framework consists of different forms of Richards' equation, numerical solutions, groundwater parameterizations and hydraulic conductivity distributions. The hydrological analysis of the model processes has evolved from focusing only on simulated runoff (the final model output) to also including other criteria such as soil moisture and groundwater levels. Parameter importance and associated structural importance are evaluated using different types of sensitivity analysis techniques, making use of both robust global methods (e.g. Sobol') as well as several

  6. Saphire models and software for ASP evaluations

    International Nuclear Information System (INIS)

    Sattison, M.B.

    1997-01-01

    Over the past three years, the Idaho National Engineering Laboratory (INEL) has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response, with a unique set of event trees for each plant class; (2) plant-specific fault trees using supercomponents; (3) generation and retention of all system and sequence cutsets; (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results; and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented
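    The quantification of retained cutsets can be illustrated generically with the rare-event approximation: the top-event probability is approximated by the sum over minimal cutsets of the product of basic-event probabilities. This is a textbook sketch, not SAPHIRE's actual algorithm; all event names and probabilities below are invented.

```python
def rare_event_top(cutsets, probs):
    """Rare-event approximation: sum over minimal cutsets of the product
    of the basic-event probabilities in each cutset."""
    total = 0.0
    for cutset in cutsets:
        p = 1.0
        for event in cutset:
            p *= probs[event]
        total += p
    return total

# invented basic events and probabilities
probs = {"pump_fails": 1e-3, "valve_fails": 5e-4, "diesel_fails": 2e-2}
# two minimal cutsets: a double failure and a single failure
cutsets = [("pump_fails", "diesel_fails"), ("valve_fails",)]
print(rare_event_top(cutsets, probs))  # 2e-5 + 5e-4 = 5.2e-4
```

    The approximation is conservative and accurate when basic-event probabilities are small, which is the usual regime for precursor analysis.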

  7. Modeling Energy and Development : An Evaluation of Models and Concepts

    NARCIS (Netherlands)

    Ruijven, Bas van; Urban, Frauke; Benders, René M.J.; Moll, Henri C.; Sluijs, Jeroen P. van der; Vries, Bert de; Vuuren, Detlef P. van

    2008-01-01

    Most global energy models are developed by institutes from developed countries, focusing primarily on issues that are important in industrialized countries. Evaluation of the results for Asia of the IPCC/SRES models shows that broad concepts of energy and development, the energy ladder and the

  8. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model
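
The Monte-Carlo estimation of predictive uncertainty described in this record can be sketched as follows. The homogeneous Darcy-flow model, the parameter values, and the lognormal conductivity distribution are all illustrative assumptions, far simpler than the HydroGeoSphere setup of the study:

```python
import random
import statistics

def transit_time(K, gradient=0.01, porosity=0.3, length=50.0):
    """Advective transit time [days] through a homogeneous block (Darcy flow)."""
    velocity = K * gradient / porosity   # seepage velocity [m/day]
    return length / velocity

rng = random.Random(42)
# Lognormal hydraulic conductivity [m/day]: a stand-in for subsurface uncertainty.
samples = sorted(transit_time(rng.lognormvariate(0.0, 0.8)) for _ in range(5000))
med = statistics.median(samples)
lo, hi = samples[250], samples[4750]     # approximate 5-95% predictive band
```

The spread between `lo` and `hi` is the predictive uncertainty propagated from the uncertain conductivity.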

  9. Development of Mechanical Sealing and Laser Welding Technology to Instrument Thermocouple for Nuclear Fuel Test Rod

    International Nuclear Information System (INIS)

    Joung, Chang-Young; Ahn, Sung-Ho; Hong, Jin-Tae; Kim, Ka-Hye; Huh, Sung-Ho

    2015-01-01

    Zircaloy-4 of the nuclear fuel test rod, AISI 316L of the mechanical sealing parts, and the MI (mineral-insulated) cable of a thermocouple instrumentation are dissimilar metals and are difficult to weld to one another. Therefore, a mechanical sealing method to instrument the thermocouple is conducted using two kinds of sealing process, as follows. One is a mechanical sealing process using a Swagelok, composed of sealing components consisting of an end-cap, a seal tube, a compression ring and a Swagelok nut. The other is a laser welding process used to join a seal tube and an MI cable, which are made of the same material. The mechanical sealing process seals by mechanical contact, compressed by the force applied between a seal tube and an end-cap, and the laser welding process must leave no defects on the sealing area between a seal tube and an MI cable. Therefore, the mechanical sealing and laser welding techniques need to be developed to accurately measure the centerline temperature of the nuclear fuel test rod in an experimental reactor. Mechanical sealing and laser welding tests were conducted to develop the thermocouple instrumentation techniques for the nuclear fuel test rod. The optimum torque value of a Swagelok nut to seal the mechanical sealing part between the end-cap and seal tube was established through various torque tests using a torque wrench. The optimum laser welding conditions to seal the welding part between a seal tube and an MI cable were obtained through various welding tests using a laser welding system

  10. Development of Mechanical Sealing and Laser Welding Technology to Instrument Thermocouple for Nuclear Fuel Test Rod

    Energy Technology Data Exchange (ETDEWEB)

    Joung, Chang-Young; Ahn, Sung-Ho; Hong, Jin-Tae; Kim, Ka-Hye; Huh, Sung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    Zircaloy-4 of the nuclear fuel test rod, AISI 316L of the mechanical sealing parts, and the MI (mineral-insulated) cable of a thermocouple instrumentation are dissimilar metals and are difficult to weld to one another. Therefore, a mechanical sealing method to instrument the thermocouple is conducted using two kinds of sealing process, as follows. One is a mechanical sealing process using a Swagelok, composed of sealing components consisting of an end-cap, a seal tube, a compression ring and a Swagelok nut. The other is a laser welding process used to join a seal tube and an MI cable, which are made of the same material. The mechanical sealing process seals by mechanical contact, compressed by the force applied between a seal tube and an end-cap, and the laser welding process must leave no defects on the sealing area between a seal tube and an MI cable. Therefore, the mechanical sealing and laser welding techniques need to be developed to accurately measure the centerline temperature of the nuclear fuel test rod in an experimental reactor. Mechanical sealing and laser welding tests were conducted to develop the thermocouple instrumentation techniques for the nuclear fuel test rod. The optimum torque value of a Swagelok nut to seal the mechanical sealing part between the end-cap and seal tube was established through various torque tests using a torque wrench. The optimum laser welding conditions to seal the welding part between a seal tube and an MI cable were obtained through various welding tests using a laser welding system.

  11. CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE

    Directory of Open Access Journals (Sweden)

    José Luis Bernal Agudo

    2015-06-01

    Full Text Available The evaluation model that the LOMCE projects is rooted in neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation at the center of the teaching-learning processes. It reflects poor planning, since the theory that justifies the model is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.

  12. Study on team evaluation. Team process model for team evaluation

    International Nuclear Information System (INIS)

    Sasou Kunihide; Ebisu, Mitsuhiro; Hirose, Ayako

    2004-01-01

    Several studies have been done to evaluate or improve team performance in the nuclear and aviation industries. Crew resource management is the typical example. In addition, team evaluation has recently gathered interest for other teams of lawyers, medical staff, accountants, psychiatrists, executives, etc. However, most evaluation methods focus on the results of team behavior that can be observed through training or actual business situations. What is expected of a team is not only resolving problems but also training younger members destined to lead the next generation. Therefore, the authors set the final goal of this study as establishing a series of methods to evaluate and improve teams inclusively, covering decision making, motivation, staffing, etc. As the first step, this study develops a team process model describing viewpoints for the evaluation. The team process is defined as the kinds of power that activate or inactivate the competency of individuals, which is a component of the team's competency. To find the team processes, the authors discussed the merits of team behavior with experienced training instructors and shift supervisors of nuclear/thermal power plants. The discussion identified four team merits and many components that realize those merits. Classifying those components into eight groups of team processes, namely 'Orientation', 'Decision Making', 'Power and Responsibility', 'Workload Management', 'Professional Trust', 'Motivation', 'Training' and 'Staffing', the authors propose a Team Process Model with two to four sub-processes in each team process. In the future, the authors will develop methods to evaluate some of the team processes for nuclear/thermal power plant operation teams. (author)

  13. Improvement in the technology of thermocouples for the detection of high temperatures with a view to using them in irradiation safety tests in reactor

    International Nuclear Information System (INIS)

    Schley, R.; Liermann, J.; Aujollet, J.M.; Wilkins, S.C.

    1979-01-01

    The safety tests carried out under the CABRI and PHEBUS programmes have made it possible to improve the technology of W/Re thermocouples and their reliability in particularly harsh operating conditions. A partial answer is provided to the problem of W/Re thermocouple drift under neutron flux by defining the new thermocouple Mo 5% Nb/Nb 10% Mo which, because of the low capture cross section of its thermoelectric elements, gives reason to hope for a less significant drift under neutron flux than that found with W/Re thermocouples. Finally, determining the surface temperature of fuel element cladding with the Mo/Zircaloy thermocouple may prove worthwhile provided the temperatures do not exceed 1300 °C and the electric insulator is aluminium oxide, which up to 1300 °C does not appear to react with the thermoelectric wires [fr]

  14. Evaluating Performances of Traffic Noise Models | Oyedepo ...

    African Journals Online (AJOL)

    Traffic noise levels in dB(A) were measured at six locations using a 407780A Integrating Sound Level Meter, while spot speed and traffic volume were collected with a cine-camera. The predicted sound exposure level (SEL) was evaluated using the Burgess, British and FHWA models. The average noise levels obtained are 77.64 ...

  15. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
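
The Erlang queuing analysis this record refers to can be illustrated for a single M/M/c service stage. The formulas below are the standard Erlang C results; the "firewall service desk" arrival and service rates are hypothetical, not from the paper:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving request must queue in an M/M/c system."""
    a = arrival_rate / service_rate            # offered load in Erlangs
    rho = a / servers                          # per-server utilization
    if rho >= 1.0:
        return 1.0                             # unstable: everything queues
    summation = sum(a ** k / factorial(k) for k in range(servers))
    top = a ** servers / factorial(servers) / (1.0 - rho)
    return top / (summation + top)

def mean_wait(arrival_rate, service_rate, servers):
    """Mean queueing delay W_q implied by Erlang C."""
    return erlang_c(arrival_rate, service_rate, servers) / (
        servers * service_rate - arrival_rate)

# Hypothetical service desk: 8 requests/s, 1 request/s per desk, 9 desks.
wait_9 = mean_wait(8.0, 1.0, 9)
```

Comparing `mean_wait` across server counts is the kind of resource-allocation trade-off the abstract describes.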

  16. Evaluation of Usability Utilizing Markov Models

    Science.gov (United States)

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
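
A minimal version of the Markov-model usability analysis described here is the long-run occupancy of each screen, obtained by iterating the transition matrix. The three-screen navigation model and its probabilities are invented for illustration:

```python
def steady_state(P, iters=200):
    """Long-run occupancy of each state by repeated application of P
    (each row of P sums to 1)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-screen navigation model: Home, Course, Help.
P = [[0.1, 0.8, 0.1],
     [0.3, 0.6, 0.1],
     [0.5, 0.3, 0.2]]
pi = steady_state(P)   # fraction of accesses spent on each screen in the long run
```

A screen with unexpectedly high occupancy (here, Course) is where users spend most of their interaction time.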

  17. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

    Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  18. Credit Risk Evaluation : Modeling - Analysis - Management

    OpenAIRE

    Wehrspohn, Uwe

    2002-01-01

    An analysis and further development of the building blocks of modern credit risk management: -Definitions of default -Estimation of default probabilities -Exposures -Recovery Rates -Pricing -Concepts of portfolio dependence -Time horizons for risk calculations -Quantification of portfolio risk -Estimation of risk measures -Portfolio analysis and portfolio improvement -Evaluation and comparison of credit risk models -Analytic portfolio loss distributions The thesis contributes to the evaluatio...

  19. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. 
(3) Model structural inadequacies, whereby model structure may inadequately represent

  20. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
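
Latin hypercube sampling, offered in this record as a candidate for sensitivity analyses, can be sketched in a few lines. This is the textbook stratify-and-shuffle construction on the unit hypercube; sample counts are arbitrary:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Stratified sampling: exactly one point per equal-probability interval
    in every dimension, with the pairing across dimensions randomized."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        # One point inside each of the n equal-width strata, then shuffle.
        col = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return [list(row) for row in zip(*columns)]

samples = latin_hypercube(10, 3)   # 10 samples on the 3-D unit hypercube
```

Marginal uniforms can then be mapped through inverse CDFs (e.g. lognormal) for the multiplicative-chain analysis the abstract mentions.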

  1. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
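
The hidden Markov model machinery behind such an expert model can be illustrated with the discrete forward algorithm, which scores an observation sequence against a trained model. The two-state 'gesture' model and all probabilities below are invented for illustration, not taken from the paper:

```python
def forward(obs, init, trans, emit):
    """Likelihood P(obs | model) for a discrete HMM via the forward algorithm."""
    states = range(len(init))
    alpha = [init[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in states) * emit[j][o]
                 for j in states]
    return sum(alpha)

# Invented 2-state 'expert gesture' model; observations: 0 = smooth, 1 = jerky.
init = [0.8, 0.2]
trans = [[0.9, 0.1], [0.2, 0.8]]
emit = [[0.9, 0.1], [0.3, 0.7]]
smooth_score = forward([0, 0, 0, 0], init, trans, emit)
jerky_score = forward([1, 1, 1, 1], init, trans, emit)
```

A trainee whose kinematics score low against the expert model would be flagged as novice, which is the discrimination idea in the abstract.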

  2. Experience with W3Re/W25Re thermocouples in fuel pins of NS Otto Hahn's two cores

    International Nuclear Information System (INIS)

    Kolb, M.

    1976-01-01

    The paper first deals with the installation of 18 and 9 high-temperature sheathed thermocouples in fuel rods of the cores FDR-1 and FDR-2, respectively. The measured fuel rod centerline temperatures could be related to the local linear rod power at any given time by means of the densities of fission products with different half-lives obtained from fuel rod γ-scans. The fuel temperatures already show an increase with the burn-up of the FDR-1, which becomes steeper when the decrease of the EMF measured on irradiated thermocouples taken from the fuel rods is taken into account. Finally, the determination of effective thermocouple time constants and of fuel rod heat transfer time constants is demonstrated by utilizing reactor noise to measure the transfer function between the neutron flux and the fuel temperature signal. (orig.) [de]
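
The effective time constants mentioned in this record are usually interpreted through a first-order sensor model. The sketch below shows only that model's step response and its inversion, not the noise-analysis method of the paper, and the numbers are hypothetical:

```python
import math

def step_response(t, tau):
    """Normalized first-order step response: y(t) = 1 - exp(-t / tau)."""
    return 1.0 - math.exp(-t / tau)

def time_constant(t, y):
    """Invert the step response to recover tau from one (t, y) reading."""
    return -t / math.log(1.0 - y)

tau_true = 2.5                      # seconds, hypothetical sheathed thermocouple
y = step_response(1.0, tau_true)    # fractional response after 1 s
tau_est = time_constant(1.0, y)
```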

  3. Apparatus for spot welding sheathed thermocouples to the inside of small-diameter tubes at precise locations

    International Nuclear Information System (INIS)

    Baucum, W.E.; Dial, R.E.

    1976-01-01

    Equipment and procedures used to spot weld tantalum- or stainless-steel-sheathed thermocouples to the inside diameter of Zircaloy tubing to meet the requirements of the Multirod Burst Test (MRBT) Program at ORNL are described. Spot welding and oxide cleaning tools were fabricated to remove the oxide coating on the Zircaloy tubing at local areas and to spot weld four thermocouples separated circumferentially by 90° at any desired axial distribution. It was found necessary to apply a nickel coating to stainless-steel-sheathed thermocouples to obtain acceptable welds. The material and shape of the inner electrode and the resistance between the inner and outer electrodes were found to be critical parameters in obtaining acceptable welds

  4. Model description and evaluation of model performance: DOSDIM model

    International Nuclear Information System (INIS)

    Lewyckyj, N.; Zeevaert, T.

    1996-01-01

    DOSDIM was developed to assess the impact to man from routine and accidental atmospheric releases. It is a compartmental, deterministic, radiological model. For an accidental release, dynamic transfer factors are used, as opposed to a routine release, for which equilibrium transfer factors are used. Parameter values were chosen to be conservative. Transfers between compartments are described by first-order differential equations. 2 figs
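
A two-compartment chain with first-order transfers, of the kind this record describes, can be sketched with an explicit Euler integration; the rate constants are hypothetical, and DOSDIM's actual compartments and parameters are not reproduced here:

```python
import math

def simulate(k12, k2out, c1_0=1.0, dt=0.001, t_end=10.0):
    """Explicit-Euler solution of a two-compartment chain with first-order
    transfers:  dc1/dt = -k12*c1,  dc2/dt = k12*c1 - k2out*c2."""
    c1, c2 = c1_0, 0.0
    for _ in range(int(t_end / dt)):
        # Simultaneous update: right-hand sides use the previous step's values.
        c1, c2 = c1 + dt * (-k12 * c1), c2 + dt * (k12 * c1 - k2out * c2)
    return c1, c2

c1, c2 = simulate(0.5, 0.2)
analytic_c1 = math.exp(-0.5 * 10.0)   # closed form for the first compartment
```

The first compartment has a closed-form solution, which gives a quick accuracy check on the step size.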

  5. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals of South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals is carried out through response surface methodology (RSM).
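
Of the methods combined in this record, the PCA step can be sketched in closed form for two indicators per terminal; the data are synthetic and the 2-D restriction is only for brevity:

```python
import math

def pca_2d(data):
    """Principal-component variances of 2-D data, from the closed-form
    eigenvalues of the 2x2 sample covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via trace/determinant
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + root, tr / 2.0 - root

# Synthetic terminals measured on two perfectly correlated indicators.
l1, l2 = pca_2d([(float(i), 2.0 * float(i)) for i in range(10)])
```

With perfectly correlated indicators, nearly all variance loads on the first component, which is why PCA can compress many throughput indicators into a composite score.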

  6. CTBT integrated verification system evaluation model supplement

    Energy Technology Data Exchange (ETDEWEB)

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level,'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  7. CTBT integrated verification system evaluation model supplement

    International Nuclear Information System (INIS)

    EDENBURN, MICHAEL W.; BUNTING, MARCUS; PAYNE, ARTHUR C. JR.; TROST, LAWRENCE C.

    2000-01-01

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level,'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0

  8. Evaluating spatial patterns in hydrological modelling

    DEFF Research Database (Denmark)

    Koch, Julian

the contiguous United States (10^6 km2). To this end, the thesis at hand applies a set of spatial performance metrics on various hydrological variables, namely land-surface-temperature (LST), evapotranspiration (ET) and soil moisture. The inspiration for the applied metrics is found in related fields...... is not fully exploited by current modelling frameworks due to the lack of suitable spatial performance metrics. Furthermore, the traditional model evaluation using discharge is found unsuitable to lay confidence on the predicted catchment inherent spatial variability of hydrological processes in a fully...

  9. Transport properties site descriptive model. Guidelines for evaluation and modelling

    International Nuclear Information System (INIS)

    Berglund, Sten; Selroos, Jan-Olof

    2004-04-01

    This report describes a strategy for the development of Transport Properties Site Descriptive Models within the SKB Site Investigation programme. Similar reports have been produced for the other disciplines in the site descriptive modelling (Geology, Hydrogeology, Hydrogeochemistry, Rock mechanics, Thermal properties, and Surface ecosystems). These reports are intended to guide the site descriptive modelling, but also to provide the authorities with an overview of modelling work that will be performed. The site descriptive modelling of transport properties is presented in this report and in the associated 'Strategy for the use of laboratory methods in the site investigations programme for the transport properties of the rock', which describes laboratory measurements and data evaluations. Specifically, the objectives of the present report are to: Present a description that gives an overview of the strategy for developing Site Descriptive Models, and which sets the transport modelling into this general context. Provide a structure for developing Transport Properties Site Descriptive Models that facilitates efficient modelling and comparisons between different sites. Provide guidelines on specific modelling issues where methodological consistency is judged to be of special importance, or where there is no general consensus on the modelling approach. The objectives of the site descriptive modelling process and the resulting Transport Properties Site Descriptive Models are to: Provide transport parameters for Safety Assessment. Describe the geoscientific basis for the transport model, including the qualitative and quantitative data that are of importance for the assessment of uncertainties and confidence in the transport description, and for the understanding of the processes at the sites. Provide transport parameters for use within other discipline-specific programmes. Contribute to the integrated evaluation of the investigated sites. 
The site descriptive modelling of

  10. Evaluation of CNN as anthropomorphic model observer

    Science.gov (United States)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

    Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) as MO. We will compare the CNN MO to alternative MOs currently proposed and used, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of CNNs and other deep learning approaches is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we will evaluate CNN performance on both large and small training data sets.

  11. Study for on-line system to identify inadvertent control rod drops in PWR reactors using ex-core detector and thermocouple measures

    International Nuclear Information System (INIS)

    Souza, Thiago J.; Medeiros, Jose A.C.C.; Goncalves, Alessandro C.

    2015-01-01

    An accidental control rod drop event in PWR reactors leads to an unsafe operating condition. It is important to quickly identify the dropped rod to minimize undesirable effects in such a scenario. In this event, there is a distortion in the power and temperature distributions in the reactor core. The goal of this study is to develop an on-line model to identify an inadvertently dropped control rod in a PWR reactor. The proposed model is based on physical correlations and pattern recognition of ex-core detector responses and thermocouple measures. The results of the study demonstrated the feasibility of an on-line system, contributing to safer operating conditions and preventing undesirable effects such as reactor shutdown. (author)

  12. Study for on-line system to identify inadvertent control rod drops in PWR reactors using ex-core detector and thermocouple measures

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Thiago J.; Medeiros, Jose A.C.C.; Goncalves, Alessandro C., E-mail: tsouza@nuclear.ufrj.br, E-mail: canedo@lmp.ufrj.br, E-mail: alessandro@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    An accidental control rod drop event in a PWR leads to an unsafe operating condition, so it is important to quickly identify the dropped rod in order to minimize undesirable effects. In this event, the power and temperature distributions in the reactor core are distorted. The goal of this study is to develop an on-line model to identify an inadvertently dropped control rod in a PWR. The proposed model is based on physical correlations and pattern recognition applied to ex-core detector responses and thermocouple measurements. The results of the study demonstrate the feasibility of an on-line system, contributing to safer operating conditions and preventing undesirable effects, such as a reactor shutdown. (author)

  13. Small Animal Models for Evaluating Filovirus Countermeasures.

    Science.gov (United States)

    Banadyga, Logan; Wong, Gary; Qiu, Xiangguo

    2018-05-11

    The development of novel therapeutics and vaccines to treat or prevent disease caused by filoviruses, such as Ebola and Marburg viruses, depends on the availability of animal models that faithfully recapitulate clinical hallmarks of disease as it is observed in humans. In particular, small animal models (such as mice and guinea pigs) are historically and frequently used for the primary evaluation of antiviral countermeasures, prior to testing in nonhuman primates, which represent the gold-standard filovirus animal model. In the past several years, however, the filovirus field has witnessed the continued refinement of the mouse and guinea pig models of disease, as well as the introduction of the hamster and ferret models. We now have small animal models for most human-pathogenic filoviruses, many of which are susceptible to wild type virus and demonstrate key features of disease, including robust virus replication, coagulopathy, and immune system dysfunction. Although none of these small animal model systems perfectly recapitulates Ebola virus disease or Marburg virus disease on its own, collectively they offer a nearly complete set of tools in which to carry out the preclinical development of novel antiviral drugs.

  14. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
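
    The component processes named in this abstract are typically expressed as multinomial-processing-tree equations. The sketch below uses the abstract's parameter labels (U = Unintentional Judgment, I = Intentional Judgment, B = Response Bias), but the exact branch structure is an assumption for illustration, not the published model.

```python
# Illustrative multinomial-processing-tree equations (branch structure assumed).
def p_wrong_given_transgression_prime(U, B):
    # The prime is implicitly judged wrong with probability U; otherwise a
    # "wrong" response can still occur through response bias B.
    return U + (1 - U) * B

def p_correct_target(I, B, target_is_wrong):
    # The target is judged intentionally with probability I; otherwise the
    # response falls back on bias B, which is correct only if it happens to
    # point the right way.
    bias_correct = B if target_is_wrong else (1 - B)
    return I + (1 - I) * bias_correct

print(round(p_wrong_given_transgression_prime(0.4, 0.5), 2))  # 0.7
print(round(p_correct_target(0.8, 0.5, True), 2))             # 0.9
```

    Fitting such a model means choosing U, I, and B to maximize the multinomial likelihood of the observed response counts; a speeded deadline that lowers I but not U is the signature dissociation reported in Experiment 1.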

  15. Evaluating Predictive Models of Software Quality

    Science.gov (United States)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent to discover the risk factor of each product and compare it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.

  16. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent to discover the risk factor of each product and compare it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.

  17. Simulations, evaluations and models. Vol. 1

    International Nuclear Information System (INIS)

    Brehmer, B.; Leplat, J.

    1992-01-01

    Papers presented at the Fourth MOHAWC (Models of Human Activities in Work Context) workshop. The general theme was simulations, evaluations and models. The emphasis was on time in relation to the modelling of human activities in modern, high-tech work. Such work often requires people to control dynamic systems, and the behaviour and misbehaviour of these systems in time is a principal focus of work in, for example, a modern process plant. The papers report on microworlds and on their innovative uses, both in the form of experiments and in a new form of use, that of testing a program which performs diagnostic reasoning. They present new perspectives on the problem of time in process control, showing the importance of considering the time scales of dynamic tasks, both in individual decision making and in distributed decision making, and providing new formalisms, both for the representation of time and for reasoning involving time in diagnosis. (AB)

  18. An evaluation of Tsyganenko magnetic field model

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1991-01-01

    A long-standing goal of magnetospheric physics has been to produce a model of the Earth's magnetic field that can accurately predict the field vector at all locations within the magnetosphere for all dipole tilt angles and for various solar wind or magnetic activity conditions. A number of models make such predictions, but some only for limited spatial regions, some only for zero tilt angle, and some only for arbitrary conditions. No models depend explicitly on solar wind conditions. A data set of more than 22,000 vector averages of the magnetosphere magnetic field over 0.5 R_E regions is used to evaluate Tsyganenko's 1982 and 1987 magnetospheric magnetic field models. The magnetic field predicted by the model in various regions is compared to observations to find systematic discrepancies which future models might address. While agreement is generally good, discrepancies are noted which include: (1) a lack of adequate field line stretching in the tail and ring current regions; (2) an inability to predict weak enough fields in the polar cusps; and (3) a deficiency of Kp as a predictor of the field configuration

  19. A methodology for spectral wave model evaluation

    Science.gov (United States)

    Siqueira, S. A.; Edwards, K. L.; Rogers, W. E.

    2017-12-01

    Model evaluation is accomplished by comparing bulk parameters (e.g., significant wave height, energy period, and mean square slope (MSS)) calculated from the model energy spectra with those calculated from buoy energy spectra. Quality control of the observed data and choice of the frequency range from which the bulk parameters are calculated are critical steps in ensuring the validity of the model-data comparison. The compared frequency range of each observation and the analogous model output must be identical, and the optimal frequency range depends in part on the reliability of the observed spectra. National Data Buoy Center 3-m discus buoy spectra are unreliable above 0.3 Hz due to a non-optimal buoy response function correction. As such, the upper end of the spectrum should not be included when comparing a model to these data. Biofouling of Waverider buoys must be detected, as it can harm the hydrodynamic response of the buoy at high frequencies, thereby rendering the upper part of the spectrum unsuitable for comparison. An important consideration is that the intentional exclusion of high frequency energy from a validation due to data quality concerns (above) can have major implications for validation exercises, especially for parameters such as the third and fourth moments of the spectrum (related to Stokes drift and MSS, respectively); final conclusions can be strongly altered. We demonstrate this by comparing outcomes with and without the exclusion, in a case where a Waverider buoy is believed to be free of biofouling. Determination of the appropriate frequency range is not limited to the observed spectra. Model evaluation involves considering whether all relevant frequencies are included. Guidance to make this decision is based on analysis of observed spectra. Two model frequency lower limits were considered. Energy in the observed spectrum below the model lower limit was calculated for each. For locations where long swell is a component of the wave
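
    The bulk parameters named in this abstract are functions of spectral moments; a minimal sketch follows. The band limits, synthetic spectrum, and deep-water dispersion used for MSS are illustrative assumptions, but the central point of the abstract carries over: model and buoy spectra must be integrated over the same frequency range.

```python
import numpy as np

def bulk_parameters(freq, spectrum, f_min=0.04, f_max=0.3):
    """Bulk wave parameters from spectral moments over a restricted band
    (e.g., truncating 3-m discus buoy spectra above 0.3 Hz)."""
    band = (freq >= f_min) & (freq <= f_max)
    f, S = freq[band], spectrum[band]
    df = freq[1] - freq[0]                      # uniform grid assumed
    m = lambda n: np.sum(f**n * S) * df         # n-th spectral moment
    hs = 4.0 * np.sqrt(m(0))                    # significant wave height
    te = m(-1) / m(0)                           # energy period
    mss = ((2 * np.pi)**2 / 9.81)**2 * m(4)     # mean square slope, deep water
    return hs, te, mss

# Synthetic single-peaked spectrum (m^2/Hz) with peak at 0.1 Hz.
freq = np.linspace(0.02, 0.5, 200)
spec = np.exp(-0.5 * ((freq - 0.1) / 0.02)**2)
hs, te, mss = bulk_parameters(freq, spec)
print(round(hs, 2), round(te, 1))
```

    Because MSS weights the spectrum by the fourth moment, cutting or keeping the high-frequency tail changes it far more than it changes significant wave height, which is the sensitivity the abstract warns about.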

  20. Use of a thermocouple-datalogger system to evaluate overstory mortality

    Science.gov (United States)

    Lucy Brudnak; Thomas A. Waldrop; Ross J. Phillips

    2010-01-01

    In the past, it was difficult to accurately measure dynamic fire behavior during prescribed burns. Peak temperature, flaming duration, and total heat output may be directly related to first-order fire effects such as fuel consumption and vegetative mortality; however, little is known about which of these variables is most closely associated with, and therefore the best...

  1. Intuitionistic fuzzy (IF) evaluations of multidimensional model

    International Nuclear Information System (INIS)

    Valova, I.

    2012-01-01

    There are different logical methods for data structuring, but none is perfect. The multidimensional (MD) data model presents data in the form of a cube (also referred to as an info-cube or hypercube) or in the form of a 'star'-type scheme (referred to as a multidimensional scheme), using F-structures (Facts) and a set of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data analysed in a specific multidimensional model is located in a Cartesian space restricted by the D-structures. In practice, the data is either dispersed or concentrated, so the data cells are not distributed evenly within the respective space. The moment of occurrence of any event is difficult to predict, and the data is concentrated by time period, location of the business event, etc. Processing such dispersed or concentrated data requires various technical strategies, and the basic methods for representing such data must be selected accordingly. The approaches to data processing and the respective calculations are connected with different options for data representation. The use of intuitionistic fuzzy evaluations (IFE) provides new possibilities for alternative presentation and processing of data analysed in any OLAP application. Using IFE in the evaluation of multidimensional models has the following advantages: analysts have more complete information for processing and analysing the respective data; managers benefit from more effective final decisions; and the design of more functional multidimensional schemes is enabled. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional model of data. (authors)
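
    An intuitionistic fuzzy evaluation in the Atanassov sense is a pair of membership and non-membership degrees with their sum at most 1; the remainder is the degree of uncertainty. The sketch below illustrates that structure; the application to an OLAP data cell is an illustrative assumption, not the paper's exact scheme.

```python
# Atanassov-style intuitionistic fuzzy evaluation: a pair (mu, nu) with
# mu + nu <= 1; pi = 1 - mu - nu is the degree of uncertainty.
class IFE:
    def __init__(self, mu, nu):
        assert 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu + nu <= 1.0
        self.mu, self.nu = mu, nu

    @property
    def uncertainty(self):
        return 1.0 - self.mu - self.nu

# E.g., an analyst rates how strongly a data cell supports a hypothesis:
# 0.6 in support, 0.3 against, leaving 0.1 undetermined.
cell_eval = IFE(0.6, 0.3)
print(round(cell_eval.uncertainty, 1))  # 0.1
```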

  2. W-1 Sodium Loop Safety Facility experiment centerline fuel thermocouple performance

    International Nuclear Information System (INIS)

    Meyers, S.C.; Henderson, J.M.

    1980-05-01

    The W-1 Sodium Loop Safety Facility (SLSF) experiment is the fifth in a series of experiments sponsored by the Department of Energy (DOE) as part of the National Fast Breeder Reactor (FBR) Safety Assurance Program. The experiments are being conducted under the direction of Argonne National Laboratory (ANL) and Hanford Engineering Development Laboratory (HEDL). The irradiation phase of the W-1 SLSF experiment was conducted between May 27 and July 20, 1979, and terminated with incipient fuel pin cladding failure during the final boiling transient. Experimental hardware and facility performed as designed, allowing completion of all planned tests and test objectives. This paper focuses on high temperature in-fuel thermocouples and discusses their development, fabrication, and performance in the W-1 experiment

  3. Boiling detection using signals of self-powered neutron detectors and thermocouples

    International Nuclear Information System (INIS)

    Kozma, R.

    1989-01-01

    A specially-equipped simulated fuel assembly has been placed into the core of the 2 MW research reactor of the IRI, Delft. In this paper the recent results concerning the detection of coolant boiling in the simulated fuel assembly are introduced. Applying the theory of boiling temperature noise, different stages of boiling, i.e. one-phase flow, subcooled boiling, volume boiling, were identified in the measurements using the low-frequency noise components of the thermocouple signals. It has been ascertained that neutron noise spectra remained unchanged when subcooled boiling appeared, and that they changed reasonably only when developed volume boiling took place in the channels. At certain neutron detector positions neutron spectra did not vary at all, although developed volume boiling occurred at a distance of 3-4 cm from these neutron detectors. This phenomenon was applied in studying the field-of-view of neutron detectors

  4. Preparation and Thermoelectric Characteristics of ITO/PtRh:PtRh Thin Film Thermocouple.

    Science.gov (United States)

    Zhao, Xiaohui; Wang, Hongmin; Zhao, Zixiang; Zhang, Wanli; Jiang, Hongchuan

    2017-12-15

    Thin film thermocouples (TFTCs) can provide more precise in situ temperature measurement for aerospace propulsion systems without disturbing the gas flow or the surface temperature distribution of the hot components. An ITO/PtRh:PtRh TFTC with a multilayer structure was deposited on an alumina ceramic substrate by magnetron sputtering. After annealing, the TFTC was statically calibrated over multiple cycles at temperatures up to 1000 °C. The TFTC exhibited excellent stability and repeatability, with negligible variation of the EMF between calibration cycles. It is believed that the variation of the carrier concentration of the ITO film is minimized owing to oxygen diffusion barriers formed by oxidation of the top PtRh layer and Schottky barriers formed at the grain boundaries of ITO. Meanwhile, the lifetime of the TFTC is more than 30 h in a harsh environment. This makes the ITO/PtRh:PtRh TFTC a promising candidate for precise surface temperature measurement of the hot components of aeroengines.

  5. Energy deposition measurements in fast reactor safety experiments with fission thermocouple detectors

    International Nuclear Information System (INIS)

    Wright, S.A.; Scott, H.L.

    1979-01-01

    The investigation of phenomena occurring in in-pile fast reactor safety experiments requires an accurate measurement of the time dependent energy depositions within the fissile material. At Sandia Laboratories thin-film fission thermocouples are being developed for this purpose. These detectors have high temperature capabilities (400 to 500 °C), are sodium compatible, and have millisecond time response. A significant advantage of these detectors for use as energy deposition monitors is that they produce an output voltage which is directly dependent on the temperature of a small chip of fissile material within the detectors. However, heat losses within the detector make it necessary to correct the response of the detector to determine the energy deposition. A method of correcting the detector response which uses an inverse convolution procedure has been developed and successfully tested with experimental data obtained in the Sandia Pulse Reactor (SPR-II) and in the Annular Core Research Reactor
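
    The heat-loss correction can be sketched with a simple lumped thermal model, C dT/dt = P(t) - (C/tau) T, so that the deposited power is recovered as P = C dT/dt + (C/tau) T. This single-time-constant model and all parameter values below are assumptions for illustration, not the Sandia inverse convolution procedure itself.

```python
import numpy as np

def recover_energy_deposition(t, temp, heat_capacity, tau):
    """Invert a lumped first-order heat-loss model to recover cumulative
    deposited energy from a measured temperature trace."""
    dTdt = np.gradient(temp, t)
    power = heat_capacity * dTdt + (heat_capacity / tau) * temp
    return np.cumsum(power) * (t[1] - t[0])

# Forward-simulate a 20 ms, 50 W deposition pulse, then invert it.
t = np.linspace(0.0, 0.5, 2001)
dt = t[1] - t[0]
C, tau = 1.0, 0.05                                      # J/K and s, illustrative
P_true = np.where((t > 0.1) & (t < 0.12), 50.0, 0.0)
T = np.zeros_like(t)
for i in range(1, len(t)):                              # explicit Euler step
    T[i] = T[i-1] + dt * (P_true[i-1] / C - T[i-1] / tau)
E = recover_energy_deposition(t, T, C, tau)
print(round(E[-1], 2))   # roughly 1 J (50 W x ~20 ms)
```

    Without the (C/tau) T loss term, integrating C dT/dt alone would undercount the deposition badly once the detector has had time to shed heat, which is the error the correction removes.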

  6. Tile Surface Thermocouple Measurement Challenges from the Orbiter Boundary Layer Transition Flight Experiment

    Science.gov (United States)

    Campbell, Charles H.; Berger, Karen; Anderson, Brian

    2012-01-01

    Hypersonic entry flight testing motivated by efforts to characterize boundary layer transition on the Space Shuttle Orbiters has identified challenges in our ability to acquire high quality quantitative surface temperature measurements versus time. Five missions near the end of the Space Shuttle Program implemented a tile surface protuberance as a boundary layer trip, together with tile surface thermocouples to capture temperature measurements during entry. Similar engineering implementations of these measurements on Discovery and Endeavour demonstrated unexpected measurement voltage response during the high heating portion of the entry trajectory. An assessment has been performed to characterize possible causes of the issues experienced during STS-119, STS-128, STS-131, STS-133 and STS-134, as well as similar issues encountered during other orbiter entries.

  7. Operating Temperatures of a Sodium-Cooled Exhaust Valve as Measured by a Thermocouple

    Science.gov (United States)

    Sanders, J. C.; Wilsted, H. D.; Mulcahy, B. A.

    1943-01-01

    A thermocouple was installed in the crown of a sodium-cooled exhaust valve. The valve was then tested in an air-cooled engine cylinder and valve temperatures under various engine operating conditions were determined. A temperature of 1337 F was observed at a fuel-air ratio of 0.064, a brake mean effective pressure of 179 pounds per square inch, and an engine speed of 2000 rpm. Fuel-air ratio was found to have a large influence on valve temperature, but cooling-air pressure and variation in spark advance had little effect. An increase in engine power by change of speed or mean effective pressure increased the valve temperature. It was found that the temperature of the rear spark-plug bushing was not a satisfactory indication of the temperature of the exhaust valve.

  8. Low-noise audio amplifiers and preamplifier for use with intrinsic thermocouples

    International Nuclear Information System (INIS)

    Langner, G.C.; Sachs, R.D.; Stewart, F.L.

    1979-03-01

    Two simple, low-noise audio amplifiers and one low-noise preamplifier for use with intrinsic thermocouples were designed, built, and tested. The amplifiers and the preamplifier have different front end designs. One amplifier uses ultralow-noise operational amplifiers; the other amplifier uses a hybrid component. The preamplifier uses ultralow-noise discrete components. The amplifiers' equivalent noise inputs, at maximum gain, are 4.09 nV and 50 nV; the preamplifier's input is 4.05 μV. Their bandwidths are 15 600 Hz, 550 Hz, and 174 kHz, respectively. The amplifiers' equivalent noise inputs were measured from approx. 0 to 100 Hz, whereas the preamplifier's equivalent noise input was measured from approx. 0 to 174 kHz

  9. SMORN-1: thermoelectrically generated noise in sheathed thermocouples and in other low level instrumentation cables

    International Nuclear Information System (INIS)

    Mathieu, F.; Meier, R.; Soenen, M.; Delcon, M.; Nysten, C.

    Starting from the fact that thermoelectric emfs of thermocouples are generated in the thermal gradients and not at the hot junction, it is shown how thermoelectric heterogeneity in conjunction with natural and forced convection phenomena gives rise to unwanted noise called: ''thermoelectric noise'' in the technological sense. A distinction is made between four different types of noise--i.e. uncorrelated noise, correlated noise, spectral noise and thermoelectric noise in the physical sense--each of which has its own characteristics. The experimental results presented reveal that noise amplitudes may be quite embarrassing when dealing with problems of quantitative signal fluctuation analysis. It is however emphasized that thermoelectric noise may also convey useful information which, without noise, might be lost

  10. Boiling point measurement of a small amount of brake fluid by thermocouple and its application.

    Science.gov (United States)

    Mogami, Kazunari

    2002-09-01

    This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear shaped flask. The boiling point of brake fluid was directly measured with an accuracy within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles dropped to approximately 140 °C from about 230 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.

  11. Use of an operational model evaluation system for model intercomparison

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K. T., LLNL

    1998-03-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response system used to assess the impact from atmospheric releases of hazardous materials. As part of an ongoing development program, new three-dimensional diagnostic windfield and Lagrangian particle dispersion models will soon replace ARAC's current operational windfield and dispersion codes. A prototype model performance evaluation system has been implemented to facilitate the study of the capabilities and performance of early development versions of these new models relative to ARAC's current operational codes. This system provides tools for both objective statistical analysis using common performance measures and for more subjective visualization of the temporal and spatial relationships of model results relative to field measurements. Supporting this system is a database of processed field experiment data (source terms and meteorological and tracer measurements) from over 100 individual tracer releases.
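
    The abstract does not name its "common performance measures"; fractional bias, normalized mean square error, and FAC2 (fraction of predictions within a factor of two of observations) are typical choices for dispersion-model evaluation. The sketch below uses one common sign convention for fractional bias and made-up concentration data.

```python
import numpy as np

def performance_measures(predicted, observed):
    """Fractional bias (FB), normalized mean square error (NMSE), and FAC2
    for paired model/observation values (e.g., tracer concentrations)."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    fb = 2.0 * (o.mean() - p.mean()) / (o.mean() + p.mean())
    nmse = ((o - p) ** 2).mean() / (o.mean() * p.mean())
    fac2 = np.mean((p >= 0.5 * o) & (p <= 2.0 * o))
    return fb, nmse, fac2

obs = [1.0, 2.0, 4.0, 8.0]    # made-up observed values
pred = [1.2, 1.8, 3.0, 10.0]  # made-up model predictions
fb, nmse, fac2 = performance_measures(pred, obs)
print(round(fb, 3), round(nmse, 3), fac2)
```

    A perfect model gives FB = 0, NMSE = 0, and FAC2 = 1; the sign convention for FB (observed minus predicted versus the reverse) varies between groups and should be stated alongside the numbers.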

  12. Co-C and Pd-C Eutectic Fixed Points for Radiation Thermometry and Thermocouple Thermometry

    Science.gov (United States)

    Wang, L.

    2017-12-01

    Two Co-C and Pd-C eutectic fixed point cells for both radiation thermometry and thermocouple thermometry were constructed at NMC. This paper describes details of the cell design, materials used, and fabrication of the cells. The melting curves of the Co-C and Pd-C cells were measured with a reference radiation thermometer in both a single-zone furnace and a three-zone furnace in order to investigate the furnace effect. The transition temperatures in terms of ITS-90 were determined to be 1324.18 °C and 1491.61 °C, with corresponding combined standard uncertainties of 0.44 °C and 0.31 °C for Co-C and Pd-C, respectively, taking into account the differences between the two types of furnaces used. The determined ITS-90 temperatures are also compared with those of INRIM cells obtained using the same reference radiation thermometer and the same furnaces with the same settings during a previous bilateral comparison exercise (Battuello et al. in Int J Thermophys 35:535-546, 2014). The agreements are within the k = 1 uncertainty for the Co-C cell and the k = 2 uncertainty for the Pd-C cell. The shapes of the plateaus of the NMC and INRIM cells are compared, and furnace effects are analyzed as well. The melting curves of the Co-C and Pd-C cells realized in the single-zone furnace were also measured by a Pt/Pd thermocouple, and the preliminary results are presented as well.

  13. Investigating Microbial Habitats in Hydrothermal Chimneys using Ti-Thermocouple Arrays: Microbial Diversity

    Science.gov (United States)

    Pagé, A.; Tivey, M. K.; Stakes, D. S.; Bradley, A. M.; Seewald, J. S.; Wheat, C. G.; Reysenbach, A.

    2004-12-01

    In order to examine the changes that occur in the microbial community composition as a deep-sea hydrothermal vent chimney develops, we deployed Ti-thermocouple arrays over high temperature vents at two active sites of the Guaymas Basin Southern Trough. Chimney material that precipitated around the arrays was recovered after 4 and 72 days. Chimney material that precipitated prior to deployment of the arrays was also recovered at one of the sites (Busted Shroom). Culture-independent analysis based on the small subunit rRNA sequence (cloning and DGGE) was used to determine the microbial diversity associated with subsamples of each chimney. The original Busted Shroom chimney (BSO) was dominated by members of the Crenarchaeota Marine Group I, a group of cosmopolitan marine Archaea, and by ε-Proteobacteria and γ-Proteobacteria, two divisions of Bacteria that are common to deep-sea vents. The 4-day-old Busted Shroom chimney (BSD1) was dominated by members of the Methanocaldococcaceae, hyperthermophilic methanogens, and the 72-day-old chimney (BSD2) by members of the Methanosarcinaceae, mesophilic and thermophilic methanogens. At the second site, Toadstool, the 72-day-old chimney material that had precipitated around the array (TS) revealed the dominance of sequences from uncultured marine Archaea, the DHVE groups I and II, and from the ε-Proteobacteria. Additionally, sequences belonging to the Methanocaldococcaceae and Desulfurococcaceae were recovered next to thermocouples that were at temperatures of 109 °C (at Busted Shroom) and 116 °C (at Toadstool), respectively. These temperatures are higher than the upper limit for growth of cultured representatives from each family.

  14. Evaluation of models of waste glass durability

    International Nuclear Information System (INIS)

    Ellison, A.

    1995-01-01

    The main variable under the control of the waste glass producer is the composition of the glass; thus a need exists to establish functional relationships between the composition of a waste glass and measures of processability, product consistency, and durability. Many years of research show that the structure and properties of a glass depend on its composition, so it seems reasonable to assume that there is also a relationship between the composition of a waste glass and its resistance to attack by an aqueous solution. Several models have been developed to describe this dependence, and an evaluation of their predictive capabilities is the subject of this paper. The objective is to determine whether any of these models describe the ''correct'' functional relationship between composition and corrosion rate. A more thorough treatment of the relationships between glass composition and durability has been presented elsewhere, and the reader is encouraged to consult it for a more detailed discussion. The models examined in this study are the free energy of hydration model, developed at the Savannah River Laboratory; the structural bond strength model, developed at the Vitreous State Laboratory at the Catholic University of America; and the Composition Variation Study, developed at Pacific Northwest Laboratory
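
    The free energy of hydration approach is, in outline, a composition-weighted sum of component contributions. The sketch below illustrates that idea only; the component free-energy values and the example composition are placeholders, not Savannah River Laboratory data.

```python
# Hedged sketch: glass hydration free energy approximated as a mole-fraction-
# weighted sum of per-oxide contributions (all numbers below are illustrative
# placeholders, in kcal/mol).
dG_components = {"SiO2": 3.2, "B2O3": -6.5, "Na2O": -33.0, "Al2O3": 1.5}
composition = {"SiO2": 0.55, "B2O3": 0.10, "Na2O": 0.15, "Al2O3": 0.20}

dG_hyd = sum(composition[ox] * dG_components[ox] for ox in composition)
print(round(dG_hyd, 2))  # more negative => hydration more favorable,
                         # i.e., the glass is predicted to be less durable
```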

  15. Evaluation of onset of nucleate boiling models

    Energy Technology Data Exchange (ETDEWEB)

    Huang, LiDong [Heat Transfer Research, Inc., College Station, TX (United States)], e-mail: lh@htri.net

    2009-07-01

    This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube side and shell side flow boiling and falling film evaporation. The article also evaluates literature models and correlations based on the data: no single model in the open literature predicts all data well. The prediction uncertainty is especially higher in vacuum conditions. Surface roughness is another critical criterion in determining which model should be used. However, most models do not directly account for surface roughness, and most investigators do not provide surface roughness information in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleation boiling for engineering design purposes. (author)
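
    As an illustration of the kind of model the article surveys, one classical ONB criterion (the Davis-Anderson form) can be inverted to give the wall superheat required at a given heat flux: q_ONB = k_l h_fg rho_v dT_sat^2 / (8 sigma T_sat). The property values below are approximate figures for saturated water at atmospheric pressure, and this particular correlation is chosen by us for illustration, not singled out by the article.

```python
from math import sqrt

def onb_superheat(q, k_l, h_fg, rho_v, sigma, T_sat):
    """Wall superheat (K) at onset of nucleate boiling, Davis-Anderson form:
    dT_ONB = sqrt(8 * sigma * T_sat * q / (k_l * h_fg * rho_v))."""
    return sqrt(8.0 * sigma * T_sat * q / (k_l * h_fg * rho_v))

# Saturated water at ~1 atm (approximate property values).
dT = onb_superheat(q=1.0e5,        # W/m^2
                   k_l=0.68,       # liquid thermal conductivity, W/(m K)
                   h_fg=2.257e6,   # latent heat, J/kg
                   rho_v=0.60,     # vapor density, kg/m^3
                   sigma=0.059,    # surface tension, N/m
                   T_sat=373.15)   # saturation temperature, K
print(round(dT, 2))  # a few kelvin of superheat
```

    Note this form assumes a full range of active cavity sizes is available; as the abstract stresses, surface roughness restricts the available cavities, which is why roughness-blind models scatter widely against data.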

  16. Evaluation of onset of nucleate boiling models

    International Nuclear Information System (INIS)

    Huang, LiDong

    2009-01-01

    This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube side and shell side flow boiling and falling film evaporation. The article also evaluates literature models and correlations based on the data: no single model in the open literature predicts all data well. The prediction uncertainty is especially higher in vacuum conditions. Surface roughness is another critical criterion in determining which model should be used. However, most models do not directly account for surface roughness, and most investigators do not provide surface roughness information in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleation boiling for engineering design purposes. (author)

  17. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  18. Data assimilation and model evaluation experiment datasets

    Science.gov (United States)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  19. Evaluating Translational Research: A Process Marker Model

    Science.gov (United States)

    Trochim, William; Kane, Cathleen; Graham, Mark J.; Pincus, Harold A.

    2011-01-01

    Abstract Objective: We examine the concept of translational research from the perspective of evaluators charged with assessing translational efforts. One of the major tasks for evaluators involved in translational research is to help assess efforts that aim to reduce the time it takes to move research to practice and health impacts. Another is to assess efforts that are intended to increase the rate and volume of translation. Methods: We offer an alternative to the dominant contemporary tendency to define translational research in terms of a series of discrete “phases.” Results: We contend that this phased approach has been confusing and that it is insufficient as a basis for evaluation. Instead, we argue for the identification of key operational and measurable markers along a generalized process pathway from research to practice. Conclusions: This model provides a foundation for the evaluation of interventions designed to improve translational research and the integration of these findings into a field of translational studies. Clin Trans Sci 2011; Volume 4: 153–162 PMID:21707944

  20. REPFLO model evaluation, physical and numerical consistency

    International Nuclear Information System (INIS)

    Wilson, R.N.; Holland, D.H.

    1978-11-01

    This report contains a description of some suggested changes and an evaluation of the REPFLO computer code, which models ground-water flow and nuclear-waste migration in and about a nuclear-waste repository. The discussion contained in the main body of the report is supplemented by a flow chart, presented in the Appendix of this report. The suggested changes are of four kinds: (1) technical changes to make the code compatible with a wider variety of digital computer systems; (2) changes to fill gaps in the computer code, due to missing proprietary subroutines; (3) changes to (a) correct programming errors, (b) correct logical flaws, and (c) remove unnecessary complexity; and (4) changes in the computer code logical structure to make REPFLO a more viable model from the physical point of view

  1. CTBT Integrated Verification System Evaluation Model

    Energy Technology Data Exchange (ETDEWEB)

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level, modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
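
    The abstract does not describe IVSEM's internal algorithms, but the basic idea of an integrated probability of detection exceeding any single subsystem's can be sketched under an independence assumption between technologies. The subsystem probabilities below are hypothetical, not IVSEM outputs.

```python
def integrated_pd(per_technology_pd):
    """Probability that at least one of several independent technologies
    detects an event: P = 1 - prod(1 - p_i)."""
    p_miss = 1.0
    for p in per_technology_pd:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# Hypothetical subsystem detection probabilities for one scenario
# (seismic, infrasound, radionuclide, hydroacoustic)
pd_integrated = integrated_pd([0.80, 0.40, 0.55, 0.30])
print(round(pd_integrated, 3))  # 0.962
```

    Even modest subsystems raise the integrated probability substantially, which is the synergy the abstract refers to; correlated failure modes (e.g., a decoupled test degrading both seismic and infrasound) would break the independence assumption.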

  2. An evaluation framework for participatory modelling

    Science.gov (United States)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of beneficial outcomes of participation as suggested by the arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models. And we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics). 
We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  3. Evaluation of Student's Environment by DEA Models

    Directory of Open Access Journals (Sweden)

    F. Moradi

    2016-11-01

    Full Text Available The important question here is: is there real evaluation of educational progress? In other words, if a student has been successful or unsuccessful in mathematics, is it possible to find the reasons behind his progress or weakness? To respond to this significant question, the factors of educational progress must be divided into 5 main groups: 1-family, 2-teacher, 3-student, 4-school and 5-school manager. It can then be said that a student's score does not depend on just a single factor, as people have imagined. From this, it can be concluded that by using the DEA and SBM models, each student's efficiency can be measured and the factors behind the student's strengths and weaknesses analyzed.
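
    The DEA and SBM models named in the abstract require linear programming in the general multi-input case. For the special case of a single input and a single output, however, the CCR efficiency reduces to each unit's output/input ratio divided by the best observed ratio, which is enough for a sketch. The student data below are hypothetical.

```python
def ccr_efficiency_1in_1out(inputs, outputs):
    """CCR DEA efficiency in the single-input, single-output case:
    eff_j = (y_j / x_j) / max_k (y_k / x_k)."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical students: input = weekly study hours, output = exam score
hours = [10, 8, 12, 6]
scores = [80, 72, 84, 60]
effs = ccr_efficiency_1in_1out(hours, scores)
print([round(e, 3) for e in effs])  # the unit with the best ratio scores 1.0
```

    The multi-factor case the abstract describes (family, teacher, student, school, manager) would instead solve one small linear program per student to find the best input weights.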

  4. RTMOD: Real-Time MODel evaluation

    International Nuclear Information System (INIS)

    Graziani, G; Galmarini, S.; Mikkelsen, T.

    2000-01-01

    The 1998 - 1999 RTMOD project is a system based on an automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project that involved about 50 models run in several Institutes around the world to simulate two real tracer releases involving a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. simulations in real-time of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web on a one-day warning in advance. They then accessed the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results would be recalculated to include the influence by all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for realtime

  5. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used for evaluating companies by finance practitioners, through simple cross-section regression models which estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis segregating the sample companies by sector. Extrapolating the simple multiples evaluation standards of analysts at the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing error reduction. Results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.
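
    The two specifications the abstract contrasts, a pure multiple (regression through the origin) versus the same driver with an intercept, can be sketched with closed-form least squares. The earnings and prices below are hypothetical, not from the article's samples; note that in-sample, adding an intercept can never increase the sum of squared pricing errors, so the article's finding concerns out-of-sample/practical pricing accuracy.

```python
def fit_multiple(values, prices, intercept=False):
    """Cross-section fit of price on a value driver.
    intercept=False: P = b*V, the classic multiple (b = sum(P*V)/sum(V^2)).
    intercept=True:  P = a + b*V, ordinary least squares."""
    n = len(values)
    if not intercept:
        b = sum(p * v for p, v in zip(prices, values)) / sum(v * v for v in values)
        return 0.0, b
    mv, mp = sum(values) / n, sum(prices) / n
    b = (sum((v - mv) * (p - mp) for v, p in zip(values, prices))
         / sum((v - mv) ** 2 for v in values))
    return mp - b * mv, b

earnings = [2.0, 3.5, 1.2, 4.1]    # hypothetical per-share value driver
prices = [24.0, 40.0, 15.0, 47.0]  # hypothetical share prices
sse = {}
for flag in (False, True):
    a, b = fit_multiple(earnings, prices, intercept=flag)
    sse[flag] = sum((p - (a + b * v)) ** 2 for v, p in zip(earnings, prices))
print(sse[True] <= sse[False])  # intercept never raises in-sample SSE
```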

  6. World Integrated Nuclear Evaluation System: Model documentation

    International Nuclear Information System (INIS)

    1991-12-01

    The World Integrated Nuclear Evaluation System (WINES) is an aggregate demand-based partial equilibrium model used by the Energy Information Administration (EIA) to project long-term domestic and international nuclear energy requirements. WINES follows a top-down approach in which economic growth rates, delivered energy demand growth rates, and electricity demand are projected successively to ultimately forecast total nuclear generation and nuclear capacity. WINES could be potentially used to produce forecasts for any country or region in the world. Presently, WINES is being used to generate long-term forecasts for the United States, and for all countries with commercial nuclear programs in the world, excluding countries located in centrally planned economic areas. Projections for the United States are developed for the period from 2010 through 2030, and for other countries for the period starting in 2000 or 2005 (depending on the country) through 2010. EIA uses a pipeline approach to project nuclear capacity for the period between 1990 and the starting year for which the WINES model is used. This approach involves a detailed accounting of existing nuclear generating units and units under construction, their capacities, their actual or estimated time of completion, and the estimated date of retirements. Further detail on this approach can be found in Appendix B of Commercial Nuclear Power 1991: Prospects for the United States and the World

  7. Evaluation of clinical information modeling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak

    2016-11-01

    Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus and applied to evaluate the currently available tools. Of the 11 initiatives identified that are developing tools for clinical information modeling, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria related to displaying semantic relationships between concepts and communication with terminology servers had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization. Clinical information models are specifications for representing the structure and semantic characteristics of clinical content in electronic health record systems. This research defines, tests and validates

  8. Evaluating to Solve Educational Problems: An Alternative Model.

    Science.gov (United States)

    Friedman, Myles I.; Anderson, Lorin W.

    1979-01-01

    A 19-step general evaluation model is described through its four stages: identifying problems, prescribing program solutions, evaluating the operation of the program, and evaluating the effectiveness of the model. The role of the evaluator in decision making is also explored. (RAO)

  9. A Model for Evaluating Student Clinical Psychomotor Skills.

    Science.gov (United States)

    And Others; Fiel, Nicholas J.

    1979-01-01

    A long-range plan to evaluate medical students' physical examination skills was undertaken at the Ingham Family Medical Clinic at Michigan State University. The development of the psychomotor skills evaluation model to evaluate the skill of blood pressure measurement, tests of the model's reliability, and the use of the model are described. (JMD)

  10. Comparison of two surface temperature measurement using thermocouples and infrared camera

    Directory of Open Access Journals (Sweden)

    Michalski Dariusz

    2017-01-01

    Full Text Available This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method used to study the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, the method error and the method accuracy. The experimental error and the method accuracy were taken into account. The comparative analysis showed that although the values and distributions of the surface temperatures obtained with the two methods were similar, both methods had certain limitations.
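
    The kind of basic statistical comparison the abstract mentions can be sketched as summary statistics over paired readings at the same foil locations. The temperature values below are hypothetical, not the paper's data.

```python
def compare_methods(tc, ir):
    """Basic statistics for two surface-temperature series measured at the
    same locations: mean difference (bias), mean absolute difference, and
    worst-case absolute difference."""
    d = [a - b for a, b in zip(tc, ir)]
    n = len(d)
    return sum(d) / n, sum(abs(x) for x in d) / n, max(abs(x) for x in d)

# Hypothetical foil temperatures (deg C): thermocouples vs. IR camera
tc = [101.2, 104.8, 109.5, 113.1, 117.6]
ir = [100.6, 104.1, 109.9, 112.4, 118.3]
bias, mad, worst = compare_methods(tc, ir)
print(round(bias, 2), round(mad, 2), round(worst, 2))
```

    A small bias with a larger mean absolute difference, as here, suggests scatter between the methods rather than a systematic offset.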

  11. Thermocouple and Infrared Sensor-Based Measurement of Temperature Distribution in Metal Cutting

    Directory of Open Access Journals (Sweden)

    Abdil Kus

    2015-01-01

    Full Text Available In metal cutting, the magnitude of the temperature at the tool-chip interface is a function of the cutting parameters. This temperature directly affects production; therefore, increased research on the role of cutting temperatures can lead to improved machining operations. In this study, tool temperature was estimated by simultaneous temperature measurement employing both a K-type thermocouple and an infrared radiation (IR pyrometer to measure the tool-chip interface temperature. Due to the complexity of the machining processes, the integration of different measuring techniques was necessary in order to obtain consistent temperature data. The thermal analysis results were compared via the ANSYS finite element method. Experiments were carried out in dry machining using workpiece material of AISI 4140 alloy steel that was heat treated by an induction process to a hardness of 50 HRC. A PVD TiAlN-TiN-coated WNVG 080404-IC907 carbide insert was used during the turning process. The results showed that with increasing cutting speed, feed rate and depth of cut, the tool temperature increased; the cutting speed was found to be the most effective parameter in assessing the temperature rise. The heat distribution of the cutting tool, tool-chip interface and workpiece provided effective and useful data for the optimization of selected cutting parameters during orthogonal machining.

  12. Thermocouple and infrared sensor-based measurement of temperature distribution in metal cutting.

    Science.gov (United States)

    Kus, Abdil; Isik, Yahya; Cakir, M Cemal; Coşkun, Salih; Özdemir, Kadir

    2015-01-12

    In metal cutting, the magnitude of the temperature at the tool-chip interface is a function of the cutting parameters. This temperature directly affects production; therefore, increased research on the role of cutting temperatures can lead to improved machining operations. In this study, tool temperature was estimated by simultaneous temperature measurement employing both a K-type thermocouple and an infrared radiation (IR) pyrometer to measure the tool-chip interface temperature. Due to the complexity of the machining processes, the integration of different measuring techniques was necessary in order to obtain consistent temperature data. The thermal analysis results were compared via the ANSYS finite element method. Experiments were carried out in dry machining using workpiece material of AISI 4140 alloy steel that was heat treated by an induction process to a hardness of 50 HRC. A PVD TiAlN-TiN-coated WNVG 080404-IC907 carbide insert was used during the turning process. The results showed that with increasing cutting speed, feed rate and depth of cut, the tool temperature increased; the cutting speed was found to be the most effective parameter in assessing the temperature rise. The heat distribution of the cutting tool, tool-chip interface and workpiece provided effective and useful data for the optimization of selected cutting parameters during orthogonal machining.

  13. Thermocouple and Infrared Sensor-Based Measurement of Temperature Distribution in Metal Cutting

    Science.gov (United States)

    Kus, Abdil; Isik, Yahya; Cakir, M. Cemal; Coşkun, Salih; Özdemir, Kadir

    2015-01-01

    In metal cutting, the magnitude of the temperature at the tool-chip interface is a function of the cutting parameters. This temperature directly affects production; therefore, increased research on the role of cutting temperatures can lead to improved machining operations. In this study, tool temperature was estimated by simultaneous temperature measurement employing both a K-type thermocouple and an infrared radiation (IR) pyrometer to measure the tool-chip interface temperature. Due to the complexity of the machining processes, the integration of different measuring techniques was necessary in order to obtain consistent temperature data. The thermal analysis results were compared via the ANSYS finite element method. Experiments were carried out in dry machining using workpiece material of AISI 4140 alloy steel that was heat treated by an induction process to a hardness of 50 HRC. A PVD TiAlN-TiN-coated WNVG 080404-IC907 carbide insert was used during the turning process. The results showed that with increasing cutting speed, feed rate and depth of cut, the tool temperature increased; the cutting speed was found to be the most effective parameter in assessing the temperature rise. The heat distribution of the cutting tool, tool-chip interface and workpiece provided effective and useful data for the optimization of selected cutting parameters during orthogonal machining. PMID:25587976

  14. External attachment of titanium sheathed thermocouples to zirconium nuclear fuel rods for the loss-of-fluid-test (LOFT) Reactor

    International Nuclear Information System (INIS)

    Welty, R.K.

    1980-01-01

    A welding process to attach titanium sheathed thermocouples to the outside of the zircaloy clad fuel rods has been developed. A laser beam was selected as the optimum welding process because of the extremely high energy input per unit volume that can be achieved allowing local fusion of a small area irrespective of the difference in material thickness to be joined. Irradiation tests showed no degradation of thermocouples or weld structure. Fast thermal cycle and heater rod blowdown reflood tests were made to subject the weldments to high temperatures, high pressure steam, and fast water quench cycles. From the behavior of these tests, it was concluded that the attachment welds would survive a series of reactor safety tests. 2 refs

  15. Nuclear Power Plant Thermocouple Sensor-Fault Detection and Classification Using Deep Learning and Generalized Likelihood Ratio Test

    Science.gov (United States)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-06-01

    In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by a classification method that separates fault data from normal data. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme that is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated with field data obtained from thermocouple sensors of the fast breeder test reactor.
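
    For the generalized likelihood ratio test named in the abstract, the textbook special case of a constant bias fault in Gaussian residuals has a closed form: the maximum-likelihood fault magnitude is the sample mean, and the log-GLR statistic is N·mean²/(2σ²). The residuals, noise level, and threshold below are hypothetical; the paper's exact formulation may differ.

```python
def glr_bias_test(residuals, sigma, threshold):
    """GLRT for a constant bias fault in a sensor residual sequence with
    known Gaussian noise std `sigma`. H0: mean = 0; H1: mean = mu unknown.
    The MLE of mu is the sample mean; log-GLR = N * mean^2 / (2 * sigma^2)."""
    n = len(residuals)
    mu_hat = sum(residuals) / n
    glr = n * mu_hat ** 2 / (2.0 * sigma ** 2)
    return glr > threshold, mu_hat, glr

# Hypothetical thermocouple residuals carrying a +0.5 deg C bias fault
faulty = [0.5 + e for e in (0.1, -0.2, 0.05, 0.15, -0.1, 0.0, 0.2, -0.05)]
alarm, mu_hat, glr = glr_bias_test(faulty, sigma=0.15, threshold=5.0)
print(alarm, round(mu_hat, 3))  # alarm raised; estimated fault magnitude
```

    The estimated magnitude `mu_hat` is what allows the fault pattern to be characterized by size, as the abstract describes.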

  16. Description of an identification method of thermocouple time constant based on application of recursive numerical filtering to temperature fluctuation

    International Nuclear Information System (INIS)

    Bernardin, B.; Le Guillou, G.; Parcy, JP.

    1981-04-01

    The usual spectral methods for identifying thermocouple time constants, based on temperature fluctuation analysis, require equipment too sophisticated for on-line application. It is shown that numerical filtering is optimal for this application: the equipment is simpler than for spectral methods, and fewer signal samples are needed for the same accuracy. The method is described, and a parametric study was performed using a temperature noise simulator [fr]
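
    The underlying idea can be sketched without the report's recursive filter: model the thermocouple as a first-order lag sampled at step dt, fit the autoregressive coefficient of the discretized model, and recover the time constant from it. This batch least-squares version is an illustration of the model, not the report's on-line algorithm, and the signal is simulated.

```python
import math

def estimate_time_constant(u, y, dt):
    """Least-squares fit of y[k+1] = a*y[k] + (1-a)*u[k] for a first-order
    sensor; the time constant follows as tau = -dt / ln(a)."""
    num = sum((y[k + 1] - u[k]) * (y[k] - u[k]) for k in range(len(y) - 1))
    den = sum((y[k] - u[k]) ** 2 for k in range(len(y) - 1))
    a = num / den
    return -dt / math.log(a)

# Simulate a thermocouple with tau = 0.8 s tracking a fluctuating input
dt, tau = 0.01, 0.8
a_true = math.exp(-dt / tau)
u = [math.sin(0.7 * k * dt) + 0.3 * math.sin(3.1 * k * dt) for k in range(2000)]
y = [0.0]
for k in range(1999):
    y.append(a_true * y[-1] + (1 - a_true) * u[k])
tau_hat = estimate_time_constant(u, y, dt)
print(round(tau_hat, 3))  # 0.8
```

    In practice the fluid temperature u is not measured directly, which is why the report works from the statistics of the temperature noise instead; the recursive filter updates the same kind of estimate sample by sample.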

  17. Modelling and evaluating against the violent insider

    International Nuclear Information System (INIS)

    Fortney, D.S.; Al-Ayat, R.A.; Saleh, R.A.

    1991-01-01

    The violent insider threat poses a special challenge to facilities protecting special nuclear material from theft or diversion. These insiders could potentially behave as nonviolent insiders to deceitfully defeat certain safeguards elements and use violence to forcefully defeat hardware or personnel. While several vulnerability assessment tools are available to deal with the nonviolent insider, very limited effort has been directed to developing analysis tools for the violent threat. In this paper, the authors present an approach using the results of a vulnerability assessment for nonviolent insiders to evaluate certain violent insider scenarios. Since existing tools do not explicitly consider violent insiders, the approach is intended for experienced safeguards analysts and relies on the analyst to brainstorm possible violent actions, to assign detection probabilities, and to ensure consistency. The authors then discuss our efforts in developing an automated tool for assessing the vulnerability against those violent insiders who are willing to use force against barriers, but who are unwilling to kill or be killed. Specifically, the authors discuss our efforts in developing databases for violent insiders penetrating barriers, algorithms for considering the entry of contraband, and modelling issues in considering the use of violence

  18. Modelling and evaluating against the violent insider

    International Nuclear Information System (INIS)

    Fortney, D.S.; Al-Ayat, R.A.; Saleh, R.A.

    1991-07-01

    The violent insider threat poses a special challenge to facilities protecting special nuclear material from theft or diversion. These insiders could potentially behave as nonviolent insiders to deceitfully defeat certain safeguards elements and use violence to forcefully defeat hardware or personnel. While several vulnerability assessment tools are available to deal with the nonviolent insider, very limited effort has been directed to developing analysis tools for the violent threat. In this paper, we present an approach using the results of a vulnerability assessment for nonviolent insiders to evaluate certain violent insider scenarios. Since existing tools do not explicitly consider violent insiders, the approach is intended for experienced safeguards analysts and relies on the analyst to brainstorm possible violent actions, to assign detection probabilities, and to ensure consistency. We then discuss our efforts in developing an automated tool for assessing the vulnerability against those violent insiders who are willing to use force against barriers, but who are unwilling to kill or be killed. Specifically, we discuss our efforts in developing databases for violent insiders penetrating barriers, algorithms for considering the entry of contraband, and modelling issues in considering the use of violence

  19. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors.
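
    One common way to separate magnitude from sequence errors (an illustrative decomposition, not necessarily MPESA's exact formulation) is to compare the RMSE of the raw time pairing with the RMSE of the sorted pairing: sorting discards timing, so the sorted RMSE reflects magnitude errors only, and the gap to the total is attributable to sequence errors. The series below are hypothetical.

```python
def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def magnitude_sequence_errors(obs, sim):
    """Total RMSE vs. magnitude-only RMSE (computed on sorted values,
    ignoring timing); the difference reflects sequence errors."""
    return rmse(obs, sim), rmse(sorted(obs), sorted(sim))

obs = [1.0, 3.0, 7.0, 4.0, 2.0]  # hypothetical observed series
sim = [2.0, 1.0, 4.0, 7.0, 3.0]  # simulated: right values, wrong order
total, magnitude = magnitude_sequence_errors(obs, sim)
print(round(total, 3), round(magnitude, 3))  # nonzero total, zero magnitude
```

    Here the simulation reproduces the observed distribution exactly (zero magnitude error), so the entire total RMSE is a sequence error.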

  20. New fixed-point mini-cell to investigate thermocouple drift in a high-temperature environment under neutron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Laurie, M.; Vlahovic, L.; Rondinella, V.V. [European Commission, Joint Research Centre, Institute for Transuranium Elements, P.O. Box 2340, D-76125 Karlsruhe, (Germany); Sadli, M.; Failleau, G. [Laboratoire Commun de Metrologie, LNE-Cnam, Saint-Denis, (France); Fuetterer, M.; Lapetite, J.M. [European Commission, Joint Research Centre, Institute for Energy and Transport, P.O. Box 2, NL-1755 ZG Petten, (Netherlands); Fourrez, S. [Thermocoax, 8 rue du pre neuf, F-61100 St Georges des Groseillers, (France)

    2015-07-01

    Temperature measurements in the nuclear field require a high degree of reliability and accuracy. Despite their sheathed form, thermocouples subjected to nuclear radiation undergo changes due to radiation damage and transmutation that lead to significant EMF drift during long-term fuel irradiation experiments. In preparation for a High Temperature Reactor fuel irradiation in the High Flux Reactor Petten, a dedicated fixed-point cell was jointly developed by LNE-Cnam and JRC-IET. The cell, to be housed in the irradiation rig, was tailor-made to quantify thermocouple drift during the irradiation (about two years) and to withstand high temperature (in the range 950 deg. C - 1100 deg. C) in the presence of contaminated helium in a graphite environment. Considering the different temperature levels achieved in the irradiation facility and the large palette of thermocouple types intended to survey the HTR fuel pebble during the qualification test, both copper (1084.62 deg. C) and gold (1064.18 deg. C) fixed-point materials were considered. The aim of this paper is first to describe the fixed-point mini-cell designed to be embedded in the reactor rig, and then to discuss the preliminary results achieved during out-of-pile tests as well as robustness tests representative of reactor scram scenarios. (authors)
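
    A fixed point quantifies drift because the true plateau temperature is known; to first order, an observed EMF shift converts to an equivalent temperature error through the local Seebeck coefficient, dT = dE / S. Both numbers below are assumptions for illustration: the 120 uV shift is hypothetical and ~39 uV/K is only roughly representative of base-metal thermocouples near these temperatures.

```python
def emf_drift_to_temperature(delta_emf_uv, seebeck_uv_per_k):
    """First-order conversion of an EMF drift observed at a fixed-point
    plateau into an equivalent temperature error: dT = dE / S."""
    return delta_emf_uv / seebeck_uv_per_k

# Hypothetical: 120 uV drift at the copper point (1084.62 deg C) for a
# thermocouple with an assumed local Seebeck coefficient of ~39 uV/K
dT = emf_drift_to_temperature(120.0, 39.0)
print(round(dT, 2))  # ~3 K of apparent drift
```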

  1. Airline service quality evaluation: A review on concepts and models

    OpenAIRE

    Navid Haghighat

    2017-01-01

    This paper reviews the major service quality concepts and models which led to great developments in evaluating service quality, focusing on the improvement process of the models through discussion of the criticisms of each model. Criticisms of these models are discussed to clarify the development steps of newer models, which led to the improvement of airline service quality models. The precise and accurate evaluation of service quality needs utilizing a reliable concept with comprehensive crite...

  2. Use of indexed sensitivity factors in the analysis of nickel and iron based alloys: study of the decalibration of sheathed Chromel/Alumel thermocouples

    International Nuclear Information System (INIS)

    Christie, W.H.

    1978-01-01

    Sheathed Chromel versus Alumel thermocouples decalibrate when exposed to temperatures in excess of 1100 deg. C. Thermocouples sheathed in Inconel-600 and type 304 stainless steel were studied in this work. Quantified SIMS data showed that the observed decalibrations were due to significant alterations that took place in the Chromel and Alumel thermoelements. The amount of alteration was different for each thermocouple and was influenced by the particular sheath material used in the thermocouple construction. Relative sensitivity factors, indexed by a matrix ion species ratio, were used to quantify SIMS data for three nickel-based alloys (Chromel, Alumel, and Inconel-600) and an iron-based alloy (type 304 stainless steel). Oxygen pressure >2 x 10^-6 torr in the sputtering region gave enhanced sensitivity and superior quantitative results as compared to data obtained at instrumental residual pressure.
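
In its simplest form, RSF-based SIMS quantification of the kind described reduces to scaling a measured secondary-ion intensity ratio by a relative sensitivity factor. A hedged sketch with invented numbers; the RSF value and intensities are illustrative, not data from this study:

```python
# Illustrative SIMS quantification with a relative sensitivity factor
# (RSF) indexed by a matrix ion species ratio. All numbers are made up.

def atom_fraction(i_element, i_matrix, rsf):
    """Element atom fraction ~ RSF * (secondary-ion intensity ratio)."""
    return rsf * (i_element / i_matrix)

# Hypothetical intensities (counts/s) for Cr in a Chromel-like Ni matrix
i_cr, i_ni = 1.2e5, 4.0e6
rsf_cr = 3.1  # assumed RSF selected via the matrix ion species ratio index
print(round(atom_fraction(i_cr, i_ni, rsf_cr), 3))  # 0.093
```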

  3. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  4. Nuclear safety culture evaluation model based on SSE-CMM

    International Nuclear Information System (INIS)

    Yang Xiaohua; Liu Zhenghai; Liu Zhiming; Wan Yaping; Peng Guojian

    2012-01-01

    Safety culture, which is of great significance for establishing safety objectives, characterizes the level of enterprise safety production and development. Traditional safety culture evaluation models emphasize the thinking and behavior of individuals and organizations, and pay attention to evaluation results while ignoring the process. Moreover, the determination of evaluation indicators lacks objective evidence. A novel multidimensional safety culture evaluation model, which is scientific and complete, is addressed by building a preliminary mapping between safety culture and the SSE-CMM's (Systems Security Engineering Capability Maturity Model) process areas and generic practices. The model focuses on evaluating the enterprise system security engineering process and provides new ideas and scientific evidence for the study of safety culture. (authors)

  5. Presenting an Evaluation Model for the Cancer Registry Software.

    Science.gov (United States)

    Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh

    2017-12-01

    As cancer is increasingly growing, the cancer registry is of great importance as the core of cancer control programs, and many different software packages have been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential to evaluate and compare a wide range of such software. In this study, the criteria for cancer registry software were determined by studying the documentation and two functional software packages in this field. The evaluation tool was a checklist, and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of the validation, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved, the final version of the evaluation model for cancer registry software was presented. The evaluation model of this study comprises a tool and a method of evaluation. The evaluation tool is a checklist including the general and specific criteria of cancer registry software along with their sub-criteria. The evaluation method was chosen as a criteria-based evaluation method based on the findings. The model encompasses various dimensions of cancer registry software and a proper method for evaluating them. The strong point of this evaluation model is the separation between general criteria and specific ones, while fulfilling the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate cancer registry software.

  6. The influence of lead temperature on the accuracy of various stainless-steel sheathed, mineral-insulated nickel-chromium/nickel-aluminium thermocouples

    International Nuclear Information System (INIS)

    Burnett, P.; Burns, J.S.

    1977-10-01

    Samples of three types of stainless steel sheathed MI thermocouples, such as are currently used in fire and furnace tests of transport flasks, have been subjected to high lead temperatures whilst the thermojunctions were kept at a constant low temperature. Both the lead temperature and the length of lead at temperature have been varied. As the lead temperature rises from ambient to a selected value, the emf output from the thermocouple initially decreases and then increases, taking up a final value dependent on the particular conditions. Below a threshold lead temperature, no significant steady state error occurs and the negative transient is generally negligible. Each thermocouple has its own threshold temperature, the lowest found being about 600 deg. C, although the average lies at about 750 deg. C. Above the threshold lead temperature, the thermal emf can be in error by the equivalent of more than 100 deg. C, the highest error found being nearly 230 deg. C at a temperature 250 deg. C above threshold. The same thermocouple showed a negative transient of 13 deg. C, 3 minutes after the start of heating to 890 deg. C. It is probable that the steady state error arises because of the degradation of the thermocouple mineral insulation at elevated temperatures, and recommendations are made on the use of such thermocouples in fire and furnace tests. The cause of the initial negative transient error has not been identified, but ways of minimising any resultant errors are suggested. (author)

  7. A Grid of Fine Wire Thermocouples to Study the Spatial Coherence of Turbulence within Katabatic Flow through a Vineyard Canopy

    Science.gov (United States)

    Everard, K.; Christen, A.; Sturman, A.; Skaloud, P.

    2016-12-01

    Knowledge of the dynamics and thermodynamics of katabatic flow is relevant in vineyards, where grapevines are sensitive to temperature changes (frost protection and cooling). The occurrence and evolution of, and turbulence within, katabatic flow are well understood over bare slopes. However, little work has been done to extend this understanding to mid-sized canopies and to how the presence of a canopy affects the turbulent exchange of momentum and heat within the flow. Measurements were carried out over a 6° vineyard slope near Oliver, BC, Canada in the Okanagan Valley between July 5 and July 22, 2016. The set-up consisted of an array of five vertically arranged CSAT 3D (Campbell Scientific, Inc.) ultrasonic anemometers at z = 0.45 m, 0.90 m, 1.49 m, 2.34 m, and 4.73 m above ground level (AGL), and a 2-D grid of 40 Type-E (chromel-constantan) fine-wire thermocouples (FWTC) arranged at the same heights as the CSAT 3D array on 8 masts extending in the upslope (flow) direction at locations x = 0.0 m (CSAT 3D tower), 0.5 m, 1.0 m, 2.0 m, 4.0 m, 8.0 m, 16.0 m, and 32.0 m. The FWTC array formed a sheet of 40 sampling points in the upslope-vertical plane. The height of the grapevine canopy (h) was approximately 2 m AGL, and rows were aligned along the local slope direction with a row spacing of 2.45 m. The CSAT 3Ds were sampled at 60 Hz with 20 Hz data recording, and the FWTCs were sampled at 2 Hz, all synchronized by a data logger. Katabatic flow was observed on several nights during the campaign, with a wind speed maximum located within the canopy. This contribution will focus on the measurement techniques, combining ultrasonic anemometer data with the spatially synchronized FWTC array using image-processing techniques. We identify the dynamics and structure of the katabatic flow, relevant for heat exchange, using the spatial coherence of the temperature field given by the FWTC array. Improved knowledge of the vertical structure and the dynamics of katabatic
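
Spatial coherence between masts of such a grid can be summarized, in its simplest form, by the correlation between temperature time series at two streamwise positions at the same height. A small sketch with synthetic data; the values and mast labels are invented, and the campaign's actual analysis is considerably more elaborate:

```python
# Hedged sketch: coherence of the temperature field between the reference
# mast (x = 0 m) and a downwind mast, via the Pearson correlation of their
# temperature time series. All data below are synthetic.

def pearson_r(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

t_ref = [14.2, 14.0, 13.8, 14.1, 14.4, 14.3, 13.9, 14.0]  # deg C, x = 0 m
t_x8  = [14.1, 13.9, 13.9, 14.0, 14.3, 14.2, 14.0, 13.9]  # deg C, x = 8 m
print(round(pearson_r(t_ref, t_x8), 2))  # high r: coherent structures
```

Repeating this over all mast pairs and heights yields a coherence map of the kind the abstract alludes to.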

  8. Modeling a support system for the evaluator

    International Nuclear Information System (INIS)

    Lozano Lima, B.; Ilizastegui Perez, F; Barnet Izquierdo, B.

    1998-01-01

    This work gives evaluators a tool they can employ to add soundness to their review of operational limits and conditions. The system establishes the most adequate method to carry out the evaluation, as well as to evaluate the basis for technical operational specifications. It also includes the generation of alternative questions to be supplied to the operating entity to support its decision-making activities.

  9. Uniformity index measurement technology using thermocouples to improve performance in urea-selective catalytic reduction systems

    Science.gov (United States)

    Park, Sangki; Oh, Jungmo

    2018-05-01

    The current commonly used nitrogen oxide (NOx) emission reduction techniques employ hydrocarbons (HCs), urea solutions, and exhaust gas emissions as the reductants. Two of the primary denitrification (DeNOx) catalyst systems are the HC-lean NOx trap (HC-LNT) catalyst and the urea-selective catalytic reduction (urea-SCR) catalyst. The secondary injection method depends on the type of injector, injection pressure, atomization, and spraying technique. In addition, the catalyst reaction efficiency is directly affected by the reductant distribution from the injectors; hence, the uniformity index (UI) of the reductant is very important and is the basis for system optimization. The UI of the reductant is an indicator of the NOx conversion efficiency (NCE), and good UI values can reduce the amount of catalyst needed. Therefore, improving the UI can reduce the cost of producing catalytic converters, which are expensive due to the high prices of the precious metals contained therein. Accordingly, measurement of the UI is an important step in the development of catalytic systems. Two commonly used methods for measuring the reductant UI are (i) measuring the exhaust emissions at many points located upstream/downstream of the catalytic converter and (ii) acquiring a reductant distribution image on a section of the exhaust pipe upstream of the catalytic converter. The purpose of this study is to develop a system and measurement algorithms to measure the exothermic response distribution in the exhaust gas as the reductant passes through the catalytic converter of the SCR system, using a set of thermocouples downstream of the SCR catalyst. The system is used to measure the reductant UI, which is applied in real time to the actual SCR system, and the results are compared for various mixtures, engine operating conditions, and mixer types in terms of NCE.
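
One common definition of a uniformity index over a set of point measurements (not necessarily the exact one used in this study) is UI = 1 - Σ|x_i - x̄| / (2·n·x̄), which equals 1 for a perfectly uniform distribution. A short sketch applying it to hypothetical thermocouple exotherm readings:

```python
# Hedged sketch of a uniformity index over point measurements; the
# temperatures below are invented, not data from the study.

def uniformity_index(values):
    """UI = 1 - sum(|x_i - mean|) / (2 * n * mean); 1.0 = perfectly uniform."""
    n = len(values)
    mean = sum(values) / n
    dev = sum(abs(v - mean) for v in values)
    return 1.0 - dev / (2.0 * n * mean)

# Hypothetical exotherm temperatures (deg C) from a thermocouple grid
temps = [412.0, 405.0, 398.0, 420.0, 409.0]
print(round(uniformity_index(temps), 3))  # 0.993: quite uniform
```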

  10. Statistical models of shape optimisation and evaluation

    CERN Document Server

    Davies, Rhodri; Taylor, Chris

    2014-01-01

    Deformable shape models have wide application in computer vision and biomedical image analysis. This book addresses a key issue in shape modelling: establishment of a meaningful correspondence between a set of shapes. Full implementation details are provided.

  11. Evaluation of EOR Processes Using Network Models

    DEFF Research Database (Denmark)

    Winter, Anatol; Larsen, Jens Kjell; Krogsbøll, Anette

    1998-01-01

    The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels), including estimation of their "petrophysical" properties (e.g. absolute permeability). 3) Mathematical modelling and computer studies of multiphase transport through pore space using mathematical network models. 4) Investigation of the link between pore-scale and macroscopic recovery mechanisms.

  12. The Use of AMET and Automated Scripts for Model Evaluation

    Science.gov (United States)

    The Atmospheric Model Evaluation Tool (AMET) is a suite of software designed to facilitate the analysis and evaluation of meteorological and air quality models. AMET matches the model output for particular locations to the corresponding observed values from one or more networks ...

  13. Evaluation of global climate models for Indian monsoon climatology

    International Nuclear Information System (INIS)

    Kodra, Evan; Ganguly, Auroop R; Ghosh, Subimal

    2012-01-01

    The viability of global climate models for forecasting the Indian monsoon is explored. Evaluation and intercomparison of model skills are employed to assess the reliability of individual models and to guide model selection strategies. Two dominant and unique patterns of Indian monsoon climatology are trends in maximum temperature and periodicity in total rainfall observed after 30 yr averaging over India. An examination of seven models and their ensembles reveals that no single model or model selection strategy outperforms the rest. The single-best model for the periodicity of Indian monsoon rainfall is the only model that captures a low-frequency natural climate oscillator thought to dictate the periodicity. The trend in maximum temperature, which most models are thought to handle relatively better, is best captured through a multimodel average compared to individual models. The results suggest a need to carefully evaluate individual models and model combinations, in addition to physical drivers where possible, for regional projections from global climate models. (letter)
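
The skill comparison described, individual models versus a multimodel average, can be illustrated with a simple RMSE computation. The toy anomaly series below are invented; they merely show how an average of models with partly anti-correlated errors can outperform every individual member, as the abstract reports for the maximum-temperature trend:

```python
# Illustrative model-skill comparison via RMSE; all series are synthetic.

def rmse(pred, obs):
    """Root-mean-square error of predictions against observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

obs = [0.1, 0.3, 0.2, 0.5, 0.4]   # "observed" anomalies
m1  = [0.0, 0.4, 0.1, 0.6, 0.3]   # hypothetical model 1
m2  = [0.3, 0.1, 0.4, 0.3, 0.6]   # hypothetical model 2
mme = [(a + b) / 2 for a, b in zip(m1, m2)]  # multimodel average

for name, m in [("m1", m1), ("m2", m2), ("multimodel mean", mme)]:
    print(name, round(rmse(m, obs), 3))  # the multimodel mean wins here
```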

  14. Issues in Value-at-Risk Modeling and Evaluation

    NARCIS (Netherlands)

    J. Daníelsson (Jón); C.G. de Vries (Casper); B.N. Jorgensen (Bjørn); P.F. Christoffersen (Peter); F.X. Diebold (Francis); T. Schuermann (Til); J.A. Lopez (Jose); B. Hirtle (Beverly)

    1998-01-01

    textabstractDiscusses the issues in value-at-risk modeling and evaluation. Value of value at risk; Horizon problems and extreme events in financial risk management; Methods of evaluating value-at-risk estimates.

  15. MARKET EVALUATION MODEL: TOOL FOR BUSINESS DECISIONS

    OpenAIRE

    Porlles Loarte, José; Yenque Dedios, Julio; Lavado Soto, Aurelio

    2014-01-01

    In the present work, the concepts of potential market and global market are analyzed as the basis for strategic market decisions with a long-term perspective, when the establishment of a business in a certain geographic area is evaluated. On this conceptual frame, a methodological tool is proposed to evaluate a commercial decision, taking as a reference the case of the brewing industry in Peru, considering that this industry faces in the region entrepreneurial reorderings withi...

  16. A Regional Climate Model Evaluation System

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop a packaged data management infrastructure for the comparison of generated climate model output to existing observational datasets that includes capabilities...

  17. QUALITY OF AN ACADEMIC STUDY PROGRAMME - EVALUATION MODEL

    Directory of Open Access Journals (Sweden)

    Mirna Macur

    2016-01-01

    Quality of an academic study programme is evaluated by many: by employees (internal evaluation) and by external evaluators: experts, agencies and organisations. Internal and external evaluation of an academic programme follow a written structure that resembles one of the quality models. We believe the quality models (mostly derived from the EFQM excellence model) do not fit very well with non-profit activities, policies and programmes, because these are much more complex than the environments from which the quality models derive (for example, an assembly line). Quality of an academic study programme is very complex and understood differently by various stakeholders, so we present dimensional evaluation in this article. Dimensional evaluation, as opposed to component and holistic evaluation, is a form of analytical evaluation in which the quality or value of the evaluand is determined by looking at its performance on multiple dimensions of merit or evaluation criteria. First, the stakeholders of a study programme and their views, expectations and interests are presented, followed by the evaluation criteria. Both are then joined into an evaluation model revealing which evaluation criteria can and should be evaluated by which stakeholder. Main research questions are posed and a research method for each dimension is listed.

  18. Evaluating Energy Efficiency Policies with Energy-Economy Models

    Energy Technology Data Exchange (ETDEWEB)

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  19. evaluation of models for assessing groundwater vulnerability

    African Journals Online (AJOL)

    DR. AMINU

    applied models for groundwater vulnerability assessment mapping. The approaches .... The overall 'pollution potential' or DRASTIC index is established by applying the formula: DRASTIC Index: ... affected by the structure of the soil surface.
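
The DRASTIC index referred to above is a weighted sum of ratings over seven hydrogeologic factors. A sketch using the standard DRASTIC weights; the ratings below are hypothetical:

```python
# DRASTIC index = sum of (rating * weight) over the seven factors:
# Depth to water, net Recharge, Aquifer media, Soil media, Topography,
# Impact of the vadose zone, hydraulic Conductivity.
# Weights are the standard DRASTIC weights; ratings are invented.

WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Weighted sum of factor ratings; higher = more vulnerable."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

ratings = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 9, "I": 8, "C": 6}
print(drastic_index(ratings))  # 162
```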

  20. Models for Evaluating and Improving Architecture Competence

    National Research Council Canada - National Science Library

    Bass, Len; Clements, Paul; Kazman, Rick; Klein, Mark

    2008-01-01

    ... producing high-quality architectures. This report lays out the basic concepts of software architecture competence and describes four models for explaining, measuring, and improving the architecture competence of an individual...

  1. The CREATIVE Decontamination Performance Evaluation Model

    National Research Council Canada - National Science Library

    Shelly, Erin E

    2008-01-01

    The project objective is to develop a semi-empirical, deterministic model to characterize and predict laboratory-scale decontaminant efficacy and hazards for a range of: chemical agents (current focus on HD...

  2. Evaluation of Model Wheat/Hemp Composites

    Directory of Open Access Journals (Sweden)

    Ivan Švec

    2014-02-01

    Model cereal blends were prepared from commercial fine wheat flour and 5 samples of hemp flour (HF), comprising the fine type (2 of conventional form, 1 of organic form) and the wholemeal type (2 of conventional form). Wheat flour was substituted at 4 levels (5, 10, 15, 20%). HF addition increased protein content independently of the tested hemp flour form or type. Partial model cereal blends could be distinguished according to protein quality (Zeleny test values), especially between the fine and wholemeal HF types. Both flour types also affected amylolytic activity, for which a relationship between hemp addition and the determined Falling Number level was confirmed for all five model cereal blends. Solvent retention capacity (SRC) profiles of the partial models were influenced by both HF form and type, as well as by the addition level. Between both mentioned groups of quality features, significant correlations were proven; relationships among protein content/quality and lactic acid SRC were verifiable at p < 0.01 (-0.58 and 0.91, respectively). By ANOVA, the possibility to distinguish the HF form used in a model cereal blend according to the lactic acid SRC and the water SRC was demonstrated. Comparing partial cereal models containing the fine and wholemeal hemp types, the HF addition level demonstrated its impact on the sodium carbonate SRC and the water SRC.

  3. Evaluation model development for sprinkler irrigation uniformity ...

    African Journals Online (AJOL)

    use

    Sprinkle and trickle irrigation. The. Blackburn Press, New Jersey, USA. Li JS, Rao MJ (1999). Evaluation method of sprinkler irrigation nonuniformity. Trans. CSAE. 15(4): 78-82. Lin Z, Merkley GP (2011). Relationships between common irrigation application uniformity indicators. Irrig Sci. Online First™, 27 January. 2011.

  4. Evaluation model development for sprinkler irrigation uniformity ...

    African Journals Online (AJOL)

    A new evaluation method with accompanying software was developed to precisely calculate uniformity from catch-can test data, assuming sprinkler distribution data to be a continuous variable. Two interpolation steps are required to compute unknown water application depths at grid distribution points from radial ...

  5. [Decision modeling for economic evaluation of health technologies].

    Science.gov (United States)

    de Soárez, Patrícia Coelho; Soares, Marta Oliveira; Novaes, Hillegonda Maria Dutilh

    2014-10-01

    Most economic evaluations that participate in decision-making processes for incorporation and financing of technologies of health systems use decision models to assess the costs and benefits of the compared strategies. Despite the large number of economic evaluations conducted in Brazil, there is a pressing need to conduct an in-depth methodological study of the types of decision models and their applicability in our setting. The objective of this literature review is to contribute to the knowledge and use of decision models in the national context of economic evaluations of health technologies. This article presents general definitions about models and concerns with their use; it describes the main models: decision trees, Markov chains, micro-simulation, simulation of discrete and dynamic events; it discusses the elements involved in the choice of model; and exemplifies the models addressed in national economic evaluation studies of diagnostic and therapeutic preventive technologies and health programs.
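
A Markov (cohort) model of the kind surveyed can be sketched in a few lines: a cohort is propagated through health states cycle by cycle, accumulating costs and QALYs, and two strategies are compared via an incremental cost-effectiveness ratio (ICER). All transition probabilities, costs and utilities below are invented for illustration:

```python
# Minimal 3-state (well/sick/dead) Markov cohort sketch, undiscounted.
# Every number here is hypothetical; real evaluations add discounting,
# half-cycle correction, and probabilistic sensitivity analysis.

def run_cohort(trans, cost, utility, cycles=20, start=(1.0, 0.0, 0.0)):
    """Propagate a cohort through the model; return (cost, QALYs) per person."""
    dist = list(start)
    tot_cost = tot_qaly = 0.0
    for _ in range(cycles):
        dist = [sum(dist[i] * trans[i][j] for i in range(3)) for j in range(3)]
        tot_cost += sum(d * c for d, c in zip(dist, cost))
        tot_qaly += sum(d * u for d, u in zip(dist, utility))
    return tot_cost, tot_qaly

# Hypothetical inputs: the new therapy slows well -> sick transitions
usual = [[0.85, 0.10, 0.05], [0.0, 0.80, 0.20], [0.0, 0.0, 1.0]]
new   = [[0.90, 0.05, 0.05], [0.0, 0.80, 0.20], [0.0, 0.0, 1.0]]
cost_u, qaly_u = run_cohort(usual, cost=[100, 2000, 0], utility=[0.9, 0.5, 0.0])
cost_n, qaly_n = run_cohort(new,   cost=[900, 2000, 0], utility=[0.9, 0.5, 0.0])
icer = (cost_n - cost_u) / (qaly_n - qaly_u)  # cost per QALY gained
print(round(cost_n - cost_u, 1), round(qaly_n - qaly_u, 3), round(icer))
```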

  6. A qualitative evaluation approach for energy system modelling frameworks

    DEFF Research Database (Denmark)

    Wiese, Frauke; Hilpert, Simon; Kaldemeyer, Cord

    2018-01-01

    properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks...

  7. Simulation of electric power conservation strategies: model of economic evaluation

    International Nuclear Information System (INIS)

    Pinhel, A.C.C.

    1992-01-01

    A methodology for the economic evaluation of energy conservation programs to be executed by the National Program of Electric Power Conservation is presented. From data such as forecasts of conserved energy, tariffs, energy costs and budget, the model calculates economic indexes for the programs, allowing the evaluation of economic impacts on the electric sector. (C.G.C.)
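
One economic index such a model might compute is the discounted benefit/cost ratio of a conservation program. A hedged sketch; all figures are invented, since the abstract does not specify the model's actual indexes or inputs:

```python
# Hypothetical benefit/cost calculation for a conservation program.
# Every figure below is an assumption for illustration only.

def npv(cashflows, rate):
    """Net present value of cashflows indexed by year (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

saved_mwh_per_year = 12000.0
tariff_per_mwh = 80.0                     # assumed avoided-supply value, $/MWh
benefits = [0.0] + [saved_mwh_per_year * tariff_per_mwh] * 10
costs = [2.5e6] + [5e4] * 10              # assumed upfront budget + yearly admin
rate = 0.10                               # assumed discount rate

bc_ratio = npv(benefits, rate) / npv(costs, rate)
print(round(bc_ratio, 2))  # 2.1: discounted benefits exceed costs
```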

  8. The Use of AMET & Automated Scripts for Model Evaluation

    Science.gov (United States)

    Brief overview of EPA’s new CMAQ website to be launched publically in June, 2017. Details on the upcoming release of the Atmospheric Model Evaluation Tool (AMET) and the creation of automated scripts for post-processing and evaluating air quality model data.

  9. Modelling in Evaluating a Working Life Project in Higher Education

    Science.gov (United States)

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between higher education, a care home and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  10. Scanning thermal microscopy based on a quartz tuning fork and a micro-thermocouple in active mode (2ω method)

    International Nuclear Information System (INIS)

    Bontempi, Alexia; Nguyen, Tran Phong; Salut, Roland; Thiery, Laurent; Teyssieux, Damien; Vairac, Pascal

    2016-01-01

    A novel probe for a scanning thermal microscope, using a micro-thermocouple placed on a Quartz Tuning Fork (QTF), is presented. Instead of using external deflection detection with a cantilever beam for contact detection, an original combination of a piezoelectric resonator and a thermal probe is employed. Owing to a non-contact photothermal excitation principle, the high quality factor of the QTF allows probe-to-surface contact detection. Topographic and thermal scanning images obtained on a specific sample point out the interest of our system as an alternative to cantilevered resistive-probe systems, which are the most widespread.

  11. Scanning thermal microscopy based on a quartz tuning fork and a micro-thermocouple in active mode (2ω method).

    Science.gov (United States)

    Bontempi, Alexia; Nguyen, Tran Phong; Salut, Roland; Thiery, Laurent; Teyssieux, Damien; Vairac, Pascal

    2016-06-01

    A novel probe for a scanning thermal microscope, using a micro-thermocouple placed on a Quartz Tuning Fork (QTF), is presented. Instead of using external deflection detection with a cantilever beam for contact detection, an original combination of a piezoelectric resonator and a thermal probe is employed. Owing to a non-contact photothermal excitation principle, the high quality factor of the QTF allows probe-to-surface contact detection. Topographic and thermal scanning images obtained on a specific sample point out the interest of our system as an alternative to cantilevered resistive-probe systems, which are the most widespread.

  12. Scanning thermal microscopy based on a quartz tuning fork and a micro-thermocouple in active mode (2ω method)

    Energy Technology Data Exchange (ETDEWEB)

    Bontempi, Alexia; Nguyen, Tran Phong; Salut, Roland; Thiery, Laurent; Teyssieux, Damien; Vairac, Pascal [FEMTO-ST Institute UMR 6174, Université de Franche-Comté, CNRS, ENSMM, UTBM, 15B Avenue des Montboucons, F-25030 Besançon (France)

    2016-06-15

    A novel probe for a scanning thermal microscope, using a micro-thermocouple placed on a Quartz Tuning Fork (QTF), is presented. Instead of using external deflection detection with a cantilever beam for contact detection, an original combination of a piezoelectric resonator and a thermal probe is employed. Owing to a non-contact photothermal excitation principle, the high quality factor of the QTF allows probe-to-surface contact detection. Topographic and thermal scanning images obtained on a specific sample point out the interest of our system as an alternative to cantilevered resistive-probe systems, which are the most widespread.

  13. Study of Thermocurrents in ILC cavities via measurements of the Seebeck Effect in niobium, titanium, and stainless steel thermocouples

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, Victoria [Univ. of Wisconsin, Madison, WI (United States)

    2014-01-01

    The goals of Fermilab’s Superconductivity and Radio Frequency Development Department are to engineer, fabricate, and improve superconducting radio frequency (SCRF) cavities in the interest of advancing accelerator technology. Improvement includes exploring possible limitations on cavity performance and mitigating such impediments. This report focuses on investigating and measuring the Seebeck Effect observed in cavity constituents titanium, niobium, and stainless steel arranged in thermocouples. These junctions exist between cavities, helium jackets, and bellows, and their connection can produce a loop of electrical current and magnetic flux spontaneously during cooling. The experimental procedure and results are described and analyzed. Implications relating the results to cavity performance are discussed.

  14. Attachment of lead wires to thin film thermocouples mounted on high temperature materials using the parallel gap welding process

    Science.gov (United States)

    Holanda, Raymond; Kim, Walter S.; Pencil, Eric; Groth, Mary; Danzey, Gerald A.

    1990-01-01

    Parallel gap resistance welding was used to attach lead wires to sputtered thin film sensors. Ranges of optimum welding parameters to produce an acceptable weld were determined. The thin film sensors were Pt13Rh/Pt thermocouples; they were mounted on substrates of MCrAlY-coated superalloys, aluminum oxide, silicon carbide and silicon nitride. The entire sensor system is designed to be used on aircraft engine parts. These sensor systems, including the thin-film-to-lead-wire connectors, were tested to 1000 C.

  15. Evaluation of biological models using Spacelab

    Science.gov (United States)

    Tollinger, D.; Williams, B. A.

    1980-01-01

    Biological models of hypogravity effects are described, including the cardiovascular-fluid shift, musculoskeletal, embryological and space sickness models. These models predict such effects as loss of extracellular fluid and electrolytes, decrease in red blood cell mass, and the loss of muscle and bone mass in weight-bearing portions of the body. Experimentation in Spacelab by the use of implanted electromagnetic flow probes, by fertilizing frog eggs in hypogravity and fixing the eggs at various stages of early development and by assessing the role of the vestibulocular reflex arc in space sickness is suggested. It is concluded that the use of small animals eliminates the uncertainties caused by corrective or preventive measures employed with human subjects.

  16. Shock circle model for ejector performance evaluation

    International Nuclear Information System (INIS)

    Zhu, Yinhai; Cai, Wenjian; Wen, Changyun; Li, Yanzhong

    2007-01-01

    In this paper, a novel shock circle model for predicting ejector performance at critical-mode operation is proposed. By introducing the 'shock circle' at the entrance of the constant-area chamber, a 2D exponential expression for the velocity distribution is adopted to approximate the viscous flow near the ejector inner wall. The advantage of the 'shock circle' analysis is that the calculation of ejector performance is independent of the flows in the constant-area chamber and diffuser. Consequently, the calculation is even simpler than many 1D modeling methods and can predict the performance of critical-mode ejectors much more accurately. The effectiveness of the method is validated against two experimental results reported earlier. The proposed modeling method, using two coefficients, is shown to produce entrainment ratio, efficiency and coefficient of performance (COP) values accurately and much closer to experimental results than those of 1D analysis methods.

  17. A Model for Telestroke Network Evaluation

    DEFF Research Database (Denmark)

    Storm, Anna; Günzel, Franziska; Theiss, Stephan

    2011-01-01

    analysis lacking, current telestroke reimbursement by third-party payers is limited to special contracts and not included in the regular billing system. Based on a systematic literature review and expert interviews with health care economists, third-party payers and neurologists, a Markov model...... was developed from the third-party payer perspective. In principle, it enables telestroke networks to conduct cost-effectiveness studies, because the majority of the required data can be extracted from health insurance companies’ databases and the telestroke network itself. The model presents a basis...

  18. p-values for model evaluation

    International Nuclear Information System (INIS)

    Beaujean, F.; Caldwell, A.; Kollar, D.; Kroeninger, K.

    2011-01-01

    Deciding whether a model provides a good description of data is often based on a goodness-of-fit criterion summarized by a p-value. Although there is considerable confusion concerning the meaning of p-values, leading to their misuse, they are nevertheless of practical importance in common data analysis tasks. We motivate their application using a Bayesian argumentation. We then describe commonly and less commonly known discrepancy variables and how they are used to define p-values. The distributions of these variables are then extracted for examples modeled on typical data analysis tasks, and comments on their usefulness for determining goodness-of-fit are given.
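The discrepancy-variable construction described in this abstract can be illustrated with a Monte Carlo p-value: simulate replicate data sets under the fitted model and report the fraction whose discrepancy is at least as large as the observed one. The Gaussian error model and all names below are illustrative assumptions, not the authors' code:

```python
import random

def monte_carlo_p_value(observed, predicted, sigma, n_rep=2000, seed=1):
    """p-value for a chi-square discrepancy, estimated by simulating
    replicate data sets under the fitted model (Gaussian errors assumed)."""
    rng = random.Random(seed)
    chi2 = lambda data: sum((d - m) ** 2 / sigma ** 2
                            for d, m in zip(data, predicted))
    observed_disc = chi2(observed)
    hits = sum(1 for _ in range(n_rep)
               if chi2([rng.gauss(m, sigma) for m in predicted]) >= observed_disc)
    return hits / n_rep

# A perfect fit has zero discrepancy, so every replicate is at least as bad:
print(monte_carlo_p_value([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], sigma=0.5))  # 1.0
```

A value near zero flags tension between model and data; the same recipe works for any discrepancy variable, not only chi-square.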

  19. Center for Integrated Nanotechnologies (CINT) Chemical Release Modeling Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Stirrup, Timothy Scott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-12-20

    This evaluation documents the methodology and results of chemical release modeling for operations at Building 518, Center for Integrated Nanotechnologies (CINT) Core Facility. This evaluation is intended to supplement an update to the CINT [Standalone] Hazards Analysis (SHA). This evaluation also updates the original [Design] Hazards Analysis (DHA) completed in 2003 during the design and construction of the facility; since the original DHA, additional toxic materials have been evaluated and modeled to confirm the continued low hazard classification of the CINT facility and operations. This evaluation addresses the potential catastrophic release of the current inventory of toxic chemicals at Building 518 based on a standard query in the Chemical Information System (CIS).

  20. Statistical modeling for visualization evaluation through data fusion.

    Science.gov (United States)

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies has made electroencephalogram (EEG) signals, eye movements as well as visualization logs available in user-centered evaluation. This paper proposes a data fusion model and the application procedure for quantitative and online visualization evaluation. 15 participants joined the study based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Evaluation Model of Tea Industry Information Service Quality

    OpenAIRE

    Shi, Xiaohui; Chen, Tian'en

    2015-01-01

    According to the characteristics of tea industry information services, this paper builds a service quality evaluation index system for tea industry information service quality; R-cluster analysis and multiple regression are used comprehensively to construct an evaluation model with high practicality and credibility. As proved by experiment, the evaluation model of information service quality has good precision, which has guidance significance to a certain extent to e...

  2. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on the generation of distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed-initiative (human-automation) systems. I will also discuss issues with how to provide human performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  3. Model visualization for evaluation of biocatalytic processes

    DEFF Research Database (Denmark)

    Law, HEM; Lewis, DJ; McRobbie, I

    2008-01-01

    Biocatalysis offers great potential as an additional, and in some cases as an alternative, synthetic tool for organic chemists, especially as a route to introduce chirality. However, the implementation of scalable biocatalytic processes nearly always requires the introduction of process and/or bi......,S-EDDS), a biodegradable chelant, and is characterised by the use of model visualization using 'windows of operation'....

  4. Evaluating a Model of Youth Physical Activity

    Science.gov (United States)

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  5. An evaluation of uncertainties in radioecological models

    International Nuclear Information System (INIS)

    Hoffmann, F.O.; Little, C.A.; Miller, C.W.; Dunning, D.E. Jr.; Rupp, E.M.; Shor, R.W.; Schaeffer, D.L.; Baes, C.F. III

    1978-01-01

    The paper presents results of analyses for seven selected parameters commonly used in environmental radiological assessment models, assuming that the available data are representative of the true distribution of parameter values and that their respective distributions are lognormal. Estimates of the most probable, median, mean, and 99th percentile for each parameter are given and compared to U.S. NRC default values. The regulatory default values are generally greater than the median values for the selected parameters, but some are associated with percentiles significantly less than the 50th. The largest uncertainties appear to be associated with aquatic bioaccumulation factors for freshwater fish. Approximately one order of magnitude separates median values and values of the 99th percentile. The uncertainty is also estimated for the annual dose rate predicted by a multiplicative chain model for the transport of molecular iodine-131 via the air-pasture-cow-milk-child's thyroid pathway. The value for the 99th percentile is ten times larger than the median value of the predicted dose normalized for a given air concentration of ¹³¹I₂. About 72% of the uncertainty in this model is contributed by the dose conversion factor and the milk transfer coefficient. Considering the difficulties in obtaining a reliable quantification of the true uncertainties in model predictions, methods for taking these uncertainties into account when determining compliance with regulatory statutes are discussed. (orig./HP)
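The quoted factor of ten between the median and the 99th percentile follows directly from the lognormal assumption: the ratio equals exp(z₀.₉₉·σ), so a one-order-of-magnitude spread implies a log-space standard deviation of σ = ln(10)/z₀.₉₉ ≈ 0.99. A quick check (illustrative only):

```python
import math
from statistics import NormalDist

z99 = NormalDist().inv_cdf(0.99)   # standard normal 99th percentile, about 2.326
sigma = math.log(10) / z99         # log-space sd implied by a factor-10 spread
ratio = math.exp(z99 * sigma)      # lognormal 99th percentile divided by median
print(round(ratio, 6))  # 10.0
```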

  6. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    The semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects and/or concepts. To find out the suitability and performance of different models in evaluating concept similarities, we make a comparison of four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. The fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, a possible reason being that all these models are designed to simulate the similarity judgement of the human mind.

  7. A MULTILAYER BIOCHEMICAL DRY DEPOSITION MODEL 2. MODEL EVALUATION

    Science.gov (United States)

    The multilayer biochemical dry deposition model (MLBC) described in the accompanying paper was tested against half-hourly eddy correlation data from six field sites under a wide range of climate conditions with various plant types. Modeled CO2, O3, SO2...

  8. Metrics and Evaluation Models for Accessible Television

    DEFF Research Database (Denmark)

    Li, Dongxiao; Looms, Peter Olaf

    2014-01-01

    The adoption of the UN Convention on the Rights of Persons with Disabilities (UN CRPD) in 2006 has provided a global framework for work on accessibility, including information and communication technologies and audiovisual content. One of the challenges facing the application of the UN CRPD...... number of platforms on which audiovisual content needs to be distributed, requiring very clear multiplatform architectures to facilitate interworking and assure interoperability. As a consequence, the regular evaluations of progress being made by signatories to the UN CRPD protocol are difficult...

  9. RTMOD: Real-Time MODel evaluation

    DEFF Research Database (Denmark)

    Graziani, G.; Galmarini, S.; Mikkelsen, Torben

    2000-01-01

    the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additionalforecast data arrived, already existing statistical results....... At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax andregular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained...... during the ETEX exercises suggested the development of this project. RTMOD featured a web-baseduser-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration...

  10. A Descriptive Evaluation of Software Sizing Models

    Science.gov (United States)

    1987-09-01

    (Excerpt is table-of-contents residue from the scanned report; the legible fragments reference SPQR Sizer/FP, the QSM Size Planner function-point approach, and an SPQR function point estimate for the CATSS sensitivity model.)

  11. Automated expert modeling for automated student evaluation.

    Energy Technology Data Exchange (ETDEWEB)

    Abbott, Robert G.

    2006-01-01

    The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real world settings on an unprecedented scale. Researchers deploy ITS's in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITS's are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.

  12. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. A model for photothermal responses of flowering in rice. II. Model evaluation.

    NARCIS (Netherlands)

    Yin, X.; Kropff, M.J.; Nakagawa, H.; Horie, T.; Goudriaan, J.

    1997-01-01

    A detailed nonlinear model, the 3s-Beta model, for photothermal responses of flowering in rice (Oryza sativa L.) was evaluated for predicting rice flowering date in field conditions. This model was compared with three other models: a three-plane linear model and two nonlinear models, viz. the

  14. Evaluating the double Poisson generalized linear model.

    Science.gov (United States)

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
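The normalizing-constant problem mentioned above can be made concrete with Efron's double Poisson density. The paper's own approximation is not reproduced here; the sketch below instead normalizes by a brute-force truncated sum, which is the expensive baseline such approximations aim to avoid (parameter names are assumptions):

```python
import math

def dp_log_unnorm(y, mu, phi):
    """Log of the unnormalized double Poisson pmf (Efron, 1986)."""
    if y == 0:
        return 0.5 * math.log(phi) - phi * mu
    return (0.5 * math.log(phi) - phi * mu
            - y + y * math.log(y) - math.lgamma(y + 1)
            + phi * y * (1 + math.log(mu) - math.log(y)))

def dp_pmf(mu, phi, y_max=200):
    """Normalize by a truncated sum over 0..y_max (brute-force baseline)."""
    unnorm = [math.exp(dp_log_unnorm(y, mu, phi)) for y in range(y_max + 1)]
    c = 1.0 / sum(unnorm)
    return [c * u for u in unnorm]

pmf = dp_pmf(mu=3.0, phi=0.5)   # over-dispersed case (phi < 1)
print(round(sum(pmf), 6))       # 1.0 by construction
```

With phi = 1 the density collapses to the ordinary Poisson, which makes a convenient sanity check; phi < 1 gives over-dispersion and phi > 1 under-dispersion.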

  15. Design Concept Evaluation Using System Throughput Model

    International Nuclear Information System (INIS)

    Sequeira, G.; Nutt, W. M.

    2004-01-01

    The U.S. Department of Energy (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is currently developing the technical bases to support the submittal of a license application for construction of a geologic repository at Yucca Mountain, Nevada to the U.S. Nuclear Regulatory Commission. The Office of Repository Development (ORD) is responsible for developing the design of the proposed repository surface facilities for the handling of spent nuclear fuel and high level nuclear waste. Preliminary design activities are underway to sufficiently develop the repository surface facilities design for inclusion in the license application. The design continues to evolve to meet mission needs and to satisfy both regulatory and program requirements. A system engineering approach is being used in the design process since the proposed repository facilities are dynamically linked by a series of sub-systems and complex operations. In addition, the proposed repository facility is a major system element of the overall waste management process being developed by the OCRWM. Such an approach includes iterative probabilistic dynamic simulation as an integral part of the design evolution process. A dynamic simulation tool helps to determine if: (1) the mission and design requirements are complete, robust, and well integrated; (2) the design solutions under development meet the design requirements and mission goals; (3) opportunities exist where the system can be improved and/or optimized; and (4) proposed changes to the mission, and design requirements have a positive or negative impact on overall system performance and if design changes may be necessary to satisfy these changes. This paper will discuss the type of simulation employed to model the waste handling operations. It will then discuss the process being used to develop the Yucca Mountain surface facilities model. The latest simulation model and the results of the simulation and how the data were used in the design

  16. A random walk model to evaluate autism

    Science.gov (United States)

    Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.

    2018-02-01

    A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we studied here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, whose changes may indicate a different degree of autism.
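A schematic reading of the two-walker mechanism is sketched below; the ERW memory rule and the parameters p (step repetition), q (first step) and f (imitation) are assumptions for illustration, not the authors' code:

```python
import random

def elephant_walk_pair(n, p=0.7, q=0.5, f=0.9, seed=7):
    """Two elephant random walkers; walker B copies A's current step with
    probability f, otherwise performs its own memory-based ERW update."""
    rng = random.Random(seed)
    def first_step():
        return 1 if rng.random() < q else -1
    def erw_step(history):
        remembered = rng.choice(history)   # recall a uniformly random past step
        return remembered if rng.random() < p else -remembered
    steps_a, steps_b = [first_step()], [first_step()]
    for _ in range(n - 1):
        a = erw_step(steps_a)
        b = a if rng.random() < f else erw_step(steps_b)  # imitation with prob. f
        steps_a.append(a)
        steps_b.append(b)
    pos = lambda steps: [sum(steps[:t]) for t in range(len(steps) + 1)]
    return pos(steps_a), pos(steps_b)

xa, xb = elephant_walk_pair(500)
print(len(xa), xa[0])  # 501 0
```

Estimating the Hurst exponent from the resulting trajectories (e.g. by rescaled-range analysis) would then quantify how the imitation probability f shifts the diffusion regime.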

  17. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities depend on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, which is the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, the neglect of this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows the magnitude of the model deficiency to be estimated explicitly. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the

  18. Promoting Excellence in Nursing Education (PENE): Pross evaluation model.

    Science.gov (United States)

    Pross, Elizabeth A

    2010-08-01

    The purpose of this article is to examine the Promoting Excellence in Nursing Education (PENE) Pross evaluation model. A conceptual evaluation model, such as the one described here, may be useful to nurse academicians in the ongoing evaluation of educational programs, especially those with goals of excellence. Frameworks for evaluating nursing programs are necessary because they offer a way to systematically assess the educational effectiveness of complex nursing programs. This article describes the conceptual framework and its tenets of excellence. Copyright 2009 Elsevier Ltd. All rights reserved.

  19. ECOPATH: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Bergstroem, U.; Nordlinder, S.

    1996-01-01

    The model is based upon compartment theory and it is run in combination with a statistical error propagation method (PRISM, Gardner et al. 1983). It is intended to be generic for application to other sites by simply changing parameter values. It was constructed especially for this scenario. However, it is based upon an earlier model designed for calculating relations between released amounts of radioactivity and doses to critical groups (used for Swedish regulations concerning annual reports of released radioactivity from routine operation of Swedish nuclear power plants (Bergstroem and Nordlinder, 1991)). The model handles exposure from deposition on terrestrial areas as well as deposition on lakes, starting with deposition values. 14 refs, 16 figs, 7 tabs
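Compartment models of this kind reduce to coupled first-order transfer equations; a minimal two-compartment sketch (a generic illustration, not the ECOPATH code, with invented rate constants) integrated by Euler steps:

```python
def step(inv, k, dt):
    """One Euler step of dC_i/dt = sum_j k[j][i]*C_j - sum_j k[i][j]*C_i
    for a closed compartment system with transfer rates k (per day)."""
    n = len(inv)
    out = []
    for i in range(n):
        gain = sum(k[j][i] * inv[j] for j in range(n) if j != i)
        loss = sum(k[i][j] for j in range(n) if j != i) * inv[i]
        out.append(inv[i] + dt * (gain - loss))
    return out

# soil -> lake transfer at 0.05/day, lake -> soil (resuspension) at 0.01/day
k = [[0.0, 0.05], [0.01, 0.0]]
c = [100.0, 0.0]                 # initial activity (Bq) per compartment
for _ in range(1000):
    c = step(c, k, dt=0.1)
print(round(sum(c), 6))  # 100.0 (activity is conserved in a closed system)
```

Radioactive decay or dose-rate conversion would enter as additional loss and output terms on each compartment.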

  20. Evaluation of atmospheric dispersion/consequence models supporting safety analysis

    International Nuclear Information System (INIS)

    O'Kula, K.R.; Lazaro, M.A.; Woodard, K.

    1996-01-01

    Two DOE Working Groups have completed evaluation of accident phenomenology and consequence methodologies used to support DOE facility safety documentation. The independent evaluations each concluded that no one computer model adequately addresses all accident and atmospheric release conditions. MACCS2, MATHEW/ADPIC, TRAC RA/HA, and COSYMA are adequate for most radiological dispersion and consequence needs. ALOHA, DEGADIS, HGSYSTEM, TSCREEN, and SLAB are recommended for chemical dispersion and consequence applications. Additional work is suggested, principally in evaluation of new models, targeting certain models for continued development, training, and establishing a Web page for guidance to safety analysts

  1. Internal attachment of laser beam welded stainless steel sheathed thermocouples into stainless steel upper end caps in nuclear fuel rods for the LOFT Reactor

    International Nuclear Information System (INIS)

    Welty, R.K.; Reid, R.D.

    1980-01-01

    The Exxon Nuclear Company, Inc., acting as a subcontractor to EG and G Idaho Inc., Idaho National Engineering Laboratory, Idaho Falls, Idaho, conducted a laser beam welding study to attach internal stainless steel thermocouples into stainless steel upper end caps in nuclear fuel rods. The objective of this study was to determine the feasibility of laser welding a single 0.063 inch diameter stainless steel (304) sheathed thermocouple into a stainless steel (316) upper end cap for nuclear fuel rods. A laser beam was selected because of the extremely high energy input in unit volume that can be achieved allowing local fusion of a small area irrespective of the difference in material thickness to be joined. A special weld fixture was designed and fabricated to hold the end cap and the thermocouple with angular and rotational adjustment under the laser beam. A commercial pulsed laser and energy control system was used to make the welds

  2. FARMLAND: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Attwood, C.; Fayers, C.; Mayall, A.; Brown, J.; Simmonds, J.R.

    1996-01-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs

  3. FARMLAND: Model description and evaluation of model performance

    Energy Technology Data Exchange (ETDEWEB)

    Attwood, C; Fayers, C; Mayall, A; Brown, J; Simmonds, J R [National Radiological Protection Board, Chilton (United Kingdom)

    1996-09-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs.

  4. Evaluating energy saving system of data centers based on AHP and fuzzy comprehensive evaluation model

    Science.gov (United States)

    Jiang, Yingni

    2018-03-01

    Due to the high energy consumption of communication, energy saving in data centers must be enforced. But the lack of evaluation mechanisms has restrained progress on the energy-saving construction of data centers. In this paper, an energy saving evaluation index system for data centers was constructed on the basis of clarifying the influencing factors. Based on the evaluation index system, the analytic hierarchy process was used to determine the weights of the evaluation indexes. Subsequently, a three-grade fuzzy comprehensive evaluation model was constructed to evaluate the energy saving system of data centers.
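The two building blocks named in the abstract, AHP weighting and fuzzy comprehensive evaluation, can be sketched at a single level as follows (the comparison matrix, membership grades and all names are invented for illustration):

```python
def ahp_weights(A, iters=100):
    """Principal-eigenvector weights of a pairwise comparison matrix
    (power iteration; A[i][j] = relative importance of index i over j)."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

def fuzzy_evaluate(w, R):
    """Weighted-average fuzzy operator: B = w . R, where row R[i] holds the
    membership of index i in each appraisal grade (e.g. good/fair/poor)."""
    grades = len(R[0])
    return [sum(w[i] * R[i][g] for i in range(len(w))) for g in range(grades)]

A = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]   # hypothetical judgments
R = [[0.6, 0.3, 0.1],                          # membership rows sum to 1
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]
B = fuzzy_evaluate(ahp_weights(A), R)
print(round(sum(B), 6))  # 1.0
```

A full three-grade model applies the same B = w·R step recursively, feeding each sub-system's appraisal vector into the level above; a consistency-ratio check on A would normally precede use of the weights.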

  5. Biology learning evaluation model in Senior High Schools

    Directory of Open Access Journals (Sweden)

    Sri Utari

    2017-06-01

    The study aimed to develop a Biology learning evaluation model in senior high schools that referred to the research and development model by Borg & Gall and the logic model. The evaluation model included the components of input, activities, output and outcomes. The development procedures involved a preliminary study in the form of observation and theoretical review of Biology learning evaluation in senior high schools. Product development was carried out by designing an evaluation model, designing an instrument, trialling the instrument and implementing it. The instrument trial involved teachers and Grade XII students from senior high schools located in the City of Yogyakarta. For data gathering, the researchers used observation sheets, questionnaires and tests. The questionnaires were applied in order to attain information regarding teacher performance, learning performance, classroom atmosphere and scientific attitude; the test was applied in order to attain information regarding Biology concept mastery. For the analysis of the instrument construct, the researchers performed confirmatory factor analysis by means of Lisrel 0.80 software, and the results of this analysis showed that the evaluation instrument was valid and reliable. The construct validity was between 0.43-0.79 while the reliability of the measurement model was between 0.88-0.94. Last but not least, the model feasibility test showed that the theoretical model was supported by the empirical data.

  6. Using Models of Cognition in HRI Evaluation and Design

    National Research Council Canada - National Science Library

    Goodrich, Michael A

    2004-01-01

    ...) guide the construction of experiments. In this paper, we present an information processing model of cognition that we have used extensively in designing and evaluating interfaces and autonomy modes...

  7. Evaluation of one dimensional analytical models for vegetation canopies

    Science.gov (United States)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  8. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for the service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain, and then presents a service-oriented catering supply chain model based on a platform of logistics and information. At last, the fuzzy AHP method is used to evaluate the performance of service-oriented catering ...

  9. A Universal Model for the Normative Evaluation of Internet Information.

    NARCIS (Netherlands)

    Spence, E.H.

    2009-01-01

    Beginning with the initial premise that as the Internet has a global character, the paper will argue that the normative evaluation of digital information on the Internet necessitates an evaluative model that is itself universal and global in character (I agree, therefore, with Gorniak-Kocikowska’s

  10. DETRA: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Suolanen, V.

    1996-01-01

    The computer code DETRA is a generic tool for environmental transfer analyses of radioactive or stable substances. The code has been applied for various purposes, mainly problems related to the biospheric transfer of radionuclides both in safety analyses of disposal of nuclear wastes and in consideration of foodchain exposure pathways in the analyses of off-site consequences of reactor accidents. For each specific application an individually tailored conceptual model can be developed. The biospheric transfer analyses performed by the code are typically carried out for terrestrial, aquatic and food chain applications. 21 refs, 35 figs, 15 tabs

  11. A Convergent Participation Model for Evaluation of Learning Objects

    Directory of Open Access Journals (Sweden)

    John Nesbit

    2002-10-01

    Full Text Available The properties that distinguish learning objects from other forms of educational software - global accessibility, metadata standards, finer granularity and reusability - have implications for evaluation. This article proposes a convergent participation model for learning object evaluation in which representatives from stakeholder groups (e.g., students, instructors, subject matter experts, instructional designers, and media developers) converge toward more similar descriptions and ratings through a two-stage process supported by online collaboration tools. The article reviews evaluation models that have been applied to educational software and media, considers models for gathering and meta-evaluating individual user reviews that have recently emerged on the Web, and describes the peer review model adopted for the MERLOT repository. The convergent participation model is assessed in relation to other models and with respect to its support for eight goals of learning object evaluation: (1) aid for searching and selecting, (2) guidance for use, (3) formative evaluation, (4) influence on design practices, (5) professional development and student learning, (6) community building, (7) social recognition, and (8) economic exchange.

  12. Maintenance personnel performance simulation (MAPPS) model: overview and evaluation efforts

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.; Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Ryan, T.G.

    1984-01-01

    The development of the MAPPS model has been completed and the model is currently undergoing evaluation. These efforts address a number of identified issues concerning practicality, acceptability, usefulness, and validity. Preliminary analysis of the evaluation data collected so far indicates that MAPPS will provide comprehensive and reliable data for PRA purposes and for a number of other applications. The MAPPS computer simulation model provides the user with a sophisticated tool for gaining insights into tasks performed by NPP maintenance personnel. Its wide variety of input parameters and output data makes it extremely flexible for application to a number of diverse problems. With the demonstration of favorable model evaluation results, the MAPPS model will represent a valuable source of NPP maintainer reliability data and provide PRA studies with a source of data on maintainers that has previously not existed.

  13. A Linguistic Multigranular Sensory Evaluation Model for Olive Oil

    Directory of Open Access Journals (Sweden)

    Luis Martinez

    2008-06-01

    Full Text Available Evaluation is a process that analyzes elements in order to achieve different objectives such as quality inspection, marketing and other aims in industrial companies. This paper focuses on sensory evaluation, where the evaluated items are assessed by a panel of experts according to knowledge acquired via the human senses. In these evaluation processes the information provided by the experts involves uncertainty, vagueness and imprecision. The use of the Fuzzy Linguistic Approach has provided successful results in modelling such information. In sensory evaluation it may happen that the experts on the panel have more or less knowledge about the evaluated items or indicators, so it seems suitable that each expert could express their preferences in different linguistic term sets based on their own knowledge. In this paper, we present a sensory evaluation model that manages a multigranular linguistic evaluation framework based on a decision analysis scheme. This model is applied to the sensory evaluation process of olive oil.

  14. LINDOZ model for Finland environment: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Galeriu, D.; Apostoaie, A.I.; Mocanu, N.; Paunescu, N.

    1996-01-01

    The LINDOZ model was developed as a realistic assessment tool for radioactive contamination of the environment. It was designed to produce estimates of the concentration of the pollutant in different compartments of the terrestrial ecosystem (soil, vegetation, animal tissue, and animal products), and to evaluate human exposure to the contaminant (concentration in the whole human body, and dose to humans) from inhalation, ingestion and external irradiation. The user can apply LINDOZ to both routine and accidental types of releases. 2 figs, 2 tabs

  15. Modeling atmospheric dispersion for reactor accident consequence evaluation

    International Nuclear Information System (INIS)

    Alpert, D.J.; Gudiksen, P.H.; Woodard, K.

    1982-01-01

    Atmospheric dispersion models are a central part of computer codes for the evaluation of potential reactor accident consequences. A variety of models exists, treating to varying degrees the many physical processes that can have an impact on the predicted consequences. The currently available models are reviewed and their capabilities and limitations, as applied to reactor accident consequence analyses, are discussed

  16. Multi-criteria comparative evaluation of spallation reaction models

    Science.gov (United States)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE), and the results of such a comparison for 17 spallation reaction models describing the interaction of high-energy protons with natPb.
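
Of the decision-analysis methods listed, TOPSIS is the most compact to sketch: models are ranked by their closeness to an ideal point. The decision matrix, the three-model/three-criterion setup, and the weights below are illustrative assumptions, not the paper's 17-model data.

```python
import numpy as np

# Hypothetical decision matrix: rows = spallation models, columns =
# benefit criteria (e.g. agreement scores on different observables).
X = np.array([
    [0.80, 0.70, 0.90],
    [0.60, 0.85, 0.75],
    [0.90, 0.60, 0.80],
], dtype=float)
w = np.array([0.5, 0.3, 0.2])  # assumed criterion weights

# 1. Vector-normalize each column, then apply the weights.
V = w * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal points (all criteria treated as benefits).
best, worst = V.max(axis=0), V.min(axis=0)

# 3. Closeness coefficient: distance to the anti-ideal point over the
#    total distance; higher is better.
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)
ranking = np.argsort(-closeness)  # best model first
```

MAVT/MAUT, AHP and PROMETHEE differ in how they aggregate the criteria, but all consume the same kind of model-versus-criterion matrix.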

  17. A Novel Model for Security Evaluation for Compliance

    DEFF Research Database (Denmark)

    Hald, Sara Ligaard; Pedersen, Jens Myrup; Prasad, Neeli R.

    2011-01-01

    for Compliance (SEC) model offers a lightweight alternative for use by decision makers to get a quick overview of the security attributes of different technologies for easy comparison and requirement compliance evaluation. The scientific contribution is this new approach to security modelling as well...

  18. Evaluating energy efficiency policies with energy-economy models

    NARCIS (Netherlands)

    Mundaca, L.; Neij, L.; Worrell, E.; McNeil, M.

    2010-01-01

    The growing complexities of energy systems, environmental problems, and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically

  19. The fence experiment - a first evaluation of shelter models

    DEFF Research Database (Denmark)

    Peña, Alfredo; Bechmann, Andreas; Conti, Davide

    2016-01-01

    We present a preliminary evaluation of shelter models of different degrees of complexity using full-scale lidar measurements of the shelter on a vertical plane behind and orthogonal to a fence. Model results accounting for the distribution of the relative wind direction within the observed direct...

  20. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  1. Boussinesq Modeling of Wave Propagation and Runup over Fringing Coral Reefs, Model Evaluation Report

    National Research Council Canada - National Science Library

    Demirbilek, Zeki; Nwogu, Okey G

    2007-01-01

    ..., for waves propagating over fringing reefs. The model evaluation had two goals: (a) investigate differences between laboratory and field characteristics of wave transformation processes over reefs, and (b...

  2. Ergonomic evaluation model of operational room based on team performance

    Directory of Open Access Journals (Sweden)

    YANG Zhiyi

    2017-05-01

    Full Text Available A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of layout design schemes for the action stations in a multitasking operational room. The model was constructed to calculate and compare the theoretical team performance of multiple layout schemes by considering such substantial influencing factors as frequency of communication, distance, angle, importance, human cognitive characteristics and so on. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experimental results, the proposed approach is conducive to the prediction and ergonomic evaluation of layout design schemes for the action station during the early design stages, and provides a new theoretical method for the ergonomic evaluation, selection and optimization of layout design schemes.

  3. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    Science.gov (United States)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from the literature and from experimentation, and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.
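
A minimal sketch of statistically quantifying a model's predictive quality against a database record: comparing a predicted core-temperature series with an observed one via bias and RMSE. The numbers below are hypothetical, not values from the JSC database.

```python
import numpy as np

# Hypothetical core-temperature time series: one experiment record
# from a database and a thermal model's prediction of the same scenario.
observed = np.array([37.0, 37.2, 37.5, 37.9, 38.2])   # deg C
predicted = np.array([37.1, 37.4, 37.6, 37.7, 38.0])  # deg C

residual = predicted - observed
bias = residual.mean()                  # systematic over/under-prediction
rmse = np.sqrt((residual ** 2).mean())  # overall predictive error
```

Repeating this over every matching record in the database, and summarizing the distribution of bias and RMSE, gives the kind of quantitative strength/weakness comparison the abstract describes.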

  4. A smart growth evaluation model based on data envelopment analysis

    Science.gov (United States)

    Zhang, Xiaokun; Guan, Yongyi

    2018-04-01

    With the rapid spread of urbanization, smart growth (SG) has attracted plenty of attention from all over the world. In this paper, through the establishment of an index system for smart growth, a data envelopment analysis (DEA) model is suggested to evaluate the SG level of the current growth situation in cities. In order to further improve the information in both the radial and non-radial directions, we introduced the non-Archimedean infinitesimal to form the C2GS2 control model. Finally, we evaluated the SG level in Canberra and identified a series of problems, which verifies the applicability of the model and provides more information for improvement.
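
A minimal sketch of the DEA building block only (the basic input-oriented CCR model in envelopment form, solved as a linear program), without the non-Archimedean C2GS2 extension; the city input/output data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: the abstract's smart-growth indexes are not
# listed, so we use 2 inputs (e.g. land, energy use) and 1 output
# (e.g. an SG achievement score) for 4 cities (DMUs).
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 6.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(j0):
    """Input-oriented CCR efficiency of unit j0: minimize theta such
    that some composite of the units uses <= theta * inputs(j0) while
    producing >= outputs(j0). Decision variables: [theta, lambda_1..n]."""
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[j0], X.T]          # sum_j lambda_j x_j <= theta * x_0
    A_out = np.c_[np.zeros(s), -Y.T]   # sum_j lambda_j y_j >= y_0
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = [ccr_efficiency(j) for j in range(n)]  # 1.0 = efficient
```

Units on the efficient frontier score 1.0; dominated units score below 1.0, and the C2GS2 refinement additionally recovers the non-radial slack information that this plain radial model ignores.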

  5. Literature Review on Modeling Cyber Networks and Evaluating Cyber Risks.

    Energy Technology Data Exchange (ETDEWEB)

    Kelic, Andjelka; Campbell, Philip L

    2018-04-01

    The National Infrastructure Simulations and Analysis Center (NISAC) conducted a literature review on modeling cyber networks and evaluating cyber risks. The literature review explores where modeling is used in the cyber regime and ways that consequence and risk are evaluated. The relevant literature clusters in three different spaces: network security, cyber-physical, and mission assurance. In all approaches, some form of modeling is utilized at varying levels of detail, while the ability to understand consequence varies, as do interpretations of risk. This document summarizes the different literature viewpoints and explores their applicability to securing enterprise networks.

  6. Anticonvulsive evaluation of THIP in the murine pentylenetetrazole kindling model

    DEFF Research Database (Denmark)

    Simonsen, Charlotte; Boddum, Kim; von Schoubye, Nadia L

    2017-01-01

    . Evaluation of THIP as a potential anticonvulsant has given contradictory results in different animal models and for this reason, we reevaluated the anticonvulsive properties of THIP in the murine pentylenetetrazole (PTZ) kindling model. As loss of δ-GABAA R in the dentate gyrus has been associated...... the observed upregulation of δ-GABAA Rs. Even in the demonstrated presence of functional δ-GABAA Rs, THIP (0.5-4 mg/kg) showed no anticonvulsive effect in the PTZ kindling model using a comprehensive in vivo evaluation of the anticonvulsive properties....

  7. Evaluating significance in linear mixed-effects models in R.

    Science.gov (United States)

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
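
The likelihood-ratio test discussed above reduces to a chi-square comparison of nested models fitted by maximum likelihood; the sketch below shows only that arithmetic, with hypothetical log-likelihoods (and, per the paper's finding, this test can be anti-conservative for small samples).

```python
from scipy.stats import chi2

# Hypothetical ML log-likelihoods from two nested mixed-effects models
# (REML likelihoods are not comparable across different fixed effects,
# so ML fits are required here).
ll_reduced = -1523.4   # model without the fixed effect of interest
ll_full = -1519.1      # model with the effect

lrt = 2.0 * (ll_full - ll_reduced)   # asymptotically chi-square
df = 1                               # one parameter difference
p_value = chi2.sf(lrt, df)
```

The Kenward-Roger and Satterthwaite approaches the paper recommends instead keep the REML fit and adjust the degrees of freedom of the t statistics, which is why they behave better at small sample sizes.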

  8. Application of Learning Curves for Didactic Model Evaluation: Case Studies

    Directory of Open Access Journals (Sweden)

    Felix Mödritscher

    2013-01-01

    Full Text Available The success of (online) courses depends, among other factors, on the underlying didactical models, which have always been evaluated with qualitative and quantitative research methods. Several new evaluation techniques have been developed and established in the last years. One of them is ‘learning curves’, which aim at measuring error rates of users when they interact with adaptive educational systems, thereby enabling the underlying models to be evaluated and improved. In this paper, we report how we have applied this new method to two case studies to show that learning curves are useful to evaluate didactical models and their implementation in educational platforms. Results show that the error rates follow a power law distribution with each additional attempt if the didactical model of an instructional unit is valid. Furthermore, the initial error rate, the slope of the curve and the goodness of fit of the curve are valid indicators for the difficulty level of a course and the quality of its didactical model. As a conclusion, the idea of applying learning curves for evaluating didactical models on the basis of usage data is considered to be valuable for supporting teachers and learning content providers in improving their online courses.
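
The power-law check described above can be sketched by fitting E(t) = a * t^(-b) in log-log space; the per-attempt error rates below are hypothetical, not the case-study data.

```python
import numpy as np

# Hypothetical error rates per attempt for one instructional unit;
# under a valid didactical model these should decline as a power law.
attempts = np.arange(1, 9)
error_rate = np.array([0.42, 0.30, 0.25, 0.21, 0.19, 0.17, 0.16, 0.15])

# Fit E(t) = a * t^(-b) by linear regression in log-log space.
slope, intercept = np.polyfit(np.log(attempts), np.log(error_rate), 1)
a, b = np.exp(intercept), -slope

# R^2 in log space measures how power-law-like the curve is; a and b
# play the roles of the initial error rate and the learning speed
# (curve slope) used as quality indicators in the paper.
pred = intercept + slope * np.log(attempts)
resid = np.log(error_rate) - pred
r2 = 1 - resid.var() / np.log(error_rate).var()
```

A high initial error rate `a` suggests a difficult unit; a poor fit (`r2` well below 1) suggests the didactical model, not the learners, is at fault.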

  9. Integrating Usability Evaluation into Model-Driven Video Game Development

    OpenAIRE

    Fernandez , Adrian; Insfran , Emilio; Abrahão , Silvia; Carsí , José ,; Montero , Emanuel

    2012-01-01

    Part 3: Short Papers; International audience; The increasing complexity of video game development highlights the need for design and evaluation methods to enhance quality and reduce time and cost. In this context, Model-Driven Development approaches seem very promising since a video game can be obtained by transforming platform-independent models into platform-specific models that can in turn be transformed into code. Although this approach is starting to be used for video game de...

  10. Animal models for evaluation of oral delivery of biopharmaceuticals

    DEFF Research Database (Denmark)

    Harloff-Helleberg, Stine; Nielsen, Line Hagner; Nielsen, Hanne Mørck

    2017-01-01

    of systems for oral delivery of biopharmaceuticals may result in new treatment modalities to increase the patient compliance and reduce product cost. In the preclinical development phase, use of experimental animal models is essential for evaluation of new formulation designs. In general, the limited oral...... bioavailability of biopharmaceuticals, of just a few percent, is expected, and therefore, the animal models and the experimental settings must be chosen with utmost care. More knowledge and focus on this topic is highly needed, despite experience from the numerous studies evaluating animal models for oral drug...... delivery of small molecule drugs. This review highlights and discusses pros and cons of the most currently used animal models and settings. Additionally, it also looks into the influence of anesthetics and sampling methods for evaluation of drug delivery systems for oral delivery of biopharmaceuticals...

  11. Evaluating performances of simplified physically based landslide susceptibility models.

    Science.gov (United States)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage involving loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package's integration in NewAge-JGrass allows the use of other components such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and the robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane by using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
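
Evaluation in the ROC plane amounts to computing a (false-positive rate, true-positive rate) point from a pixel-by-pixel comparison of a binary susceptibility map with observed landslide pixels; a minimal sketch with invented flattened maps:

```python
import numpy as np

# Hypothetical pixel-by-pixel comparison: 1 = landslide / unstable,
# 0 = no landslide / stable (tiny 1-D stand-ins for real raster maps).
observed = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])
predicted = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 1])

tp = np.sum((predicted == 1) & (observed == 1))
fp = np.sum((predicted == 1) & (observed == 0))
fn = np.sum((predicted == 0) & (observed == 1))
tn = np.sum((predicted == 0) & (observed == 0))

tpr = tp / (tp + fn)   # true-positive rate (y-axis of the ROC plane)
fpr = fp / (fp + tn)   # false-positive rate (x-axis)
# A useful model plots above the 1:1 line, i.e. tpr > fpr; sweeping the
# model's stability threshold traces the full ROC curve.
```

The eight GOF indices mentioned in the abstract (accuracy, AI, and so on) are different scalar summaries of the same confusion counts.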

  12. Manifestations of nonlinearity in fuel center thermocouple steady-state and transient data: implications for data analysis

    International Nuclear Information System (INIS)

    Lanning, D.D.; Barnes, B.O.; Williford, R.E.

    1979-01-01

    The interpretation and verification of fuel centerline thermocouple data are analyzed. Two new concepts are discussed along with their application to in-reactor data from IFA-432, a heavily instrumented six-rod Halden reactor test assembly sponsored by the Nuclear Regulatory Commission. The main ideas presented in this report are that: it is more useful to plot resistance versus power than simply to plot temperature versus power; and the response of the centerline temperature to a linear power decrease is correlated to the rod's current resistance-vs-power behavior. Thus, the resistance-vs-power measurement can be verified by performing a linear power decrease and by plotting the temperature response
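
The resistance-versus-power idea can be sketched as dividing the centerline-to-coolant temperature rise by the local linear power, then inspecting how that resistance changes with power; the data below are hypothetical round numbers, not IFA-432 measurements.

```python
import numpy as np

# Hypothetical fuel centerline thermocouple data at four power levels.
q = np.array([10.0, 20.0, 30.0, 40.0])               # linear power, kW/m
t_center = np.array([520.0, 810.0, 1130.0, 1480.0])  # centerline temp, deg C
t_coolant = 240.0                                    # coolant temp, deg C

# Thermal resistance R = (T_center - T_coolant) / q'. A plain T-vs-q
# plot hides nonlinearity; R-vs-q exposes it directly.
R = (t_center - t_coolant) / q        # (deg C)/(kW/m)

# A rod with power-independent conduction would give a flat R-vs-q
# curve; a monotone rise like this one signals nonlinear effects
# (e.g. fuel conductivity degradation or gap changes) that the
# transient response to a linear power decrease should then mirror.
dR = np.diff(R)
```

This is only an illustration of the report's plotting concept; the report's actual verification step compares the measured transient temperature response against the rod's current R-vs-q behavior.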

  13. On global and regional spectral evaluation of global geopotential models

    International Nuclear Information System (INIS)

    Ustun, A; Abbak, R A

    2010-01-01

    Spectral evaluation of global geopotential models (GGMs) is necessary to recognize the behaviour of the gravity signal and its error as recorded in spherical harmonic coefficients and associated standard deviations. Results put forward in this way explain the whole contribution of gravity data of different kinds that represent various sections of the gravity spectrum. This method is more informative than accuracy assessment methods that use external data such as GPS-levelling. Comparative spectral evaluation of more than one model can be performed both in a global and a local sense using many spectral tools. The number of GGMs has grown with the increasing amount of data collected by the dedicated satellite gravity missions CHAMP, GRACE and GOCE. This fact makes it necessary to measure the differences between models and to monitor the improvements in gravity field recovery. In this paper, some satellite-only and combined models are examined at different scales, globally and regionally, in order to observe the advances in GGM modelling and the models' strengths at various expansion degrees for geodetic and geophysical applications. The validation of the published errors of the model coefficients is part of this evaluation. All spectral tools explicitly reveal the superiority of the GRACE-based models when compared against models that comprise conventional satellite tracking data. The disagreement between models is large in local/regional areas if the data sets are different, as seen in the example of the Turkish territory
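
One standard spectral tool for such comparisons is the degree variance spectrum, sigma_n^2 = sum_m (C_nm^2 + S_nm^2), computed per degree from the model coefficients (and analogously from their standard deviations for the error spectrum). The sketch below uses random placeholder coefficients, not a real GGM.

```python
import numpy as np

# Placeholder fully normalized coefficients C[n, m], S[n, m] up to
# degree nmax, stored lower-triangularly; random stand-ins only.
rng = np.random.default_rng(0)
nmax = 4
C = np.tril(rng.normal(scale=1e-7, size=(nmax + 1, nmax + 1)))
S = np.tril(rng.normal(scale=1e-7, size=(nmax + 1, nmax + 1)))
S[:, 0] = 0.0  # S_n0 is identically zero

# Signal degree variances: one number per degree n.
degree_variance = np.array(
    [(C[n, :n + 1] ** 2).sum() + (S[n, :n + 1] ** 2).sum()
     for n in range(nmax + 1)]
)
```

Plotting these spectra (and the corresponding error degree variances) for two GGMs shows at which expansion degrees the models agree, where one carries more signal, and where the published errors exceed the signal.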

  14. Evaluation of recent quantitative magnetospheric magnetic field models

    International Nuclear Information System (INIS)

    Walker, R.J.

    1976-01-01

    Recent quantitative magnetospheric field models contain many features not found in earlier models. Magnetopause models which include the effects of the dipole tilt were presented. More realistic models of the tail field include tail currents which close on the magnetopause, cross-tail currents of finite thickness, and cross-tail current models which model the position of the neutral sheet as a function of tilt. Finally, models have attempted to calculate the field of currents distributed in the inner magnetosphere. As the purpose of a magnetospheric model is to provide a mathematical description of the field that reasonably reproduces the observed magnetospheric field, several recent models were compared with the observed ΔB (B_observed − B_main field) contours. Models containing only contributions from magnetopause and tail current systems are able to reproduce the observed quiet time field only in an extremely qualitative way. The best quantitative agreement between models and observations occurs when currents distributed in the inner magnetosphere are added to the magnetopause and tail current systems. However, the distributed current models are valid only for zero tilt. Even the models which reproduce the average observed field reasonably well may not give physically reasonable field gradients. Three of the models evaluated contain regions in the near tail in which the field gradient reverses direction. One region in which all the models fall short is that around the polar cusp, though most can be used to calculate the position of the last closed field line reasonably well

  15. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models : Stop Developing, Start Evaluating

    NARCIS (Netherlands)

    van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik

    2017-01-01

    Background: Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). Objectives: To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Methods: Four

  16. Airline service quality evaluation: A review on concepts and models

    Directory of Open Access Journals (Sweden)

    Navid Haghighat

    2017-12-01

    Full Text Available This paper reviews the major service quality concepts and models that led to important developments in evaluating service quality, focusing on how the models improved through the criticisms raised against each. Criticisms of these models are discussed to clarify the development steps of newer models, which led to the improvement of airline service quality models. The precise and accurate evaluation of service quality requires a reliable concept with comprehensive criteria and effective measurement techniques as the foundation of a valuable framework. In this paper, the improvement of service quality models is described based on three major service quality concepts, the disconfirmation, performance and hierarchical concepts, which were developed in succession. Reviewing various criteria and different measurement techniques, such as statistical analysis and multi-criteria decision making, assists researchers in gaining a clear understanding of the development of the evaluation framework in the airline industry. This study aims at promoting reliable frameworks for evaluating airline service quality in different countries and societies, given the economic, cultural and social aspects of each society.
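
The disconfirmation concept mentioned above (the SERVQUAL family) scores quality as the gap between perceptions and expectations; a weighted-gap sketch with invented ratings follows. The criteria, scores, and weights are illustrative assumptions, not survey data from the review.

```python
import numpy as np

# Hypothetical 7-point ratings on three airline service criteria.
criteria = ["on-time performance", "cabin service", "booking"]
expectation = np.array([6.5, 6.0, 5.5])   # what passengers expect
perception = np.array([5.8, 6.2, 5.0])    # what they perceive
weights = np.array([0.5, 0.3, 0.2])       # assumed criterion importance

# Disconfirmation: negative gaps mean the service falls short of
# expectations on that criterion.
gaps = perception - expectation
overall = float(weights @ gaps)           # weighted overall gap score
```

Performance-only models (the SERVPERF line of criticism) drop the expectation term and score `weights @ perception` directly, which is precisely the kind of model refinement the review traces.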

  17. Classification and moral evaluation of uncertainties in engineering modeling.

    Science.gov (United States)

    Murphy, Colleen; Gardoni, Paolo; Harris, Charles E

    2011-09-01

    Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of uncertainty that arise in each stage, and discusses the categories into which these uncertainties fall. The paper then considers the way uncertainty and modeling are approached in science and the criteria for evaluating scientific hypotheses, in order to highlight the very different criteria appropriate for the development of models and the treatment of the inherent uncertainties in engineering. Finally, the paper puts forward nine guidelines for the treatment of uncertainty in engineering modeling.

  18. Evaluation of Artificial Intelligence Based Models for Chemical Biodegradability Prediction

    Directory of Open Access Journals (Sweden)

    Aleksandar Sabljic

    2004-12-01

This study presents a review of biodegradability modeling efforts, including a detailed assessment of two models developed using an artificial-intelligence-based methodology. Validation results for these models on an independent, quality-reviewed database demonstrate that the models perform well when compared, against the same data, to another commonly used biodegradability model. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the reliability of the approach demonstrated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data on the biodegradability of chemicals under environmental conditions, this may allow the development of future models that include, for example, the impact of surface interfaces on biodegradability.

  19. Evaluation of two ozone air quality modelling systems

    Directory of Open Access Journals (Sweden)

    S. Ortega

    2004-01-01

The aim of this paper is to compare two different modelling systems and to evaluate their ability to simulate high ozone concentrations in typical summer episodes in the north of Spain near the metropolitan area of Barcelona. As the focus of the paper is the comparison of the two systems, we do not attempt to improve the agreement by adjusting the emission inventory or model parameters. The first model, or forecasting system, is made up of three modules. The first module is a mesoscale model (MASS), which provides the initial conditions for the second module, a nonlocal boundary layer model based on the transilient turbulence scheme. The third module is a photochemical box model (OZIPR), which is applied in Eulerian and Lagrangian modes and receives suitable information from the two previous modules. The model forecast is evaluated against ground-based station measurements during summer 2001. The second model is MM5/UAM-V, a grid model designed to predict hourly three-dimensional ozone concentration fields. This model is applied to an ozone episode that occurred between 21 and 23 June 2001. Our results reflect the good performance of the two modelling systems when they are applied to a specific episode.

  20. Evaluation of potential crushed-salt constitutive models

    International Nuclear Information System (INIS)

    Callahan, G.D.; Loken, M.C.; Sambeek, L.L. Van; Chen, R.; Pfeifle, T.W.; Nieland, J.D.; Hansen, F.D.

    1995-12-01

Constitutive models describing the deformation of crushed salt are presented in this report. Ten constitutive models with potential to describe the phenomenological and micromechanical processes for crushed salt were selected from a literature search. Three of these ten constitutive models, termed the Sjaardema-Krieg, Zeuch, and Spiers models, were adopted as candidate constitutive models. The candidate constitutive models were generalized in a consistent manner to three-dimensional states of stress and modified to include the effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt was used to determine material parameters for the candidate constitutive models. Nonlinear least-squares model fitting to data from the hydrostatic consolidation tests, the shear consolidation tests, and a combination of the shear and hydrostatic tests produced three sets of material parameter values for the candidate models. The change in material parameter values from test group to test group indicates the empirical nature of the models. To evaluate the predictive capability of the candidate models, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the models to predict the test data, the Spiers model appeared to perform slightly better than the other two candidate models. The work reported here is a first-of-its-kind evaluation of constitutive models for the reconsolidation of crushed salt. Questions remain to be answered; deficiencies in the models and databases are identified and recommendations for future work are made. 85 refs

  1. An Efficient Dynamic Trust Evaluation Model for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhengwang Ye

    2017-01-01

Trust evaluation is an effective method to detect malicious nodes and ensure security in wireless sensor networks (WSNs). In this paper, an efficient dynamic trust evaluation model (DTEM) for WSNs is proposed, which implements accurate, efficient, and dynamic trust evaluation by dynamically adjusting the weights of direct and indirect trust and the parameters of the update mechanism. To achieve accurate trust evaluation, the direct trust is calculated from multiple trust components, including communication trust, data trust, and energy trust, with a punishment factor and regulating function. The indirect trust is evaluated conditionally from trusted recommendations by a third party. The integrated trust is then measured by assigning dynamic weights to direct and indirect trust and combining them. Finally, we propose an update mechanism based on a sliding window and an induced ordered weighted averaging operator to enhance flexibility. The parameters and the number of interactive history windows can be adapted dynamically to the actual needs of the network, realizing dynamic updating of the direct trust value. Simulation results indicate that the proposed model is an efficient, dynamic, and attack-resistant trust evaluation model. Compared with existing approaches, it performs better in defending against multiple malicious attacks.
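Two of the steps described in this abstract can be sketched in a few lines; the function names, weights, and window length below are hypothetical illustrations, not DTEM's exact formulas.

```python
# A minimal sketch (hypothetical names and weights, not DTEM's formulas) of:
# (1) combining direct and indirect trust with a dynamic weight, and
# (2) keeping direct-trust history in a fixed-length sliding window.
from collections import deque

def integrated_trust(direct, indirect, w_direct):
    """Weighted combination of direct and indirect trust; weights sum to 1."""
    return w_direct * direct + (1.0 - w_direct) * indirect

def update_window(window, new_value, max_len=5):
    """Push a new direct-trust observation into a sliding window and
    return the windowed average."""
    window.append(new_value)
    while len(window) > max_len:
        window.popleft()
    return sum(window) / len(window)

window = deque()
smoothed = update_window(window, 0.8)            # only one observation so far
trust = integrated_trust(smoothed, 0.6, w_direct=0.7)
```

In the paper's scheme the weight `w_direct` and the window length would themselves be adapted to network conditions; here they are fixed for clarity.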

  2. A model to evaluate quality and effectiveness of disease management.

    Science.gov (United States)

    Lemmens, K M M; Nieboer, A P; van Schayck, C P; Asin, J D; Huijsman, R

    2008-12-01

    Disease management has emerged as a new strategy to enhance quality of care for patients suffering from chronic conditions, and to control healthcare costs. So far, however, the effects of this strategy remain unclear. Although current models define the concept of disease management, they do not provide a systematic development or an explanatory theory of how disease management affects the outcomes of care. The objective of this paper is to present a framework for valid evaluation of disease-management initiatives. The evaluation model is built on two pillars of disease management: patient-related and professional-directed interventions. The effectiveness of these interventions is thought to be affected by the organisational design of the healthcare system. Disease management requires a multifaceted approach; hence disease-management programme evaluations should focus on the effects of multiple interventions, namely patient-related, professional-directed and organisational interventions. The framework has been built upon the conceptualisation of these disease-management interventions. Analysis of the underlying mechanisms of these interventions revealed that learning and behavioural theories support the core assumptions of disease management. The evaluation model can be used to identify the components of disease-management programmes and the mechanisms behind them, making valid comparison feasible. In addition, this model links the programme interventions to indicators that can be used to evaluate the disease-management programme. Consistent use of this framework will enable comparisons among disease-management programmes and outcomes in evaluation research.

  3. Evaluation of Cost Models and Needs & Gaps Analysis

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad

    2014-01-01

This report, 'D3.1—Evaluation of Cost Models and Needs & Gaps Analysis', provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders' needs for calculating and comparing financial information, including how they break down costs. This is followed by an in-depth analysis of stakeholders' needs for financial information, derived from the 4C project stakeholder consultation. The needs analysis indicated that models should: support accounting, but more importantly enable budgeting; and be able ... Based on this evaluation, the report aims to point out gaps that need to be bridged in order to increase the uptake of cost & benefit modelling, and good practices that will enable costing and comparison of the costs of alternative scenarios—which in turn provides a starting point ...

  4. Using measurements for evaluation of black carbon modeling

    Directory of Open Access Journals (Sweden)

    S. Gilardoni

    2011-01-01

The ever-increasing use of air quality and climate model assessments to underpin economic, public health, and environmental policy decisions makes effective model evaluation critical. This paper discusses the properties of black carbon and light attenuation and absorption observations that are key to a reliable evaluation of black carbon models, and compares parametric and nonparametric statistical tools for quantifying the agreement between models and observations. Black carbon concentrations are simulated with the TM5/M7 global model from July 2002 to June 2003 at four remote sites (Alert, Jungfraujoch, Mace Head, and Trinidad Head) and two regional background sites (Bondville and Ispra). Equivalent black carbon (EBC) concentrations are calculated using light attenuation measurements from January 2000 to December 2005. Seasonal trends in the measurements are determined by fitting sinusoidal functions, and the representativeness of the period simulated by the model is verified based on the scatter of the experimental values relative to the fit curves. When the resolution of the model grid is larger than 1° × 1°, it is recommended to verify that the measurement site is representative of the grid cell. For this purpose, equivalent black carbon measurements at Alert, Bondville and Trinidad Head are compared to light absorption and elemental carbon measurements performed at different sites inside the same model grid cells. Comparison of these equivalent black carbon and elemental carbon measurements indicates that uncertainties in black carbon optical properties can compromise the comparison between model and observations. During model evaluation it is important to examine the extent to which a model is able to simulate the variability in the observations over different integration periods, as this will help to identify the most appropriate timescales. The agreement between model and observation is accurately described by the overlap of
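The seasonal-trend step (fitting sinusoidal functions to monthly measurements) can be sketched as follows. This is a simplified stand-in for the paper's procedure, with the annual period fixed at 12 months; with the period fixed, the fit is linear and can be solved exactly from the 3×3 normal equations.

```python
# Hedged sketch: fit y(t) = a + b*sin(2*pi*t/T) + c*cos(2*pi*t/T) to monthly
# data with fixed period T, via the normal equations of linear least squares.
import math

def fit_seasonal(months, values, period=12.0):
    n = len(months)
    # Design matrix columns: constant, sine, cosine
    X = [[1.0,
          math.sin(2.0 * math.pi * t / period),
          math.cos(2.0 * math.pi * t / period)] for t in months]
    # Normal equations A beta = b with A = X^T X, b = X^T y
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * values[k] for k in range(n)) for i in range(3)]
    # Gaussian elimination (no pivoting; adequate for this small system)
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for col in range(3):
                A[j][col] -= f * A[i][col]
            b[j] -= f * b[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]
    return beta  # [mean level, sine amplitude, cosine amplitude]
```

The scatter of the observations about the fitted curve is then what the paper uses to check how representative the simulated year is.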

  5. Evaluation of multivariate calibration models transferred between spectroscopic instruments

    DEFF Research Database (Denmark)

    Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas

    2016-01-01

In a setting where multiple spectroscopic instruments are used for the same measurements, it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two
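The core evaluation idea, comparing predictions from different instruments rather than predictions against reference values, can be sketched as follows; the numbers are toy data, not the flour measurements.

```python
# Hedged sketch of inter-instrument agreement for a transferred calibration
# model: bias and RMS of the prediction differences on the same samples.
def prediction_agreement(preds_a, preds_b):
    """Return (bias, rms) of the differences between two instruments'
    predictions for the same set of samples."""
    n = len(preds_a)
    diffs = [a - b for a, b in zip(preds_a, preds_b)]
    bias = sum(diffs) / n
    rms = (sum(d * d for d in diffs) / n) ** 0.5
    return bias, rms

# Toy predictions: master instrument vs. a second instrument using the
# transferred model, on the same three samples.
bias, rms = prediction_agreement([10.1, 10.4, 9.9], [10.0, 10.5, 9.8])
```

Because no reference values enter this comparison, reference-method uncertainty cannot contaminate the transfer evaluation, which is the point the abstract makes.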

  6. Designing the model for evaluating business quality in Croatia

    Directory of Open Access Journals (Sweden)

    Ana Ježovita

    2015-01-01

The main objective of the paper is to design a model for evaluating the financial quality of business operations. For the purposes of the paper, the financial quality of business operations is defined as the ability to achieve adequate values of individual financial ratios for evaluating financial position and performance. The objective of the model is to reach a comprehensive conclusion about the financial quality of business operations using only the value of the function. The data used for designing the model are limited to financial data available from the annual balance sheet and income statement. These limitations allow companies of all sizes from the non-financial business economy sector to use the designed model for evaluation purposes. The statistical methods used for designing the model are multivariate discriminant analysis and logistic regression. The discriminant analysis resulted in a function that includes the five individual financial ratios with the best discriminant power. Given the results obtained in the classification matrix, with a classification accuracy of 95.92% for the original sample and 96.06% for the independent sample, it can be concluded that it is possible to evaluate the financial quality of business operations of companies in Croatia using a model composed of individual financial ratios. Logistic regression confirms the results obtained using discriminant analysis.
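The discriminant-function idea can be illustrated as below; the weights, cutoff, and toy data are hypothetical, not the fitted Croatian model.

```python
# Illustrative sketch (hypothetical weights and cutoff, not the paper's fitted
# model): a linear discriminant score over financial ratios, classified by a
# cutoff, with classification accuracy computed against known labels.
def discriminant_score(ratios, weights, intercept=0.0):
    return intercept + sum(w * r for w, r in zip(weights, ratios))

def classify(ratios, weights, cutoff=0.0):
    return 1 if discriminant_score(ratios, weights) > cutoff else 0

def classification_accuracy(samples, labels, weights):
    hits = sum(classify(s, weights) == y for s, y in zip(samples, labels))
    return hits / len(labels)

# Toy data: two ratios per company, label 1 = financially sound
samples = [[2.0, 1.0], [0.5, 3.0], [1.5, 0.5], [0.2, 2.5]]
labels = [1, 0, 1, 0]
acc = classification_accuracy(samples, labels, weights=[1.0, -1.0])
```

The paper's 95.92%/96.06% figures are exactly this kind of accuracy, computed from a classification matrix on the original and independent samples.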

  7. Evaluation and comparison of models and modelling tools simulating nitrogen processes in treatment wetlands

    DEFF Research Database (Denmark)

    Edelfeldt, Stina; Fritzson, Peter

    2008-01-01

In this paper, two ecological models of nitrogen processes in treatment wetlands have been evaluated and compared. These models were implemented, simulated, and visualized using the Modelica modelling and simulation language [P. Fritzson, Principles of Object-Oriented Modelling and Simulation with Modelica 2.1 (Wiley-IEEE Press, USA, 2004)] and an associated tool. The differences and similarities between the MathModelica Model Editor and three other ecological modelling tools have also been evaluated. The results show that the models can be modelled and simulated well in the MathModelica Model Editor, and that nitrogen decrease in a constructed treatment wetland should be described and simulated using the Nitrification/Denitrification model, as this model has the highest overall quality score and provides a more variable environment.

  8. A Fuzzy Comprehensive Evaluation Model for Sustainability Risk Evaluation of PPP Projects

    Directory of Open Access Journals (Sweden)

    Libiao Bai

    2017-10-01

Evaluating the sustainability risk level of public–private partnership (PPP) projects can reduce project risk incidents and achieve the sustainable development of the organization. However, existing studies of PPP project risk management mainly focus on the impact of financial and revenue risks but ignore sustainability risks, so the concept of “sustainability” is missing from evaluations of the risk level of PPP projects. To evaluate the sustainability risk level, and to provide a reference for the public and private sectors when making decisions on PPP project management, this paper constructs a factor system for the sustainability risk of PPP projects based on an extensive literature review, and develops a mathematical model based on the fuzzy comprehensive evaluation model (FCEM) and failure mode, effects and criticality analysis (FMECA) for evaluating the sustainability risk level of PPP projects. In addition, this paper conducts a computational experiment based on a questionnaire survey to verify the effectiveness and feasibility of the proposed model. The results suggest that this model is suitable for evaluating the sustainability risk level of PPP projects. To our knowledge, this paper is the first study to evaluate the sustainability risk of PPP projects, which not only enriches the theories of project risk management, but also serves as a reference for the public and private sectors in sustainable planning and development.
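The FCEM step named in this abstract has a standard matrix form that can be sketched briefly; all numbers below are illustrative, not the paper's survey data.

```python
# A minimal FCEM sketch using the weighted-average operator: the overall
# grade-membership vector is B = W . R, where W holds the factor weights and
# R is the fuzzy membership matrix (rows = risk factors, columns = grades).
# All numbers are illustrative, not taken from the paper.
def fuzzy_evaluate(weights, R):
    grades = len(R[0])
    return [sum(w * row[g] for w, row in zip(weights, R)) for g in range(grades)]

W = [0.5, 0.3, 0.2]                # factor weights (sum to 1)
R = [[0.1, 0.6, 0.3],              # memberships of factor 1 in 3 risk grades
     [0.4, 0.4, 0.2],
     [0.2, 0.5, 0.3]]
B = fuzzy_evaluate(W, R)           # overall membership in each grade
level = B.index(max(B))            # maximum-membership principle
```

In the paper the weights would come from the FMECA analysis and questionnaire data rather than being set by hand.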

  9. Adapting Evaluations of Alternative Payment Models to a Changing Environment.

    Science.gov (United States)

    Grannemann, Thomas W; Brown, Randall S

    2018-04-01

    To identify the most robust methods for evaluating alternative payment models (APMs) in the emerging health care delivery system environment. We assess the impact of widespread testing of alternative payment models on the ability to find credible comparison groups. We consider the applicability of factorial research designs for assessing the effects of these models. The widespread adoption of alternative payment models could effectively eliminate the possibility of comparing APM results with a "pure" control or comparison group unaffected by other interventions. In this new environment, factorial experiments have distinct advantages over the single-model experimental or quasi-experimental designs that have been the mainstay of recent tests of Medicare payment and delivery models. The best prospects for producing definitive evidence of the effects of payment incentives for APMs include fractional factorial experiments that systematically vary requirements and payment provisions within a payment model. © Health Research and Educational Trust.
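The factorial idea above can be illustrated by enumerating the cells of a small design; the three factor names are hypothetical APM payment provisions, not ones named in the paper.

```python
# A small sketch of a full 2^3 factorial design over three hypothetical APM
# payment provisions, each tested on/off. A fractional factorial design, as
# recommended above, would select a balanced subset of these cells.
import itertools

factors = ["shared_savings", "quality_bonus", "downside_risk"]
design = list(itertools.product([0, 1], repeat=len(factors)))
# Each tuple assigns one level per factor; 2**3 = 8 cells in total.
```

Systematically varying provisions within a payment model in this way is what lets effects be estimated without an untreated comparison group.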

  10. QUALITY SERVICES EVALUATION MODEL BASED ON DEDICATED SOFTWARE TOOL

    Directory of Open Access Journals (Sweden)

    ANDREEA CRISTINA IONICĂ

    2012-10-01

In this paper we introduce a new model, called Service Quality (SQ), which combines the QFD and SERVQUAL methods. This model takes from the SERVQUAL method the five dimensions of requirements and three of characteristics, and from the QFD method the application methodology. The originality of the SQ model consists in computing a global index that reflects how well the quality characteristics fulfil the customers' requirements. To prove the viability of the SQ model, a software tool was developed and applied to evaluate a health care services provider.

  11. Analysis and Evaluation of Statistical Models for Integrated Circuits Design

    Directory of Open Access Journals (Sweden)

    Sáenz-Noval J.J.

    2011-10-01

Statistical models for integrated circuits (ICs) allow us to estimate the percentage of acceptable devices in a batch before fabrication. Currently, Pelgrom's model is the statistical model most accepted in industry; however, it was derived from a micrometer technology, which does not guarantee reliability in nanometric manufacturing processes. This work considers three of the most relevant statistical models in the industry and evaluates their limitations and advantages in analog design, so that the designer has a better criterion for making a choice. Moreover, it shows how several statistical models can be used for each of the design stages and purposes.
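Pelgrom's model referred to above is the area-scaling mismatch law, which can be sketched in a few lines; the matching coefficient value used here is illustrative only.

```python
# Sketch of Pelgrom's mismatch law: the standard deviation of a parameter
# mismatch between two identically drawn devices scales as A_P / sqrt(W*L).
# The A_P value below is illustrative, not a characterized process constant.
import math

def mismatch_sigma(a_p, width_um, length_um):
    """Pelgrom model: sigma(delta_P) = A_P / sqrt(W * L)."""
    return a_p / math.sqrt(width_um * length_um)

# Doubling the gate area reduces the mismatch sigma by a factor of sqrt(2):
s1 = mismatch_sigma(4.0, 1.0, 1.0)   # e.g. A_VT = 4 mV*um (illustrative)
s2 = mismatch_sigma(4.0, 2.0, 1.0)
```

The abstract's caveat is precisely that the coefficient A_P was characterized on micrometer-scale processes, so extrapolating this law to nanometric nodes is not guaranteed to hold.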

  12. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern here, which restricts the useful application of these methods. Additional model reduction methods have been developed which accommodate these constraints. The Matrix Reduction method allows the differential equation to be approximated to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface in industry.

  13. Evaluation-Function-based Model-free Adaptive Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Agus Naba

    2016-12-01

Designs of adaptive fuzzy controllers (AFCs) are commonly based on the Lyapunov approach, which requires a known model of the controlled plant. They also need to consider a Lyapunov function candidate as an evaluation function to be minimized. In this study, these drawbacks were addressed by designing a model-free adaptive fuzzy controller (MFAFC) using an approximate evaluation function defined in terms of the current state, the next state, and the control action. MFAFC treats the approximate evaluation function as an evaluative measure of control performance, similar to the state-action value function in reinforcement learning. The simulation results of applying MFAFC to the inverted pendulum benchmark verified the proposed scheme's efficacy.

  14. Presenting an evaluation model of the trauma registry software.

    Science.gov (United States)

    Asadi, Farkhondeh; Paydar, Somayeh

    2018-04-01

Trauma accounts for about 10% of deaths worldwide and is considered a global concern. This problem has led healthcare policy makers and managers to adopt a basic strategy in this context. Trauma registries have an important and basic role in decreasing the mortality and disability caused by trauma injuries. Today, various software systems are designed for trauma registries. Evaluating this software improves management and increases the efficiency and effectiveness of these systems. Therefore, the aim of this study is to present an evaluation model for trauma registry software. The present study is applied research. In this study, general and specific criteria of trauma registry software were identified by reviewing the literature, including books, articles, scientific documents, valid websites and related software in this domain. Based on these general and specific criteria and the related software, a model for evaluating trauma registry software was proposed. From the proposed model, a checklist was designed and its validity and reliability evaluated. Using the Delphi technique, the model was presented to 12 experts and specialists, and an agreement coefficient of 75% was set as the threshold for applying changes. Finally, once the model was approved by the experts and specialists, the final version of the evaluation model for trauma registry software was presented. The evaluation criteria for trauma registry software fall into two groups: 1- general criteria and 2- specific criteria. The general criteria were classified into four main categories: 1- usability, 2- security, 3- maintainability, and 4- interoperability. The specific criteria were divided into four main categories: 1- data submission and entry, 2- reporting, 3- quality control, and 4- decision and research support. The model presented in this research introduces the important general and specific criteria of trauma registry software.

  15. The Combined Use of a Gas-Controlled Heat Pipe and a Copper Point to Improve the Calibration of Thermocouples up to 1100 °C

    Science.gov (United States)

    Astrua, M.; Iacomini, L.; Battuello, M.

    2008-10-01

    The calibration of platinum-based thermocouples from 420 °C to 1100 °C is currently carried out at INRIM making use of two different apparatus: for temperatures below 930 °C, a potassium gas-controlled heat pipe (GCHP) is used, whereas a metal-block furnace is adopted for higher temperatures. The standard uncertainty of the reference temperature obtained in the lower temperature range is almost one order of magnitude better than in the higher temperature range. A sealed copper cell was investigated to see if it could be used to calibrate thermocouples above 930 °C with a lower uncertainty than our current procedures allowed. The cell was characterized with Type S and Pt/Pd thermocouples and with an HTPRT. The freezing plateaux were flat within 0.01 °C and lasted up to 1 h with a repeatability of 0.02 °C. The temperature of the cell was determined with a standard uncertainty of 0.04 °C. Hence, the copper cell was found to be superior to the comparator furnace for the calibration of platinum-based thermocouples because of the significant decrease in the uncertainty that it provides. An analysis was also carried out on the calibration of Pt/Pd thermocouples, and it was found that the combined use of the potassium GCHP and the Cu fixed-point cell is adequate to exploit the potential of these sensors in the range from 420 °C to 1084 °C. A comparison with a fixed-point calibration was also made, which gave agreement within 0.07 °C between the two approaches.
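Standard uncertainties like the 0.04 °C quoted above are typically built up by root-sum-square combination of independent components; the sketch below illustrates only that arithmetic, and the component values are hypothetical, not INRIM's actual budget.

```python
# Illustrative root-sum-square combination of independent standard
# uncertainties, as used in calibration uncertainty budgets.
# Component values are hypothetical, not INRIM's published budget.
import math

def combined_uncertainty(components):
    """u_c = sqrt(sum of squared standard-uncertainty components)."""
    return math.sqrt(sum(u * u for u in components))

u_c = combined_uncertainty([0.02, 0.02, 0.01, 0.02])  # components in degC
```

This is why replacing a high-uncertainty component (the comparator furnace) with a fixed-point cell lowers the combined uncertainty so effectively: the largest squared term dominates the sum.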

  16. The model of evaluation of innovative potential of enterprise

    Directory of Open Access Journals (Sweden)

    Ганна Ігорівна Заднєпровська

    2015-06-01

The basic components of the process of evaluating an enterprise's innovative potential are investigated. A conceptual model of innovative potential evaluation is offered, which includes subjects, objects, purpose, information provision, principles, methods, criteria, and indicators. It is noted that innovative capacity characterizes the transition from the current to the strategic level of innovation potential and thus characterizes the composition of objects from the user's position.

  17. Using modeling to develop and evaluate a corrective action system

    International Nuclear Information System (INIS)

    Rodgers, L.

    1995-01-01

At a former trucking facility in EPA Region 4, a corrective action system was installed to remediate groundwater and soil contaminated with gasoline and fuel oil products released from several underground storage tanks (USTs). Groundwater modeling was used to develop the corrective action plan and later used, together with soil vapor modeling, to evaluate the system's effectiveness. Groundwater modeling was used to determine the effects of a groundwater recovery system on the water table at the site. Information gathered during the assessment phase was used to develop a three-dimensional depiction of the subsurface at the site. Different groundwater recovery schemes were then modeled to determine the most effective method for recovering contaminated groundwater. Based on the modeling and calculations, a corrective action system combining soil vapor extraction (SVE) and groundwater recovery was designed. The system included seven recovery wells, to extract both soil vapor and groundwater, and a groundwater treatment system. Operation and maintenance of the system included monthly system sampling and inspections and quarterly groundwater sampling. After one year of operation, the effectiveness of the system was evaluated. A subsurface soil gas model was used to evaluate the effects of the SVE system on the site contamination as well as its effects on the water table and groundwater recovery operations. Groundwater modeling was used to evaluate the effectiveness of the groundwater recovery system. Plume migration and capture were modeled to ensure that the groundwater recovery system at the site was effectively capturing the contaminant plume. The two models were then combined to determine the effects of the two systems, acting together, on the remediation process.

  18. Pipe fracture evaluations for leak-rate detection: Probabilistic models

    International Nuclear Information System (INIS)

    Rahman, S.; Wilkowski, G.; Ghadiali, N.

    1993-01-01

This is the second in a series of three papers generated from studies on nuclear pipe fracture evaluations for leak-rate detection. This paper focuses on the development of novel probabilistic models for stochastic performance evaluation of degraded nuclear piping systems. This was accomplished in three distinct stages. First, a statistical analysis was conducted to characterize the various input variables for thermo-hydraulic analysis and elastic-plastic fracture mechanics, such as the material properties of pipe, crack morphology variables, and the location of cracks found in nuclear piping. Second, a new stochastic model was developed to evaluate the performance of degraded piping systems. It is based on the accurate deterministic models for thermo-hydraulic and fracture mechanics analyses described in the first paper, statistical characterization of the various input variables, and state-of-the-art methods of modern structural reliability theory. From this model, the conditional probability of failure as a function of the leak-rate detection capability of the piping system can be predicted. Third, a numerical example was presented to illustrate the proposed model for piping reliability analyses. Results clearly showed that the model provides satisfactory estimates of conditional failure probability with much less computational effort than Monte Carlo simulation. The probabilistic model developed in this paper will be applied to various piping in boiling water reactor and pressurized water reactor plants for leak-rate detection applications.
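The Monte Carlo baseline the paper compares against can be illustrated with a toy conditional-failure estimate; the distributions, limit state, and threshold below are invented for illustration and are not the paper's pipe model.

```python
# Illustrative Monte Carlo sketch (toy distributions, not the paper's model):
# estimate the conditional probability of failure given that the leak rate
# exceeds a detection threshold, using a capacity-vs-load limit state.
import random

def conditional_failure_prob(n=100_000, leak_threshold=1.0, seed=42):
    rng = random.Random(seed)
    detected = failed = 0
    for _ in range(n):
        leak = rng.lognormvariate(0.0, 0.5)   # toy leak-rate sample
        load = rng.gauss(0.6, 0.2)            # toy applied load
        capacity = rng.gauss(1.0, 0.1)        # toy structural capacity
        if leak > leak_threshold:             # condition: leak is detectable
            detected += 1
            if load > capacity:               # limit state g = C - L < 0
                failed += 1
    return failed / detected if detected else 0.0

p_f = conditional_failure_prob()
```

Structural reliability methods of the kind the paper uses (e.g. first-order reliability approximations) reach comparable estimates without sampling, which is the computational saving the abstract reports.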

  19. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study explores the latent cost factors and builds a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and a questionnaire survey was then employed to collect professionals' views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified: “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material Increment Index” and “Depreciation Amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of the observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardized design to reduce capital cost. Hence, collaborative management is necessary for the cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimize capital cost and improve cost performance by providing an evaluation and optimization model, which helps managers to
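The comprehensive evaluation model (CEM) step can be sketched as a weighted aggregation of latent-factor scores; the weighting by variance explained is a common convention assumed here, and the numbers are illustrative, not the study's data.

```python
# Hedged sketch of a CEM-style aggregation: an overall cost-management score
# as a variance-weighted sum of latent-factor scores. Both the factor scores
# and the variance shares below are illustrative only.
def comprehensive_score(factor_scores, variance_explained):
    total = sum(variance_explained)
    weights = [v / total for v in variance_explained]  # normalized weights
    return sum(w * f for w, f in zip(weights, factor_scores))

# Two toy factors: the first explains three times as much variance as the second.
score = comprehensive_score([1.0, 2.0], [3.0, 1.0])
```

In the study the factor scores would come from the exploratory factor analysis of the questionnaire data, one score per latent factor and project.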

  20. Popularity Evaluation Model for Microbloggers Online Social Network

    Directory of Open Access Journals (Sweden)

    Xia Zhang

    2014-01-01

    Full Text Available Recently, microblogging has been widely studied by researchers in the domain of the online social network (OSN). How to evaluate the popularities of microblogging users is an important research field, which can be applied to commercial advertising, user behavior analysis, information dissemination, and so forth. Previous evaluation methods cannot effectively and accurately evaluate the popularities of microbloggers. In this paper, we propose an electromagnetic-field-theory-based model to analyze the popularities of microbloggers. The concept of the source in the microblogging field, based on the concept of the source in the electromagnetic field, is first put forward; then, one’s microblogging flux is calculated according to his/her behaviors (sending or receiving feedback) on the microblogging platform; finally, we use three methods to calculate one’s microblogging flux density, which represents one’s popularity on the microblogging platform. In the experimental work, we evaluated our model using real microblogging data and selected the best of the three popularity measures. We also compared our model with the classic PageRank algorithm; the results show that our model is more effective and accurate in evaluating the popularities of microbloggers.
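
The paper's flux-density measure is specific to that work and is not reproduced here, but the baseline it compares against, classic PageRank, can be sketched with a plain power iteration over a toy follower graph (the graph and damping factor below are illustrative):

```python
# Minimal power-iteration PageRank, the baseline the record's model is
# compared against. Graph and damping factor are illustrative only.
def pagerank(links, d=0.85, iters=100):
    """links: dict node -> list of nodes it points (links) to."""
    nodes = set(links) | {v for outs in links.values() for v in outs}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

r = pagerank({"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]})
print(max(r, key=r.get))  # "a": it gets the extra in-link from "d"
```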

  1. Evaluation of some infiltration models and hydraulic parameters

    International Nuclear Information System (INIS)

    Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.

    2010-01-01

    The evaluation of infiltration characteristics and some parameters of infiltration models, such as sorptivity and final steady infiltration rate, is important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate final soil infiltration rate. The equality of final infiltration rate with saturated hydraulic conductivity (Ks) was also tested. Moreover, values of sorptivity estimated from Philip's model were compared to estimates by selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October, 2007. The infiltration models of Kostiakov-Lewis, Philip two-term and Horton were fitted to observed infiltration data. Some parameters of the models and the coefficient of determination goodness of fit were estimated using MATLAB software. The results showed that, based on comparing measured and model-estimated infiltration rates using root mean squared error (RMSE), Horton's model gave the best prediction of final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly higher than the final infiltration rates estimated from the selected models; the estimated final infiltration rate was not equal to laboratory-measured Ks values in the study area. Moreover, the sorptivity factor estimated by Philip's model was significantly different from those estimated by selected PTFs. It is suggested that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.
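
Horton's model, which the record found best by RMSE, has the form f(t) = fc + (f0 - fc)·e^(-kt). A dependency-free sketch of the fitting-and-scoring step (the study used MATLAB on field data; here synthetic data and a simple grid search over k stand in for the curve fit):

```python
import math

# Hedged sketch: fit Horton's infiltration model and score it with RMSE.
# Data are synthetic; a grid search over k replaces a proper curve fitter.
def horton(t, f0, fc, k):
    return fc + (f0 - fc) * math.exp(-k * t)

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

times = [0, 5, 10, 20, 40, 60, 90, 120]             # minutes
obs = [horton(t, 60.0, 12.0, 0.08) for t in times]  # synthetic "observations"

f0, fc = obs[0], obs[-1]  # endpoints approximate f0 and the final rate fc
best_k = min((k / 1000 for k in range(1, 500)),
             key=lambda k: rmse(obs, [horton(t, f0, fc, k) for t in times]))
print(round(best_k, 3))
```

With noise-free synthetic data the recovered k lands at the generating value (0.08), which is the expected sanity check before fitting real double-ring measurements.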

  2. Mathematical models and lymphatic filariasis control: monitoring and evaluating interventions.

    Science.gov (United States)

    Michael, Edwin; Malecela-Lazaro, Mwele N; Maegga, Bertha T A; Fischer, Peter; Kazura, James W

    2006-11-01

    Monitoring and evaluation are crucially important to the scientific management of any mass parasite control programme. Monitoring enables the effectiveness of implemented actions to be assessed and necessary adaptations to be identified; it also determines when management objectives are achieved. Parasite transmission models can provide a scientific template for informing the optimal design of such monitoring programmes. Here, we illustrate the usefulness of using a model-based approach for monitoring and evaluating anti-parasite interventions and discuss issues that need addressing. We focus on the use of such an approach for the control and/or elimination of the vector-borne parasitic disease, lymphatic filariasis.

  3. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model based on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model in order to guarantee food safety, logistics efficiency, price stability and so on. Practical implications: In order to evolve an efficient and effective service supply chain, the model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing a different model of development. Originality/value: This paper gives a new definition of the service-oriented catering supply chain and offers a model to evaluate its performance.
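
The record evaluates performance with fuzzy AHP. As a hedged sketch, the crisp AHP core that fuzzy AHP extends derives criterion weights from a pairwise comparison matrix via row geometric means; the criteria below come from the abstract, but the matrix entries are illustrative only:

```python
import math

# Crisp AHP weight derivation (row geometric means), the core step that
# fuzzy AHP generalizes with fuzzy judgments. Matrix values are made up.
def ahp_weights(matrix):
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    s = sum(gm)
    return [g / s for g in gm]

# pairwise[i][j] = how much more important criterion i is than criterion j
pairwise = [
    [1.0, 3.0, 5.0],   # food safety
    [1/3, 1.0, 2.0],   # logistics efficiency
    [1/5, 1/2, 1.0],   # price stability
]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w])  # food safety carries the largest weight
```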

  5. METRIC EVALUATION PIPELINE FOR 3D MODELING OF URBAN SCENES

    Directory of Open Access Journals (Sweden)

    M. Bosch

    2017-05-01

    Full Text Available Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.

  6. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    Science.gov (United States)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
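
Two of the metric families named in these records, completeness and correctness, can be restated for simple point sets: completeness is the fraction of ground-truth points with a model point within a tolerance, and correctness is the converse. The actual pipeline works on semantically labeled 3D models; the points and tolerance below are illustrative only:

```python
# Hedged sketch of completeness/correctness for point sets. Brute-force
# nearest-neighbor check; real pipelines use spatial indexes and labels.
def _near(p, points, tol):
    return any(sum((a - b) ** 2 for a, b in zip(p, q)) <= tol ** 2
               for q in points)

def completeness(truth, model, tol):
    """Fraction of ground-truth points covered by the model."""
    return sum(_near(p, model, tol) for p in truth) / len(truth)

def correctness(truth, model, tol):
    """Fraction of model points supported by the ground truth."""
    return sum(_near(p, truth, tol) for p in model) / len(model)

truth = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
model = [(0, 0.1, 0), (1, 0.1, 0), (9, 9, 9)]  # one spurious point
print(completeness(truth, model, 0.5))  # 0.5: half the truth is covered
print(correctness(truth, model, 0.5))   # one of three model points is spurious
```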

  7. An effective quality model for evaluating mobile websites

    International Nuclear Information System (INIS)

    Hassan, W.U.; Nawaz, M.T.; Syed, T.H.; Naseem, A.

    2015-01-01

    The evolution of Web development in recent years has caused the emergence of the new area of mobile computing. The mobile phone has been transformed into a high-speed processing device capable of running processes that previously ran only on computers; modern mobile phones can now process data faster than desktop systems, and with the inclusion of 3G and 4G networks, mobile became the prime choice for users to send and receive data from any device. As a result, there is a major increase in mobile website need and development, but due to the uniqueness of mobile website usage as compared to desktop websites, there is a need to focus on the quality aspect of mobile websites. So, to increase and preserve the quality of mobile websites, a quality model is required that is designed specifically to evaluate mobile website quality. To design a mobile website quality model, a survey-based methodology was used to gather information regarding unique mobile website usage from different users. On the basis of this information, a mobile website quality model is presented which aims to evaluate the quality of mobile websites. In the proposed model, some sub-characteristics are designed to evaluate mobile websites in particular. The result is a proposed model that aims to evaluate the features of a website which are important in the context of its deployment and usability on the mobile platform. (author)

  8. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model

    International Nuclear Information System (INIS)

    J. J. Jacobson; D. E. Shropshire; W. B. West

    2005-01-01

    The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a ''living document'' that will be modified over the course of the execution of this work.

  9. A formative model for student nurse development and evaluation

    Directory of Open Access Journals (Sweden)

    A. S. van der Merwe

    1996-03-01

    Full Text Available Preparing student nurses for the profession is a complex task for nurse educators; especially when dealing with the development of personal and interpersonal skills, qualities and values held in high esteem by the nursing profession and the community they serve. These researchers developed a model for formative evaluation of students by using the principles of inductive and deductive reasoning. This model was implemented in clinical practice situations and evaluated for its usefulness. It seems that the model enhanced the standards of nursing care because it had a positive effect on the behaviour of students and they were better motivated; the model also improved interpersonal relationships and communication between practising nurses and students.

  10. Evaluation of radiological processes in the Ternopil region by the box-model method

    Directory of Open Access Journals (Sweden)

    І.В. Матвєєва

    2006-02-01

    Full Text Available Flows of the radionuclide Sr-90 in the ecosystem of Kotsubinchiky village, Ternopil oblast, were analyzed. A block scheme of the ecosystem and its mathematical model were constructed using the box-model method. This allowed us to evaluate how internal-irradiation dose loads form for different population groups (working people, retirees, and children) and to predict the dynamics of these loads over the years following the Chernobyl accident.
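
The box-model method the record applies treats the ecosystem as compartments exchanging activity via first-order transfer rates while the radionuclide decays (Sr-90 half-life is roughly 28.8 years). A minimal two-box sketch with forward-Euler integration; the boxes and rate constants are hypothetical, not the study's calibrated values:

```python
import math

# Hedged two-compartment box model: soil -> diet, with Sr-90 decay in both.
# All transfer rates are illustrative assumptions.
LAMBDA = math.log(2) / 28.8   # Sr-90 decay constant, 1/year
K_SOIL_TO_DIET = 0.02         # hypothetical soil-to-diet transfer, 1/year
K_DIET_LOSS = 0.5             # hypothetical removal from diet box, 1/year

def simulate(a_soil, a_diet, years, dt=0.01):
    """Forward-Euler integration of the two-box activity balance."""
    for _ in range(int(years / dt)):
        flux = K_SOIL_TO_DIET * a_soil
        a_soil += dt * (-flux - LAMBDA * a_soil)
        a_diet += dt * (flux - K_DIET_LOSS * a_diet - LAMBDA * a_diet)
    return a_soil, a_diet

soil, diet = simulate(a_soil=1000.0, a_diet=0.0, years=20)
print(round(soil, 1), round(diet, 2))
```

Dose loads for each population group would then follow by scaling the diet-box activity with group-specific intake and dose-conversion factors, which are not reproduced here.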

  11. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now, a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.

  12. Evaluating the AS-level Internet models: beyond topological characteristics

    International Nuclear Information System (INIS)

    Fan Zheng-Ping

    2012-01-01

    A surge of models has been proposed to model the Internet in the past decades. However, the question of which models best represent the Internet remains open. By analysing the evolving dynamics of the Internet, we suggest that at the autonomous system (AS) level a suitable Internet model should at least be heterogeneous and have a linearly growing mechanism. More importantly, we show that the role of topological characteristics in evaluating and differentiating Internet models is apparently over-estimated from an engineering perspective. We also find that an assortative network is not necessarily more robust than a disassortative network, and that a smaller average shortest path length does not necessarily mean higher robustness, which differs from previous observations. Our analytic results are helpful not only for the Internet, but also for other general complex networks. (interdisciplinary physics and related areas of science and technology)
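
The record argues that topological summaries such as assortativity are weak proxies for robustness. A direct robustness check, sketched here on a toy graph, removes nodes and measures the giant-component fraction that survives (the graph is illustrative):

```python
from collections import deque

# Hedged sketch: robustness as surviving giant-component fraction after
# node removal, measured directly rather than via topological proxies.
def giant_fraction(adj, removed=frozenset()):
    """Largest connected component size over the original node count."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best / len(adj)

# A hub-and-spoke toy graph: node 0 connects to every other node.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
print(giant_fraction(adj))       # 1.0 while intact
print(giant_fraction(adj, {0}))  # collapses once the hub is removed
```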

  13. Use of field experimental studies to evaluate emergency response models

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Lange, R.; Rodriguez, D.J.; Nasstrom, J.S.

    1985-01-01

    The three-dimensional diagnostic wind field model (MATHEW) and the particle-in-cell atmospheric transport and diffusion model (ADPIC) are used by the Atmospheric Release Advisory Capability to estimate the environmental consequences of accidental releases of radioactivity into the atmosphere. These models have undergone extensive evaluations against field experiments conducted in a variety of environmental settings ranging from relatively flat to very complex terrain areas. Simulations of tracer experiments conducted in a complex mountain valley setting revealed that 35 to 50% of the comparisons between calculated and measured tracer concentrations were within a factor of 5. This may be compared with a factor of 2 for 50% of the comparisons for relatively flat terrain. This degradation of results in complex terrain is due to a variety of factors such as the limited representativeness of measurements in complex terrain, the limited spatial resolution afforded by the models, and the turbulence parameterization based on σθ measurements to evaluate the eddy diffusivities. Measurements of σθ in complex terrain exceed those measured over flat terrain by a factor of 2 to 3 leading to eddy diffusivities that are unrealistically high. The results of model evaluations are very sensitive to the quality and the representativeness of the meteorological data. This is particularly true for measurements near the source. The capability of the models to simulate the dispersion of an instantaneously produced cloud of particulates was illustrated to be generally within a factor of 2 over flat terrain. 19 refs., 16 figs
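
The dispersion-model results in this record are reported as the percentage of calculated/measured concentration pairs agreeing within a factor of N. That statistic is simple to implement (the concentration values below are illustrative):

```python
# Fraction of calculated/measured pairs agreeing within a factor of n,
# the agreement statistic used in the MATHEW/ADPIC evaluations above.
# Zero or negative concentrations are excluded from the comparison.
def within_factor(calc, meas, n):
    pairs = [(c, m) for c, m in zip(calc, meas) if c > 0 and m > 0]
    hits = sum(1 for c, m in pairs if max(c / m, m / c) <= n)
    return hits / len(pairs)

calculated = [1.0, 2.0, 10.0, 0.5, 4.0]  # illustrative values
measured   = [1.5, 9.0, 3.0,  0.4, 3.0]
print(within_factor(calculated, measured, 2))  # 0.6
print(within_factor(calculated, measured, 5))  # 1.0
```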

  14. Cleanliness Policy Implementation: Evaluating Retribution Model to Rise Public Satisfaction

    Science.gov (United States)

    Dailiati, Surya; Hernimawati; Prihati; Chintia Utami, Bunga

    2018-05-01

    This research addresses the principal issue that the cleanliness retribution policy has not optimally improved the local revenue (PAD) of Pekanbaru City, nor the cleanliness of the city; this was estimated to be caused by the performance of the Garden and Sanitation Department not meeting the requirements of Pekanbaru society. The research method used in this study is a mixed method with a sequential exploratory strategy. Data collection used observation, interviews, and documentation for the qualitative research, as well as questionnaires for the quantitative research. The collected data were analyzed with the interactive model of Miles and Huberman for the qualitative research and multiple regression analysis for the quantitative research. The results indicated that the model of cleanliness policy implementation that can increase the PAD of Pekanbaru City and improve people’s satisfaction divides into two parts: the evaluation model and the societal satisfaction model. The evaluation model is influenced by the criteria of effectiveness, efficiency, adequacy, equity, responsiveness, and appropriateness, while the societal satisfaction model is influenced by the variables of societal satisfaction, intentions, goals, plans, programs, and appropriateness of the cleanliness retribution collection policy.

  15. Tropical convection regimes in climate models: evaluation with satellite observations

    Science.gov (United States)

    Steiner, Andrea K.; Lackner, Bettina C.; Ringer, Mark A.

    2018-04-01

    High-quality observations are powerful tools for the evaluation of climate models towards improvement and reduction of uncertainty. Particularly at low latitudes, the most uncertain aspect lies in the representation of moist convection and interaction with dynamics, where rising motion is tied to deep convection and sinking motion to dry regimes. Since humidity is closely coupled with temperature feedbacks in the tropical troposphere, a proper representation of this region is essential. Here we demonstrate the evaluation of atmospheric climate models with satellite-based observations from Global Positioning System (GPS) radio occultation (RO), which feature high vertical resolution and accuracy in the troposphere to lower stratosphere. We focus on the representation of the vertical atmospheric structure in tropical convection regimes, defined by high updraft velocity over warm surfaces, and investigate atmospheric temperature and humidity profiles. Results reveal that some models do not fully capture convection regions, particularly over land, and only partly represent strong vertical wind classes. Models show large biases in tropical mean temperature of more than 4 K in the tropopause region and the lower stratosphere. Reasonable agreement with observations is given in mean specific humidity in the lower to mid-troposphere. In moist convection regions, models tend to underestimate moisture by 10 to 40 % over oceans, whereas in dry downdraft regions they overestimate moisture by 100 %. Our findings provide evidence that RO observations are a unique source of information, with a range of further atmospheric variables to be exploited, for the evaluation and advancement of next-generation climate models.

  16. EVALUATION OF RAINFALL-RUNOFF MODELS FOR MEDITERRANEAN SUBCATCHMENTS

    Directory of Open Access Journals (Sweden)

    A. Cilek

    2016-06-01

    Full Text Available The development and application of rainfall-runoff models have been a cornerstone of hydrological research for many decades. The amount of rainfall and its intensity and variability control the generation of runoff and the erosional processes operating at different scales. These interactions can be greatly variable in Mediterranean catchments with marked hydrological fluctuations. The aim of the study was to evaluate the performance of a rainfall-runoff model for rainfall-runoff simulation in a Mediterranean subcatchment. The Pan-European Soil Erosion Risk Assessment (PESERA), a simplified hydrological process-based approach, was used in this study to combine hydrological surface runoff factors. In total, 128 input layers derived from a data set including climate, topography, land use, crop type, planting date, and soil characteristics are required to run the model. Initial ground cover was estimated from the Landsat ETM data provided by ESA. This hydrological model was evaluated in terms of its performance in the Goksu River Watershed, Turkey, located in the Central Eastern Mediterranean Basin of Turkey. The area is approximately 2000 km2. The landscape is dominated by bare ground, agriculture, and forest. The average annual rainfall is 636.4 mm. This study has significant importance for evaluating different model performances in a complex Mediterranean basin. The results provided comprehensive insight, including the advantages and limitations of modelling approaches in the Mediterranean environment.

  17. Evaluation of articulation simulation system using artificial maxillectomy models.

    Science.gov (United States)

    Elbashti, M E; Hattori, M; Sumita, Y I; Taniguchi, H

    2015-09-01

    Acoustic evaluation is valuable for guiding the treatment of maxillofacial defects and determining the effectiveness of rehabilitation with an obturator prosthesis. Model simulations are important in terms of pre-surgical planning and pre- and post-operative speech function. This study aimed to evaluate the acoustic characteristics of voice generated by an articulation simulation system using a vocal tract model with or without artificial maxillectomy defects. More specifically, we aimed to establish a speech simulation system for maxillectomy defect models that both surgeons and maxillofacial prosthodontists can use in guiding treatment planning. Artificially simulated maxillectomy defects were prepared according to Aramany's classification (Classes I-VI) in a three-dimensional vocal tract plaster model of a subject uttering the vowel /a/. Formant and nasalance acoustic data were analysed using Computerized Speech Lab and the Nasometer, respectively. Formants and nasalance of simulated /a/ sounds were successfully detected and analysed. Values of Formants 1 and 2 for the non-defect model were 675.43 and 976.64 Hz, respectively. Median values of Formants 1 and 2 for the defect models were 634.36 and 1026.84 Hz, respectively. Nasalance was 11% in the non-defect model, whereas median nasalance was 28% in the defect models. The results suggest that an articulation simulation system can be used to help surgeons and maxillofacial prosthodontists plan post-surgical defects that will facilitate maxillofacial rehabilitation. © 2015 John Wiley & Sons Ltd.

  18. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used-car price evaluation is a catalyst for the healthy development of the used-car market. Data mining has been applied to predict used-car prices in several articles; however, little has been studied comparing different algorithms for used-car price estimation. This paper collects more than 100,000 used-car dealing records throughout China for a thorough empirical comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used-car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
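
As a dependency-free sketch of the evaluation protocol such comparisons use, the snippet below fits one-feature ordinary least squares (price vs. age) and scores it with RMSE against a predict-the-mean baseline. The data are made up, and the random forest side of the record's comparison would come from a library such as scikit-learn rather than being reimplemented here:

```python
import math

# Hedged sketch: 1-feature OLS vs. a mean baseline, scored with RMSE.
# Ages and prices are hypothetical illustration data.
def ols_fit(xs, ys):
    """Closed-form simple linear regression; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def rmse(ys, preds):
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys))

age = [1, 2, 3, 5, 8]                  # years
price = [18.0, 16.5, 14.8, 11.9, 7.5]  # hypothetical prices (10k CNY)

b, a = ols_fit(age, price)
model_rmse = rmse(price, [a + b * x for x in age])
base_rmse = rmse(price, [sum(price) / len(price)] * len(price))
print(model_rmse < base_rmse)  # True: even 1-feature OLS beats the mean
```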

  19. Hydrology model evaluation at the Hanford Nuclear Waste Facility

    International Nuclear Information System (INIS)

    1977-04-01

    One- and two-dimensional flow and contaminant transport computer models have been developed at Hanford to assess the rate and direction of contaminant movement from waste disposal sites. The primary objective of this work was to evaluate the potential improvement in accuracy that a three-dimensional model might offer over the simpler one- and two-dimensional models. INTERA's hydrology contaminant transport model was used for this evaluation. Although this study was conceptual in nature, an attempt was made to relate it as closely as possible to Hanford conditions. Two-dimensional model runs were performed over the period 1968 to 1973 using estimates of waste discharge flows, tritium concentrations, vertically averaged values of aquifer properties, and boundary conditions. The well test interpretation runs confirmed the applicability of the areal hydraulic conductivity distribution. Velocity fields and surface concentration profiles calculated by the two-dimensional and three-dimensional models show significant differences. Vertical concentration profiles calculated by the three-dimensional model show better qualitative agreement with the limited observed concentration profile data supplied by ARHCO

  20. Evaluation of burst pressure prediction models for line pipes

    International Nuclear Information System (INIS)

    Zhu, Xian-Kui; Leis, Brian N.

    2012-01-01

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487–492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.

  1. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
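
The Tresca- and von Mises-family solutions these records describe are often written, for a thin-walled, defect-free pipe of diameter D, wall thickness t, ultimate strength s_u, and strain-hardening exponent n, in the forms sketched below, with the Zhu-Leis (average shear stress) result lying between the two. Treat these expressions as a hedged sketch of the commonly cited forms, not a substitute for the paper's derivations:

```python
import math

# Hedged burst-pressure forms for thin-walled, defect-free line pipe.
# With n = 0, the Tresca form reduces to the Barlow formula P = 2*t*s_u/D.
def burst_tresca(d, t, s_u, n):
    return 0.5 ** (n + 1) * 4 * t * s_u / d

def burst_mises(d, t, s_u, n):
    return (1 / math.sqrt(3)) ** (n + 1) * 4 * t * s_u / d

def burst_zl(d, t, s_u, n):
    # average-shear-stress criterion: midway between Tresca and von Mises
    return ((0.5 + 1 / math.sqrt(3)) / 2) ** (n + 1) * 4 * t * s_u / d

d, t, s_u = 508.0, 12.7, 565.0  # mm, mm, MPa (illustrative X70-like values)
print(round(burst_tresca(d, t, s_u, 0.1), 1))
print(round(burst_zl(d, t, s_u, 0.1), 1))
print(round(burst_mises(d, t, s_u, 0.1), 1))
```

The ordering Tresca < Zhu-Leis < von Mises holds for any positive n, which matches the papers' observation that the two families bracket the measured burst pressures.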

  2. Obs4MIPS: Satellite Observations for Model Evaluation

    Science.gov (United States)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2017-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to do evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs has updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available and being developed for the CMIP6 experiments.

  3. Mathematical modelling of ultrasonic non-destructive evaluation

    Directory of Open Access Journals (Sweden)

    Larissa Ju Fradkin

    2001-01-01

    Full Text Available High-frequency asymptotics have been used at our Centre to develop codes for modelling pulse propagation and scattering in the near-field of the ultrasonic transducers used in NDE (Non-Destructive Evaluation, particularly of walls of nuclear reactors. The codes are hundreds of times faster than the direct numerical codes but no less accurate.

  4. The use of modeling in the economic evaluation of vaccines

    NARCIS (Netherlands)

    Bos, Jasper M; Alphen, Loek van; Postma, Maarten J

    2002-01-01

    As a consequence of the increased role of pharmacoeconomics in policy-making, economic evaluations are performed at more and more early stages in the development of a therapeutic. This implies the development of models to assess the future impact of an intervention and to account for the level of

  5. Modelling Emotional and Attitudinal Evaluations of Major Sponsors

    DEFF Research Database (Denmark)

    Martensen, Anne; Hansen, Flemming

    2004-01-01

    The paper reports findings from a larger study of sponsors and their relationship to sponsored parties. In the present reporting, the focus is on sponsors. Rather than evaluating such sponsorships in traditional effect hierarchical terms, a conceptual Sponsor Value Model is specified as a structural...

  6. Evaluation of a stratiform cloud parameterization for general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States); McCaa, J. [Univ. of Washington, Seattle, WA (United States)

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  7. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  8. Quantitative Comparison Between Crowd Models for Evacuation Planning and Evaluation

    NARCIS (Netherlands)

    Viswanathan, V.; Lee, C.E.; Lees, M.H.; Cheong, S.A.; Sloot, P.M.A.

    2014-01-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we

  9. Evaluation of models generated via hybrid evolutionary algorithms ...

    African Journals Online (AJOL)

    2016-04-02

    Apr 2, 2016 ... Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis ... evolutionary algorithms (HEA) proved to be highly applicable to the hypertrophic reservoirs of South Africa. .... discovered and optimised using a large-scale parallel computational device and relevant soft-.

  10. Evaluating Modeling Sessions Using the Analytic Hierarchy Process

    NARCIS (Netherlands)

    Ssebuggwawo, D.; Hoppenbrouwers, S.J.B.A.; Proper, H.A.; Persson, A.; Stirna, J.

    2008-01-01

    In this paper, which is methodological in nature, we propose to use an established method from the field of Operations Research, the Analytic Hierarchy Process (AHP), in the integrated, stakeholder-oriented evaluation of enterprise modeling sessions: their language, process, tool (medium), and

  11. Frontier models for evaluating environmental efficiency: an overview

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Wall, A.

    2014-01-01

    Our aim in this paper is to provide a succinct overview of frontier-based models used to evaluate environmental efficiency, with a special emphasis on agricultural activity. We begin by providing a brief, up-to-date review of the main approaches used to measure environmental efficiency, with

  12. Evaluation of black carbon estimations in global aerosol models

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2009-11-01

    Full Text Available We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and the Ozone Monitoring Instrument (OMI), and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with column AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.4 and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model

  13. [Animal models of autoimmune prostatitis and their evaluation criteria].

    Science.gov (United States)

    Shen, Jia-ming; Lu, Jin-chun; Yao, Bing

    2016-03-01

    Chronic prostatitis is a highly prevalent disease of unclear etiology. Research shows that autoimmune reaction is one cause of the condition. An effective animal model can greatly help in understanding the pathogenesis and in finding proper diagnostic and therapeutic strategies for the disease. Currently used autoimmune prostatitis-related animal models include those of age-dependent spontaneous prostatitis, autoimmune regulator-dependent spontaneous prostatitis, self antigen-induced prostatitis, and steroid-induced prostatitis. Whether an animal model of autoimmune prostatitis is successfully established can be evaluated mainly from five aspects: histology, morphology, specific antigens, inflammatory factors, and pain intensity.

  14. EcoMark: Evaluating Models of Vehicular Environmental Impact

    DEFF Research Database (Denmark)

    Guo, Chenjuan; Ma, Mike; Yang, Bin

    2012-01-01

    The reduction of greenhouse gas (GHG) emissions from transportation is essential for achieving politically agreed upon emissions reduction targets that aim to combat global climate change. So-called eco-routing and eco-driving are able to substantially reduce GHG emissions caused by vehicular... ...the vehicle travels in. We develop an evaluation framework, called EcoMark, for such environmental impact models. In addition, we survey all eleven state-of-the-art impact models known to us. To gain insight into the capabilities of the models and to understand the effectiveness of the EcoMark, we apply...

  15. Evaluation of candidate geomagnetic field models for IGRF-12

    DEFF Research Database (Denmark)

    Thébault, Erwan; Finlay, Chris; Alken, Patrick

    2015-01-01

    Background: The 12th revision of the International Geomagnetic Reference Field (IGRF) was issued in December 2014 by the International Association of Geomagnetism and Aeronomy (IAGA) Division V Working Group V-MOD (http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html). This revision comprises new spherical... ...by the British Geological Survey (UK), DTU Space (Denmark), ISTerre (France), IZMIRAN (Russia), NOAA/NGDC (USA), GFZ Potsdam (Germany), NASA/GSFC (USA), IPGP (France), LPG Nantes (France), and ETH Zurich (Switzerland). Each candidate model was carefully evaluated and compared to all other models and a mean model...

  16. Indicators to support the dynamic evaluation of air quality models

    Science.gov (United States)

    Thunis, P.; Clappier, A.

    2014-12-01

    Air quality models are useful tools for the assessment and forecast of pollutant concentrations in the atmosphere. Most of the evaluation process relies on the “operational phase”, in other words the comparison of model results with available measurements, which provides insight into the model's capability to reproduce measured concentrations for a given application. But one of the key advantages of air quality models lies in their ability to assess the impact of precursor emission reductions on air quality levels. Models are then used in a dynamic mode (i.e., the response to a change in a given model input) for which evaluation of the model performances becomes a challenge. The objective of this work is to propose common indicators and diagrams to facilitate the understanding of model responses to emission changes when models are to be used for policy support. These indicators are shown to be useful to retrieve information on the magnitude of the locally produced impacts of emission reductions on concentrations with respect to the “external to the domain” contribution, but also to identify, distinguish and quantify impacts arising from different factors (different precursors). In addition, information about the robustness of the model results is provided. As such, these indicators might prove useful as a first screening methodology to assess the feasibility of a given action, as well as to prioritize the factors on which to act for increased efficiency. Finally, all indicators are made dimensionless to facilitate the comparison of results obtained with different models, different resolutions, or on different geographical areas.

  17. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    Full Text Available As port performance is becoming correlative to national competitiveness, the issue of port performance evaluation has gained significant attention. Port performance can simply be indicated by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.), as well as by the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop related policies for making the port more effective and efficient. However, the evaluation is frequently conducted with a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is further developed using MATLAB and Simulink based on queuing theory.
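
    The abstract's microsimulation is built in MATLAB/Simulink; as an illustration of the queuing-theory core, here is a minimal single-berth sketch in Python. The arrival and service rates and the single-berth (M/M/1) assumption are hypothetical, not taken from the paper.

```python
import random

def simulate_berth(lam, mu, n_ships, seed=42):
    """Single-berth port modeled as an M/M/1 queue: Poisson ship arrivals
    (rate lam per day), exponential berth service (rate mu per day).
    Returns the mean waiting time in the queue."""
    rng = random.Random(seed)
    t_arrival = 0.0      # arrival time of the current ship
    berth_free = 0.0     # time at which the berth next becomes free
    total_wait = 0.0
    for _ in range(n_ships):
        t_arrival += rng.expovariate(lam)
        start = max(t_arrival, berth_free)   # wait if the berth is occupied
        total_wait += start - t_arrival
        berth_free = start + rng.expovariate(mu)
    return total_wait / n_ships

lam, mu = 0.8, 1.0                       # ships/day arriving vs. serviceable
wq_sim = simulate_berth(lam, mu, 200_000)
wq_analytic = lam / (mu * (mu - lam))    # M/M/1 result: Wq = rho/(mu - lam)
print(round(wq_sim, 2), wq_analytic)
```

    Replacing the exponential draws with distributions fitted to actual arrival and service records is exactly the kind of data-driven variation the stochastic model above advocates.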

  18. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the Brazilian economy. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can build partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by the decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  19. Uranium resources evaluation model as an exploration tool

    International Nuclear Information System (INIS)

    Ruzicka, V.

    1976-01-01

    Evaluation of uranium resources, as conducted by the Uranium Resources Evaluation Section of the Geological Survey of Canada, comprises operations analogous with those performed during the preparatory stages of uranium exploration. The uranium resources evaluation model, simulating the estimation process, can be divided into four steps. The first step includes definition of major areas and ''unit subdivisions'' for which geological data are gathered, coded, computerized and retrieved. Selection of these areas and ''unit subdivisions'' is based on a preliminary appraisal of their favourability for uranium mineralization. The second step includes analyses of the data, definition of factors controlling uranium mineralization, classification of uranium occurrences into genetic types, and final delineation of favourable areas; this step corresponds to the selection of targets for uranium exploration. The third step includes geological field work; it is equivalent to geological reconnaissance in exploration. The fourth step comprises computation of resources; the preliminary evaluation techniques in the exploration are, as a rule, analogous with the simplest methods employed in the resource evaluation. The uranium resources evaluation model can be conceptually applied for decision-making during exploration or for formulation of exploration strategy using the quantified data as weighting factors. (author)

  20. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Niculae Feleaga

    2006-04-01

    Full Text Available The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value comes from its very own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: the historical cost, the current cost, the realisable (settlement) value, and the present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The multitude of accounting evaluation models differ from one another through various degrees of relevance and reliability of accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  1. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Liliana Feleaga

    2006-06-01

    Full Text Available The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value comes from its very own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: the historical cost, the current cost, the realisable (settlement) value, and the present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The multitude of accounting evaluation models differ from one another through various degrees of relevance and reliability of accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  2. A Logic Model for Evaluating the Academic Health Department.

    Science.gov (United States)

    Erwin, Paul Campbell; McNeely, Clea S; Grubaugh, Julie H; Valentine, Jennifer; Miller, Mark D; Buchanan, Martha

    2016-01-01

    Academic Health Departments (AHDs) are collaborative partnerships between academic programs and practice settings. While case studies have informed our understanding of the development and activities of AHDs, there has been no formal published evaluation of AHDs, either singularly or collectively. Developing a framework for evaluating AHDs has potential to further aid our understanding of how these relationships may matter. In this article, we present a general theory of change, in the form of a logic model, for how AHDs impact public health at the community level. We then present a specific example of how the logic model has been customized for a specific AHD. Finally, we end with potential research questions on the AHD based on these concepts. We conclude that logic models are valuable tools, which can be used to assess the value and ultimate impact of the AHD.

  3. Lifetime-Aware Cloud Data Centers: Models and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Luca Chiaraviglio

    2016-06-01

    Full Text Available We present a model to evaluate the server lifetime in cloud data centers (DCs). In particular, when the server power level is decreased, the failure rate tends to be reduced as a consequence of the limited number of components powered on. However, the variation between the different power states triggers a failure rate increase. We therefore consider these two effects in a server lifetime model, subject to an energy-aware management policy. We then evaluate our model in a realistic case study. Our results show that the impact on the server lifetime is far from negligible. As a consequence, we argue that a lifetime-aware approach should be pursued to decide how and when to apply a power state change to a server.
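
    The two competing effects described above can be caricatured in a few lines. This is an illustrative sketch only: the linear failure-rate model, the coefficient values, and the per-transition increment are assumptions, not the authors' model.

```python
def effective_failure_rate(lambda_full, power_fraction, transitions_per_day,
                           delta_per_transition):
    """Illustrative daily failure rate: the base rate scales with the
    fraction of components powered on, while every power-state change
    adds a fixed increment (thermal/electrical cycling stress)."""
    return (lambda_full * power_fraction
            + transitions_per_day * delta_per_transition)

def mttf_years(rate_per_day):
    """Mean time to failure implied by a constant daily failure rate."""
    return 1.0 / rate_per_day / 365.0

always_on = effective_failure_rate(1e-4, 1.0, 0, 5e-6)
aggressive = effective_failure_rate(1e-4, 0.5, 20, 5e-6)  # frequent sleeps
print(round(mttf_years(always_on), 1), round(mttf_years(aggressive), 1))
```

    With these made-up numbers the aggressive policy halves the powered-on stress, but its 20 daily transitions more than cancel the gain and shorten the expected lifetime: the kind of trade-off a lifetime-aware policy must quantify.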

  4. Towards the quantitative evaluation of visual attention models.

    Science.gov (United States)

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Evaluation of a Postdischarge Call System Using the Logic Model.

    Science.gov (United States)

    Frye, Timothy C; Poe, Terri L; Wilson, Marisa L; Milligan, Gary

    2018-02-01

    This mixed-method study was conducted to evaluate a postdischarge call program for congestive heart failure patients at a major teaching hospital in the southeastern United States. The program was implemented based on the premise that it would improve patient outcomes and overall quality of life, but it had never been evaluated for effectiveness. The Logic Model was used to evaluate the input of key staff members to determine whether the outputs and results of the program matched the expectations of the organization. Interviews, online surveys, reviews of existing patient outcome data, and reviews of publicly available program marketing materials were used to ascertain current program output. After analyzing both qualitative and quantitative data from the evaluation, recommendations were made to the organization to improve the effectiveness of the program.

  6. Knowledge management: Postgraduate Alternative Evaluation Model (MAPA in Brazil

    Directory of Open Access Journals (Sweden)

    Deisy Cristina Corrêa Igarashi

    2013-07-01

    Full Text Available The Brazilian stricto sensu postgraduate programs that include master's and/or doctorate courses are evaluated by the Coordination for the Improvement of Higher Education Personnel (CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). The evaluation method used by CAPES is recognized in national and international contexts. However, several elements of the evaluation method can be improved, for example: considering the diversity, heterogeneity and specificities of the programs; reducing subjectivity; and explaining how indicators are grouped into different dimensions to generate a final result, which is the scoring level reached by a program. This study aims to analyze the evaluation process used by CAPES, presenting questions, difficulties and objections raised by researchers. From this analysis, the study proposes an alternative evaluation model for postgraduate programs (MAPA - Modelo de Avaliação para Pós-graduação Alternativo) which incorporates fuzzy logic in the result analysis to minimize the limitations identified. The MAPA was applied to three postgraduate programs, allowing: (1) better understanding of the procedures used for the evaluation; (2) identification of elements that need regulation; (3) characterization of the indicators that generate the local evaluation; and (4) support in medium- and long-term planning.
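
    The abstract does not specify how MAPA applies fuzzy logic; as a generic illustration, the sketch below maps weighted indicator scores to fuzzy program levels and defuzzifies them. The class boundaries, level names, weights, and scores are all hypothetical.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mapa_level(indicator_scores, weights):
    """Aggregate weighted 0-10 indicator scores, fuzzify into three
    hypothetical program levels, and return a crisp (defuzzified) level."""
    s = sum(w * x for w, x in zip(weights, indicator_scores)) / sum(weights)
    classes = {"low": (0.0, 2.5, 5.0),
               "medium": (2.5, 5.0, 7.5),
               "high": (5.0, 7.5, 10.0)}
    mu = {name: tri(s, a, b, c) for name, (a, b, c) in classes.items()}
    total = sum(mu.values())
    if total == 0.0:     # score falls outside every class support
        return s
    # centroid of class peaks, weighted by membership
    return sum(m * classes[name][1] for name, m in mu.items()) / total

print(mapa_level([5.0, 5.0, 5.0], [1, 1, 1]))   # squarely "medium"
```

    Partial memberships in neighbouring classes (e.g., a score of 6 is partly "medium" and partly "high") are what soften the hard cut-offs that the study criticizes in the official scoring.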

  7. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    Science.gov (United States)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes, such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency (e.g. hours, days, weeks, months, years) and time (i.e. day of the year) domains in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean oak woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve the performance of a model it is critical to first identify where and when the model fails. Only by identifying where a model fails can we improve its performance, use models as prognostic tools, and generate further hypotheses that can be tested by new experiments and measurements.

  8. Evaluation of model quality predictions in CASP9

    KAUST Repository

    Kryshtafovych, Andriy

    2011-01-01

    CASP has been assessing the state of the art in the a priori estimation of accuracy of protein structure prediction since 2006. The inclusion of the model quality assessment category in CASP contributed to a rapid development of methods in this area. In the last experiment, 46 quality assessment groups tested their approaches to estimate the accuracy of protein models as a whole and/or on a per-residue basis. We assessed the performance of these methods predominantly on the basis of the correlation between the predicted and observed quality of the models on both global and local scales. The ability of the methods to identify the models closest to the best one, to differentiate between good and bad models, and to identify well modeled regions was also analyzed. Our evaluations demonstrate that even though global quality assessment methods seem to be approaching perfection (weighted average per-target Pearson's correlation coefficients are as high as 0.97 for the best groups), there is still room for improvement. First, all top-performing methods use consensus approaches to generate quality estimates, and this strategy has its own limitations. Second, the methods that are based on the analysis of individual models lag far behind clustering techniques and need a boost in performance. The methods for estimating per-residue accuracy of models are less accurate than global quality assessment methods, with an average weighted per-model correlation coefficient in the range of 0.63-0.72 for the best 10 groups.
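
    The headline metric, a weighted average per-target Pearson correlation between predicted and observed model quality, is easy to reproduce in miniature. The weighting-by-model-count choice below is one plausible reading, not necessarily CASP's exact scheme, and the scores are invented.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def weighted_per_target(targets):
    """Average per-target Pearson r, weighting each target by the number
    of models submitted for it (an assumed weighting)."""
    num = sum(len(pred) * pearson(pred, obs) for pred, obs in targets)
    den = sum(len(pred) for pred, obs in targets)
    return num / den

# predicted vs. observed model-quality scores for two hypothetical targets
targets = [([0.9, 0.7, 0.4], [0.85, 0.72, 0.35]),
           ([0.8, 0.6], [0.75, 0.50])]
print(round(weighted_per_target(targets), 3))
```

    The same machinery applied per residue rather than per model gives the local-accuracy version of the metric, where the abstract reports the lower 0.63-0.72 range.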

  9. Evaluation of Deep Learning Models for Predicting CO2 Flux

    Science.gov (United States)

    Halem, M.; Nguyen, P.; Frankel, D.

    2017-12-01

    Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, and by the availability and complexity of interactions between physical variables such as wind and temperature and indirect variables like latent heat, sensible heat, etc. We evaluate two deep learning models, a feed-forward and a recurrent neural network, to learn how each responds to the physical measurements and to the time dependency of the measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc. for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE Atmospheric Radiation Measurement tower data; b) evaluating the impact of the choice of surface variables and model hyper-parameters on the accuracy of surface flux predictions; c) assessing the applicability of the neural network models to estimating CO2 flux using OCO-2 satellite data; d) studying the efficiency of GPU acceleration of neural network training using IBM Power AI deep learning software and packages on an IBM Minsky system.
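
    As a stand-in for the paper's feed-forward network, a minimal gradient-descent regression on synthetic "flux" data shows the basic fitting loop. The drivers, coefficients, and noise level are invented for illustration; the real models are deeper and trained on tower measurements.

```python
import random

def train_linear(samples, lr=0.05, epochs=500):
    """Fit y ~ w.x + b by batch gradient descent on squared error
    (a one-layer caricature of the paper's feed-forward model)."""
    d = len(samples[0][0])
    w, b, n = [0.0] * d, 0.0, len(samples)
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for x, y in samples:
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            for i, xi in enumerate(x):
                gw[i] += err * xi
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# synthetic "CO2 flux" driven by temperature and wind anomalies
rng = random.Random(0)
data = []
for _ in range(200):
    t, u = rng.uniform(-1, 1), rng.uniform(-1, 1)
    flux = 2.0 * t - 0.5 * u + rng.gauss(0, 0.05)   # assumed true relation
    data.append(([t, u], flux))
w, b = train_linear(data)
print([round(wi, 1) for wi in w], round(b, 2))
```

    Swapping the linear predictor for a hidden layer, or feeding lagged inputs to capture the time dependency the recurrent model exploits, changes only the model step, not this training loop.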

  10. A PRODUCTIVITY EVALUATION MODEL BASED ON INPUT AND OUTPUT ORIENTATIONS

    Directory of Open Access Journals (Sweden)

    C.O. Anyaeche

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Many productivity models evaluate either the input or the output performance using standalone techniques. This sometimes gives divergent views of the same system's results. The work reported in this article, which simultaneously evaluated productivity from both orientations, was applied to real-life data. The results showed losses in productivity (–2%) and price recovery (–8%) for the outputs; the inputs showed a productivity gain (145%) but a price recovery loss (–63%). These imply losses in product performance but a productivity gain in inputs. The loss in the price recovery of inputs indicates a problem in the pricing policy. This model is applicable in product diversification.

    AFRIKAANSE OPSOMMING (translated): Most productivity models evaluate either the input or the output performance using isolated techniques. This sometimes leads to divergent perspectives on the same system's performance. This article evaluates performance from both perspectives and uses real data. The results show a decline in productivity (–2%) and price recovery (–8%) for the outputs. The inputs show an increase in productivity (145%) but a decline in price recovery (–63%). This implies a decline in product performance, but a productivity increase in inputs. The decline in the price recovery of inputs points to a problem in the pricing policy. This model is suitable for product diversification.
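
    The input/output decomposition described above rests on the standard identity profitability = productivity x price recovery, used for example in APC-style productivity measurement; whether the paper uses exactly this formulation is an assumption, and the quantities and prices below are hypothetical.

```python
def ratios(q0, p0, q1, p1):
    """Decompose a value change between period 0 and period 1 into a
    quantity ratio (at base-period prices) and a price ratio."""
    value = (q1 * p1) / (q0 * p0)
    quantity = (q1 * p0) / (q0 * p0)   # quantity change at base prices
    return value, quantity, value / quantity

# hypothetical single-output, single-input firm
out_v, out_q, out_p = ratios(100, 10.0, 110, 10.5)   # product sold
in_v, in_q, in_p = ratios(50, 8.0, 52, 9.0)          # resource consumed

productivity = out_q / in_q       # >1: more output per unit input
price_recovery = out_p / in_p     # >1: output prices outpace input costs
profitability = out_v / in_v
print(round(productivity, 3), round(price_recovery, 3), round(profitability, 3))
```

    The sign pattern reported in the abstract (a productivity gain alongside a price recovery loss) corresponds to productivity > 1 with price_recovery < 1 in this notation: a firm producing more efficiently while its pricing fails to keep up with input costs.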

  11. A Category Based Threat Evaluation Model Using Platform Kinematics Data

    Directory of Open Access Journals (Sweden)

    Mustafa Çöçelli

    2017-08-01

    Full Text Available Command and control (C2) systems direct operators to make accurate decisions as early as possible in the stressful atmosphere of the battlefield. Powerful tools fuse various instantaneous pieces of information and bring a summary of them before the operators. Threat evaluation is one of the important fusion methods that provide this assistance to military personnel. However, C2 systems can be deprived of a valuable data source due to the absence of capable equipment. This situation has an unfavorable influence on the quality of the tactical picture before C2 operators. In this paper, we study a threat evaluation model that takes these deficiencies into account. Our method extracts the threat level of various targets mostly from their kinematics in two-dimensional space. In the meantime, classification of entities around the battlefield is unavailable. Only the category of targets is determined as a result of sensor processing, i.e. the information of whether entities belong to the air or surface environment. Hereby, the threat evaluation model consists of three fundamental steps that run separately on entities belonging to different environments: the extraction of threat assessment cues, threat selection based on Bayesian inference, and the calculation of a threat assessment rating. We have evaluated the performance of the proposed model by simulating a set of synthetic scenarios.
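    The paper's second step, threat selection based on Bayesian inference, can be illustrated with a naive discrete Bayes update over threat levels driven by kinematic cues. The cue names, priors, and likelihoods below are invented for the sketch and are not the paper's actual values.

```python
THREAT_LEVELS = ("low", "medium", "high")
prior = {"low": 0.6, "medium": 0.3, "high": 0.1}

# P(cue observed | threat level), for two kinematic cues derived in 2-D
# (fast closing speed, short closest point of approach) -- invented numbers
likelihood = {
    "closing_fast": {"low": 0.1, "medium": 0.4, "high": 0.8},
    "cpa_short":    {"low": 0.2, "medium": 0.5, "high": 0.9},
}

def posterior(cues):
    """Naive-Bayes update of the threat-level distribution given cues."""
    post = dict(prior)
    for cue in cues:
        for lvl in THREAT_LEVELS:
            post[lvl] *= likelihood[cue][lvl]
    z = sum(post.values())
    return {lvl: p / z for lvl, p in post.items()}

p = posterior(["closing_fast", "cpa_short"])
```

With both cues present, the posterior mass shifts from "low" toward "high", which is the qualitative behavior a threat ranking needs.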

  12. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    Science.gov (United States)

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.

  13. Evaluation of Medical Education virtual Program: P3 model.

    Science.gov (United States)

    Rezaee, Rita; Shokrpour, Nasrin; Boroumand, Maryam

    2016-10-01

    In e-learning, people get involved in a process, create the content (product), and make it available for virtual learners. The present study was carried out in order to evaluate the first virtual master's program in medical education at Shiraz University of Medical Sciences according to the P3 model. This is an evaluation research study with a post-test single-group design used to determine how effective this program was. All 60 students who had participated for more than one year in this virtual program and 21 experts, including teachers and directors, participated in this evaluation project. Based on the P3 e-learning model, an evaluation tool with a 5-point Likert rating scale was designed and applied to collect the descriptive data. Students reported storyboard and course design as the most desirable elements of the learning environment (2.30±0.76), but they rated technical support as the least desirable part (1.17±1.23). The presence of such a framework, and its use within the format of appropriate tools for evaluation of e-learning in universities and higher education institutes that present e-learning curricula in the country, may contribute to the efficient implementation of present and future e-learning curricula and guarantee their implementation in an appropriate way.

  14. Evaluation of medical education virtual program: P3 model

    Directory of Open Access Journals (Sweden)

    RITA REZAEE

    2016-10-01

    Full Text Available Introduction: In e-learning, people get involved in a process, create the content (product), and make it available for virtual learners. The present study was carried out in order to evaluate the first virtual master's program in medical education at Shiraz University of Medical Sciences according to the P3 model. Methods: This is an evaluation research study with a post-test single-group design used to determine how effective this program was. All 60 students who had participated for more than one year in this virtual program and 21 experts, including teachers and directors, participated in this evaluation project. Based on the P3 e-learning model, an evaluation tool with a 5-point Likert rating scale was designed and applied to collect the descriptive data. Results: Students reported storyboard and course design as the most desirable elements of the learning environment (2.30±0.76), but they rated technical support as the least desirable part (1.17±1.23). Conclusion: The presence of such a framework, and its use within the format of appropriate tools for evaluation of e-learning in universities and higher education institutes that present e-learning curricula in the country, may contribute to the efficient implementation of present and future e-learning curricula and guarantee their implementation in an appropriate way.

  15. Boussinesq Modeling of Wave Propagation and Runup over Fringing Coral Reefs, Model Evaluation Report

    National Research Council Canada - National Science Library

    Demirbilek, Zeki; Nwogu, Okey G

    2007-01-01

    This report describes evaluation of a two-dimensional Boussinesq-type wave model, BOUSS-2D, with data obtained from two laboratory experiments and two field studies at the islands of Guam and Hawaii...

  16. Modeling Tools for Drilling, Reservoir Navigation, and Formation Evaluation

    Directory of Open Access Journals (Sweden)

    Sushant Dutta

    2012-06-01

    Full Text Available The oil and gas industry routinely uses borehole tools for measuring or logging rock and fluid properties of geologic formations to locate hydrocarbons and maximize their production. Pore fluids in formations of interest are usually hydrocarbons or water. Resistivity logging is based on the fact that oil and gas have a substantially higher resistivity than water. The first resistivity log was acquired in 1927, and resistivity logging is still the foremost measurement used for drilling and evaluation. However, the acquisition and interpretation of resistivity logging data have grown in complexity over the years. Resistivity logging tools operate in a wide range of frequencies (from DC to GHz) and encounter an extremely high (several orders of magnitude) conductivity contrast between the metal mandrel of the tool and the geologic formation. Typical challenges include arbitrary angles of tool inclination, full-tensor electric and magnetic field measurements, and interpretation of complicated anisotropic formation properties. These challenges combine to form some of the most intractable computational electromagnetic problems in the world. Reliable, fast, and convenient numerical modeling of logging tool responses is critical for tool design, sensor optimization, virtual prototyping, and log data inversion. This spectrum of applications necessitates both depth and breadth of modeling software—from blazing fast one-dimensional (1-D) modeling codes to advanced three-dimensional (3-D) modeling software, and from in-house developed codes to commercial modeling packages. In this paper, with the help of several examples, we demonstrate our approach for using different modeling software to address different drilling and evaluation applications. In one example, fast 1-D modeling provides proactive geosteering information from a deep-reading azimuthal propagation resistivity measurement. In the second example, a 3-D model with multiple vertical resistive fractures

  17. Evaluating topic model interpretability from a primary care physician perspective.

    Science.gov (United States)

    Arnold, Corey W; Oh, Andrea; Chen, Shawn; Speier, William

    2016-02-01

    Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated in a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view. Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon Signed-Rank Tests for Paired Samples were used to evaluate differences between different topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann-Whitney U tests for each of the tasks. While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks. This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
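    An automatic counterpart to the human coherence judgments used in the study is a co-occurrence-based score such as UMass topic coherence. The sketch below computes it over a fabricated four-document corpus, following the common log((D(wi, wj) + 1) / D(wj)) formulation; it is shown as a representative technique, not the study's evaluation protocol.

```python
import math
from itertools import combinations

# Fabricated mini-corpus: each document is its set of distinct words
docs = [
    {"chest", "pain", "ecg"},
    {"chest", "pain", "aspirin"},
    {"ecg", "pain"},
    {"diet", "exercise"},
]

def umass_coherence(topic_words):
    """Sum of log((D(w1, w2) + 1) / D(w2)) over ordered word pairs."""
    score = 0.0
    for w1, w2 in combinations(topic_words, 2):
        d_w2 = sum(1 for d in docs if w2 in d)
        d_both = sum(1 for d in docs if w1 in d and w2 in d)
        score += math.log((d_both + 1) / d_w2)
    return score

coherent = umass_coherence(["chest", "pain", "ecg"])     # co-occurring words
incoherent = umass_coherence(["chest", "diet", "ecg"])   # with an "intruder"
```

A topic containing an intruder word scores lower, mirroring the word-intrusion task the participants performed.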

  18. Comprehensive environment-suitability evaluation model about Carya cathayensis

    International Nuclear Information System (INIS)

    Da-Sheng, W.; Li-Juan, L.; Qin-Fen, Y.

    2013-01-01

    On the relation between the suitable environment and the distribution areas of Carya cathayensis Sarg., current studies are mainly limited to qualitative descriptions and have not considered quantitative models. The objective of this study was to establish an environment-suitability evaluation model to predict potential suitable areas of C. cathayensis. Firstly, data on three factors (soil type, soil parent material, and soil thickness) were obtained from a 2-class forest resource survey, and the other factor data, which included elevation, slope, aspect, surface curvature, humidity index, and solar radiation index, were extracted from a DEM (Digital Elevation Model). Additionally, the key affecting factors were identified by PCA (Principal Component Analysis), the weights of the evaluation factors were determined by AHP (Analytic Hierarchy Process), and the quantitative classification of each single factor was determined by membership functions from fuzzy mathematics. Finally, a comprehensive environment-suitability evaluation model was established and used to predict the potential suitable areas of C. cathayensis in Daoshi Town in the study region. The results showed that 85.6% of the actual distribution areas were in the most suitable and more suitable regions and 11.5% in the generally suitable regions
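    The AHP weighting step can be sketched as extracting the principal eigenvector of a pairwise-comparison matrix, here by power iteration, and then combining the weights with fuzzy membership grades into a single suitability score. The 3x3 matrix and membership values are hypothetical and do not come from the study.

```python
# Hypothetical pairwise comparisons among three factors
# (e.g. soil type vs. elevation vs. aspect), on the usual 1-9 AHP scale
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_weights(A, iters=100):
    """Principal eigenvector of a positive matrix via power iteration,
    normalised to sum to 1 (the AHP factor weights)."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

w = ahp_weights(A)

# Combine with single-factor fuzzy membership grades in [0, 1]
membership = [0.9, 0.6, 0.4]                  # hypothetical grades for one site
suitability = sum(wi * mi for wi, mi in zip(w, membership))
```

Sites would then be binned into "most suitable" / "more suitable" / "generally suitable" classes by thresholding `suitability`.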

  19. Evaluating fugacity models for trace components in landfill gas

    Energy Technology Data Exchange (ETDEWEB)

    Shafi, Sophie [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Sweetman, Andrew [Department of Environmental Science, Lancaster University, Lancaster LA1 4YQ (United Kingdom); Hough, Rupert L. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Smith, Richard [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Rosevear, Alan [Science Group - Waste and Remediation, Environment Agency, Reading RG1 8DQ (United Kingdom); Pollard, Simon J.T. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom)]. E-mail: s.pollard@cranfield.ac.uk

    2006-12-15

    A fugacity approach was evaluated to reconcile loadings of vinyl chloride (chloroethene), benzene, 1,3-butadiene, and trichloroethylene in waste with concentrations observed in landfill gas monitoring studies. An evaluative environment derived from fictitious but realistic properties such as volume, composition, and temperature, constructed with data from the Brogborough landfill (UK) test cells, was used to test a fugacity approach to generating the source term for use in landfill gas risk assessment models (e.g. GasSim). SOILVE, a dynamic Level II model adapted here for landfills, showed the greatest utility for benzene and 1,3-butadiene, modelled under anaerobic conditions over a 10-year simulation. Modelled concentrations of these components (95 300 µg m⁻³; 43 µg m⁻³) fell within the measured ranges observed in gas from landfills (24 300-180 000 µg m⁻³; 20-70 µg m⁻³). This study highlights the need (i) for representative and time-referenced biotransformation data; (ii) to evaluate the partitioning characteristics of organic matter within waste systems; and (iii) for a better understanding of the role that the gas extraction rate (flux) plays in producing trace component concentrations in landfill gas. - Fugacity for trace components in landfill gas.
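    For readers unfamiliar with the fugacity framework the paper builds on, a Level I (equilibrium) calculation makes the mechanics concrete: all compartments share one fugacity f = M / Σ(V·Z), and compartment concentrations follow as C = Z·f. SOILVE itself is a dynamic Level II model; the compartment volumes and Z values below are invented for illustration.

```python
R = 8.314      # gas constant, J/(mol K)
T = 298.15     # temperature, K

# Compartments: volume V (m3) and fugacity capacity Z (mol/(m3 Pa)).
# Z_air is exact for an ideal gas; the others are hypothetical.
compartments = {
    "air":   {"V": 1.0e6, "Z": 1.0 / (R * T)},
    "waste": {"V": 1.0e4, "Z": 2.0e-3},   # hypothetical sorbing phase
    "water": {"V": 1.0e3, "Z": 4.0e-4},   # would follow from Henry's law
}

M = 100.0  # total moles of the trace component in the system

# Level I: a single equilibrium fugacity shared by every compartment
f = M / sum(c["V"] * c["Z"] for c in compartments.values())      # Pa
conc = {name: c["Z"] * f for name, c in compartments.items()}    # mol/m3
masses = {name: conc[name] * c["V"] for name, c in compartments.items()}
```

The Level II extension adds steady advective and reactive losses; the dynamic version used in the paper additionally tracks how the inventory M changes over time.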

  20. Evaluation of mechanistic DNB models using HCLWR CHF data

    International Nuclear Information System (INIS)

    Iwamura, Takamichi; Watanabe, Hironori; Okubo, Tsutomu; Araya, Fumimasa; Murao, Yoshio.

    1992-03-01

    The onset of departure from nucleate boiling (DNB) in light water reactors (LWRs) has generally been predicted with empirical correlations. Since these correlations have little physical basis and contain adjustable empirical constants determined by best fitting of test data, the applicable geometries and flow conditions are limited to the original experimental ranges. In order to obtain a more universal prediction method, several mechanistic DNB models based on physical approaches have been proposed in recent years. However, the predictive capabilities of mechanistic DNB models have not been verified successfully, especially for advanced LWR design purposes. In this report, typical mechanistic DNB models are reviewed and compared with critical heat flux (CHF) data for a high conversion light water reactor (HCLWR). The experiments were performed using a triangular 7-rod array with a non-uniform axial heat flux distribution. Test pressure was 16 MPa, mass velocities ranged from 800 to 3100 kg/(s·m²), and exit qualities from -0.07 to 0.19. The evaluated models are: 1) Weisman-Pei, 2) Chang-Lee, 3) Lee-Mudawwar, 4) Lin-Lee-Pei, and 5) Katto. The first two models are based on a near-wall bubble crowding model and the other three on a sublayer dryout model. The comparison with experimental data indicated that the Weisman-Pei model agreed relatively well with the CHF data. The effects of the empirical constants in each model on the CHF calculation were clarified by sensitivity studies. It was also found that the magnitudes of physical quantities obtained in the course of the calculation differed significantly between models. Therefore, microscopic observation of the onset of DNB on the heated surface is essential to clarify the DNB mechanism and establish a general mechanistic DNB model based on the physical phenomenon. (author)

  1. Degradation modeling with application to aging and maintenance effectiveness evaluations

    International Nuclear Information System (INIS)

    Samanta, P.K.; Hsu, F.; Subudhi, M.; Vesely, W.E.

    1990-01-01

    This paper describes a modeling approach to analyze component degradation and failure data to understand the aging process of components. As used here, degradation modeling is the analysis of information on component degradation in order to develop models of the process and its implications. This particular modeling focuses on the analysis of the times of component degradations, to model how the rate of degradation changes with the age of the component. The methodology presented also discusses the effectiveness of maintenance as applicable to aging evaluations. The specific applications which are performed show quantitative models of component degradation rates and component failure rates from plant-specific data. The statistical techniques which are developed and applied allow aging trends to be effectively identified in the degradation data, and in the failure data. Initial estimates of the effectiveness of maintenance in limiting degradations from becoming failures also are developed. These results are important first steps in degradation modeling, and show that degradation can be modeled to identify aging trends. 2 refs., 8 figs
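    One standard statistic for the trend question this abstract poses (does the rate of degradation occurrences change with component age?) is the Laplace test on the recorded event times. The sketch below applies it to fabricated degradation times; it is shown as a representative technique, not as the paper's actual method.

```python
import math

def laplace_u(times, T):
    """Laplace trend statistic for event times on the interval (0, T].
    u > 0 suggests an increasing occurrence rate (aging);
    u < 0 a decreasing rate; |u| > 1.645 is significant at ~5% (one-sided)."""
    n = len(times)
    return (sum(times) / n - T / 2) / (T / math.sqrt(12 * n))

# Fabricated degradation times clustering late in a 10-year window,
# i.e. the pattern expected of an aging component
aging_times = [5.0, 6.0, 7.5, 8.0, 8.8, 9.3, 9.7]
u = laplace_u(aging_times, T=10.0)
```

Degradation-to-failure effectiveness of maintenance could then be screened the same way, by comparing trends in the degradation series against trends in the failure series.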

  2. Degradation modeling with application to aging and maintenance effectiveness evaluations

    International Nuclear Information System (INIS)

    Samanta, P.K.; Vesely, W.E.; Hsu, F.; Subudhi, M.

    1991-01-01

    This paper describes a modeling approach to analyze light water reactor component degradation and failure data to understand the aging process of components. As used here, degradation modeling is the analysis of information on component degradation in order to develop models of the process and its implications. This particular modeling focuses on the analysis of the times of component degradations, to model how the rate of degradation changes with the age of the component. The methodology presented also discusses the effectiveness of maintenance as applicable to aging evaluations. The specific applications which are performed show quantitative models of component degradation rates and component failure rates from plant-specific data. The statistical techniques which are developed and applied allow aging trends to be effectively identified in the degradation data, and in the failure data. Initial estimates of the effectiveness of maintenance in limiting degradations from becoming failures also are developed. These results are important first steps in degradation modeling, and show that degradation can be modeled to identify aging trends

  3. Evaluation of Multiclass Model Observers in PET LROC Studies

    Science.gov (United States)

    Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.

    2007-02-01

    A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise

  4. Systematic review of model-based cervical screening evaluations.

    Science.gov (United States)

    Mendes, Diana; Bains, Iren; Vanni, Tazio; Jit, Mark

    2015-05-01

    Optimising population-based cervical screening policies is becoming more complex due to the expanding range of screening technologies available and the interplay with vaccine-induced changes in epidemiology. Mathematical models are increasingly being applied to assess the impact of cervical cancer screening strategies. We systematically reviewed MEDLINE®, Embase, Web of Science®, EconLit, Health Economic Evaluation Database, and The Cochrane Library databases in order to identify the mathematical models of human papillomavirus (HPV) infection and cervical cancer progression used to assess the effectiveness and/or cost-effectiveness of cervical cancer screening strategies. Key model features and conclusions relevant to decision-making were extracted. We found 153 articles meeting our eligibility criteria published up to May 2013. Most studies (72/153) evaluated the introduction of a new screening technology, with particular focus on the comparison of HPV DNA testing and cytology (n = 58). Twenty-eight of these forty analyses supported HPV DNA primary screening implementation. A few studies analysed more recent technologies - rapid HPV DNA testing (n = 3), HPV DNA self-sampling (n = 4), and genotyping (n = 1) - and were also supportive of their introduction. However, no study was found on emerging molecular markers and their potential utility in future screening programmes. Most evaluations (113/153) were based on models simulating aggregate groups of women at risk of cervical cancer over time without accounting for HPV infection transmission. Calibration to country-specific outcome data is becoming more common, but has not yet become standard practice. Models of cervical screening are increasingly used, and allow extrapolation of trial data to project the population-level health and economic impact of different screening policies. However, post-vaccination analyses have rarely incorporated transmission dynamics. Model calibration to country

  5. Tropical convection regimes in climate models: evaluation with satellite observations

    Directory of Open Access Journals (Sweden)

    A. K. Steiner

    2018-04-01

    Full Text Available High-quality observations are powerful tools for the evaluation of climate models towards improvement and reduction of uncertainty. Particularly at low latitudes, the most uncertain aspect lies in the representation of moist convection and interaction with dynamics, where rising motion is tied to deep convection and sinking motion to dry regimes. Since humidity is closely coupled with temperature feedbacks in the tropical troposphere, a proper representation of this region is essential. Here we demonstrate the evaluation of atmospheric climate models with satellite-based observations from Global Positioning System (GPS radio occultation (RO, which feature high vertical resolution and accuracy in the troposphere to lower stratosphere. We focus on the representation of the vertical atmospheric structure in tropical convection regimes, defined by high updraft velocity over warm surfaces, and investigate atmospheric temperature and humidity profiles. Results reveal that some models do not fully capture convection regions, particularly over land, and only partly represent strong vertical wind classes. Models show large biases in tropical mean temperature of more than 4 K in the tropopause region and the lower stratosphere. Reasonable agreement with observations is given in mean specific humidity in the lower to mid-troposphere. In moist convection regions, models tend to underestimate moisture by 10 to 40 % over oceans, whereas in dry downdraft regions they overestimate moisture by 100 %. Our findings provide evidence that RO observations are a unique source of information, with a range of further atmospheric variables to be exploited, for the evaluation and advancement of next-generation climate models.

  6. Evaluating the reliability of predictions made using environmental transfer models

    International Nuclear Information System (INIS)

    1989-01-01

    The development and application of mathematical models for predicting the consequences of releases of radionuclides into the environment from normal operations in the nuclear fuel cycle and in hypothetical accident conditions has increased dramatically in the last two decades. This Safety Practice publication has been prepared to provide guidance on the available methods for evaluating the reliability of environmental transfer model predictions. It provides a practical introduction of the subject and a particular emphasis has been given to worked examples in the text. It is intended to supplement existing IAEA publications on environmental assessment methodology. 60 refs, 17 figs, 12 tabs

  7. Evaluation of Differentiation Strategy in Shipping Enterprises with Simulation Model

    Science.gov (United States)

    Vaxevanou, Anthi Z.; Ferfeli, Maria V.; Damianos, Sakas P.

    2009-08-01

    The present study investigates the circumstances that prevail in European shipping enterprises, with special reference to Greek ones. This investigation is held in order to explore the potential implementation of strategies so as to create a unique competitive advantage [1]. The shipping sector is composed of enterprises that are mainly active in three areas: passenger, commercial, and naval. The main target is to create a dynamic simulation model which, with reference to the STAIR strategic model, will evaluate the differentiation strategy choice that some of the shipping enterprises have made.

  8. Evaluating Flight Crew Performance by a Bayesian Network Model

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2018-03-01

    Full Text Available Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, quantitative detailed behavior information may not be available. The present paper introduces a Bayesian network to perform flight crew performance evaluation, which permits the utilization of multidisciplinary sources of objective and subjective information, despite sparse behavioral data. In this paper, the causal factors are selected based on the analysis of 484 aviation accidents caused by human factors. Then, a network termed the Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports. The conditional probabilities are elicited by the leaky noisy-MAX model. Two ways of inference for the BN, probability prediction and probabilistic diagnosis, are used, and some interesting conclusions are drawn, which could provide data support for interventions in human error management in aviation safety.

  9. [Experimental evaluation of the spraying disinfection efficiency on dental models].

    Science.gov (United States)

    Zhang, Yi; Fu, Yuan-fei; Xu, Kan

    2013-08-01

    To evaluate the disinfecting effect of spraying a new kind of disinfectant on dental plaster models. Germ-free plaster samples, which were smeared with a bacterial mixture including Staphylococcus aureus, Escherichia coli, Saccharomyces albicans, Streptococcus mutans, and Actinomyces viscosus, were sprayed with disinfectant (CaviCide) or glutaraldehyde individually. In one group (5 minutes later) and another group (15 minutes later), the colonies were counted for statistical analysis after sampling, inoculating, and culturing, for evaluation of disinfecting efficiency. ANOVA was performed using the SPSS 12.0 software package. All sample bacteria were eradicated within 5 minutes of spraying the disinfectant (CaviCide), and effective bacterial control was retained after 15 minutes. There was a significant difference between the disinfecting efficiency of CaviCide and that of glutaraldehyde. Disinfection of dental models by spraying disinfectant (CaviCide) is quick and effective.

  10. Photovoltaic performance models: an evaluation with actual field data

    Science.gov (United States)

    TamizhMani, Govindasamy; Ishioye, John-Paul; Voropayev, Arseniy; Kang, Yi

    2008-08-01

    Prediction of energy production is crucial to the design and installation of building-integrated photovoltaic systems. This prediction should be attainable based on commonly available parameters such as system size, orientation, and tilt angle. Several commercially available as well as freely downloadable software tools exist to predict energy production. Six software models have been evaluated in this study: PV Watts, PVsyst, MAUI, Clean Power Estimator, Solar Advisor Model (SAM), and RETScreen. The evaluation was done by comparing the monthly, seasonal, and annual predictions with the actual field data obtained over a one-year period on a large number of residential PV systems ranging between 2 and 3 kWdc. All the systems are located in Arizona, within the Phoenix metropolitan area, which lies at latitude 33° North and longitude 112° West, and all are connected to the electrical grid.
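    The comparison described above reduces to scoring each tool's predictions against metered energy. A minimal sketch with two common error metrics, mean bias error (MBE) and mean absolute percentage error (MAPE), follows; the monthly figures are fabricated, not the study's data.

```python
# Fabricated six months of metered vs. predicted energy (kWh) for one tool
measured  = [310, 335, 400, 430, 460, 450]
predicted = [300, 350, 390, 445, 455, 470]

def mbe(pred, meas):
    """Mean bias error: positive means the tool over-predicts on average."""
    return sum(p - m for p, m in zip(pred, meas)) / len(meas)

def mape(pred, meas):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(p - m) / m for p, m in zip(pred, meas)) / len(meas)

bias = mbe(predicted, measured)
err_pct = mape(predicted, measured)
```

Repeating this per tool and per aggregation window (monthly, seasonal, annual) yields the kind of ranking the study reports.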

  11. [Evaluation of national prevention campaigns against AIDS: analysis model].

    Science.gov (United States)

    Hausser, D; Lehmann, P; Dubois, F; Gutzwiller, F

    1987-01-01

    The evaluation of the "Stop-Aids" campaign is based upon a model of behaviour modification (McAlister) which includes the communication theory of McGuire and the social learning theory of Bandura. Using this model, it is possible to define key variables that are used to measure the impact of the campaign. Process evaluation allows identification of multipliers that reinforce and confirm the initial message of prevention (source) thereby encouraging behaviour modifications that are likely to reduce the transmission of HIV (condom use, no sharing of injection material, monogamous relationship, etc.). Twelve studies performed by seven teams in the three linguistic areas contribute to the project. A synthesis of these results will be performed by the IUMSP.

  12. A risk evaluation model using on-site meteorological data

    International Nuclear Information System (INIS)

    Kang, C.S.

    1979-01-01

    A model is considered in order to evaluate the potential risk from a nuclear facility by directly combining the on-site meteorological data. The model is used to evaluate the environmental consequences of routine releases during normal plant operation as well as of postulated accidental releases. The doses to individuals and the risks to the population at large are also analyzed in conjunction with the design of rad-waste management and safety systems. It is observed that the conventional analysis, which is done in two separate, unconnected phases of release and atmospheric dispersion, tends to result in unnecessary over-design of the systems because of the high resultant doses calculated by multiplying two extreme values. (author)
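    The abstract's point, that multiplying two independently conservative extremes inflates the calculated dose, rests on the usual chain dose = release × (χ/Q) × intake × dose coefficient. The sketch below evaluates that chain once with invented but plausible values; none of them come from the paper.

```python
# Hypothetical inputs for an inhalation dose estimate
release_rate = 1.0e6      # Bq/s, routine release rate
chi_over_q   = 1.0e-6     # s/m3, annual-average dispersion factor (chi/Q)
breathing    = 2.3e-4     # m3/s, adult breathing rate
dcf          = 1.0e-9     # Sv/Bq, inhalation dose coefficient
seconds_yr   = 3.156e7    # s per year

# Annual individual dose from the multiplicative chain (Sv)
annual_dose = release_rate * chi_over_q * breathing * dcf * seconds_yr
```

Using the joint distribution of release and dispersion instead of the product of their separate worst cases is exactly the correction the model in the abstract proposes.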

  13. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    Science.gov (United States)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada, and Puerto Rico. In addition, we investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  14. Evaluation of fish models of soluble epoxide hydrolase inhibition.

    OpenAIRE

    Newman, J W; Denton, D L; Morisseau, C; Koger, C S; Wheelock, C E; Hinton, D E; Hammock, B D

    2001-01-01

    Substituted ureas and carbamates are mechanistic inhibitors of the soluble epoxide hydrolase (sEH). We screened a set of chemicals containing these functionalities in larval fathead minnow (Pimephales promelas) and embryo/larval golden medaka (Oryzias latipes) models to evaluate the utility of these systems for investigating sEH inhibition in vivo. Both fathead minnow and medaka sEHs were functionally similar to the tested mammalian orthologs (murine and human) with respect to substrate hydrol...

  15. Evaluating transport in the WRF model along the California coast

    OpenAIRE

    C. E. Yver; H. D. Graven; D. D. Lucas; P. J. Cameron-Smith; R. F. Keeling; R. F. Weiss

    2013-01-01

    This paper presents a step in the development of a top-down method to complement the bottom-up inventories of halocarbon emissions in California using high frequency observations, forward simulations and inverse methods. The Scripps Institution of Oceanography high-frequency atmospheric halocarbon measurement sites are located along the California coast and therefore the evaluation of transport in the chosen Weather Research and Forecasting (WRF) model at these sites is crucial fo...

  16. Evaluating transport in the WRF model along the California coast

    OpenAIRE

    C. Yver; H. Graven; D. D. Lucas; P. Cameron-Smith; R. Keeling; R. Weiss

    2012-01-01

    This paper presents a step in the development of a top-down method to complement the bottom-up inventories of halocarbon emissions in California using high frequency observations, forward simulations and inverse methods. The Scripps Institution of Oceanography high-frequency atmospheric halocarbon measurement sites are located along the California coast and therefore the evaluation of transport in the chosen Weather Research and Forecasting (WRF) model at these sites is crucial for inverse mo...

  17. PERFORMANCE EVALUATION OF EMPIRICAL MODELS FOR VENTED LEAN HYDROGEN EXPLOSIONS

    OpenAIRE

    Anubhav Sinha; Vendra C. Madhav Rao; Jennifer X. Wen

    2017-01-01

    Explosion venting is a method commonly used to prevent or minimize damage to an enclosure caused by an accidental explosion. An estimate of the maximum overpressure generated through an explosion is an important parameter in the design of the vents. Various engineering models (Bauwens et al., 2012, Molkov and Bragin, 2015) and European (EN 14994) and USA standards (NFPA 68) are available to predict such overpressure. In this study, their performance is evaluated using a number of published exper...

  18. Hybrid Model for e-Learning Quality Evaluation

    Directory of Open Access Journals (Sweden)

    Suzana M. Savic

    2012-02-01

    Full Text Available E-learning is becoming increasingly important for the competitive advantage of economic organizations and higher education institutions. Therefore, it is becoming a significant aspect of quality which has to be integrated into the management system of every organization or institution. The paper examines e-learning quality characteristics, standards, criteria and indicators and presents a multi-criteria hybrid model for e-learning quality evaluation based on the method of Analytic Hierarchy Process, trend analysis, and data comparison.
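
As a rough illustration of the Analytic Hierarchy Process step in such a hybrid model, the sketch below derives criterion weights from a pairwise comparison matrix via power iteration; the criteria and judgment values are hypothetical, not taken from the paper.

```python
# Illustrative AHP priority-vector computation: approximate the principal
# eigenvector of a positive reciprocal pairwise comparison matrix.
# The matrix values below are hypothetical examples, not from the paper.

def ahp_weights(matrix, iterations=100):
    """Return the normalized principal eigenvector (AHP priority vector)
    of a positive reciprocal matrix, found by power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        # Multiply the matrix by the current weight estimate, then renormalize
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]
    return w

# Hypothetical comparison of three e-learning quality criteria
# (content, usability, support); 1 = equal, 3/5 = more important.
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]

weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])
```

The resulting vector sums to one and ranks the criteria; in a full AHP application one would also check the consistency ratio of the judgments.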

  19. Evaluating between-pathway models with expression data.

    Science.gov (United States)

    Hescott, B J; Leiserson, M D M; Cowen, L J; Slonim, D K

    2010-03-01

    Between-pathway models (BPMs) are network motifs consisting of pairs of putative redundant pathways. In this article, we show how adding another source of high-throughput data--microarray gene expression data from knockout experiments--allows us to identify a compensatory functional relationship between genes from the two BPM pathways. We evaluate the quality of the BPMs from four different studies, and we describe how our methods might be extended to refine pathways.

  20. Opioid Abuse after Traumatic Brain Injury: Evaluation Using Rodent Models

    Science.gov (United States)

    2015-09-01

    craniotomy was cut with a trephine by hand over the right motor cortex. An injury cannula was fashioned from the hub of a female Luer-lock 20 g needle...ABSTRACT This project evaluated the effect of a moderate-level brain injury on risk for opioid abuse using preclinical models in rats. We assessed the...effect of brain injury on the rewarding effects of oxycodone in three rat self-administration procedures and found significant differences in the

  1. Doctoral Dissertation Supervision: Identification and Evaluation of Models

    Directory of Open Access Journals (Sweden)

    Ngozi Agu

    2014-01-01

    Full Text Available Doctoral research supervision is one of the major avenues for sustaining students’ satisfaction with the programme, preparing students to be independent researchers and effectively initiating students into the academic community. This work reports doctoral students’ evaluation of their various supervision models, their satisfaction with these supervision models, and their development of research-related skills. The study used a descriptive research design and was guided by three research questions and two hypotheses. A sample of 310 Ph.D. candidates drawn from a federal university in the eastern part of Nigeria was used for this study. The data generated through the questionnaire were analyzed using descriptive statistics and t-tests. Results show that the face-to-face interactive model was not only the most frequently used, but also the most widely adopted in doctoral thesis supervision, while ICT-based models were rarely used. Students supervised under the face-to-face interactive model reported being more satisfied with dissertation supervision than those operating under the face-to-face noninteractive model. However, students supervised under these two models did not differ significantly in their perceived development of research-related skills.

  2. Model Test Bed for Evaluating Wave Models and Best Practices for Resource Assessment and Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Yang, Zhaoqing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Wang, Taiping [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Gunawan, Budi [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Dallman, Ann Renee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies

    2016-03-01

    A wave model test bed is established to benchmark, test and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101 Ed. 1.0 ©2015. Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically what source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.

  3. Modelling of nutrient partitioning in growing pigs to predict their anatomical body composition. 2. Model evaluation

    NARCIS (Netherlands)

    Halas, V.; Dijkstra, J.; Babinszky, L.; Verstegen, M.W.A.; Gerrits, W.J.J.

    2004-01-01

    The objective of the present paper was to evaluate a dynamic mechanistic model for growing and fattening pigs presented in a companion paper. The model predicted the rate of protein and fat deposition (chemical composition), rate of tissue deposition (anatomical composition) and performance of pigs

  4. How Useful Are Our Models? Pre-Service and Practicing Teacher Evaluations of Technology Integration Models

    Science.gov (United States)

    Kimmons, Royce; Hall, Cassidy

    2018-01-01

    We report on a survey of K-12 teachers and teacher candidates wherein participants evaluated known models (e.g., TPACK, SAMR, RAT, TIP) and provided insight on what makes a model valuable for them in the classroom. Results indicated that: (1) technology integration should be coupled with good theory to be effective, (2) classroom experience did…

  5. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Full Text Available Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment is an achievement of modern biotechnology, successfully used in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is the most expensive treatment, and it must be administered continuously and without interruption. Since 2001, enzyme replacement therapy with Cerezyme*Genzyme was formally introduced in Bulgaria, but after some time it was interrupted for 1-2 months, and the patients' doses were not optimal. The aim of our work is to find a mathematical model for quantitative evaluation of ERT in Gaucher disease. The model was implemented in the software package "Statistika 6" using the individual data of 5-year-old children with Gaucher disease treated with Cerezyme. The output of the model makes possible a quantitative evaluation of the individual trends in the development of the disease of each child and their correlation. On the basis of these results, we might recommend suitable changes in ERT.

  6. Evaluating Learner Autonomy: A Dynamic Model with Descriptors

    Directory of Open Access Journals (Sweden)

    Maria Giovanna Tassinari

    2012-03-01

    Full Text Available Every autonomous learning process should entail an evaluation of the learner’s competencies for autonomy. The dynamic model of learner autonomy described in this paper is a tool designed to support the self-assessment and evaluation of learning competencies and to help both learners and advisors focus on relevant aspects of the learning process. The dynamic model accounts for cognitive, metacognitive, action-oriented and affective components of learner autonomy and provides descriptors of learners’ attitudes, competencies and behaviors. It is dynamic in order to allow learners to focus on their own needs and goals. The model (http://www.sprachenzentrum.fuberlin.de/v/autonomiemodell/index.html) has been validated in several workshops with experts at the Université Nancy 2, France and at the Freie Universität Berlin, Germany, and tested by students, advisors and teachers. It is currently used at the Centre for Independent Language Learning at the Freie Universität Berlin for language advising. Learners can freely choose the components they would like to assess themselves in. Their assessment is then discussed in an advising session, where the learner and the advisor can compare their perspectives, focus on single aspects of the learning process and set goals for further learning. The students’ feedback gathered in my PhD investigation shows that they are able to benefit from this evaluation; their awareness, self-reflection and decision-making in the autonomous learning process improved.

  7. ICT evaluation models and performance of medium and small enterprises

    Directory of Open Access Journals (Sweden)

    Bayaga Anass

    2014-01-01

    Full Text Available Building on prior research related to (1) the impact of information communication technology (ICT) and (2) operational risk management (ORM) in the context of medium and small enterprises (MSEs), the focus of this study was to investigate the relationship between (1) ICT operational risk management (ORM) and (2) the performance of MSEs. To this end, the research investigated evaluation models for understanding the value of ICT ORM in MSEs. Multiple regression, Repeated-Measures Analysis of Variance (RM-ANOVA) and Repeated-Measures Multivariate Analysis of Variance (RM-MANOVA) were performed. The findings revealed that only one variable made a significant percentage contribution to the level of ICT operation in MSEs, the Payback method (β = 0.410, p < .000). It may thus be inferred that the Payback method is the prominent variable explaining the variation in the level of evaluation models affecting ICT adoption within MSEs. Conclusively, in answering the two questions of (1) the degree of variability explained and (2) the predictors, the results revealed that the variable contributed approximately 88.4% of the variation in evaluation models affecting ICT adoption within MSEs. The analysis of variance also revealed that the regression coefficients were real and did not occur by chance.
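
For readers unfamiliar with the Payback method the regression singles out, the sketch below computes a simple (fractionally interpolated) payback period for an ICT investment; all figures are hypothetical.

```python
# The "Payback method" is the standard capital-budgeting rule: the time
# needed for cumulative cash inflows to recover the initial investment.
# All figures here are hypothetical examples.

def payback_period(investment, cash_flows):
    """Return the payback period in years, interpolated within the
    recovery year, or None if never recovered within the horizon."""
    cumulative = 0.0
    for year, flow in enumerate(cash_flows, start=1):
        previous = cumulative
        cumulative += flow
        if cumulative >= investment:
            # Interpolate within the year that recovers the investment
            return (year - 1) + (investment - previous) / flow
    return None

# A 10,000-unit ICT investment with year-end inflows of 3k, 4k, 5k
print(payback_period(10000.0, [3000.0, 4000.0, 5000.0]))
```

A shorter payback period is read as a lower-risk ICT investment, which is consistent with the method's role as the dominant evaluation variable in the study.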

  8. Risk assessment and remedial policy evaluation using predictive modeling

    International Nuclear Information System (INIS)

    Linkov, L.; Schell, W.R.

    1996-01-01

    As a result of nuclear industry operation and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting radionuclide fate in forest compartments after deposition as well as for evaluating the efficiency of remedial policies. The time of intervention and the radionuclide deposition profile were predicted to be crucial for remediation efficiency. Risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) leads to about a 0.004% risk of fatal cancer annually. Cost-benefit analysis for forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in environmental remediation of radionuclides and toxic metals as well as in dose reconstruction and risk assessment.

  9. Evaluation of a distributed catchment scale water balance model

    Science.gov (United States)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil-driven and atmosphere-driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well, and therefore that a linear relationship between a topographic index and the local water table depth is a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
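
A topographic index of the kind the conceptual model relies on is commonly the wetness index ln(a/tanβ), where a is the upslope contributing area per unit contour length and β the local slope; a minimal sketch, with hypothetical cell values, is:

```python
import math

# Sketch of the topographic wetness index ln(a / tan(beta)) used to
# predict saturation excess runoff. The two sample cells below are
# hypothetical, not taken from the study catchments.

def topographic_index(upslope_area_per_width, slope_radians):
    """Wetness index: large accumulation and gentle slope -> high value."""
    return math.log(upslope_area_per_width / math.tan(slope_radians))

# A gentle, high-accumulation valley cell scores higher (wetter) than a
# steep, low-accumulation hillslope cell.
wet_cell = topographic_index(500.0, math.radians(2.0))
dry_cell = topographic_index(20.0, math.radians(25.0))
print(round(wet_cell, 2), round(dry_cell, 2))
```

Under the linear assumption evaluated in the paper, cells with a higher index are predicted to have a shallower local water table and therefore to saturate first.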

  10. A Reusable Framework for Regional Climate Model Evaluation

    Science.gov (United States)

    Hart, A. F.; Goodale, C. E.; Mattmann, C. A.; Lean, P.; Kim, J.; Zimdars, P.; Waliser, D. E.; Crichton, D. J.

    2011-12-01

    Climate observations are currently obtained through a diverse network of sensors and platforms that include space-based observatories, airborne and seaborne platforms, and distributed, networked, ground-based instruments. These global observational measurements are critical inputs to the efforts of the climate modeling community and can provide a corpus of data for use in analysis and validation of climate models. The Regional Climate Model Evaluation System (RCMES) is an effort currently being undertaken to address the challenges of integrating this vast array of observational climate data into a coherent resource suitable for performing model analysis at the regional level. Developed through a collaboration between the NASA Jet Propulsion Laboratory (JPL) and the UCLA Joint Institute for Regional Earth System Science and Engineering (JIFRESSE), the RCMES uses existing open source technologies (MySQL, Apache Hadoop, and Apache OODT), to construct a scalable, parametric, geospatial data store that incorporates decades of observational data from a variety of NASA Earth science missions, as well as other sources into a consistently annotated, highly available scientific resource. By eliminating arbitrary partitions in the data (individual file boundaries, differing file formats, etc), and instead treating each individual observational measurement as a unique, geospatially referenced data point, the RCMES is capable of transforming large, heterogeneous collections of disparate observational data into a unified resource suitable for comparison to climate model output. This facility is further enhanced by the availability of a model evaluation toolkit which consists of a set of Python libraries, a RESTful web service layer, and a browser-based graphical user interface that allows for orchestration of model-to-data comparisons by composing them visually through web forms. 
This combination of tools and interfaces dramatically simplifies the process of interacting with and

  11. Evaluation of the Current State of Integrated Water Quality Modelling

    Science.gov (United States)

    Arhonditsis, G. B.; Wellen, C. C.; Ecological Modelling Laboratory

    2010-12-01

    Environmental policy and management implementation require robust methods for assessing the contribution of various point and non-point pollution sources to water quality problems as well as methods for estimating the expected and achieved compliance with the water quality goals. Water quality models have been widely used for creating the scientific basis for management decisions by providing a predictive link between restoration actions and ecosystem response. Modelling water quality and nutrient transport is challenging due to a number of constraints associated with the input data and existing knowledge gaps related to the mathematical description of landscape and in-stream biogeochemical processes. While enormous effort has been invested to make watershed models process-based and spatially distributed, there has not been a comprehensive meta-analysis of model credibility in the watershed modelling literature. In this study, we evaluate the current state of integrated water quality modelling across the range of temporal and spatial scales typically utilized. We address several common modelling questions by providing a quantitative assessment of model performance and by assessing how model performance depends on model development. The data compiled represent a heterogeneous group of modelling studies, especially with respect to complexity, spatial and temporal scales and model development objectives. Beginning from 1992, the year when Beven and Binley published their seminal paper on uncertainty analysis in hydrological modelling, and ending in 2009, we selected over 150 papers fitting a number of criteria. These criteria involved publications that: (i) employed distributed or semi-distributed modelling approaches; (ii) provided predictions on flow and nutrient concentration state variables; and (iii) reported fit to measured data. Model performance was quantified with the Nash-Sutcliffe Efficiency, the relative error, and the coefficient of determination. Further, our
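
The Nash-Sutcliffe Efficiency used to quantify model performance in the meta-analysis is NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))²; a minimal sketch, with hypothetical observed and simulated series, is:

```python
# Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model is no
# better than the mean of the observations; negative is worse than the mean.
# The two series below are hypothetical, not data from the study.

def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency of simulated vs. observed values."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [2.0, 3.5, 5.0, 4.0, 2.5]
sim = [2.2, 3.0, 4.8, 4.5, 2.4]
print(round(nse(obs, sim), 3))  # prints 0.896
```

The relative error and coefficient of determination mentioned alongside it are computed from the same residuals and are similarly simple to evaluate.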

  12. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    Science.gov (United States)

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could represent directional emissivity well, with an error of less than 0.002, and it was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside the canopy, which improved the performance of the gap-frequency-based models.

  13. Biomechanical Evaluations of Hip Fracture Using Finite Element Model that Models Individual Differences of Femur

    OpenAIRE

    田中, 英一; TANAKA, Eiichi; 山本, 創太; YAMAMOTO, Sota; 坂本, 誠二; SAKAMOTO, Seiji; 中西, 孝文; NAKANISHI, Takafumi; 原田, 敦; HARADA, Atsushi; 水野, 雅士; MIZUNO, Masashi

    2004-01-01

    This paper is concerned with an individual finite element modeling system for the femur and with biomechanical evaluations of the influences of loading conditions, bone shape and bone density on the risk of hip fracture. Firstly, a method to construct an individual finite element model from morphological parameters that represent femoral shapes was developed. Using models with different shapes constructed by this method, the effects of fall direction, posture of upper body, femur shape and bone density...

  14. Evaluation and hydrological modelization in the natural hazard prevention

    International Nuclear Information System (INIS)

    Pla Sentis, Ildefonso

    2011-01-01

    Soil degradation negatively affects the soil's functions as a base for producing food, regulating the hydrological cycle and maintaining environmental quality. All over the world soil degradation is increasing, partly due to lacks or deficiencies in the evaluation of the processes and causes of this degradation in each specific situation. The processes of soil physical degradation are manifested through several problems such as compaction, runoff, water and wind erosion, and landslides, with collateral effects both in situ and at a distance, often with disastrous consequences such as floods, landslides, sedimentation, droughts, etc. These processes are frequently associated with unfavorable changes in the hydrologic processes responsible for the water balance and soil water regimes, mainly derived from land use changes, different management practices and climatic change. The evaluation of these processes using simple simulation models, under several scenarios of climatic change, soil properties and land use and management, would allow prediction of the occurrence of these disastrous processes and, consequently, selection and application of the appropriate soil conservation practices to eliminate or reduce their effects. These simulation models require, as a base, detailed climatic information and data on hydrologic soil properties. Despite the existence of methodologies and commercial equipment (ever more sophisticated and precise) to measure the different physical and hydrological soil properties related to degradation processes, most of them are only applicable under very specific or laboratory conditions. Often indirect methodologies are used, based on relations or empirical indexes without adequate validation, which often lead to expensive mistakes in the evaluation of soil degradation processes and their effects on natural disasters. 
Preference could be given to simple field methodologies, direct and adaptable to different soil types and climates and to the sample size and the spatial variability of the

  15. Risk Evaluation of Railway Coal Transportation Network Based on Multi Level Grey Evaluation Model

    Science.gov (United States)

    Niu, Wei; Wang, Xifu

    2018-01-01

    The railway transport mode is currently the most important means of coal transportation, and China’s railway coal transportation network has become increasingly complete, but issues remain, such as insufficient capacity and some lines operating close to saturation. In this paper, the theory and methods of risk assessment, the analytic hierarchy process and a multi-level grey evaluation model are applied to the risk evaluation of the coal railway transportation network in China. An example analysis of the Shanxi railway coal transportation network is presented, with suggestions for improving the internal structure of the network and its market competitiveness.
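
A minimal sketch of the synthesis step in a multi-level grey evaluation, assuming criterion weights (e.g., from the analytic hierarchy process) and per-criterion grey evaluation vectors over risk grades; all names and numbers below are hypothetical, not from the paper.

```python
# Synthesis step of a multi-level grey evaluation: each criterion has a
# grey evaluation vector over risk grades (high/medium/low here), and the
# overall vector is their weight-averaged combination. Hypothetical data.

def combine(weights, grade_vectors):
    """Weighted combination of per-criterion grade vectors."""
    n_grades = len(grade_vectors[0])
    return [
        sum(w * vec[g] for w, vec in zip(weights, grade_vectors))
        for g in range(n_grades)
    ]

weights = [0.5, 0.3, 0.2]   # criterion weights (e.g., from AHP)
vectors = [
    [0.6, 0.3, 0.1],        # capacity saturation risk (high/med/low)
    [0.2, 0.5, 0.3],        # infrastructure condition risk
    [0.1, 0.4, 0.5],        # operational risk
]
overall = combine(weights, vectors)
print([round(x, 2) for x in overall])
```

The grade with the largest component of the overall vector gives the network's aggregate risk rating; a full grey evaluation would first derive each criterion's vector from expert scores via whitenization weight functions.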

  16. Animal Models for Evaluation of Bone Implants and Devices: Comparative Bone Structure and Common Model Uses.

    Science.gov (United States)

    Wancket, L M

    2015-09-01

    Bone implants and devices are a rapidly growing field within biomedical research, and implants have the potential to significantly improve human and animal health. Animal models play a key role in initial product development and are important components of nonclinical data included in applications for regulatory approval. Pathologists are increasingly being asked to evaluate these models at the initial developmental and nonclinical biocompatibility testing stages, and it is important to understand the relative merits and deficiencies of various species when evaluating a new material or device. This article summarizes characteristics of the most commonly used species in studies of bone implant materials, including detailed information about the relevance of a particular model to human bone physiology and pathology. Species reviewed include mice, rats, rabbits, guinea pigs, dogs, sheep, goats, and nonhuman primates. Ultimately, a comprehensive understanding of the benefits and limitations of different model species will aid in rigorously evaluating a novel bone implant material or device. © The Author(s) 2015.

  17. Evaluation of cat brain infarction model using microPET

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. J.; Lee, D. S.; Kim, J. H.; Hwang, D. W.; Jung, J. G.; Lee, M. C [College of Medicine, Seoul National University, Seoul (Korea, Republic of); Lim, S. M [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of)

    2004-07-01

    PET has some disadvantages in the imaging of small animals due to poor resolution. With the advance of microPET scanners, it is possible to image small animals; however, the image quality is not as satisfactory as that of human images. As cats have relatively large brains, cat brain imaging was superior to that of mice or rats. In this study, we established a cat brain infarction model and evaluated it and its temporal change using a microPET scanner. Two adult male cats were used. Anesthesia was done with xylazine and ketamine HCl. A burr hole was made at 1 cm right lateral to the bregma. Collagenase type IV 10 μl was injected using a 30 G needle for 5 minutes to establish the infarction model. F-18 FDG microPET (Concorde Microsystems Inc., Knoxville, TN) scans were performed 1, 11 and 32 days after the infarction. In addition, 18F-FDG PET scans were performed using a Gemini PET scanner (Philips medical systems, CA, USA) 13 and 47 days after the infarction. Two cat brain infarction models were established. The glucose metabolism of an infarction lesion improved with time. An infarction lesion was also distinguishable in the Gemini PET scan. We successfully established the cat brain infarction model and evaluated the infarcted lesion and its temporal change using the F-18 FDG microPET scanner.

  18. Evaluation of cat brain infarction model using microPET

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Jin; Lee, Dong Soo; Kim, Yun Hui; Hwang, Do Won; Kim, Jin Su; Chung, June Key; Lee, Myung Chul [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of); Lim, Sang Moo [Korea Institite of Radiological and Medical Sciences, Seoul (Korea, Republic of)

    2004-12-01

    PET has some disadvantages in the imaging of small animals due to poor resolution. With the advent of microPET scanners, it is possible to image small animals; however, the image quality is not as good as that of human images. Due to their larger brains, cat brain imaging was superior to that of mice or rats. In this study, we established a cat brain infarction model and evaluated it and its temporal change using a microPET scanner. Two adult male cats were used. Anesthesia was done with xylazine and ketamine HCl. A burr hole was made at 1 cm right lateral to the bregma. Collagenase type IV 10 μl was injected using a 30 G needle for 5 minutes to establish the infarction model. 18F-FDG microPET (Concorde Microsystems Inc., Knoxville, TN) scans were performed 1, 11 and 32 days after the infarction. In addition, 18F-FDG PET scans were performed using a human PET scanner (Gemini, Philips medical systems, CA, USA) 13 and 47 days after the infarction. Two cat brain infarction models were established. The glucose metabolism of an infarction lesion improved with time. An infarction lesion was also distinguishable in the human PET scan. We successfully established the cat brain infarction model and evaluated the infarcted lesion and its temporal change using the 18F-FDG microPET scanner.

  19. Development and Evaluation of Amino Acid Molecular Models

    Directory of Open Access Journals (Sweden)

    Aparecido R. Silva

    2007-05-01

    Full Text Available The comprehension of the structure and function of proteins has a tight relationship with the development of structural biology. However, biochemistry students usually find it difficult to visualize the structures when they use only the schematic drawings of didactic books. The representation of three-dimensional structures of some biomolecules with ludic models, built with representative units, has given students and teachers a successful experience in better visualizing the structures and correlating them to the real molecules. The present work shows the developed models and the process to produce the representative units of the main amino acids at industrial scale. The design and applicability of the representative units were discussed with many teachers and some suggestions were implemented in the models. The preliminary evaluation and the prospect of use by researchers show that the work is in the right direction. At the present stage, the models are defined, prototypes have been made and will be presented at this meeting. The moulds for the units are at the final stage of construction and trial in specialized tool facilities. The last step will consist of an effective evaluation of the didactic tool for the teaching/learning process in Structural Molecular Biology. The evaluation protocol is being elaborated, containing simple and objective questions similar to those used in research on science teaching.

  20. Evaluation of cat brain infarction model using microPET

    International Nuclear Information System (INIS)

    Lee, Jong Jin; Lee, Dong Soo; Kim, Yun Hui; Hwang, Do Won; Kim, Jin Su; Chung, June Key; Lee, Myung Chul; Lim, Sang Moo

    2004-01-01

    PET imaging of small animals has the disadvantage of poor resolution. With the advent of the microPET scanner, it is possible to image small animals; however, the image quality is not as good as that of human images. Because of the larger brain, cat brain imaging was superior to that of the mouse or rat. In this study, we established a cat brain infarction model and evaluated it and its temporal change using a microPET scanner. Two adult male cats were used. Anesthesia was performed with xylazine and ketamine HCl. A burr hole was made 1 cm right lateral to the bregma, and 10 μl of collagenase type IV was injected through a 30 G needle over 5 minutes to establish the infarction model. ¹⁸F-FDG microPET (Concorde Microsystems Inc., Knoxville, TN) scans were performed 1, 11 and 32 days after the infarction. In addition, ¹⁸F-FDG PET scans were performed using a human PET scanner (Gemini, Philips Medical Systems, CA, USA) 13 and 47 days after the infarction. Two cat brain infarction models were established. The glucose metabolism of the infarction lesion improved with time, and the lesion was also distinguishable in the human PET scan. We successfully established a cat brain infarction model and evaluated the infarcted lesion and its temporal change using the ¹⁸F-FDG microPET scanner.

  1. Evaluation of cat brain infarction model using microPET

    International Nuclear Information System (INIS)

    Lee, J. J.; Lee, D. S.; Kim, J. H.; Hwang, D. W.; Jung, J. G.; Lee, M. C; Lim, S. M

    2004-01-01

    PET imaging of small animals has the disadvantage of poor resolution. With the advance of the microPET scanner, it is possible to image small animals; however, the image quality is not as satisfactory as that of human images. As cats have a relatively large brain, cat brain imaging was superior to that of mice or rats. In this study, we established a cat brain infarction model and evaluated it and its temporal change using a microPET scanner. Two adult male cats were used. Anesthesia was performed with xylazine and ketamine HCl. A burr hole was made 1 cm right lateral to the bregma. Collagenase type IV (10 μl) was injected using a 30 G needle for 5 minutes to establish the infarction model. F-18 FDG microPET (Concorde Microsystems Inc., Knoxville, TN) scans were performed 1, 11 and 32 days after the infarction. In addition, F-18 FDG PET scans were performed using the Gemini PET scanner (Philips Medical Systems, CA, USA) 13 and 47 days after the infarction. Two cat brain infarction models were established. The glucose metabolism of the infarction lesion improved with time. The infarction lesion was also distinguishable in the Gemini PET scan. We successfully established a cat brain infarction model and evaluated the infarcted lesion and its temporal change using the F-18 FDG microPET scanner.

  2. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    Full Text Available The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of (co)variances in quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and at 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted to Legendre polynomial functions (orders ranging from 2 to 6) to determine which model had the best fit to describe the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variances and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve, from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth curve of meat quails; therefore, they should be considered in breeding evaluation processes using random regression models.
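
    As an aside on mechanics, the Legendre-polynomial basis used in such random regression models can be sketched in a few lines. The ages and weights below are invented for illustration, and only a fixed mean curve is fitted; a real genetic evaluation would fit random regression coefficients per animal.

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative ages (days) and mean body weights (g) for meat quails;
# these numbers are invented for demonstration only.
ages = np.array([1, 7, 14, 21, 28, 35, 42], dtype=float)
weights = np.array([8.0, 30.0, 70.0, 120.0, 170.0, 210.0, 235.0])

# Standardize age to [-1, 1], the natural domain of Legendre polynomials.
a_min, a_max = ages.min(), ages.max()
x = 2.0 * (ages - a_min) / (a_max - a_min) - 1.0

# Least-squares fit of a mean growth curve with a 5th-order basis
# (the study compared orders 2 to 6; this is just one choice).
coefs = legendre.legfit(x, weights, deg=5)
fitted = legendre.legval(x, coefs)

print(np.round(fitted, 1))
```

    The same standardized-age basis is what the (co)variance functions are expressed in; changing `deg` changes the polynomial order being compared.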

  3. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
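
    The "assimilated prediction using a weighted average" can be illustrated with inverse-variance weighting, a standard way to combine two unbiased predictions so that the combined error variance is reduced. The runup values and error variances below are hypothetical, not numbers from the study.

```python
import numpy as np

def inverse_variance_combine(pred_a, var_a, pred_b, var_b):
    """Combine two unbiased predictions, weighting each by the inverse
    of its error variance; the combined error variance is never larger
    than the smaller of the two input variances."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    combined = (w_a * pred_a + w_b * pred_b) / (w_a + w_b)
    combined_var = 1.0 / (w_a + w_b)
    return combined, combined_var

# Hypothetical 2% runup predictions (m): parameterized vs numerical model.
r, v = inverse_variance_combine(1.8, 0.04, 2.2, 0.16)
print(round(r, 3), round(v, 3))
```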

  4. Evaluation of the St. Lucia geothermal resource: macroeconomic models

    Energy Technology Data Exchange (ETDEWEB)

    Burris, A.E.; Trocki, L.K.; Yeamans, M.K.; Kolstad, C.D.

    1984-08-01

    A macroeconometric model describing the St. Lucian economy was developed using 1970 to 1982 economic data. Results of macroeconometric forecasts for the period 1983 through 1985 show an increase in gross domestic product (GDP) for 1983 and 1984 with a decline in 1985. The rate of population growth is expected to exceed GDP growth so that a small decline in per capita GDP will occur. We forecast that garment exports will increase, providing needed employment and foreign exchange. To obtain a longer-term but more general outlook on St. Lucia's economy, and to evaluate the benefit of geothermal energy development, we applied a nonlinear programming model. The model maximizes discounted cumulative consumption.

  5. Evaluating the performance and utility of regional climate models

    DEFF Research Database (Denmark)

    Christensen, Jens H.; Carter, Timothy R.; Rummukainen, Markku

    2007-01-01

    This special issue of Climatic Change contains a series of research articles documenting co-ordinated work carried out within a 3-year European Union project 'Prediction of Regional scenarios and Uncertainties for Defining European Climate change risks and Effects' (PRUDENCE). The main objective of the PRUDENCE project was to provide high resolution climate change scenarios for Europe at the end of the twenty-first century by means of dynamical downscaling (regional climate modelling) of global climate simulations. The first part of the issue comprises seven overarching PRUDENCE papers on: (1) the design of the model simulations and analyses of climate model performance, (2 and 3) evaluation and intercomparison of simulated climate changes, (4 and 5) specialised analyses of impacts on water resources and on other sectors including agriculture, ecosystems, energy, and transport, (6) investigation of extreme...

  6. Models of evaluation of public joint-stock property management

    Science.gov (United States)

    Yakupova, N. M.; Levachkova, S.; Absalyamova, S. G.; Kvon, G.

    2017-12-01

    The paper deals with models for evaluating the performance of both the management company and its individual subsidiaries, on the basis of a combination of elements of multi-parameter and target approaches. The article shows that, because indicators of financial and economic activity are multi-dimensional and multi-directional, the degree of achievement of objectives should be assessed with a multivariate ordinal model: a set of indicators ordered by growth rate, such that maintaining this order over a long interval of time ensures the effective functioning of the enterprise in the long term. It is shown that these models can be regarded as tools for monitoring the implementation of strategies and for justifying the effectiveness of management decisions.
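
    A minimal sketch of how such a multivariate ordinal model can be scored: indicators are assigned a normative rank by desired growth rate, and performance is the rank correlation between the normative and observed orderings. The indicator names and numbers are invented, and Spearman's rho is only one possible agreement measure.

```python
# Toy scoring of a multivariate ordinal ("dynamic normative") model.
# All indicator names and values are hypothetical.

def growth_rate(series):
    return series[-1] / series[0]

def spearman_rho(rank_a, rank_b):
    # Spearman's rho for rankings without ties.
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Normative order (fastest growth first): profit > revenue > assets > staff.
normative = ["profit", "revenue", "assets", "staff"]

observed = {
    "profit":  [100, 130],   # +30%
    "revenue": [500, 600],   # +20%
    "assets":  [900, 990],   # +10%
    "staff":   [50, 51],     # +2%
}

# Rank indicators by observed growth, descending.
by_growth = sorted(observed, key=lambda k: growth_rate(observed[k]), reverse=True)
norm_rank = [normative.index(k) for k in normative]
obs_rank = [by_growth.index(k) for k in normative]

rho = spearman_rho(norm_rank, obs_rank)
print(rho)  # 1.0 here: the observed order matches the normative order exactly
```

    A rho of 1 means the normative order is fully maintained; values below 1 flag where management performance departs from the target dynamics.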

  7. Evaluation of Thermal Margin Analysis Models for SMART

    International Nuclear Information System (INIS)

    Seo, Kyong Won; Kwon, Hyuk; Hwang, Dae Hyun

    2011-01-01

    The thermal margin of SMART is analyzed by three different methods. The first is subchannel analysis with the MATRA-S code, which provides reference data for the other two methods. The second is an on-line few-channel analysis by the FAST code, which will be integrated into SCOPS/SCOMS. The last is a single-channel module analysis used in the safety analysis. Several thermal margin analysis models for the SMART reactor core based on subchannel analysis were set up and tested. We adopted a strategy of single-stage analysis for the thermal analysis of the SMART reactor core. The model should represent the characteristics of the SMART reactor core, including the hot channel, while remaining as simple as possible so that it can be evaluated within reasonable time and cost.

  8. Performance Tuning and Evaluation of a Parallel Community Climate Model

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Worley, P.H.; Hammond, S.

    1999-11-13

    The Parallel Community Climate Model (PCCM) is a message-passing parallelization of version 2.1 of the Community Climate Model (CCM) developed by researchers at Argonne and Oak Ridge National Laboratories and at the National Center for Atmospheric Research in the early to mid 1990s. In preparation for use in the Department of Energy's Parallel Climate Model (PCM), PCCM has recently been updated with new physics routines from version 3.2 of the CCM, improvements to the parallel implementation, and ports to the SGI/Cray Research T3E and Origin 2000. We describe our experience in porting and tuning PCCM on these new platforms, evaluating the performance of different parallel algorithm options and comparing performance between the T3E and Origin 2000.

  9. ECP evaluation by water radiolysis and ECP model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hanawa, S.; Nakamura, T.; Uchida, S. [Japan Atomic Energy Agency, Tokai-mura, Ibaraki (Japan); Kus, P.; Vsolak, R.; Kysela, J. [Nuclear Research Inst. Rez plc, Rez (Czech Republic)

    2010-07-01

    In-pile ECP measurement data were evaluated by water radiolysis calculations. The data were obtained using an in-pile loop in an experimental reactor, LVR-15, at the Nuclear Research Institute (NRI) in the Czech Republic. Three types of ECP sensors, a Pt electrode, an Ag/AgCl sensor and a zirconia membrane sensor containing Ag/Ag₂O, were used at several levels of the irradiation rig at various neutron fluxes and gamma dose rates. For the water radiolysis calculation, the in-pile loop was modeled as several nodes following the design specifications and operating conditions, such as flow rates and the dose-rate distributions of neutrons and gamma rays. The concentration of chemical species along the water flow was calculated by a radiolysis code, WRAC-J. The radiolysis calculation results were transferred to an ECP model, in which anodic and cathodic current densities were calculated with a combination of an electrochemistry model and an oxide-film growth model. The measured ECP data were compared with the radiolysis/ECP calculation results, and the applicability of the radiolysis model was confirmed. In addition, an anomalous phenomenon appearing in the in-pile loop was also investigated by radiolysis calculations. (author)
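
    The mixed-potential idea behind such ECP models (find the electrode potential at which the anodic and cathodic current densities cancel) can be sketched as follows. The Tafel-type kinetics and every parameter value below are invented for illustration; they are not the kinetics or values of WRAC-J or the actual ECP model.

```python
import math

# Illustrative mixed-potential calculation: the ECP is the potential E
# at which the net current density (anodic + cathodic) vanishes.
# Exchange currents, Tafel slopes and equilibrium potentials are invented.

def net_current(E):
    # Anodic (metal oxidation) branch, Tafel form, A/cm^2.
    i_a = 1e-6 * math.exp((E - (-0.5)) / 0.060)
    # Cathodic (oxidant reduction) branch, A/cm^2.
    i_c = -1e-5 * math.exp(-(E - 0.2) / 0.120)
    return i_a + i_c

def solve_ecp(lo=-1.0, hi=1.0, tol=1e-9):
    # Bisection works because net_current is monotonically increasing in E.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if net_current(mid) > 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

ecp = solve_ecp()
print(round(ecp, 4))  # potential (V) where anodic and cathodic currents balance
```

    In a real code the cathodic branch would depend on the radiolytically produced oxidant concentrations (O₂, H₂O₂), which is exactly the coupling to the radiolysis calculation described above.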

  10. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
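
    The "additive toxicity" baseline the models were judged against can be illustrated with toxic units (strict concentration addition). The EC50 values and mixture concentrations below are invented, and this toy sums toxic units without any of the speciation or binding-site chemistry the four models actually include.

```python
# Minimal toxic-unit (concentration-addition) sketch of additive
# mixture toxicity. All EC50s and concentrations are hypothetical.

def toxic_units(conc, ec50):
    """Sum of toxic units. TU = 1 corresponds to the mixture expected
    to produce the EC50-level effect under strict concentration
    addition; TU < 1 at the observed effect level suggests
    more-than-additive toxicity, TU > 1 less-than-additive."""
    return sum(c / e for c, e in zip(conc, ec50))

ec50 = {"Cu": 10.0, "Zn": 100.0, "Cd": 1.0}   # ug/L, invented
mix  = {"Cu": 5.0,  "Zn": 30.0,  "Cd": 0.2}   # ug/L, invented

tu = toxic_units(mix.values(), [ec50[m] for m in mix])
print(round(tu, 2))  # 1.0: this mixture sits exactly at the additive effect level
```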

  11. Heating Parameter Estimation Using Coaxial Thermocouple Gages in Wind Tunnel Test Articles.

    Science.gov (United States)

    1984-12-01

    Nomenclature (recovered from OCR-damaged text): Angle of Attack; Emissivity; Parameter Vector; Measurement Vector at nth Time Point (Pn); Density; Stefan-Boltzmann Constant; Transition Matrix; Scaling...for. The radiation is modeled using the Stefan-Boltzmann law, q = εσ(U⁴ − U∞⁴) (A-9), where ε is the radiative emissivity, σ is the Stefan-Boltzmann constant, and U is the surface temperature.
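
    The Stefan-Boltzmann radiation term in (A-9) is straightforward to evaluate numerically; the emissivity and temperatures below are arbitrary example values, not data from the report.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_flux(emissivity, surface_temp, ambient_temp):
    """Net radiative heat flux q = eps * sigma * (T^4 - T_inf^4), W/m^2."""
    return emissivity * SIGMA * (surface_temp**4 - ambient_temp**4)

# Example: a surface with emissivity 0.8 at 600 K radiating to 300 K surroundings.
q = radiative_flux(0.8, 600.0, 300.0)
print(round(q, 1))  # about 5511.6 W/m^2
```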

  12. Review of models used for determining consequences of UF6 release: Model evaluation report. Volume 2

    International Nuclear Information System (INIS)

    Nair, S.K.; Chambers, D.B.; Park, S.H.; Radonjic, Z.R.; Coutts, P.T.; Lewis, C.J.; Hammonds, J.S.; Hoffman, F.O.

    1997-11-01

    Three uranium hexafluoride (UF₆)-specific models (HGSYSTEM/UF₆, Science Applications International Corporation, and RTM-96), three dense-gas models (DEGADIS, SLAB, and the Chlorine Institute methodology), and one toxic chemical model (AFTOX) are evaluated on their capabilities to simulate the chemical reactions, thermodynamics, and atmospheric dispersion of UF₆ released from accidents at nuclear fuel-cycle facilities, to support Integrated Safety Analysis, Emergency Response Planning, and Post-Accident Analysis. These models are also evaluated for user-friendliness and for quality assurance and quality control features, to ensure the validity and credibility of the results. Model performance evaluations are conducted for the three UF₆-specific models, using field data on releases of UF₆ and other heavy gases. Predictions from the HGSYSTEM/UF₆ and SAIC models are within an order of magnitude of the field data, but the SAIC model overpredicts by more than an order of magnitude for a few UF₆-specific data points. The RTM-96 model provides overpredictions within a factor of 3 for all data points beyond 400 m from the source. For one data set, however, the RTM-96 model severely underpredicts the observations within 200 m of the source. Outputs of the models are most sensitive to the meteorological parameters at large distances from the source, and to certain source-specific and meteorological parameters at distances close to the source. Specific recommendations are made to improve the applicability and usefulness of the three models and to choose a specific model to support the intended analyses. Guidance is also provided on the choice of input parameters for initial dilution, building wake effects, and distance to completion of the UF₆ reaction with water.
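
    Agreement "within an order of magnitude" or "within a factor of 3" is a standard factor-of-N screening metric for dispersion models; it can be sketched as below, with invented observation/prediction pairs.

```python
def fraction_within_factor(observed, predicted, factor):
    """Fraction of (obs, pred) pairs whose prediction lies within the
    given multiplicative factor of the observation (factor=10 for
    'within an order of magnitude'; factor=2 is the common FAC2 metric)."""
    hits = sum(
        1 for o, p in zip(observed, predicted)
        if o / factor <= p <= o * factor
    )
    return hits / len(observed)

# Hypothetical concentrations (arbitrary units).
obs  = [1.0, 2.0, 5.0, 10.0, 20.0]
pred = [0.5, 3.0, 30.0, 12.0, 250.0]

print(fraction_within_factor(obs, pred, 2))   # 0.6
print(fraction_within_factor(obs, pred, 10))  # 0.8
```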

  13. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
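
    One widely applicable propagation technique in this space, the Monte Carlo method of GUM Supplement 1, draws inputs from their assigned distributions and pushes them through the measurement model. The model Y = V/I (a resistance-style measurement) and all numbers below are illustrative assumptions, not an example taken from the GUM or from this article.

```python
import random
import statistics

# Monte Carlo propagation of input uncertainty through a measurement
# model, in the spirit of GUM Supplement 1. Illustrative only:
#   V ~ normal (as from a Type A evaluation),
#   I ~ rectangular (as from a Type B evaluation of instrument limits),
#   Y = V / I.
random.seed(12345)

N = 100_000
ys = []
for _ in range(N):
    v = random.gauss(10.0, 0.02)       # volts: mean 10, standard uncertainty 0.02
    i = random.uniform(1.998, 2.002)   # amperes: rectangular on [1.998, 2.002]
    ys.append(v / i)

y = statistics.fmean(ys)   # estimate of the measurand
u = statistics.stdev(ys)   # its standard uncertainty
print(round(y, 4), round(u, 4))
```

    The same sample can also yield a coverage interval directly from the empirical quantiles, which is the main practical advantage over first-order propagation when the model is nonlinear.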

  14. Evaluation of radiobiological effects in 3 distinct biological models

    International Nuclear Information System (INIS)

    Lemos, J.; Costa, P.; Cunha, L.; Metello, L.F.; Carvalho, A.P.; Vasconcelos, V.; Genesio, P.; Ponte, F.; Costa, P.S.; Crespo, P.

    2015-01-01

    Full text of publication follows. The present work aims at sharing the process of developing advanced biological models to study radiobiological effects. Recognizing several known limitations and difficulties of the current monolayer cellular models, as well as the increasing difficulties of using advanced biological models, our group has been developing alternative advanced biological models, namely three-dimensional cell cultures and a less-explored animal model (the zebrafish, Danio rerio, which gives access to inter-generational data while showing great genetic homology with humans). These 3 models (monolayer cellular model, three-dimensional cell cultures and zebrafish) were externally irradiated with 100 mGy, 500 mGy or 1 Gy, and the consequences of the irradiation were studied using cellular and molecular tests. Our previous experimental studies with 100 mGy external gamma irradiation of HepG2 monolayer cells showed a slight increase in the proliferation rate 24 h, 48 h and 72 h post irradiation. These results also pointed to the presence of certain bystander effects 72 h post irradiation, which motivated the more accurate analysis carried out in this work. At this stage, we remain focused on the acute biological effects. The results obtained, namely MTT and clonogenic assays for evaluating cellular metabolic activity and proliferation in the in vitro models, as well as proteomics for the evaluation of in vivo effects, will be presented, discussed and explained. Several hypotheses will be presented and defended based on the facts previously demonstrated.
This work aims at sharing the actual state and the results already available from this medium-term project, building the proof of the added value of applying these advanced models, while demonstrating the strongest and weakest points of all of them (so allowing comparison between them and informing the subsequent choice for research groups starting

  15. Evaluation of long-range transport models in NOVANA

    International Nuclear Information System (INIS)

    Frohn, L.M.; Brandt, J.; Christensen, J.H.; Geels, C.; Hertel, O.; Skjoeth, C.A.; Ellemann, T.

    2007-01-01

    The Lagrangian model ACDEP, which was applied in BOP/NOVA/NOVANA during the period 1995-2004, has been replaced by the more modern Eulerian model DEHM. The new model has a number of advantages, such as a better description of three-dimensional atmospheric transport, a larger domain, the possibility of high spatial resolution in the calculations, and a more detailed description of photochemical processes and dry deposition. Before the replacement, the results of the two models were compared and evaluated using European and Danish measurements. Calculations were performed with both models applying the same meteorological and emission input, for Europe for the year 2000 as well as for Denmark for the period 2000-2003. The European measurements applied in the present evaluation were obtained through EMEP. Using these measurements, DEHM and ACDEP were compared with respect to daily and yearly mean concentrations of ammonia (NH₃), ammonium (NH₄⁺), the sum of NH₃ and NH₄⁺ (SNH), nitric acid (HNO₃), nitrate (NO₃⁻), the sum of HNO₃ and NO₃⁻ (SNO₃), nitrogen dioxide (NO₂), ozone (O₃), sulphur dioxide (SO₂) and sulphate (SO₄²⁻), as well as the hourly mean and daily maximum concentrations of O₃. Furthermore, the daily and yearly total values of precipitation and wet deposition of NH₄⁺, NO₃⁻ and SO₄²⁻ were compared for the two models. The statistical parameters applied in the comparison are correlation, bias and fractional bias. The result of the comparison with the EMEP data is that DEHM achieves better correlation coefficients for all chemical parameters (16 parameters in total) when daily values are analysed, and for 15 out of 16 parameters when yearly values are taken into account. With respect to the fractional bias, the results obtained with DEHM are better than the corresponding results obtained with ACDEP for 11 out of 16 chemical parameters.
In general the performance of the DEHM model is at least
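
    The three statistical parameters used in the comparison (correlation, bias and fractional bias) are simple to compute. The observed/modelled values below are hypothetical, and note that the sign convention for fractional bias varies between studies; the one here takes model-minus-observation as positive.

```python
import math

def correlation(xs, ys):
    # Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def bias(model, obs):
    # Mean difference, model minus observation.
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def fractional_bias(model, obs):
    # Normalized bias; dimensionless, bounded by [-2, 2].
    mm = sum(model) / len(model)
    mo = sum(obs) / len(obs)
    return 2.0 * (mm - mo) / (mm + mo)

# Hypothetical paired concentrations (observation, model).
obs = [1.0, 2.0, 3.0, 4.0]
mod = [1.2, 2.1, 3.3, 4.6]

print(round(correlation(mod, obs), 3))
print(round(bias(mod, obs), 3))
print(round(fractional_bias(mod, obs), 3))
```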

  16. Evaluating Behavioral Economic Models of Heavy Drinking Among College Students.

    Science.gov (United States)

    Acuff, Samuel F; Soltis, Kathryn E; Dennhardt, Ashley A; Berlin, Kristoffer S; Murphy, James G

    2018-05-14

    Heavy drinking among college students is a significant public health concern that can lead to profound social and health consequences, including alcohol use disorder. Behavioral economics posits that low future orientation and high valuation of alcohol (alcohol demand), combined with deficits in alternative reinforcement, increase the likelihood of alcohol misuse (Bickel et al., 2011). Despite this, no study has examined the incremental utility of all three variables simultaneously in a comprehensive model. METHOD: The current study uses structural equation modeling to test the associations between behavioral economic variables - alcohol demand (latent), future orientation (measured with a delay discounting task and the Consideration of Future Consequences (CFC) scale), and proportionate substance-related reinforcement - and alcohol consumption and problems among 393 heavy-drinking college students. Two models are tested: 1) an iteration of the reinforcer pathology model that includes an interaction between future orientation and alcohol demand; and 2) an alternative model evaluating the interconnectedness of behavioral economic variables in predicting problematic alcohol use. RESULTS: The interaction effects in model 1 were nonsignificant. Model 2 suggests that greater alcohol demand and proportionate substance-related reinforcement are associated with greater alcohol consumption and problems. Further, CFC was associated with alcohol-related problems and lower proportionate substance-related reinforcement but was not significantly associated with alcohol consumption or alcohol demand. Finally, greater proportionate substance-related reinforcement was associated with greater alcohol demand. CONCLUSIONS: Our results support the validity of the behavioral economic reinforcer pathology model as applied to young adult heavy drinking. This article is protected by copyright. All rights reserved.

  17. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    Full Text Available There is much evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy explicit and implicit customer requirements. Internet shopping is a concept that developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically in the realm of the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivating and informing. Moreover, with the development of new technologies, credit and financial exchange facilities were built into internet websites to facilitate e-commerce. The proposed study sent a total of 200 questionnaires to the target group (teachers, students, professionals, and managers of commercial websites) and managed to collect 130 questionnaires for final evaluation. Cronbach's alpha was used to measure the reliability of the measurement instruments (questionnaires), and confirmatory factor analysis was employed to assure construct validity. In addition, the research questions were analyzed with the path analysis method to determine market selection models. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online marketing in Iran. These findings provide a consistent, targeted and holistic framework for the development of the internet market in the country.
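
    Cronbach's alpha, used above for reliability, can be computed directly from an item-by-respondent response matrix. The responses below are invented for illustration.

```python
# Cronbach's alpha for a small, invented questionnaire data set
# (rows = respondents, columns = items on a 1-5 scale).

def cronbach_alpha(rows):
    k = len(rows[0])   # number of items
    
    def variance(values):   # sample variance (n - 1 denominator)
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([r[j] for r in rows]) for j in range(k)]
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [
    [4, 4, 3, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 3))  # close to 1 because the invented items co-vary strongly
```

    Values of about 0.7 and above are conventionally taken as acceptable internal consistency for a questionnaire scale.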

  18. Comprehensive evaluation of Shahid Motahari Educational Festival during 2008-2013 based on CIPP Evaluation Model

    Directory of Open Access Journals (Sweden)

    SN Hosseini

    2014-09-01

    Full Text Available Introduction: Improving the quality of education is one of the main goals of higher education. In this regard, various solutions have been provided, such as holding the annual Shahid Motahari educational festival, in order to recognize educational achievement, develop and innovate educational processes and procedures, prepare calibration standards, and support educational accreditation processes. The aim of this study was a comprehensive evaluation of the Shahid Motahari educational festival over six periods (2008-2013) based on the CIPP evaluation model. Method: This cross-sectional study was conducted among 473 faculty members, including educational deputies and administrators, administrators and faculty members of medical education development centers, members of the scientific committee, and faculty members participating in the Shahid Motahari festival from 42 universities of medical sciences of Iran. Data were collected with self-report written questionnaires. Data were analyzed by SPSS version 20 at the α=0.05 significance level. Results: The subjects reported 75.13%, 65.33%, 64.5%, and 59.21% of the receivable scores for the process, context, input and product domains, respectively. In addition, there was a direct and significant correlation between all domains. Conclusion: According to the study findings and the correlations among the model domains, the festivals were held appropriately, and the main reason for the poor evaluation in the product domain is related to problems in the input and process domains.

  19. Model evaluation for travel distances 30 to 140 km

    International Nuclear Information System (INIS)

    Pendergast, M.M.

    1978-01-01

    The assessment of environmental effects from industrial pollution at travel distances over 50 km has been made largely without verification of the models used. Recently the Savannah River Laboratory, in cooperation with the Air Resources Laboratory of NOAA, has compiled a data base capable of providing this important verification. The data consist of (1) hourly release rates of ⁸⁵Kr from 62 m stacks near the center of the Savannah River Plant (SRP), Aiken, SC; (2) turbulence-quality meteorological data from seven 62 m towers at SRP and the 335 m WJBF-TV tower at Beech Island, SC, located 25 km from the center of the SRP; (3) National Weather Service surface and upper-air observations, including Bush Field Airport, Augusta, GA, about 30 km from the center of SRP; (4) hourly estimates of the mixing depth obtained with an acoustic sounder located on the SRP; and (5) weekly and 10-hour averaged ⁸⁵Kr air concentrations at 13 sites surrounding the SRP at distances ranging between 30 and 143 km. An earlier report has shown that annual averaged air concentrations for 1975 agree with observed values at the 13 sites within a factor of two (Pendergast, 1977). This report presents more detailed results based upon 10-hour averaged air concentrations. The models evaluated were variations of the stability wind-rose model and a segmented plume model. The meteorological models depend upon several key input variables: (1) stability category, (2) σy and σz curves, (3) wind velocity, and (4) mixing depth. Each of these key variables can be estimated by a variety of methods and averaging processes. Several of the more commonly used methods for estimating the four key variables were evaluated using calculated and measured ⁸⁵Kr air concentrations. Estimates of error were obtained for monthly and 10-hour sampling times.
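
    Both model families build on the Gaussian plume formula, which depends on exactly the listed inputs: stability-dependent σy and σz curves, wind speed, and (through additional image terms not shown here) the mixing depth. A minimal ground-reflection sketch follows; all numbers are hypothetical except the 62 m stack height taken from the text, and real σ values would come from stability-class curves rather than being fixed.

```python
import math

def gaussian_plume(Q, u, sigma_y, sigma_z, y, z, H):
    """Gaussian plume concentration (units of Q per m^3) at crosswind
    distance y (m) and height z (m), with ground reflection only.
    Q: release rate, u: wind speed (m/s), H: effective stack height (m).
    sigma_y, sigma_z: dispersion parameters (m) at the downwind distance
    of interest; a mixing-depth lid would add further image terms."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical case: 1 unit/s release from a 62 m stack, receptor at
# ground level on the plume centerline.
c = gaussian_plume(Q=1.0, u=5.0, sigma_y=300.0, sigma_z=120.0, y=0.0, z=0.0, H=62.0)
print(f"{c:.3e}")
```

    A stability wind-rose model averages expressions like this over the joint frequency of wind direction, speed and stability class, while a segmented plume model evaluates them along a trajectory broken into straight segments.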

  20. A Multiscale Model Evaluates Screening for Neoplasia in Barrett's Esophagus.

    Directory of Open Access Journals (Sweden)

    Kit Curtius

    2015-05-01

    Full Text Available Barrett's esophagus (BE) patients are routinely screened for high grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) through endoscopic screening, during which multiple esophageal tissue samples are removed for histological analysis. We propose a computational method called the multistage clonal expansion for EAC (MSCE-EAC) screening model that is used for screening BE patients in silico to evaluate the effects of biopsy sampling, diagnostic sensitivity, and treatment on disease burden. Our framework seamlessly integrates relevant cell-level processes during EAC development with a spatial screening process to provide a clinically relevant model for detecting dysplastic and malignant clones within the crypt-structured BE tissue. With this computational approach, we retain spatio-temporal information about small, unobserved tissue lesions in BE that may remain undetected during biopsy-based screening but could be detected with high-resolution imaging. This allows evaluation of the efficacy and sensitivity of current screening protocols to detect neoplasia (dysplasia and early preclinical EAC) in the esophageal lining. We demonstrate the clinical utility of this model by predicting three important clinical outcomes: (1) the probability that small cancers are missed during biopsy-based screening, (2) the potential gains in neoplasia detection probabilities if screening occurred via high-resolution tomographic imaging, and (3) the efficacy of ablative treatments that result in the curative depletion of metaplastic and neoplastic cell populations in BE in terms of the long-term impact on reducing EAC incidence.
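
    The first predicted outcome, the probability that a small lesion is missed by biopsy-based screening, has a back-of-the-envelope analogue under a far cruder assumption than the MSCE-EAC model: each biopsy independently hits the lesion with probability equal to the lesion's fractional area of the BE segment. The lesion size and biopsy counts below are invented.

```python
# Toy miss-probability calculation; not the MSCE-EAC model, which
# tracks clone geometry in crypt-structured tissue rather than
# assuming independent uniform sampling.

def miss_probability(n_biopsies, lesion_fraction):
    """P(all n biopsies miss a lesion covering the given fraction
    of the segment surface), assuming independent uniform sampling."""
    return (1.0 - lesion_fraction) ** n_biopsies

# A lesion covering 1% of the segment, for increasing biopsy counts.
for n in (4, 8, 16):
    print(n, round(miss_probability(n, 0.01), 3))
```

    Even with 16 biopsies, a 1% lesion is missed most of the time under this assumption, which is the intuition behind the paper's interest in high-resolution imaging.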