WorldWideScience

Sample records for calibration methodology performed

  1. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    Science.gov (United States)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment, and it seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. However, in some applications enhanced performance is sought at the low end of the range, and expressing the accuracy as a percent of reading should then be considered as a modeling strategy. For example, it is common to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which applies broadly to any transducer for which low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage-based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
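    The contrast the abstract draws between percent-of-full-scale and percent-of-reading accuracy can be made concrete with a short sketch (all numbers here are hypothetical, not from the paper):

```python
def uncertainty_full_scale(full_scale, pct_fs):
    """Uncertainty specified as a percent of full scale: constant at every load."""
    return full_scale * pct_fs / 100.0

def uncertainty_percent_of_reading(reading, pct_rd):
    """Uncertainty specified as a percent of reading: scales with the load."""
    return reading * pct_rd / 100.0

full_scale = 1000.0  # hypothetical 1000 N force balance
for load in (50.0, 250.0, 1000.0):
    u_fs = uncertainty_full_scale(full_scale, 0.1)    # 0.1% FS -> 1 N at every load
    u_rd = uncertainty_percent_of_reading(load, 0.1)  # 0.1% of the actual reading
    print(f"load={load:7.1f} N  u_fs={u_fs:.2f} N "
          f"({100 * u_fs / load:.1f}% of reading)  u_rd={u_rd:.3f} N")
```

    At a 50 N load, the 0.1%-of-full-scale spec already amounts to 2% of reading, which is why an instrument calibrated against a full-scale requirement degrades quickly when used well below its design capacity.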

  2. Radionuclide calibrators performance evaluation

    International Nuclear Information System (INIS)

    Mora Ramirez, E.; Zeledon Fonseca, P.; Jimenez Cordero, M.

    2008-01-01

    Radionuclide calibrators are used to accurately estimate the activity of a radiopharmaceutical prior to administration to a patient, so it is very important that this equipment meets its performance requirements. The purpose of this paper is to compare the commercially available 'Calicheck' (Calcorp Inc.), used to assess linearity, with the well-known source decay method, and also to present our results after performing several recommended quality control tests. The evaluations were carried out on Capintec CRC-15R and CRC-15β radionuclide calibrators. The tests evaluated were: high voltage, display, zero adjust, background, reproducibility, source constancy, accuracy, precision and linearity. The first six are performed in daily practice, and here we analyzed the data recorded during 2007; the last three are evaluated once a year. In the daily evaluations, the performance of both calibrators was satisfactory compared with the manufacturer's requirements. The accuracy test shows results within the ±10% allowed for a field instrument, and precision is within the ±1% allowed. For linearity, the source decay method yields a correlation coefficient of 0.9998 for both instruments, while the Calicheck yields 0.997. However, examining the percentage error, the Calicheck test ranges from 0.0% to -25.35%, and the source decay method from 0.0% to -31.05%, taking both instruments into account. The Calicheck results vary randomly, whereas with the source decay method the percentage error increases as the source activity decreases. We conclude that both devices meet their manufacturer's requirements; in the case of linearity assessed by the decay method, the percentage error grows as the source activity decreases, possibly because of the age of the equipment. (author)
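    The source decay method referred to above checks linearity by comparing calibrator readings against activities predicted by the radioactive decay law. A minimal sketch, with hypothetical numbers and a 6 h half-life similar to that of 99mTc:

```python
import math

def decayed_activity(a0, t_hours, half_life_hours):
    """Expected activity after time t, from A(t) = A0 * exp(-ln2 * t / T_half)."""
    return a0 * math.exp(-math.log(2.0) * t_hours / half_life_hours)

def percent_error(measured, expected):
    """Signed percentage error of a calibrator reading against the decay prediction."""
    return 100.0 * (measured - expected) / expected

expected = decayed_activity(370.0, 12.0, 6.0)  # two half-lives: 370 MBq -> 92.5 MBq
print(percent_error(90.0, expected))           # a 90 MBq reading is about -2.7% off
```

    Plotting such percentage errors against elapsed time is what reveals the trend the authors describe: with the decay method the error grows as the source activity decreases.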

  3. PLEIADES ABSOLUTE CALIBRATION : INFLIGHT CALIBRATION SITES AND METHODOLOGY

    Directory of Open Access Journals (Sweden)

    S. Lachérade

    2012-07-01

    In-flight calibration of space sensors once in orbit is a decisive step in being able to fulfil the mission objectives. This article presents the methods of in-flight absolute calibration processed during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring, and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (automatic calibration station) and oceans (calibration over molecular scattering), as well as extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, it is pointed out how each method is able to address one or several aspects of the calibration. We focus on how these methods complement each other in operational use, and how they help build a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.

  4. Performance standard for dose Calibrator

    CERN Document Server

    Darmawati, S

    2002-01-01

    A dose calibrator is an instrument used in hospitals to determine the activity of radionuclides for nuclear medicine purposes. The International Electrotechnical Commission (IEC) has published the IEC 1303:1994 standard, which can be used as guidance to test the performance of the instrument. This paper briefly describes the content of the document, and explains the assessment that was carried out to test instrument accuracy in Indonesia through intercomparison measurements. It is suggested that hospitals engage a medical physicist to perform the test for their dose calibrators. The need for a performance standard in the form of an Indonesian Standard is also touched upon.

  5. XRD alignment, calibration and performance

    International Nuclear Information System (INIS)

    Davy, L.

    2002-01-01

    Full text: The quality of any diffractometer system depends very much on its alignment, calibration and performance; the three subjects are closely related. Firstly, you must know how to carry out the full diffractometer alignment. XRD alignment is easy once you know how, and the presentation will show you, step by step, how to carry out the full alignment. Secondly, you need to know how to calibrate the diffractometer system; the presentation will show you how to calibrate the goniometer, detector, etc. Thirdly, you need to prove that the system is working within the manufacturer's specification; the presentation will show you how to carry out the resolution, reproducibility and linearity tests. Copyright (2002) Australian X-ray Analytical Association Inc

  6. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry

    International Nuclear Information System (INIS)

    Brady, S. L.; Kaufman, R. A.

    2012-01-01

    Purpose: The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ∼25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. Methods: The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Results: Calibration precision was measured to be better than 5%–7%, 3%–5%, and 2%–4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy −1 versus the CT scatter phantom 29.2 ± 1.0 mV cGy −1 and FIA with x-ray 29.9 ± 1.1 mV cGy −1 methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ∼3000 mV. Conclusions: The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the
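    The calibration coefficients quoted above (in mV per cGy) are, in essence, the slope of MOSFET read-out voltage change versus absorbed dose. A least-squares sketch, where the dose levels match the abstract but the millivolt values are hypothetical, not from the paper:

```python
# Absorbed dose levels of 10, 23 and 35 mGy expressed in cGy
doses_cGy = [1.0, 2.3, 3.5]
delta_mV = [26.5, 61.0, 93.5]  # hypothetical MOSFET voltage shifts

# Ordinary least-squares slope through the calibration points
n = len(doses_cGy)
sx, sy = sum(doses_cGy), sum(delta_mV)
sxx = sum(x * x for x in doses_cGy)
sxy = sum(x * y for x, y in zip(doses_cGy, delta_mV))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # calibration coefficient, mV/cGy
print(slope)  # about 26.8 mV/cGy for these hypothetical data
```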

  7. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry.

    Science.gov (United States)

    Brady, S L; Kaufman, R A

    2012-06-01

    The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy(-1) versus the CT scatter phantom 29.2 ± 1.0 mV cGy(-1) and FIA with x-ray 29.9 ± 1.1 mV cGy(-1) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ~3000 mV. The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the eventual use for phantom dosimetry, a measurement error ~12

  8. SMART performance analysis methodology

    International Nuclear Information System (INIS)

    Lim, H. S.; Kim, H. C.; Lee, D. J.

    2001-04-01

    To ensure the required and desired operation over the plant lifetime, performance analysis for the SMART NSSS design is carried out by means of analysis methodologies specified for the performance-related design basis events (PRDBEs). A PRDBE is an occurrence (event) that shall be accommodated in the design of the plant and whose consequences would be no more severe than the normal service effects of the plant equipment. The performance analysis methodology, which systematizes the methods and procedures to analyze the PRDBEs, is as follows. Based on the operation modes suited to the characteristics of the SMART NSSS, the corresponding PRDBEs and the allowable ranges of process parameters for these events are deduced. With the control logic developed for each operation mode, the system thermal-hydraulics are analyzed for the chosen PRDBEs using the system analysis code. In particular, because the system characteristics of SMART differ from those of existing commercial nuclear power plants, the operation modes, PRDBEs, control logic, and analysis code must be consistent with the SMART design. This report presents the categories of PRDBEs chosen for each operation mode, the transitions among them, and the acceptance criteria for each PRDBE. It also includes the analysis methods and procedures for each PRDBE and the concept of the control logic for each operation mode. This report, in which the overall details of SMART performance analysis are specified based on the current SMART design, is therefore intended as a guide for the detailed performance analysis

  9. Generic methodology for calibrating profiling nacelle lidars

    DEFF Research Database (Denmark)

    Borraccino, Antoine; Courtney, Michael; Wagner, Rozenn

    Improving power performance assessment by measuring at different heights has been demonstrated using ground-based profiling lidars. More recently, nacelle-mounted lidar studies have shown promising capabilities to assess power performance. Using nacelle lidars avoids the erection of expensive me...

  10. Calibration methodology for energy management system of a plug-in hybrid electric vehicle

    International Nuclear Information System (INIS)

    Duan, Benming; Wang, Qingnian; Zeng, Xiaohua; Gong, Yinsheng; Song, Dafeng; Wang, Junnian

    2017-01-01

    Highlights: • A calibration theory for the EMS is proposed. • A comprehensive evaluation indicator is constructed by the radar chart method. • An optimal Latin hypercube design algorithm is introduced to obtain training data. • An approximation model is established using an RBF neural network. • The offline calibration methodology improves actual calibration efficiency. - Abstract: This paper presents a new analytical calibration method for the energy management strategy of a plug-in hybrid electric vehicle. The method improves actual calibration efficiency and reaches a compromise among conflicting calibration requirements (e.g. emissions and economy). A comprehensive evaluation indicator covering emissions and economic performance is constructed using the radar chart method. A radial basis function (RBF) neural network model is proposed to establish a precise mapping between the control parameters and the comprehensive evaluation indicator. An optimal Latin hypercube design is introduced to obtain the experimental data used to train the RBF neural network model, and a multi-island genetic algorithm is used to solve the optimization model. Finally, an offline calibration example is conducted. The results validate the effectiveness of the proposed calibration approach in improving vehicle performance and calibration efficiency.
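    The radar chart method mentioned above aggregates several normalized sub-indicators into one scalar; a common choice is the area of the radar polygon. A minimal sketch of that aggregation (the indicator names and values are hypothetical, not from the paper):

```python
import math

def radar_area(scores):
    """Area of the radar-chart polygon for normalized scores in [0, 1].

    With n evenly spaced axes, each adjacent pair of axes spans a triangle
    of area 0.5 * r_i * r_j * sin(2*pi/n); the polygon area is their sum.
    """
    n = len(scores)
    s = math.sin(2.0 * math.pi / n)
    return 0.5 * s * sum(scores[i] * scores[(i + 1) % n] for i in range(n))

# Hypothetical normalized indicators, e.g. CO, HC, NOx emissions, fuel economy
print(radar_area([0.8, 0.7, 0.9, 0.6]))
```

    One known caveat of this aggregation is that the area depends on the ordering of the axes, so the ordering must be fixed before comparing calibrations.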

  11. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  12. Energy performance assessment methodology

    Energy Technology Data Exchange (ETDEWEB)

    Platzer, W.J. [Fraunhofer Inst. for Solar Energy Systems, Freiburg (Germany)

    2006-01-15

    The energy performance of a building is intimately connected to the energy performance of its envelope. The better we understand the relation between the quality of the envelope and the energy consumption of the building, the better we can improve both. We have to consider not only heating but all service energies related to human comfort in the building, such as cooling, ventilation and lighting as well. The complexity arising from this embracing approach is not to be underestimated. It is less and less possible to relate simple characteristic performance indicators of building envelopes (such as the U-value) to the overall energy performance. On the one hand, many more parameters (e.g. light transmittance) come into the picture, and we have to assess product quality in a multidimensional world. Secondly, buildings increasingly have to operate near a narrow optimum: for an old, badly insulated building, all solar gains are useful; for a high-performance building with very good insulation and heat recovery in the ventilation system, overheating becomes more likely. Thus we have to control the solar gains: sometimes we need high gains, sometimes low ones. And thirdly, the technology within the building as well as user patterns and interactions influence the performance of a building envelope. The aim of this project within IEA Task 27 was to improve our knowledge of this complex situation and also to give a principal approach for assessing the performance of the building envelope. The participants have contributed to this aim without pretending that we have reached the end. (au)

  13. CryoSat SIRAL Calibration and Performance

    Science.gov (United States)

    Fornari, Marco; Scagliola, Michele; Tagliani, Nicolas; Parrinello, Tommaso

    2013-04-01

    The main payload of CryoSat is a Ku-band pulse-width-limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency, making the received echoes phase coherent and suitable for azimuth processing. This makes it possible to reach an along-track resolution of about 250 meters, a significant improvement over traditional pulse-width-limited altimeters. Because SIRAL is a phase-coherent pulse-width-limited radar altimeter, a proper calibration approach has been developed, including both internal and external calibration. The internal calibration monitors the instrument impulse response and the transfer function, as in traditional altimeters; in addition, the interferometer requires a special calibration developed ad hoc for SIRAL. The external calibration is performed with a ground transponder, located in Svalbard, which receives the SIRAL signal and sends the echo back to the satellite. Internal calibration data are processed on the ground by the CryoSat Instrument Processing Facility (IPF1) and then applied to the science data. By April 2013, almost 3 years of calibration data will be available, which will be shown in this poster. The external calibration (transponder) data are processed and analyzed independently of the operational chain. The use of an external transponder has been very useful for determining instrument performance and for tuning the on-ground processor. This poster presents the transponder results in terms of range noise and datation error.

  14. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  15. Development of a calibration methodology and tests of kerma area product meters

    International Nuclear Information System (INIS)

    Costa, Nathalia Almeida

    2013-01-01

    The quantity kerma-area product (PKA) is important for establishing reference levels in diagnostic radiology examinations. This quantity can be obtained using a PKA meter. The use of such meters is essential to evaluate the radiation dose in radiological procedures and is a good indicator for ensuring that the dose limit to the patient's skin is not exceeded. These meters are often fixed to the X-ray equipment, which makes their calibration difficult. In this work, a methodology for the calibration of PKA meters was developed. The instrument used for this purpose was the Patient Dose Calibrator (PDC), developed to be used as a reference to check the calibration of PKA and air kerma meters used for patient dosimetry, and to verify the consistency and behavior of automatic exposure control systems. Because it is a new device which, in Brazil, is not yet used as reference equipment for calibration, quality control of the PDC was also performed, with characterization tests, calibration and an evaluation of its energy dependence. After the tests, it was shown that the PDC can be used as a reference instrument and that the calibration must be performed in situ, so that the characteristics of each X-ray unit on which PKA meters are used are taken into account. The calibration was then performed with portable PKA meters and on an interventional radiology unit with a fixed PKA meter. The results were good and demonstrated the need for calibration of these meters and the importance of in situ calibration with a reference meter. (author)
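    The in situ calibration described above reduces, for each meter, to determining a calibration factor against the reference instrument under the local beam conditions. A minimal sketch (the numbers are hypothetical):

```python
def pka_calibration_factor(reference_pka, meter_reading):
    """N = PKA_ref / M: the tested meter's reading is multiplied by N
    to obtain the reference kerma-area product."""
    return reference_pka / meter_reading

# Hypothetical in situ check: reference (e.g. a PDC) vs. the fixed PKA meter, Gy*cm^2
N = pka_calibration_factor(12.5, 11.8)
print(N)  # about 1.06: the fixed meter under-reads by ~6% on this unit
```

    Because the factor absorbs the beam quality and geometry of the particular X-ray unit, it is only valid for the unit on which it was measured, which is the argument for in situ calibration.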

  16. HENC performance evaluation and plutonium calibration

    International Nuclear Information System (INIS)

    Menlove, H.O.; Baca, J.; Pecos, J.M.; Davidson, D.R.; McElroy, R.D.; Brochu, D.B.

    1997-10-01

    The authors have designed a high-efficiency neutron counter (HENC) to assay the plutonium content of 200-L waste drums. The counter uses totals neutron counting, coincidence counting, and multiplicity counting to determine the plutonium mass. The HENC was developed as part of a Cooperative Research and Development Agreement between the Department of Energy and Canberra Industries. This report presents the results of the detector modifications, the performance tests, the add-a-source calibration, and the plutonium calibration at Los Alamos National Laboratory (TA-35) in 1996

  17. Calibration and performance of the CHORUS calorimeter

    International Nuclear Information System (INIS)

    Buontempo, S.; Capone, A.; Cocco, A.G.; De Pedis, D.; Di Capua, E.; Dore, U.; Ereditato, A.; Ferroni, M.; Fiorillo, G.; Loverre, P.F.; Luppi, C.; Macina, D.; Marchetti-Stasi, F.; Mazzoni, M.A.; Migliozzi, P.; Palladino, V.; Piredda, G.; Ricciardi, S.; Righini, P.P.; Saitta, B.; Santacesaria, R.; Strolin, P.; Zucchelli, P.

    1995-01-01

    A high-resolution calorimeter has been built for CHORUS, an experiment which searches for νμ→ντ oscillation in the CERN neutrino beam. The aim of the calorimeter is to measure the energy and direction of hadronic showers produced in interactions of the neutrinos in a nuclear emulsion target and to track through-going muons. It is a longitudinally segmented sampling device made of lead and scintillating fibers or strips. This detector has been exposed to beams of pions and electrons of defined momentum for calibration. The method used for energy calibration and results on the calorimeter performance are reported. (orig.)

  18. Methodology for the digital calibration of analog circuits and systems with case studies

    CERN Document Server

    Pastre, Marc

    2006-01-01

    Methodology for the Digital Calibration of Analog Circuits and Systems shows how to relax the extreme design constraints in analog circuits, allowing the realization of high-precision systems even with low-performance components. A complete methodology is proposed, and three applications are detailed. To start with, an in-depth analysis of existing compensation techniques for analog circuit imperfections is carried out. The M/2+M sub-binary digital-to-analog converter is thoroughly studied, and the use of this very low-area circuit in conjunction with a successive approximations algorithm for digital compensation is described. A complete methodology based on this compensation circuit and algorithm is then proposed. The detection and correction of analog circuit imperfections is studied, and a simulation tool allowing the transparent simulation of analog circuits with automatic compensation blocks is introduced. The first application shows how the sub-binary M/2+M structure can be employed as a conventional di...
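    The successive-approximations compensation described above can be sketched generically. The toy model below is not the book's M/2+M circuit; it only illustrates, with hypothetical numbers, why a sub-binary (radix < 2) weight set tolerates component mismatch: the weights overlap, so the smaller weights that follow can make up for an earlier decision:

```python
def successive_approximation(target, weights):
    """Greedy successive-approximation search over a descending weight set.

    Keeps a weight only if the running sum stays at or below the target;
    returns the bit decisions and the value actually reached.
    """
    code, total = [], 0.0
    for w in weights:  # weights sorted from largest to smallest
        if total + w <= target:
            total += w
            code.append(1)
        else:
            code.append(0)
    return code, total

# Hypothetical radix-1.8 weight set with 8 elements
weights = [1.8 ** k for k in range(7, -1, -1)]
code, value = successive_approximation(100.0, weights)
print(code, value)  # reaches about 99.47 of the target 100.0
```

    With exact binary (radix-2) weights there is no redundancy, so any component deviation leaves unreachable codes; with radix < 2 the overlap provides the margin that digital calibration exploits.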

  19. Comparison of calibration strategies for optical 3D scanners based on structured light projection using a new evaluation methodology

    Science.gov (United States)

    Bräuer-Burchardt, Christian; Ölsner, Sandy; Kühmstedt, Peter; Notni, Gunther

    2017-06-01

    In this paper a new evaluation strategy for optical 3D scanners based on structured light projection is introduced. It can be used to characterize the expected measurement accuracy. Compared to the procedure proposed in the VDI/VDE guidelines for optical 3D measurement systems based on area scanning, it requires less effort and is more objective. The methodology is suitable for the evaluation of sets of calibration parameters, which largely determine the quality of the measurement result. It was applied to several calibrations of a mobile stereo-camera-based optical 3D scanner. The calibrations performed followed different strategies regarding calibration bodies and the arrangement of the observed scene. The results obtained with the different calibration strategies are discussed and suggestions concerning future work in this area are given.

  20. Innovative methodology for intercomparison of radionuclide calibrators using short half-life in situ prepared radioactive sources

    International Nuclear Information System (INIS)

    Oliveira, P. A.; Santos, J. A. M.

    2014-01-01

    Purpose: An original radionuclide calibrator method for activity determination is presented. The method could be used for intercomparison surveys of short half-life radioactive sources used in nuclear medicine, such as 99mTc or most positron emission tomography radiopharmaceuticals. Methods: By evaluating the resulting net optical density (netOD) of irradiated Gafchromic XRQA2 film using a standardized scanning method, the netOD measurement can be compared with a previously determined calibration curve, and the difference between the tested radionuclide calibrator and a radionuclide calibrator used as reference device can be calculated. To estimate the total expected measurement uncertainties, a careful analysis of the methodology was performed for the case of 99mTc: reproducibility determination, scanning conditions, and possible fadeout effects. Since every factor of the activity measurement procedure can influence the final result, the method also evaluates correct syringe positioning inside the radionuclide calibrator. Results: As an alternative to sending a calibrated source to the surveyed site, which requires a relatively long half-life of the nuclide, or sending a portable calibrated radionuclide calibrator, the proposed method uses a source prepared in situ. An indirect activity determination is achieved by irradiating a radiochromic film with 99mTc under strictly controlled conditions, and calculating the cumulated activity from the initial activity and total irradiation time. The irradiated Gafchromic film and the irradiator, without the source, can then be sent to a National Metrology Institute for evaluation of the results. Conclusions: The methodology described in this paper was shown to have good potential for accurate (3%) intercomparison studies of radionuclide calibrators for 99mTc between nuclear medicine centers, without source transfer, and can easily be adapted to other short half-life radionuclides
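    Two quantities in the method above are simple to state: the net optical density of the scanned film, and the cumulated activity of the decaying source over the irradiation. A minimal sketch with hypothetical scanner and source values:

```python
import math

def net_optical_density(pv_before, pv_after):
    """netOD = log10(I0 / I) from mean scanner pixel values of the film
    before and after irradiation (darker film -> smaller pixel value)."""
    return math.log10(pv_before / pv_after)

def cumulated_activity(a0_MBq, t_irr_hours, half_life_hours):
    """Time-integrated activity over the irradiation: the integral of
    A0 * exp(-lambda * t) from 0 to t_irr, in MBq*h."""
    lam = math.log(2.0) / half_life_hours
    return a0_MBq / lam * (1.0 - math.exp(-lam * t_irr_hours))

print(net_optical_density(52000.0, 40000.0))  # about 0.114
print(cumulated_activity(500.0, 6.0, 6.0))    # about 2164 MBq*h over one half-life
```

    The calibration curve in the paper relates netOD values like the first number to cumulated activities like the second; the comparison between calibrators is then made in netOD space.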

  1. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E

    2015-01-01

    This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin-wax-moderated 3He-filled tube is obtained by using this methodology with respect to previous calibration methods. (paper)

  2. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    Energy Technology Data Exchange (ETDEWEB)

    Tarifeño-Saldivia, Ariel, E-mail: atarifeno@cchen.cl, E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo [Comisión Chilena de Energía Nuclear, Casilla 188-D, Santiago (Chile); Center for Research and Applications in Plasma Physics and Pulsed Power, P4, Santiago (Chile); Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Republica 220, Santiago (Chile); Mayer, Roberto E. [Instituto Balseiro and Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Universidad Nacional de Cuyo, San Carlos de Bariloche R8402AGP (Argentina)

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.

  3. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E.

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods
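    The core of the statistical model described above can be stated compactly: once the mean charge deposited by a single detected neutron is known from the pulse-mode calibration, the number of events in a burst follows from the total accumulated charge. A minimal sketch with hypothetical charges:

```python
def detected_events(total_charge_C, mean_charge_per_event_C):
    """N = Q / <q>: estimated number of detected neutrons in the burst,
    where <q> is the mean single-event charge from pulse-mode calibration."""
    return total_charge_C / mean_charge_per_event_C

# Hypothetical values: 2.4 nC accumulated during the burst, 0.8 pC per event
print(detected_events(2.4e-9, 0.8e-12))  # about 3000 detected events
```

    The yield then follows from the detection efficiency of the moderated counter; the full statistical model in the paper also propagates the fluctuations of the single-event charge distribution, which this one-line estimate ignores.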

  4. Postal audit methodology used to assess the performance of high dose rate brachytherapy equipment

    International Nuclear Information System (INIS)

    Morales, J.A.; Campa, R.

    1998-01-01

    This work describes results from a methodology implemented at the Secondary Laboratory for Dosimetric Calibration at the CPHR, used to check the performance of high dose rate brachytherapy equipment with caesium-137 or cobalt-60 sources

  5. Application of methodology for calibration of instruments utilized in dosimetry of high energy beams, for radiodiagnosis

    International Nuclear Information System (INIS)

    Potiens, Maria P.A.; Caldas, Linda V.E.

    2000-01-01

    The radiation qualities recommended by the IEC 1267 standard for the calibration of instruments used in diagnostic radiology measurements were established using a Neo-Diagnomax X-ray system (125 kV). The RQR radiation qualities are recommended for testing ionization chambers in non-attenuated beams, and the RQA radiation qualities in attenuated beams (behind a phantom). To apply the methodology, 6 ionization chambers commonly used in diagnostic radiology were tested. The highest energy dependence (17%) was obtained for an ionization chamber recommended for mammography beams, an application outside the scope of the X-ray system used in this work. The other ionization chambers presented good performance in terms of energy dependence (maximum of 5%), and are therefore within the limits of the international recommendations for this kind of instrument. (author)
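    The energy dependence figures quoted above (17% and 5%) express how much a chamber's response varies across the beam qualities. A minimal sketch of one common figure of merit (definitions vary; this spread-over-mean form is an assumption, not necessarily the authors' exact formula), with hypothetical normalized responses:

```python
def energy_dependence_percent(responses):
    """Maximum spread of the normalized chamber response across beam
    qualities, expressed as a percentage of the mean response.
    (One of several definitions in use; assumed here for illustration.)"""
    mean = sum(responses) / len(responses)
    return 100.0 * (max(responses) - min(responses)) / mean

# Hypothetical normalized responses of one chamber at four RQR qualities
print(energy_dependence_percent([0.99, 1.00, 1.02, 1.04]))  # about 4.9%
```

    A chamber passing the roughly 5% limit cited in the abstract would need a spread no larger than this across all qualities of its intended use.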

  6. Technology Performance Level Assessment Methodology.

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Jesse D.; Bull, Diana L; Malins, Robert Joseph; Costello, Ronan Patrick; Aurelien Babarit; Kim Nielsen; Claudio Bittencourt Ferreira; Ben Kennedy; Kathryn Dykes; Jochem Weber

    2017-04-01

    The technology performance level (TPL) assessment can be applied at all technology development stages and associated technology readiness levels (TRLs). Even, and particularly, at low TRLs the TPL assessment is very effective, as it holistically considers a wide range of WEC attributes that determine the techno-economic performance potential of the WEC farm when fully developed for commercial operation. The TPL assessment also highlights potential showstoppers at the earliest possible stage of WEC technology development. Hence, the TPL assessment identifies the technology-independent "performance requirements." To achieve a successful solution, the entirety of the performance requirements within the TPL must be considered because, in the end, all stakeholder needs must be met. The basis for performing a TPL assessment is the information provided in a dedicated format, the Technical Submission Form (TSF). The TSF requests the information from the WEC developer that is required to answer the questions posed in the TPL assessment document.

  7. VIIRS reflective solar bands on-orbit calibration and performance: a three-year update

    Science.gov (United States)

    Sun, Junqiang; Wang, Menghua

    2014-11-01

    The on-orbit calibration of the reflective solar bands (RSBs) of VIIRS and results from the analysis of the first three years of mission data are presented. The VIIRS solar diffuser (SD) and lunar calibration methodologies are discussed, and the calibration coefficients, called F-factors, are given for the RSBs in the latest revision. The coefficients derived from the two calibrations are compared and the uncertainties of the calibrations are discussed. Numerous improvements have been made, with the major gains in the calibration results coming mainly from the improved bidirectional reflectance factor (BRF) of the SD and the vignetting functions of both the SD screen and the sun-view screen. The very clean results, devoid of many previously known noises and artifacts, assure that VIIRS has performed well for the three years on orbit since launch, and in particular that the solar diffuser stability monitor (SDSM) is functioning essentially without flaws. The SD degradation, or H-factors, for the most part shows the expected decline, except for a surprising rise on day 830 lasting 75 days that signals a new degradation phenomenon. Nevertheless, the SDSM and the calibration methodology have successfully captured the SD degradation for RSB calibration. The overall improvement has the most significant and direct impact on the ocean color products, which demand high accuracy from RSB observations.

  8. Methodology for NDA performance assessment

    International Nuclear Information System (INIS)

    Cuypers, M.; Franklin, M.; Guardini, S.

    1986-01-01

    In the framework of the R&D programme of the Joint Research Centre of the Commission of the European Communities, a considerable effort is being dedicated to performance assessment of NDA techniques taking account of field conditions. Taking account of field conditions means using measurement samples of the size encountered in practice and providing training that allows inspectors to design cost-efficient verification plans for the real situations encountered in the field. Special laboratory facilities, referred to as PERLA, are being constructed for this purpose. These facilities will be used for measurement experiments and for training. In this paper, performance assessment is discussed under the headings of measurement capability and in-field effectiveness. Considerable emphasis is given to the role of method-specific measurement error models. The authors outline the advantages of giving statistical error models a sounder basis in the physical phenomenology of the measurement method.

  9. Methodology for performing surveys for fixed contamination

    International Nuclear Information System (INIS)

    Durham, J.S.; Gardner, D.L.

    1994-10-01

    This report describes a methodology for performing instrument surveys for fixed contamination that can be used to support the release of material from radiological areas, including release to controlled areas and release from radiological control. The methodology, which is based on a fast scan survey and a series of statistical, fixed measurements, meets the requirements of the U.S. Department of Energy Radiological Control Manual (RadCon Manual) (DOE 1994) and DOE Order 5400.5 (DOE 1990) for surveys for fixed contamination, and requires less time than a conventional scan survey. The confidence interval associated with the new methodology conforms to the draft national standard for surveys. The methodology presented applies only to surveys for fixed contamination. Surveys for removable contamination are not discussed, and the new methodology does not affect surveys for removable contamination.

  10. CryoSat-2 SIRAL Calibration and Performance

    Science.gov (United States)

    Fornari, M.; Scagliola, M.; Tagliani, N.; Parrinello, T.

    2012-12-01

    The main payload of CryoSat-2 is a Ku-band pulse-width limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency, making the received echoes phase coherent and suitable for azimuth processing. This allows an along-track resolution of about 250 meters to be reached, which is a significant improvement over traditional pulse-width limited altimeters. Because SIRAL is a phase-coherent pulse-width limited radar altimeter, a proper calibration approach has been developed, including both internal and external calibration. The internal calibration monitors the instrument impulse response and the transfer function, as in traditional altimeters. In addition, the interferometer requires a special calibration developed ad hoc for SIRAL. The external calibration is performed with a ground transponder, located in Svalbard, which receives the SIRAL signal and sends the echo back to the satellite. Internal calibration data are processed on the ground by the CryoSat-2 Instrument Processing Facility (IPF1) and then applied to the science data. In December 2012, two and a half years of calibration data will be available, which will be shown in this poster. The external calibration (transponder) data are processed and analyzed independently of the operational chain. The use of an external transponder has been very useful for determining instrument performance and for tuning the on-ground processor. This poster presents the transponder results in terms of range noise and datation error.

  11. Personal dosimetry service of TECNATOM: measurement system and methodology of calibration

    International Nuclear Information System (INIS)

    Marchena, Paloma; Bravo, Borja

    2008-01-01

    The implementation of a new integrated and practical working tool called ALEDIN within the Personal Dosimetry Service (PDS) of TECNATOM has harmonized the methodology for counting acquisition, detector calibration and data analysis in a friendly Windows (registered mark) environment. Knowledge of this methodology, which is the final product of an R&D project, will help users and the Regulatory Body better understand internal activity measurements in individuals, allowing more precise error identification and correction and improving the whole internal dosimetry process. The development and implementation of a new calibration system for the whole body counters using NaI(Tl) detectors, together with a new anthropometric humanoid phantom of the BOMAB type with a uniform radioactive source distribution, allow a better energy and activity calibration for different counting geometries, covering a wide range of gamma spectra from low energies (less than 100 keV) up to about 2000 keV. This new calibration methodology implied the development of an improved system for the determination of isotopic activity. The new system has been integrated in a Windows (registered mark) environment for counting acquisition and data analysis in the whole body counters (WBC), in cross connection with the INDAC software, which allows the measured activity to be interpreted as committed effective dose following all the new ICRP recommendations and dosimetric models for internal dose and bioassay measurements. (author)

  12. How Six Sigma Methodology Improved Doctors' Performance

    Science.gov (United States)

    Zafiropoulos, George

    2015-01-01

    Six Sigma methodology was used in a District General Hospital to assess the effect of the introduction of an educational programme to limit unnecessary admissions. The performance of the doctors involved in the programme was assessed. Ishikawa fishbone diagrams and the 5 S's were used initially, and Pareto analysis of their findings was performed. The results…

  13. Methodology for quantitative evaluation of diagnostic performance

    International Nuclear Information System (INIS)

    Metz, C.

    1981-01-01

    Of the various approaches that might be taken to the diagnostic performance evaluation problem, Receiver Operating Characteristic (ROC) analysis holds great promise. The methodology for a unified, objective, and meaningful approach to evaluating the usefulness of medical imaging procedures is further developed through consideration of statistical significance testing, optimal sequencing of correlated studies, and analysis of observer performance.
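
As a minimal illustration of the ROC methodology (the example rating data and function names are hypothetical, not from the paper), an empirical ROC curve and its area can be computed from rating scores assigned to diseased and healthy cases:

```python
def roc_curve(scores_diseased, scores_healthy):
    """Build ROC points (FPF, TPF) by sweeping a decision threshold over
    all observed rating scores; a higher score means more suspicious."""
    thresholds = sorted(set(scores_diseased + scores_healthy), reverse=True)
    points = [(0.0, 0.0)]
    for t in thresholds:
        tpf = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        fpf = sum(s >= t for s in scores_healthy) / len(scores_healthy)
        points.append((fpf, tpf))
    return points

def auc(points):
    """Area under the empirical ROC curve by the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Hypothetical 5-point confidence ratings from an observer study
diseased = [4, 5, 3, 5, 4, 2]
healthy = [1, 2, 3, 1, 2, 1]
print(round(auc(roc_curve(diseased, healthy)), 3))
```

An area near 1.0 indicates near-perfect discrimination; 0.5 indicates chance performance.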

  14. ATLAS Tile Calorimeter time calibration, monitoring and performance

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00075913; The ATLAS collaboration

    2016-01-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. This sampling device is made of plastic scintillating tiles alternated with iron plates, and its response is calibrated to the electromagnetic scale by means of several dedicated calibration systems. Accurate time calibration is important for energy reconstruction and non-collision background removal, as well as for specific physics analyses. The initial time calibration with so-called splash events and subsequent fine-tuning with collision data are presented. The monitoring of the time calibration with the laser system and physics collision data is discussed, as well as the corrections for sudden changes, which are applied before the recorded data are processed for physics analyses. Finally, the time resolution as measured with jets and isolated muons is presented.

  15. Performance evaluation of commercial radionuclide calibrators in Indonesian hospitals

    International Nuclear Information System (INIS)

    Candra, Hermawan; Marsoem, Pujadi; Wurdiyanto, Gatot

    2012-01-01

    Dose calibrators are among the supporting equipment in the field of nuclear medicine. At hospitals, a dose calibrator is used to measure the activity of a radiopharmaceutical before it is administered to patients. A comparison of activity measurements of 131 I and 99m Tc with dose calibrators was organized in Indonesia during 2007–2010 with the aim of obtaining information on dose calibrator performance in hospitals. Seven Indonesian hospitals participated in this comparison. The measurement results were evaluated using the E n criteria. The results presented in this paper facilitated the evaluation of dose calibrator performance at several hospitals. - Highlights: ► National comparisons of 131 I and 99m Tc radionuclides in Indonesian hospitals. ► Standardization using a Centronic IG11/A20 4πγ ionization chamber; participants used commercial radionuclide calibrators. ► Performance of radionuclide calibrators in nuclear medicine in Indonesia. ► Measurement of activity of 99m Tc and 131 I was found satisfactory.
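
The E n criteria mentioned above are a standard scoring rule for interlaboratory comparisons (as in ISO 13528): the difference between participant and reference values, divided by the combined expanded uncertainty, with |E n| <= 1 considered satisfactory. A minimal sketch, with hypothetical activity values and k = 2 expanded uncertainties:

```python
import math

def en_score(x_lab, u_lab, x_ref, u_ref, k=2):
    """E_n number: (participant - reference) / combined expanded (k=2)
    uncertainty. |E_n| <= 1 is considered satisfactory."""
    return (x_lab - x_ref) / math.sqrt((k * u_lab) ** 2 + (k * u_ref) ** 2)

# Hypothetical Tc-99m activity (MBq) with standard uncertainties
e = en_score(x_lab=745.0, u_lab=11.0, x_ref=760.0, u_ref=4.0, k=2)
print(round(e, 2), "satisfactory" if abs(e) <= 1 else "unsatisfactory")
```

Here the hypothetical hospital result would be judged satisfactory despite a 15 MBq bias, because the bias is covered by the combined uncertainty.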

  16. Methodological approach to organizational performance improvement process

    OpenAIRE

    Buble, Marin; Dulčić, Želimir; Pavić, Ivan

    2017-01-01

    Organizational performance improvement is one of the fundamental enterprise tasks. This especially applies to the case when the term “performance improvement” implies efficiency improvement measured by indicators, such as ROI, ROE, ROA, or ROVA/ROI. Such tasks are very complex, requiring implementation by means of project management. In this paper, the authors propose a methodological approach to improving the organizational performance of a large enterprise.

  17. Methodological approach to organizational performance improvement process

    Directory of Open Access Journals (Sweden)

    Marin Buble

    2001-01-01

    Organizational performance improvement is one of the fundamental enterprise tasks. This especially applies to the case when the term “performance improvement” implies efficiency improvement measured by indicators, such as ROI, ROE, ROA, or ROVA/ROI. Such tasks are very complex, requiring implementation by means of project management. In this paper, the authors propose a methodological approach to improving the organizational performance of a large enterprise.

  18. The Geostationary Lightning Mapper: Its Performance and Calibration

    Science.gov (United States)

    Christian, H. J., Jr.

    2015-12-01

    The Geostationary Lightning Mapper (GLM) has been developed to be an operational instrument on the GOES-R series of spacecraft. The GLM is a unique instrument, unlike other meteorological instruments, both in how it operates and in the information content that it provides. Instrumentally, it is an event detector rather than an imager. While processing almost a billion pixels per second with 14 bits of resolution, the event detection process reduces the required telemetry bandwidth by a factor of almost 10 5 , thus keeping the telemetry requirements modest and enabling efficient ground processing that leads to rapid data distribution to operational users. The GLM was designed to detect about 90 percent of the total lightning flashes within its almost hemispherical field of view. Based on laboratory calibration, we expect the on-orbit detection efficiency to be closer to 85%, making it the highest performing, large-area-coverage total lightning detector. It has a number of unique design features that enable near-uniform spatial resolution over most of its field of view and operation with minimal impact on performance during solar eclipses. The GLM has no dedicated on-orbit calibration system, so the ground-based calibration provides the basis for the predicted radiometric performance. A number of problems were encountered during the calibration of Flight Model 1. The issues arose from GLM design features, including its wide field of view, fast lens, the narrow-band interference filters located in both object and collimated space, and the fact that the GLM is inherently an event detector while the calibration procedures required calibration of both images and events. The GLM calibration techniques were based on those developed for the Lightning Imaging Sensor calibration, but there are enough differences between the sensors that the initial GLM calibration suggested it is significantly more sensitive than its design parameters. The calibration discrepancies have

  19. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    Science.gov (United States)

    Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.

    2014-11-01

    A Whole Body Counter (WBC) is a facility for routinely assessing the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open-source codes such as the MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, home-made software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, called MaMP and FeMP (Male and Female Mesh Phantoms), to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.

  20. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    International Nuclear Information System (INIS)

    Fonseca, T C Ferreira; Vanhavere, F; Bogaerts, R; Hunt, John

    2014-01-01

    A Whole Body Counter (WBC) is a facility for routinely assessing the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open-source codes such as the MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, home-made software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, called MaMP and FeMP (Male and Female Mesh Phantoms), to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium. (paper)
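
One step of such a voxel-grid-to-MCNPX conversion, writing the flattened grid of universe numbers in the compact MCNP-style "nR" repeat notation used in lattice FILL arrays, can be sketched as follows. This is a simplified illustration, not the authors' home-made software; the function name and the toy grid are assumptions:

```python
def compress_fill(values):
    """Run-length compress a flat sequence of universe numbers using the
    MCNP-style 'nR' repeat shorthand (a value followed by 'nR' means the
    value is repeated n more times)."""
    out = []
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[j + 1] == values[i]:
            j += 1
        out.append(str(values[i]))
        if j > i:
            out.append(f"{j - i}R")  # j - i additional repeats
        i = j + 1
    return " ".join(out)

# Hypothetical flattened 2x2x2 voxel grid: universe 0 = air, 1 = tissue
voxels = [0, 0, 0, 1, 1, 1, 1, 0]
print(compress_fill(voxels))
```

For realistic whole-body grids with millions of voxels, this compression keeps the generated input file manageable.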

  1. Methodological approach to strategic performance optimization

    OpenAIRE

    Hell, Marko; Vidačić, Stjepan; Garača, Željko

    2009-01-01

    This paper presents a matrix approach to the measuring and optimization of organizational strategic performance. The proposed model is based on the matrix presentation of strategic performance, which follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Development of a quantitative record of strategic objectives provides an arena for the application of linear programming (LP), which is a mathematical tech...

  2. Calibration and performance testing of electronic personal dosimeters (EPD)

    International Nuclear Information System (INIS)

    Banaga, H.A.

    2008-04-01

    In modern radiation protection practices, active personal dosimeters are becoming absolutely necessary operational tools for satisfying the ALARA principle. The aim of this work was to carry out calibration and performance testing of ten electronic personal dosimeters (EPD) used for individual monitoring. The EPDs were calibrated in terms of the operational radiation protection quantity, personal dose equivalent Hp (10). Calibrations were carried out at three x-ray beam qualities described in ISO 4037, namely 60, 100 and 150 kV, in addition to the Cs-137 gamma-ray quality. The calibrations were performed using a polymethylmethacrylate (PMMA) phantom with dimensions of 20 x 20 x 15 cm 3 . The conversion coefficient Hp (10)/K air for the phantom was also calculated. The response and linearity of the dosimeters at the specified energies were also tested. The EPDs tested showed calibration coefficients ranging from 0.60 to 1.31 and energy responses ranging from 0.76 to 1.67. The study demonstrated the possibility of using a non-standard phantom for calibrating dosimeters used for individual monitoring. The dosimeters under study showed a good response at all energies except the 100 kV quality. The linearity of the dosimeters was within ±15%, with the exception of the 100 kV quality, where this limit was exceeded. (Author)
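
The calibration coefficient reported for each beam quality is simply the conventionally true Hp(10) delivered on the phantom divided by the dosimeter indication. A minimal sketch with hypothetical EPD readings; the ISO 4037 quality labels and all numbers here are illustrative, not the paper's data:

```python
def calibration_coefficient(h_true, m_reading):
    """N = conventionally true Hp(10) value / dosimeter indication."""
    return h_true / m_reading

# Hypothetical readings (mSv) of one EPD irradiated to 1.00 mSv Hp(10)
# at each beam quality (quality names follow the ISO 4037 narrow series)
readings = {"N-60": 0.85, "N-100": 1.45, "N-150": 1.10, "Cs-137": 0.98}
coeffs = {q: round(calibration_coefficient(1.00, m), 2)
          for q, m in readings.items()}
print(coeffs)
```

A coefficient near 1.0 means the dosimeter reads close to the delivered dose; here the assumed 100 kV over-response would pull its coefficient well below 1.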

  3. Study of the performance of diagnostic radiology instruments during calibration

    International Nuclear Information System (INIS)

    Freitas, Rodrigo N. de; Vivolo, Vitor; Potiens, Maria da Penha A.

    2008-01-01

    The instruments used in diagnostic radiology measurements represent 8% of the instruments tested annually by the calibration laboratory of IPEN (approximately 1600 in 2007). Considering that the calibration of this kind of instrument is performed biennially, it is possible to conclude that almost 300 instruments are being used to measure air kerma in diagnostic radiology clinics: in-beam values (in front of the patient), attenuated measurements (behind the patient) and scattered radiation. This work presents the results of the calibration of instruments used in mammography, computed tomography, dental and conventional diagnostic radiology dosimetry, performed during the period from 2005 to 2007. Their performance during the calibration measurements was evaluated. Although the calibration laboratory offers three series of radiation qualities for this type of calibration (RQR, N and M, according to standards IEC 61267 and ISO 4037-1), the applications can be assorted (general radiology, computed tomography, mammography, radiation protection and fluoroscopy). Depending on its design and behaviour, one kind of instrument can be used for one or more applications. The instruments normally used for diagnostic radiology measurements are ionization chambers with volumes varying from 3 to 1800 cm 3 , of cylindrical, spherical or plane-parallel plate type. They are usually sensitive to photons with energies greater than 15 keV and can be used up to 1200 keV. In this work they were tested in X radiation fields from 25 to 150 kV, in specific qualities depending on the use of the instrument. The calibration results of 390 instruments received from 2005 to 2007 were analyzed. About 20 instruments could not be calibrated due to malfunction. The calibration coefficients obtained were between 0.88 and 1.24. The uncertainties were always less than ± 3.6% for instruments used in scattered

  4. Calibration

    International Nuclear Information System (INIS)

    Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.

    1981-01-01

    Procedures common to different methods of calibration of neutron moisture meters are outlined, and laboratory and field calibration methods are compared. Gross errors arising from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen, and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%.
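
A calibration of this kind typically reduces to fitting a straight line of volumetric water content against the count ratio (count rate normalized to a standard count). A minimal sketch with hypothetical, perfectly linear calibration points; the slope would differ by soil, as the abstract notes:

```python
def fit_calibration(count_ratios, theta):
    """Least-squares straight line theta = a + b * count_ratio, the usual
    form of a neutron moisture meter calibration curve, where theta is
    the volumetric water content."""
    n = len(count_ratios)
    mx = sum(count_ratios) / n
    my = sum(theta) / n
    b = ((sum(x * y for x, y in zip(count_ratios, theta)) - n * mx * my)
         / (sum(x * x for x in count_ratios) - n * mx * mx))
    a = my - b * mx
    return a, b

# Hypothetical field calibration points (count ratio, cm3 water per cm3 soil)
ratios = [0.20, 0.35, 0.50, 0.65, 0.80]
theta = [0.05, 0.12, 0.19, 0.26, 0.33]
a, b = fit_calibration(ratios, theta)
print(round(a, 3), round(b, 3))
```

A 40% difference in slope between soils, as cited above, would translate directly into a 40% difference in the inferred water content at a given count ratio.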

  5. Calibration of detectors type CR-39 for methodology implementation for Radon-222 determination in CRCN-NE, Brazil

    International Nuclear Information System (INIS)

    Silva, Karolayne E.M. da; Santos, Mariana L. de O.; Amaral, Déric S. do; Vilela, Eldice C.; França, Elvis J. de; Hazin, Clovis A.; Farias, Emerson E.G. de

    2017-01-01

    Radon-222 is a radioactive gas produced in the decay of uranium-238; it emits alpha particles and accounts for more than 50% of the natural radiation dose received by the population. Monitoring of this gas is therefore essential. For indoor measurement, solid state nuclear track detectors can be used, the most common of which is CR-39. In monitoring with CR-39, alpha particles generated by radon-222 and its daughter radionuclides strike the surface of the detector and produce tracks. To relate the track density per exposed area to the activity concentration of environments with unknown concentration, it is necessary to determine the calibration factor. The objective of this study was to calibrate CR-39 detectors for the implementation of the radon determination methodology at the Centro Regional de Ciencias Nucleares do Nordeste - CRCN-NE of the Brazilian Nuclear Energy Commission - CNEN. To determine the CR-39 calibration factor, 19 exposures of the detectors were performed in the CRCN-NE calibration chamber (RN1-CRCN) at an activity concentration of 5.00 kBq m -3 , with exposure times varying from 24 to 850 hours. For etching, the detectors were treated with sodium hydroxide in a thermostatic bath at 90 °C for 5 hours. The number of tracks per field was counted by optical microscopy at 100x magnification, with 30 fields read per dosimeter. As a result, the calibration factor was obtained, and a linear response of the track density as a function of exposure was observed. The results support the use of CR-39 in the determination of radon-222 by CRCN-NE.
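
The calibration factor itself is the slope of track density versus exposure (activity concentration times exposure time). A minimal sketch; the track densities below are hypothetical, with only the 5.00 kBq m-3 concentration and the 24-850 h exposure range taken from the abstract:

```python
def calibration_factor(track_densities, exposures):
    """Least-squares slope through the origin of track density
    (tracks per cm^2) versus radon exposure (kBq h m^-3); the slope is
    the calibration factor in tracks cm^-2 per kBq h m^-3."""
    num = sum(d * e for d, e in zip(track_densities, exposures))
    den = sum(e * e for e in exposures)
    return num / den

# Chamber exposures at 5.00 kBq/m3 for a subset of exposure times (h)
times = [24, 120, 400, 850]
exposures = [5.00 * t for t in times]    # kBq h m^-3
densities = [290, 1490, 4980, 10600]     # tracks per cm^2 (illustrative)
cf = calibration_factor(densities, exposures)
print(round(cf, 2))
```

An unknown environment's exposure is then the measured track density divided by this factor, and the mean concentration is the exposure divided by the monitoring time.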

  6. An overview of performance assessment methodology

    International Nuclear Information System (INIS)

    Hongnian Jow

    2010-01-01

    The definition of performance assessment (PA) within the context of a geologic repository program is a post-closure safety assessment: a system analysis of the hazards associated with the facility and of the ability of the site and the facility design to provide the safety functions. Over the last few decades, PA methodology has been developed and applied to different waste disposal programs around the world. PA has been used in the safety analyses of waste disposal repositories for low-level waste, intermediate-level waste, and high-level waste including spent nuclear fuels. This paper provides an overview of the performance assessment methodology and gives examples of its applications for the Yucca Mountain Project. (authors)

  7. Methodology for calibration of a NaI(Tl) 3'' x 3'' detector for in vivo measurements of patients with hyperthyroidism undergoing radioiodine therapy

    International Nuclear Information System (INIS)

    Carvalho, Carlaine B.; Lacerda, Isabelle V.B.; Oliveira, Mercia L.; Hazin, Clovis A.; Lima, Fabiana F.

    2013-01-01

    The aim of this study is to establish a methodology for calibration of the detection system to be used in determining the therapeutic activity of 131 I required to deliver the desired absorbed dose to the thyroid gland. This step is critical to the development of a protocol for individualized doses. The system consists of a NaI(Tl) 3'' x 3'' detector coupled to the Genie 2000 software. Calibration sources of 60 Co, 137 Cs and 133 Ba were used. The calibration curve of the system was obtained with the 60 Co and 137 Cs sources. Subsequently, the detector was calibrated using a thyroid-neck phantom designed and produced by IRD/CNEN containing a known standard solution of 133 Ba with an activity of 18.7 kBq (on 09/24/12) evenly distributed. It was also calibrated with another thyroid-neck phantom, model 3108 manufactured by Searle Radigraphics Ind., containing a liquid source of 131 I (7.7 MBq). Five 5-minute measurements were performed at three different detector-phantom distances, and the corresponding calibration factors were calculated. The calibration factors found for the IRD and Searle Radigraphics Ind. phantoms at distances of 20, 25 and 30 cm were 0.35, 0.24, 0.18 and 0.15, 0.11, 0.09, respectively. With the detection system properly calibrated and the calibration factors established, the technique is suitable for the evaluation of diagnostic activities of 131 I incorporated by hyperthyroid patients. (author)
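
Once a calibration factor is in hand, the thyroid activity follows from dividing the net count rate by the factor for the measurement distance. A minimal sketch; the units (counts per second per Bq) and the example count rate are assumptions, while the 20/25/30 cm factors mirror the pattern reported for the IRD phantom:

```python
def activity_from_count_rate(count_rate, cal_factor):
    """Thyroid activity estimate: net count rate divided by the
    calibration factor obtained with a neck phantom at the same
    detector distance (units assumed here as cps per Bq)."""
    return count_rate / cal_factor

# Calibration factors per detector distance (cm), after the IRD phantom
cal = {20: 0.35, 25: 0.24, 30: 0.18}
# Hypothetical net count rate of 70 cps measured at 25 cm:
a_bq = activity_from_count_rate(70.0, cal[25])
print(round(a_bq, 1))
```

The distance dependence of the factors is why the measurement geometry must match the calibration geometry exactly.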

  8. Methodology for calibration of a NaI(Tl) 3'' x 3'' detector for in vivo measurements of patients with hyperthyroidism undergoing radioiodine therapy

    International Nuclear Information System (INIS)

    Carvalho, Carlaine B.; Lacerda, Isabelle V.B.; Oliveira, Mercia L.; Hazin, Clovis A.; Lima, Fabiana F.

    2013-01-01

    The aim of this study is to establish a methodology for calibration of the detection system to be used in determining the therapeutic activity of 131 I required to deliver the desired absorbed dose to the thyroid. This step is critical to the development of a protocol for individualized doses. The system consists of a NaI(Tl) 3'' x 3'' detector coupled to the Genie 2000 software. Calibration sources of 60 Co, 137 Cs and 133 Ba were used. The calibration curve of the system was obtained with the 60 Co and 137 Cs sources. Subsequently, the detector was calibrated using a thyroid-neck phantom designed and produced by IRD/CNEN containing a known standard solution of 133 Ba with an activity of 18.7 kBq (on 12/09/24) evenly distributed. It was also calibrated with another thyroid-neck phantom, model 3108 manufactured by Searle Radigraphics Ind., containing a liquid source of 131 I (7.7 MBq). Five 5-minute measurements were performed at three different detector-phantom distances, and the respective calibration factors were calculated. The calibration factors found for the IRD and Searle Radigraphics Ind. phantoms at distances of 20, 25 and 30 cm were 0.35, 0.24, 0.18 and 0.15, 0.11, 0.09, respectively. With the detection system properly calibrated and the calibration factors established, the technique is suitable for the evaluation of diagnostic activities of 131 I incorporated by hyperthyroid patients

  9. Comparison of Two Methodologies for Calibrating Satellite Instruments in the Visible and Near-Infrared

    Science.gov (United States)

    Barnes, Robert A.; Brown, Steven W.; Lykke, Keith R.; Guenther, Bruce; Butler, James J.; Schwarting, Thomas; Turpie, Kevin; Moyer, David; DeLuccia, Frank; Moeller, Christopher

    2015-01-01

    Traditionally, satellite instruments that measure Earth-reflected solar radiation in the visible and near infrared wavelength regions have been calibrated for radiance responsivity in a two-step method. In the first step, the relative spectral response (RSR) of the instrument is determined using a nearly monochromatic light source such as a lamp-illuminated monochromator. These sources do not typically fill the field-of-view of the instrument nor act as calibrated sources of light. Consequently, they only provide a relative (not absolute) spectral response for the instrument. In the second step, the instrument views a calibrated source of broadband light, such as a lamp-illuminated integrating sphere. The RSR and the sphere absolute spectral radiance are combined to determine the absolute spectral radiance responsivity (ASR) of the instrument. More recently, a full-aperture absolute calibration approach using widely tunable monochromatic lasers has been developed. Using these sources, the ASR of an instrument can be determined in a single step on a wavelength-by-wavelength basis. From these monochromatic ASRs, the responses of the instrument bands to broadband radiance sources can be calculated directly, eliminating the need for calibrated broadband light sources such as lamp-illuminated integrating spheres. In this work, the traditional broadband source-based calibration of the Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) sensor is compared with the laser-based calibration of the sensor. Finally, the impact of the new full-aperture laser-based calibration approach on the on-orbit performance of the sensor is considered.
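
    The traditional two-step method can be illustrated numerically: the relative spectral response is placed on an absolute scale using the band-weighted radiance of a calibrated sphere. This is a sketch with entirely synthetic spectra and counts, not VIIRS data.

```python
# Two-step radiometric calibration sketch (all spectra and counts synthetic).
import numpy as np

wavelength_nm = np.linspace(400.0, 900.0, 501)  # 1 nm grid

# Step 1: relative spectral response from a monochromator scan
# (peak-normalized; synthetic Gaussian band centred at 650 nm).
rsr = np.exp(-0.5 * ((wavelength_nm - 650.0) / 30.0) ** 2)

# Calibrated sphere spectral radiance, W m^-2 sr^-1 nm^-1 (synthetic ramp).
sphere_radiance = 0.02 + 1e-5 * (wavelength_nm - 400.0)

# Step 2: instrument signal (digital counts) while viewing the sphere -- assumed.
measured_counts = 5.0e4

# Trapezoidal band integral of the RSR-weighted sphere radiance fixes the
# absolute gain; scaling the RSR by it yields the absolute spectral
# radiance responsivity (ASR).
weighted = rsr * sphere_radiance
band_radiance = np.sum(0.5 * (weighted[1:] + weighted[:-1]) * np.diff(wavelength_nm))
gain = measured_counts / band_radiance   # counts per (W m^-2 sr^-1)
asr = gain * rsr                         # counts per unit spectral radiance
print(f"absolute gain = {gain:.1f} counts / (W m^-2 sr^-1)")
```

    The laser-based approach described in the record measures `asr` directly, wavelength by wavelength, so no broadband sphere term is needed.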

  10. Ultra wideband antennas design, methodologies, and performance

    CERN Document Server

    Galvan-Tejada, Giselle M; Jardón Aguilar, Hildeberto

    2015-01-01

    Ultra Wideband Antennas: Design, Methodologies, and Performance presents the current state of the art of ultra wideband (UWB) antennas, from theory specific to these radiators to guidelines for the design of omnidirectional and directional UWB antennas. Offering a comprehensive overview of the latest UWB antenna research and development, this book: discusses the developed theory for UWB antennas in the frequency and time domains; delivers a brief exposition of numerical methods for electromagnetics oriented to antennas; describes solid-planar equivalen...

  11. Performance improvement using methodology: case study.

    Science.gov (United States)

    Harmelink, Stacy

    2008-01-01

    The department of radiology at St. Luke's Regional Medical Center in Sioux City, IA implemented meaningful workflow changes that reduced patient wait times and, at the same time, improved customer and employee satisfaction scores. Lean methodology and the 7 Deadly Wastes, along with small group interaction, were used to evaluate and change the process of a patient waiting for an exam in the radiology department. The most important key to the success of a performance improvement project is the involvement of staff.

  12. Performance Assessment and Geometric Calibration of RESOURCESAT-2

    Science.gov (United States)

    Radhadevi, P. V.; Solanki, S. S.; Akilan, A.; Jyothi, M. V.; Nagasubramanian, V.

    2016-06-01

    Resourcesat-2 (RS-2) has successfully completed five years of operations in its orbit. This satellite has multi-resolution and multi-spectral capabilities in a single platform. A continuous and autonomous co-registration, geo-location and radiometric calibration of image data from different sensors with widely varying view angles and resolution was one of the challenges of RS-2 data processing. On-orbit geometric performance of RS-2 sensors has been widely assessed and calibrated during the initial phase operations. Since then, as an ongoing activity, various geometric performance data are being generated periodically. This is performed with sites of dense ground control points (GCPs). These parameters are correlated to the direct geo-location accuracy of the RS-2 sensors and are monitored and validated to maintain the performance. This paper brings out the geometric accuracy assessment, calibration and validation done for about 500 datasets of RS-2. The objectives of this study are to ensure the best absolute and relative location accuracy of different cameras, location performance with payload steering and co-registration of multiple bands. This is done using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation. In the model, the forward and reverse transformations between the coordinate systems associated with the focal plane, payload, body, orbit and ground are rigorously and explicitly defined. System level tests using comparisons to ground check points have validated the operational geo-location accuracy performance and the stability of the calibration parameters.
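
    The system-level comparison to ground check points lends itself to simple accuracy statistics. A sketch with invented check-point errors follows; RMSE and nearest-rank CE90 are common geo-location accuracy metrics, though the abstract does not name the specific statistics used.

```python
# Geo-location accuracy statistics from ground check points (synthetic data).
import math

# (easting_error_m, northing_error_m) at each check point -- assumed values.
errors = [(12.0, -8.0), (-5.0, 10.0), (7.0, 3.0), (-9.0, -6.0)]

# Radial (horizontal) error at each point.
radial = [math.hypot(dx, dy) for dx, dy in errors]

# Root-mean-square radial error.
rmse = math.sqrt(sum(r * r for r in radial) / len(radial))

# CE90: the radius containing 90% of the errors (nearest-rank percentile).
ce90 = sorted(radial)[max(0, math.ceil(0.9 * len(radial)) - 1)]

print(f"RMSE = {rmse:.1f} m, CE90 = {ce90:.1f} m")
```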

  13. Calibration methodology for instruments utilized in X radiation beams, diagnostic level

    Energy Technology Data Exchange (ETDEWEB)

    Penha, M. da; Potiens, A.; Caldas, L.V.E. [Instituto de Pesquisas Energeticas e Nucleares, Comissao Nacional de Energia Nuclear, Sao Paulo (Brazil)]. E-mail: mppalbu@ipen.br

    2004-07-01

    Methodologies for the calibration of diagnostic radiology instruments were established at the Calibration Laboratory of IPEN. The methods may be used in the calibration procedures of survey meters used in radiation protection measurements (scattered radiation), instruments used in direct beams (attenuated and non-attenuated beams) and quality control instruments. The established qualities are those recommended by the international standards IEC 1267 and ISO 4037-3. Two ionization chambers were used as reference systems, one with a volume of 30 cm{sup 3} for radiation protection measurements, and the other with a volume of 1 cm{sup 3} for direct beam measurements. Both are traceable to the German primary laboratory, the Physikalisch-Technische Bundesanstalt (PTB). In the case of calibration of quality control instruments, a non-invasive method using the measurement of the spectrum endpoint was established with a portable gamma and X-ray Intertechnique spectrometer system. The methods were applied to survey meters (radiation protection measurements), ionization chambers (direct beam measurements) and kVp meters (invasive and non-invasive instruments). (Author)

  14. Calibration methodology for instruments utilized in X radiation beams, diagnostic level

    International Nuclear Information System (INIS)

    Penha, M. da; Potiens, A.; Caldas, L.V.E.

    2004-01-01

    Methodologies for the calibration of diagnostic radiology instruments were established at the Calibration Laboratory of IPEN. The methods may be used in the calibration procedures of survey meters used in radiation protection measurements (scattered radiation), instruments used in direct beams (attenuated and non-attenuated beams) and quality control instruments. The established qualities are those recommended by the international standards IEC 1267 and ISO 4037-3. Two ionization chambers were used as reference systems, one with a volume of 30 cm³ for radiation protection measurements, and the other with a volume of 1 cm³ for direct beam measurements. Both are traceable to the German primary laboratory, the Physikalisch-Technische Bundesanstalt (PTB). In the case of calibration of quality control instruments, a non-invasive method using the measurement of the spectrum endpoint was established with a portable gamma and X-ray Intertechnique spectrometer system. The methods were applied to survey meters (radiation protection measurements), ionization chambers (direct beam measurements) and kVp meters (invasive and non-invasive instruments). (Author)
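
    The endpoint method for non-invasive kVp measurement rests on the fact that the maximum bremsstrahlung photon energy in keV equals the applied tube potential in kV. A toy sketch follows; the spectrum and its energy calibration are synthetic, and a practical implementation would fit the falling edge of the spectrum rather than apply a simple background threshold.

```python
# Toy non-invasive kVp estimate from a spectrum endpoint (synthetic data).

def kvp_from_spectrum(energies_kev, counts, background=5.0):
    """Return the highest energy (keV) whose counts exceed the background level."""
    endpoint = 0.0
    for e, c in zip(energies_kev, counts):
        if c > background:
            endpoint = e
    return endpoint

# Synthetic bremsstrahlung-like spectrum that falls to zero at 80 keV.
energies = [float(e) for e in range(20, 101)]
counts = [max(0.0, 1000.0 * (80.0 - e) / 60.0) for e in energies]

print(f"estimated kVp = {kvp_from_spectrum(energies, counts):.0f} kV")
```

    With the coarse 1 keV grid and threshold used here the estimate lands one channel below the true endpoint, which is why edge-fitting is preferred in practice.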

  15. A Performance Tuning Methodology with Compiler Support

    Directory of Open Access Journals (Sweden)

    Oscar Hernandez

    2008-01-01

    We have developed an environment, based upon robust, existing, open source software, for tuning applications written using MPI, OpenMP or both. The goal of this effort, which integrates the OpenUH compiler and several popular performance tools, is to increase user productivity by providing an automated, scalable performance measurement and optimization system. In this paper we describe our environment, show how these complementary tools can work together, and illustrate the synergies possible by exploiting their individual strengths and combined interactions. We also present a methodology for performance tuning that is enabled by this environment. One of the benefits of using compiler technology in this context is that it can direct the performance measurements to capture events at different levels of granularity and help assess their importance, which we have shown to significantly reduce the measurement overheads. The compiler can also help when attempting to understand the performance results: it can supply information on how a code was translated and whether optimizations were applied. Our methodology combines two performance views of the application to find bottlenecks. The first is a high level view that focuses on OpenMP/MPI performance problems such as synchronization cost and load imbalances; the second is a low level view that focuses on hardware counter analysis with derived metrics that assess the efficiency of the code. Our experiments have shown that our approach can significantly reduce overheads for both profiling and tracing to acceptable levels and limit the number of times the application needs to be run with selected hardware counters. In this paper, we demonstrate the workings of this methodology by illustrating its use with selected NAS Parallel Benchmarks and a cloud resolving code.

  16. Site calibration for the wind turbine performance evaluation

    International Nuclear Information System (INIS)

    Nam, Yoon Su; Yoo, Neung Soo; Lee, Jung Wan

    2004-01-01

    Accurate wind speed information at the hub height of a wind turbine is essential for exact estimation of wind turbine power performance. Several methods for site calibration, a technique to estimate the wind speed at the wind turbine's hub height from wind data measured at a reference meteorological mast, are introduced. A site calibration result and the wind resource assessment for the TaeKwanRyung test site are presented using three months of wind data from a reference meteorological mast and another mast temporarily installed at the wind turbine site. In addition, an analysis of the uncertainty allocation for the wind speed correction using site calibration is performed.
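
    The core of site calibration, per-direction-sector speed-up ratios between the reference mast and the turbine-site mast, can be sketched as follows. The samples and the 30-degree sector width are assumptions for illustration; real procedures (e.g. under IEC 61400-12-1) involve substantially more data and filtering.

```python
# Per-sector speed-up ratios for site calibration (synthetic samples).
from collections import defaultdict

# (wind_direction_deg, ref_mast_speed_mps, turbine_site_speed_mps) -- assumed.
samples = [(10, 6.0, 6.3), (12, 7.0, 7.4), (95, 5.0, 4.8), (100, 8.0, 7.7)]

SECTOR = 30  # degrees per direction sector (an assumption)
acc = defaultdict(lambda: [0.0, 0])
for direction, v_ref, v_site in samples:
    key = int(direction // SECTOR)
    acc[key][0] += v_site / v_ref   # instantaneous speed-up
    acc[key][1] += 1

# Mean speed-up per sector; applied later to correct reference-mast speeds.
ratios = {k * SECTOR: s / n for k, (s, n) in sorted(acc.items())}
for start, ratio in ratios.items():
    print(f"sector {start:3d}-{start + SECTOR:3d} deg: speed-up = {ratio:.3f}")
```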

  17. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    DOE Order 5637.1, ''Classified Computer Security,'' requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, we have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system. 1 tab

  18. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    This paper reports on DOE Order 5637.1, Classified Computer Security, which requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, the authors have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system

  19. Design, Performance, and Calibration of CMS Hadron Endcap Calorimeters

    CERN Document Server

    Baiatian, G; Emeliantchik, Igor; Massolov, V; Shumeiko, Nikolai; Stefanovich, R; Damgov, Jordan; Dimitrov, Lubomir; Genchev, Vladimir; Piperov, Stefan; Vankov, Ivan; Litov, Leander; Bencze, Gyorgy; Laszlo, Andras; Pal, Andras; Vesztergombi, Gyorgy; Zálán, Peter; Fenyvesi, Andras; Bawa, Harinder Singh; Beri, Suman Bala; Bhatnagar, Vipin; Kaur, Manjit; Kohli, Jatinder Mohan; Kumar, Arun; Singh, Jas Bir; Acharya, Bannaje Sripathi; Banerjee, Sunanda; Banerjee, Sudeshna; Chendvankar, Sanjay; Dugad, Shashikant; Kalmani, Suresh Devendrappa; Katta, S; Mazumdar, Kajari; Mondal, Naba Kumar; Nagaraj, P; Patil, Mandakini Ravindra; Reddy, L; Satyanarayana, B; Sharma, Seema; Sudhakar, Katta; Verma, Piyush; Hashemi, Majid; Mohammadi-Najafabadi, M; Paktinat, S; Babich, Kanstantsin; Golutvin, Igor; Kalagin, Vladimir; Kamenev, Alexey; Konoplianikov, V; Kosarev, Ivan; Moissenz, K; Moissenz, P; Oleynik, Danila; Petrosian, A; Rogalev, Evgueni; Semenov, Roman; Sergeyev, S; Shmatov, Sergey; Smirnov, Vitaly; Vishnevskiy, Alexander; Volodko, Anton; Zarubin, Anatoli; Druzhkin, Dmitry; Ivanov, Alexander; Kudinov, Vladimir; Orlov, Alexandre; Smetannikov, Vladimir; Gavrilov, Vladimir; Gershtein, Yuri; Ilyina, N; Kaftanov, Vitali; Kisselevich, I; Kolossov, V; Krokhotin, Andrey; Kuleshov, Sergey; Litvintsev, Dmitri; Ulyanov, A; Safronov, Grigory; Semenov, Sergey; Stolin, Viatcheslav; Demianov, A; Gribushin, Andrey; Kodolova, Olga; Petrushanko, Sergey; Sarycheva, Ludmila; Teplov, V; Vardanyan, Irina; Yershov, A; Abramov, Victor; Goncharov, Petr; Kalinin, Alexey; Khmelnikov, Alexander; Korablev, Andrey; Korneev, Yury; Krinitsyn, Alexander; Kryshkin, V; Lukanin, Vladimir; Pikalov, Vladimir; Ryazanov, Anton; Talov, Vladimir; Turchanovich, L; Volkov, Alexey; Camporesi, Tiziano; de Visser, Theo; Vlassov, E; Aydin, Sezgin; Bakirci, Mustafa Numan; Cerci, Salim; Dumanoglu, Isa; Eskut, Eda; Kayis-Topaksu, A; Koylu, S; Kurt, Pelin; Onengüt, G; Ozkurt, Halil; Polatoz, A; Sogut, Kenan; Topakli, Huseyin; 
Vergili, Mehmet; Yetkin, Taylan; Cankoc, K; Esendemir, Akif; Gamsizkan, Halil; Güler, M; Ozkan, Cigdem; Sekmen, Sezen; Serin-Zeyrek, M; Sever, Ramazan; Yazgan, Efe; Zeyrek, Mehmet; Deliomeroglu, Mehmet; Gülmez, Erhan; Isiksal, Engin; Kaya, Mithat; Ozkorucuklu, Suat; Levchuk, Leonid; Sorokin, Pavel; Grynev, B; Lyubynskiy, Vadym; Senchyshyn, Vitaliy; Hauptman, John M; Abdullin, Salavat; Elias, John E; Elvira, D; Freeman, Jim; Green, Dan; Los, Serguei; ODell, V; Ronzhin, Anatoly; Suzuki, Ichiro; Vidal, Richard; Whitmore, Juliana; Arcidy, M; Hazen, Eric; Heering, Arjan Hendrix; Lawlor, C; Lazic, Dragoslav; Machado, Emanuel; Rohlf, James; Varela, F; Wu, Shouxiang; Baden, Drew; Bard, Robert; Eno, Sarah Catherine; Grassi, Tullio; Jarvis, Chad; Kellogg, Richard G; Kunori, Shuichi; Mans, Jeremy; Skuja, Andris; Podrasky, V; Sanzeni, Christopher; Winn, Dave; Akgun, Ugur; Ayan, S; Duru, Firdevs; Merlo, Jean-Pierre; Mestvirishvili, Alexi; Miller, Michael; Norbeck, Edwin; Olson, Jonathan; Onel, Yasar; Schmidt, Ianos; Akchurin, Nural; Carrell, Kenneth Wayne; Gusum, K; Kim, Heejong; Spezziga, Mario; Thomas, Ray; Wigmans, Richard; Baarmand, Marc M; Mermerkaya, Hamit; Ralich, Robert; Vodopiyanov, Igor; Kramer, Laird; Linn, Stephan; Markowitz, Pete; Cushman, Priscilla; Ma, Yousi; Sherwood, Brian; Cremaldi, Lucien Marcus; Reidy, Jim; Sanders, David A; Karmgard, Daniel John; Ruchti, Randy; Fisher, Wade Cameron; Tully, Christopher; Bodek, Arie; De Barbaro, Pawel; Budd, Howard; Chung, Yeon Sei; Haelen, T; Hagopian, Sharon; Hagopian, Vasken; Johnson, Kurtis F; Barnes, Virgil E; Laasanen, Alvin T

    2008-01-01

    Detailed measurements have been made with the CMS hadron calorimeter endcaps (HE) in response to beams of muons, electrons, and pions. Readout of HE with custom electronics and hybrid photodiodes (HPDs) shows no change of performance compared to readout with commercial electronics and photomultipliers. When combined with lead-tungstate crystals, an energy resolution of 8% is achieved with 300 GeV/c pions. A laser calibration system is used to set the timing and monitor operation of the complete electronics chain. Data taken with radioactive sources, in comparison with test beam pions, provide an absolute initial calibration of HE to approximately 4% to 5%.

  20. Development of a calibration methodology and tests of kerma area product meters

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Nathalia Almeida

    2013-07-01

    The quantity kerma area product (PKA) is important for establishing reference levels in diagnostic radiology examinations. This quantity can be obtained using a PKA meter. The use of such meters is essential for evaluating the radiation dose in radiological procedures and is a good indicator for ensuring that the dose limit to the patient's skin is not exceeded. These meters are sometimes fixed to the X-ray equipment, which makes their calibration difficult. In this work, a methodology for the calibration of PKA meters was developed. The instrument used for this purpose was the Patient Dose Calibrator (PDC), which was developed to be used as a reference to check the calibration of the PKA and air kerma meters used for patient dosimetry and to verify the consistency and behavior of automatic exposure control systems. Because it is a new instrument that is not yet used as a calibration reference in Brazil, quality control of this equipment was also performed, comprising characterization tests, calibration and an evaluation of its energy dependence. The tests demonstrated that the PDC can be used as a reference instrument and that the calibration must be performed in situ, so that the characteristics of each X-ray unit where the PKA meters are used are taken into account. The calibration was then performed with portable PKA meters and with an interventional radiology unit that has a fixed PKA meter. The results were good and demonstrated the need for calibration of these meters and the importance of in situ calibration with a reference meter. (author)
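
    The in-situ comparison against a reference instrument reduces to a simple ratio per beam quality. A sketch with hypothetical readings follows; the RQR beam-quality names and all numbers are assumptions for illustration.

```python
# In-situ PKA meter calibration sketch: the calibration coefficient is the
# reference reading divided by the reading of the meter under test,
# established separately for each beam quality (hypothetical numbers).

def pka_calibration_coefficient(reference_pka, meter_pka):
    """Calibration coefficient N_PKA relating meter readings to reference values."""
    return reference_pka / meter_pka

# Hypothetical readings in Gy.cm^2 for two assumed radiation qualities.
readings = {"RQR 5 (70 kV)": (1.25, 1.18), "RQR 9 (120 kV)": (2.40, 2.31)}
for quality, (ref, meter) in readings.items():
    n_pka = pka_calibration_coefficient(ref, meter)
    print(f"{quality}: N_PKA = {n_pka:.3f}")
```

    A corrected clinical reading is then the raw meter value multiplied by the coefficient for the matching beam quality.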

  1. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    Science.gov (United States)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful for prediction and for developing management strategies to control system behaviour. Specifically, they can be used for evaluating streamflow at ungaged catchments, the effects of climate change or of best management practices on water resources, or the identification of pollution sources in a watershed. This study is part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, the water resources of the Ergene Watershed are first studied. Streamgages in the basin are identified and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. The streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on the identification of extreme (dry or wet) periods. A hydrological model is then developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods, including dry and wet ones, and the calibration performance is evaluated using the Nash-Sutcliffe efficiency (NSE), the correlation coefficient, percent bias (PBIAS) and the root mean square error. It is observed that the calibration period affects model performance, and the main purpose of the development of the hydrological model should guide the selection of the calibration period. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
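
    Two of the calibration metrics named in the abstract can be computed directly. A minimal sketch with hypothetical flows follows; note that sign conventions for PBIAS vary between references, so the convention below (positive when simulation underestimates) is one common choice, not necessarily the authors'.

```python
# Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) for
# observed vs. simulated streamflow (hypothetical daily flows).

def nse(observed, simulated):
    """NSE: 1 is a perfect fit; <= 0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

def pbias(observed, simulated):
    """Percent bias; positive when the simulation underestimates on average."""
    return 100.0 * sum(o - s for o, s in zip(observed, simulated)) / sum(observed)

obs = [10.0, 12.0, 8.0, 15.0, 9.0]   # hypothetical flows, m^3/s
sim = [11.0, 11.5, 8.5, 14.0, 9.5]
print(f"NSE = {nse(obs, sim):.3f}, PBIAS = {pbias(obs, sim):.2f}%")
```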

  2. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
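
    Full Kennedy-O'Hagan calibration (with a discrepancy term and emulator) is beyond a short example, but the underlying idea, inferring model parameters from noisy plant measurements via Bayes' rule, can be sketched with a toy one-parameter model and a random-walk Metropolis sampler. All data here are synthetic; this is a stand-in illustration, not the authors' plant model.

```python
# Toy Bayesian calibration of one model parameter via random-walk Metropolis.
import math
import random

random.seed(0)

def model(efficiency, load):
    """Toy plant model: predicted output as a linear function of load."""
    return efficiency * load

# Synthetic measurements generated with a "true" efficiency of 0.40.
loads = [0.5, 0.7, 0.9, 1.0]
sigma = 0.01  # assumed measurement noise (standard deviation)
measured = [model(0.40, x) + random.gauss(0.0, sigma) for x in loads]

def log_posterior(eff):
    if not 0.0 < eff < 1.0:           # uniform prior on (0, 1)
        return -math.inf
    return -0.5 * sum(((y - model(eff, x)) / sigma) ** 2
                      for x, y in zip(loads, measured))

# Random-walk Metropolis sampling of the posterior.
eff, lp = 0.5, log_posterior(0.5)
samples = []
for _ in range(20000):
    prop = eff + random.gauss(0.0, 0.02)
    lp_prop = log_posterior(prop)
    if math.log(random.random()) < lp_prop - lp:
        eff, lp = prop, lp_prop
    samples.append(eff)

posterior = samples[5000:]            # discard burn-in
mean = sum(posterior) / len(posterior)
print(f"posterior mean efficiency = {mean:.3f}")
```

    The posterior spread, not just its mean, is what lets plant operators attach confidence to part-load predictions.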

  3. PERFORMANCE ASSESSMENT AND GEOMETRIC CALIBRATION OF RESOURCESAT-2

    Directory of Open Access Journals (Sweden)

    P. V. Radhadevi

    2016-06-01

    Resourcesat-2 (RS-2) has successfully completed five years of operations in its orbit. This satellite has multi-resolution and multi-spectral capabilities in a single platform. A continuous and autonomous co-registration, geo-location and radiometric calibration of image data from different sensors with widely varying view angles and resolution was one of the challenges of RS-2 data processing. On-orbit geometric performance of RS-2 sensors has been widely assessed and calibrated during the initial phase operations. Since then, as an ongoing activity, various geometric performance data are being generated periodically. This is performed with sites of dense ground control points (GCPs). These parameters are correlated to the direct geo-location accuracy of the RS-2 sensors and are monitored and validated to maintain the performance. This paper brings out the geometric accuracy assessment, calibration and validation done for about 500 datasets of RS-2. The objectives of this study are to ensure the best absolute and relative location accuracy of different cameras, location performance with payload steering and co-registration of multiple bands. This is done using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation. In the model, the forward and reverse transformations between the coordinate systems associated with the focal plane, payload, body, orbit and ground are rigorously and explicitly defined. System level tests using comparisons to ground check points have validated the operational geo-location accuracy performance and the stability of the calibration parameters.

  4. Knowledge management performance methodology regarding manufacturing organizations

    Science.gov (United States)

    Istrate, C.; Herghiligiu, I. V.

    2016-08-01

    The current business environment is extremely complicated. Businesses must adapt to change in order (a) to survive in increasingly dynamic markets and (b) to meet customers' new requests for complex, customized and innovative products. In modern manufacturing organizations a substantial improvement in the management of knowledge can be seen. This occurs because organizations have realized that knowledge, and an efficient management of knowledge, generates the highest value. It could even be said that manufacturing organizations were and are the biggest beneficiaries of KM science. Knowledge management performance (KMP) evaluation in manufacturing organizations can be considered extremely important because, without measuring it, they are unable to properly assess (a) which goals, targets and activities must have continuity, (b) what must be improved and (c) what must be completed. A proper KM will therefore generate multiple competitive advantages for organizations. This paper presents a methodological framework regarding the importance of KMP for manufacturing organizations, developed using bibliographical research and a panel of specialists as research methods. The purpose of this paper is to improve the KMP evaluation process and to provide a viable tool for the managers of manufacturing organizations.

  5. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    Science.gov (United States)

    Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  6. A methodology to calibrate water saturation estimated from 4D seismic data

    International Nuclear Information System (INIS)

    Davolio, Alessandra; Maschio, Célio; José Schiozer, Denis

    2014-01-01

    Time-lapse seismic data can be used to estimate saturation changes within a reservoir, which is valuable information for reservoir management as it plays an important role in updating reservoir simulation models. The process of updating reservoir properties, history matching, can incorporate estimated saturation changes qualitatively or quantitatively. For quantitative approaches, reliable information from 4D seismic data is important. This work proposes a methodology to calibrate the volume of water in the estimated saturation maps, as these maps can be wrongly estimated due to problems with seismic signals (such as noise, errors associated with data processing and resolution issues). The idea is to condition the 4D seismic data to known information provided by engineering, in this case the known amount of injected and produced water in the field. The application of the proposed methodology in an inversion process (previously published) that estimates saturation from 4D seismic data is presented, followed by a discussion concerning the use of such data in a history matching process. The methodology is applied to a synthetic dataset to validate the results, the main ones being: (1) reduction of the effects of noise and errors in the estimated saturation, yielding more reliable data to be used quantitatively or qualitatively and (2) an improvement in the properties update after using this data in a history matching procedure. (paper)
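
    The volume-conditioning idea, rescaling the estimated saturation-change map so its implied water volume matches the known injected-minus-produced volume, can be sketched as follows. A single uniform scale factor is an assumption of this sketch; the paper's actual conditioning scheme may differ, and all numbers are hypothetical.

```python
# Conditioning a seismic-derived water saturation change map to the known
# water volume from injection/production records (hypothetical quantities).

def calibrate_saturation(delta_sw_map, cell_pore_volumes, known_water_volume):
    """Uniformly scale per-cell delta-Sw so the implied water volume matches
    the engineering-known volume; returns (scaled map, scale factor)."""
    estimated = sum(ds * pv for ds, pv in zip(delta_sw_map, cell_pore_volumes))
    scale = known_water_volume / estimated
    return [ds * scale for ds in delta_sw_map], scale

delta_sw = [0.10, 0.05, 0.20, 0.00]        # estimated from 4D seismic (noisy)
pore_vol = [1.0e5, 1.2e5, 0.8e5, 1.1e5]    # cell pore volumes, m^3
known_volume = 2.8e4                       # injected minus produced water, m^3

calibrated, scale = calibrate_saturation(delta_sw, pore_vol, known_volume)
print(f"scale factor = {scale:.3f}")
```

    A scale factor below 1 here indicates the raw seismic estimate overstated the water volume, the kind of systematic error the conditioning removes before history matching.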

  7. Shuttle TPS thermal performance and analysis methodology

    Science.gov (United States)

    Neuenschwander, W. E.; Mcbride, D. U.; Armour, G. A.

    1983-01-01

    Thermal performance of the thermal protection system was approximately as predicted. The only extensive anomalies were filler bar scorching and over-predictions in the high Delta p gap heating regions of the orbiter. A technique to predict filler bar scorching has been developed that can aid in defining a solution. Improvement in high Delta p gap heating methodology is still under study. Minor anomalies were also examined for improvements in modeling techniques and prediction capabilities. These include improved definition of low Delta p gap heating, an analytical model for inner mode line convection heat transfer, better modeling of structure, and inclusion of sneak heating. The limited number of problems related to penetration items that presented themselves during orbital flight tests were resolved expeditiously, and designs were changed and proved successful within the time frame of that program.

  8. Online calibrations and performance of the ATLAS Pixel Detector

    CERN Document Server

    Keil, M; The ATLAS collaboration

    2010-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN. It consists of 1744 silicon sensors equipped with approximately 80 M electronic channels, providing typically three high-resolution measurement points for particles emerging from the beam-interaction region, thus allowing particle tracks and secondary vertices to be measured with very high precision. The readout system of the Pixel Detector is based on a bi-directional optical data transmission system between the detector and the data acquisition system with an individual link for each of the 1744 modules. Signal conversion components are located on both ends, approximately 80 m apart. The talk will give an overview of the calibration and performance of both the detector and its optical readout. The most basic parameter to be tuned and calibrated for the detector electronics is the readout threshold of the individual pixel channels. These need to be carefully tuned to optimise position resolution a...

  9. Use of calibration methodology of gamma cameras for the workers surveillance using a thyroid simulator

    International Nuclear Information System (INIS)

    Alfaro, M.; Molina, G.; Vazquez, R.; Garcia, O.

    2010-09-01

In Mexico a significant number of nuclear medicine centers are in operation, so there is a risk of accidents related to the transport and handling of the unsealed sources used in nuclear medicine. The National Institute of Nuclear Research (ININ) aims to establish a simple and feasible methodology for the surveillance of workers in the field of nuclear medicine. This radiological surveillance can also be applied to the public in the event of a radiological accident. To achieve this, it is proposed to use the equipment available in nuclear medicine centers, together with the neck-thyroid simulators produced by ININ, to calibrate the gamma cameras. Gamma cameras contain components that form spectrometric systems like those employed in the evaluation of internal incorporation by direct measurement. Therefore, besides their use in diagnostic imaging, they can be calibrated with anthropomorphic simulators, and also with point sources, to quantify the activity of radionuclides distributed homogeneously in the human body or located in specific organs. Within the project IAEA-ARCAL-RLA/9/049-LXXVIII, 'Harmonization of internal dosimetry procedures', in which nine countries participated (Argentina, Brazil, Colombia, Cuba, Chile, Mexico, Peru, Uruguay and Spain), a gamma camera calibration protocol for the in vivo determination of radionuclides was developed. The protocol is the basis for establishing an integrated network in Latin America for emergency response, using the nuclear medicine centers of public hospitals in the region. The objective is to achieve the appropriate radiological protection of workers, essential for the safe and acceptable use of radiation, radioactive materials and nuclear energy. (Author)

  10. Development of a calibration methodology for instruments used to interventional radiology quality control

    International Nuclear Information System (INIS)

    Miranda, Jurema Aparecida de

    2009-01-01

Interventional radiology is the technique in which X-ray images are used as a tool in the conduction of diagnostic and/or therapeutic procedures. The exposure times in both diagnostic and therapeutic procedures are long, may cause serious injuries to the patient, and also contribute to the dose to the clinical staff. In Brazil there are not yet well-established rules for determining doses and performing dosimetry in fluoroscopic beams, so there is great interest in studying beam quality, the half-value layer and other parameters. In this work a Medicor Neo Diagnomax clinical X-ray generator, in fluoroscopy mode, was used to develop a calibration methodology for instruments used in interventional radiology quality control. A PTW plane-parallel ionization chamber was used as monitor. The ionization chambers recommended for fluoroscopy measurements were evaluated and calibrated against the reference ionization chamber of the IPEN Calibration Laboratory. The RQR3, RQR5 and RQR7 radiation qualities, and those specific to fluoroscopy, RQC3, RQC5 and RQC7, were established following the standard IEC 61267, and all beam characteristics were determined. An ionization chamber positioning system and acrylic phantoms for the determination of entrance and exit doses were developed and constructed. The results show air kerma rates of 4.5x10^-3, 1.2x10^-2 and 1.9x10^-2 Gy/min for RQC3, RQC5 and RQC7, respectively. Tests with and without collimation just after the monitor chamber were carried out, and the results showed differences of +5.5%, +0.6% and +0.8%, confirming the importance of using collimation in these interventional procedures. (author)
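The substitution method described in this record, with the chamber under test read against a monitor chamber, reduces to a simple ratio. A minimal sketch, with the monitor normalization convention and all names being illustrative assumptions rather than the paper's actual implementation:

```python
def calibration_coefficient(kerma_rate_ref, reading_chamber, reading_monitor):
    """Calibration coefficient of a chamber under test (Gy/min per
    monitor-normalized reading).  Normalizing the chamber reading to the
    simultaneous monitor reading cancels beam-output fluctuations."""
    normalized_reading = reading_chamber / reading_monitor
    return kerma_rate_ref / normalized_reading


# Illustrative values only: a reference air kerma rate of 1.2e-2 Gy/min
# (as quoted for RQC5) against a hypothetical normalized reading.
n_k = calibration_coefficient(1.2e-2, reading_chamber=0.006, reading_monitor=1.0)
```

The corrected kerma rate for a later measurement is then simply the normalized reading multiplied by this coefficient.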

  11. Suomi-NPP VIIRS Day-Night Band On-Orbit Calibration and Performance

    Science.gov (United States)

    Chen, Hongda; Xiong, Xiaoxiong; Sun, Chengbo; Chen, Xuexia; Chiang, Kwofu

    2017-01-01

    The Suomi national polar-orbiting partnership Visible Infrared Imaging Radiometer Suite (VIIRS) instrument has successfully operated since its launch in October 2011. The VIIRS day-night band (DNB) is a panchromatic channel covering wavelengths from 0.5 to 0.9 microns that is capable of observing Earth scenes during both daytime and nighttime at a spatial resolution of 750 m. To cover the large dynamic range, the DNB operates at low-, middle-, and high-gain stages, and it uses an on-board solar diffuser (SD) for its low-gain stage calibration. The SD observations also provide a means to compute the gain ratios of low-to-middle and middle-to-high gain stages. This paper describes the DNB on-orbit calibration methodology used by the VIIRS characterization support team in supporting the NASA Earth science community with consistent VIIRS sensor data records made available by the land science investigator-led processing systems. It provides an assessment and update of the DNB on-orbit performance, including the SD degradation in the DNB spectral range, detector gain and gain ratio trending, and stray-light contamination and its correction. Also presented in this paper are performance validations based on Earth scenes and lunar observations, and comparisons to the calibration methodology used by the operational interface data processing segment.
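The low-to-middle and middle-to-high gain ratios mentioned above are, in essence, scale factors between gain stages viewing the same solar-diffuser radiance. A hedged sketch of how such a ratio could be estimated from paired observations (the function name and the least-squares fit through the origin are illustrative assumptions, not the VIIRS characterization support team's actual algorithm):

```python
import numpy as np

def gain_ratio(dn_sensitive, dn_coarse):
    """Estimate a gain-stage ratio from paired detector responses taken
    while both stages observe the same solar-diffuser radiance.

    Fits dn_sensitive ~= ratio * dn_coarse by least squares through the
    origin, which is robust to noise on individual samples."""
    dn_sensitive = np.asarray(dn_sensitive, dtype=float)
    dn_coarse = np.asarray(dn_coarse, dtype=float)
    return float(np.dot(dn_coarse, dn_sensitive) / np.dot(dn_coarse, dn_coarse))
```

With the ratios in hand, the SD-derived low-gain calibration can be propagated to the middle- and high-gain stages by multiplication.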

  12. Performance and quality control of radionuclide calibrators in nuclear medicine

    International Nuclear Information System (INIS)

    Woods, M.J.; Baker, M.

    2002-01-01

Full text: The use of ionising radiations in nuclear medicine has traditionally been divided into two specific areas. Diagnostic usage has generally been dominated by the injection or ingestion of radionuclides. Therapeutic applications, on the other hand, have usually been accomplished by the application of ionising radiation, from both machines and radionuclide sources, whereby the radiation source is external to the patient. Over recent years this divide has become increasingly blurred, and the science of diagnosis and that of therapy have moved significantly closer together. This is particularly the case with respect to the instruments used to determine the activity or dose delivered by the radiation source. In the ideal therapeutic situation, the radiation dose would be delivered solely to the malignant tissue. With external radiation therapy this can never be achieved completely, but the ideal can be approached more closely with targeted radiotherapy, wherein radionuclides are introduced directly into the malignancy, either as a solid physical source or as a solution that, by its chemistry, concentrates into the area of interest. In order to achieve the optimum efficacy of treatment, there is an associated requirement to determine accurately the activity or dose rate of the radioactive source being used. It is here that the technology used in the diagnostic field can also be used to advantage for therapeutic applications. For diagnosis, the instrument of choice is the radionuclide calibrator, and this equipment is increasingly finding parallel usage for the characterisation of therapeutic sources. Despite their appearance, however, radionuclide calibrators are not 'black boxes': they need to be used with care, subjected to a robust level of quality control, and operated by personnel who have a fundamental understanding of their operational characteristics. A measure of the level of performance of operational radionuclide calibrators and the competence of their

  13. Methodology for calibration of ionization chambers for X-ray of low energy in absorbed dose to water

    International Nuclear Information System (INIS)

    Oliveira, C.T.; Vivolo, V.; Potiens, M.P.A.

    2015-01-01

Low-energy X-ray beams (10 to 150 kV) are used in many places around the world to treat a wide variety of surface disorders, among them malignancies. Since in Brazil there is at present no calibration laboratory providing a quality control or calibration service for parallel-plate ionization chambers, the aim of this project was to establish a methodology for the calibration of this kind of ionization chamber in low-energy X-ray beams in terms of absorbed dose to water, using simulators, at the LCI. (author)

  14. On the development of a new methodology in sub-surface parameterisation on the calibration of groundwater models

    Science.gov (United States)

    Klaas, D. K. S. Y.; Imteaz, M. A.; Sudiayem, I.; Klaas, E. M. E.; Klaas, E. C. M.

    2017-10-01

In groundwater modelling, robust parameterisation of sub-surface parameters is crucial to obtaining an agreeable model performance. Pilot points are an alternative in the parameterisation step for correctly configuring the distribution of parameters in a model. However, the methodologies given by current studies are considered less practical for application under real catchment conditions. In this study, a practical approach using the geometric features of pilot points and the distribution of the hydraulic gradient over the catchment area is proposed to efficiently configure the pilot point distribution in the calibration step of a groundwater model. A new pilot point distribution, the Head Zonation-based (HZB) technique, which is based on the hydraulic gradient distribution of groundwater flow, is presented. Seven models with seven zone ratios (1, 5, 10, 15, 20, 25 and 30) using the HZB technique were constructed for an eogenetic karst catchment on Rote Island, Indonesia, and their performances were assessed. This study also offers some insights into the trade-off between restricting and maximising the number of pilot points, and proposes a new methodology for selecting pilot point properties and the distribution method in the development of a physically-based groundwater model.
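The core idea of zoning by hydraulic gradient can be illustrated with a quantile-based partition of model cells: steeper-gradient zones would then receive denser pilot points. This sketch only illustrates the zoning concept; the function name and the quantile construction are assumptions, not the paper's actual HZB zone-ratio procedure:

```python
import numpy as np

def head_zonation(hydraulic_gradient, n_zones):
    """Partition model cells into n_zones zones by quantiles of the
    hydraulic-gradient magnitude, so each zone holds roughly the same
    number of cells; pilot points can then be allocated per zone."""
    gradient = np.asarray(hydraulic_gradient, dtype=float)
    edges = np.quantile(gradient, np.linspace(0.0, 1.0, n_zones + 1))
    edges[-1] += 1e-12  # make the top edge inclusive of the maximum
    zones = np.digitize(gradient, edges) - 1
    return np.clip(zones, 0, n_zones - 1)
```

For example, `head_zonation(gradients, 5)` labels each cell 0 through 4, after which one pilot point (or several, in high-gradient zones) can be placed per zone.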

  15. Prelaunch calibrations and on-orbit performance analysis of FY-2D SVISSR infrared channels

    Science.gov (United States)

    Zhang, Yong; Chen, Fuchun

    2014-10-01

Meteorological satellites have become an irreplaceable weather and ocean-observing tool in China. These satellites are used to monitor natural disasters and improve the efficiency of many sectors of the Chinese national economy. The FY-2 series satellites are one of the key components of the Chinese meteorological observing system and its application system. In this paper, the infrared channels of the operational satellite FY-2D are analyzed. The instruments' background is introduced briefly, the specifications of the main payload, SVISSR, are compared with those of its ancestor VISSR, and the optical structure of the SVISSR is described. The FY-2D prelaunch calibration methodology is introduced and the accuracies of the absolute radiometric calibration are analyzed. The on-orbit performance of some key SVISSR optics is also analyzed, including the onboard blackbody, the cold FPA, and the detector noise level. All of this work shows that FY-2D's main payload SVISSR is in a healthy status.

  16. Calibration of in vitro bioassay methodology for determination of ¹³¹I in urine

    International Nuclear Information System (INIS)

    Carvalho, C.B.; Hazin, C.; Lima, A.R.

    2015-01-01

The use of unsealed radioactive sources in institutions practicing nuclear medicine poses a significant risk of internal exposure of workers. In this context, handling of ¹³¹I plays an important role in relation to other radionuclides due to its wide application, particularly in medical diagnosis and in the therapy of diseases related to the thyroid gland. Given the increasing number of services using ¹³¹I in their examination protocols, the probability of accidental incorporation of this radionuclide has increased. The present study aimed to implement methodologies for in vitro bioassay at the Centro Regional de Ciências Nucleares do Nordeste (CRCN-NE/CNEN), Recife, Brazil, for the internal monitoring of individuals occupationally exposed to ¹³¹I. For the in vitro system calibration, a coaxial HPGe detector model GC1018 and a standard ¹³³Ba source were used. Upon obtaining the calibration factor, it was possible to determine the minimum detectable activities (MDA) for the system by using direct measurements of distilled water simulating urine (in vitro). Then, by using the biokinetic models provided by the International Commission on Radiological Protection, edited with the AIDE software version 6.0, it was possible to estimate the Minimum Detectable Effective Dose (MDED). The MDED values obtained were compared to the record level of 1 mSv recommended by the International Atomic Energy Agency for the 24-h urine compartment. The values found were lower than the record level of 1 mSv in all simulated incorporation scenarios: inhalation of vapor and of particles with AMAD of 1 μm and 5 μm, type F compound, and ingestion. The results of this work show that the implemented technique is suitable for conducting internal monitoring of workers exposed to ¹³¹I. It is intended to continue the work aiming at the monitoring of occupationally exposed individuals from Nuclear Medicine Services in Recife, Brazil. (authors)
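The MDA of a gamma-spectrometric counting system such as this is commonly obtained from Currie's detection-limit expression. The abstract does not state the exact formula used, so this sketch assumes the standard form MDA = (2.71 + 4.65·sqrt(B)) / (ε·t·p):

```python
import math

def currie_mda(background_counts, efficiency, count_time_s, gamma_yield):
    """Currie minimum detectable activity (Bq), assuming the standard
    expression L_D = 2.71 + 4.65*sqrt(B) for the detection limit in
    counts, divided by efficiency, counting time and emission yield."""
    detection_limit_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    return detection_limit_counts / (efficiency * count_time_s * gamma_yield)
```

For instance, with 100 background counts, 5% detection efficiency, a 1000 s count and an emission probability of 0.8, the MDA is about 1.23 Bq; converting such an MDA to an MDED then requires the ICRP biokinetic models mentioned in the abstract.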

  17. A methodology for calibration of hyperspectral and multispectral satellite data in coastal areas

    Science.gov (United States)

    Pennucci, Giuliana; Fargion, Giulietta; Alvarez, Alberto; Trees, Charles; Arnone, Robert

    2012-06-01

The objective of this work is to determine the location(s) in any given oceanic area, during different temporal periods, where in situ sampling for Calibration/Validation (Cal/Val) provides the best capability to retrieve accurate radiometric and derived product data (lowest uncertainties). We present a method to merge satellite imagery with in situ measurements, to determine the best in situ sampling strategy suitable for satellite Cal/Val, and to evaluate the present in situ locations through uncertainty indices. This analysis is required to determine whether the present in situ sites are adequate for assessing uncertainty, and where additional sites and ship programs should be located to improve Cal/Val procedures. Our methodology uses satellite acquisitions to build a covariance matrix encoding the spatial-temporal variability of the area of interest. The covariance matrix is used in a Bayesian framework to merge satellite and in situ data, providing a product with lower uncertainty. The best in situ location for Cal/Val is then identified by using a design principle (A-optimum design) that minimizes the estimated variance of the merged products. Satellite products investigated in this study include Ocean Color water-leaving radiance, chlorophyll, and inherent and apparent optical properties (retrieved from MODIS and VIIRS). In situ measurements are obtained from systems operated on fixed deployment platforms (e.g., sites of the Ocean Color component of the AErosol RObotic NETwork, AERONET-OC), moorings (e.g., the Marine Optical Buoy, MOBY), ships, or autonomous vehicles (such as Autonomous Underwater Vehicles and/or gliders).

  18. Investigation of a Fiber Optic Strain Sensing (FOSS) Distributed Load Calibration Methodology

    Data.gov (United States)

    National Aeronautics and Space Administration — FOSS is a relatively newer technology that needs to be explored for application to load calibration and loads monitoring efforts. Load calibration opportunities are...

  19. Methodology for evaluation of diagnostic performance

    International Nuclear Information System (INIS)

    Metz, C.E.

    1992-01-01

    Effort in this project during the past year has focused on the development, refinement, and distribution of computer software that will allow current Receiver Operating Characteristic (ROC) methodology to be used conveniently and reliably by investigators in a variety of evaluation tasks in diagnostic medicine; and on the development of new ROC methodology that will broaden the spectrum of evaluation tasks and/or experimental settings to which the fundamental approach can be applied. Progress has been limited by the amount of financial support made available to the project

  20. Updates on the Performance and Calibration of HST/STIS

    Science.gov (United States)

    Lockwood, Sean A.; Monroe, TalaWanda R.; Ogaz, Sara; Branton, Doug; Carlberg, Joleen K.; Debes, John H.; Jedrzejewski, Robert I.; Proffitt, Charles R.; Riley, Allyssa; Sohn, Sangmo Tony; Sonnentrucker, Paule; Walborn, Nolan R.; Welty, Daniel

    2018-06-01

    The Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST) has been in orbit for 21 years and continues to produce high quality scientific results using a diverse complement of operating modes. These include spatially resolved spectroscopy in the UV and optical, high spatial resolution echelle spectroscopy in the UV, and solar-blind imaging in the UV. In addition, STIS possesses unique visible-light coronagraphic modes that keep the instrument at the forefront of exoplanet and debris-disk research. As the instrument's characteristics evolve over its lifetime, the instrument team at the Space Telescope Science Institute monitors its performance and works towards improving the quality of its data products. Here we present updates on the status of the STIS CCD and FUV & NUV MAMA detectors, as well as changes to the CalSTIS reduction pipeline. We also discuss progress toward the recalibration of the E140M/1425 echelle mode. The E140M grating blaze function shapes have changed since flux calibration was carried out following SM4, which limits the relative photometric flux accuracy of some spectral orders up to 5-10% at the edges. In Cycle 25 a special calibration program was executed to obtain updated sensitivity curves for the E140M/1425 setting.

  1. (Per)Forming Archival Research Methodologies

    Science.gov (United States)

    Gaillet, Lynee Lewis

    2012-01-01

    This article raises multiple issues associated with archival research methodologies and methods. Based on a survey of recent scholarship and interviews with experienced archival researchers, this overview of the current status of archival research both complicates traditional conceptions of archival investigation and encourages scholars to adopt…

  2. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    International Nuclear Information System (INIS)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran

    2007-11-01

deeper MIKE SHE model down to less fractured bedrock, may also be interesting to evaluate. It is recommended that the observations above are further evaluated in connection with the next modelling phase for Laxemar during 2008. A sensitivity analysis has been made on calibration parameters. The most important results from the sensitivity analysis are the following: The hydraulic conductivity in the saturated zone proved to be more important than all of the tested vegetation and unsaturated zone parameters. The second most important parameters were the hydraulic conductivity of the unsaturated zone (Ks) and the specific yield (Sy). A lower hydraulic conductivity in the saturated zone increases the peak surface water flows, decreases the base flows, and increases the groundwater head amplitudes and the groundwater head elevations. A lower hydraulic conductivity in the unsaturated zone (Ks) increases the surface water flows and, to some extent, decreases the groundwater head elevations. A lower specific yield in the unsaturated zone (Sy) increases the surface water flows (although with a smaller effect than Ks), increases the groundwater head amplitudes and, to some extent, increases the groundwater head elevations. A method for performing the calibrations of future models is also presented, based on the results from the base case simulations and the sensitivity analysis

  3. Online Calibration and Performance of the ATLAS Pixel Detector

    CERN Document Server

    Keil, M

    2011-01-01

The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN. It consists of 1744 silicon sensors equipped with approximately 80 million electronic channels, providing typically three high-resolution measurement points for particles emerging from the beam-interaction region, thus allowing particle tracks and secondary vertices to be measured with very high precision. The readout system of the Pixel Detector is based on a bi-directional optical data transmission system between the detector and the data acquisition system, with an individual link for each of the 1744 modules. Signal conversion components are located on both ends, approximately 80 m apart. This paper describes the tuning and calibration of the optical links and the detector modules, including measurements of threshold, noise, charge measurement, timing performance and the sensor leakage current.

  4. A theoretical and experimental investigation of propeller performance methodologies

    Science.gov (United States)

    Korkan, K. D.; Gregorek, G. M.; Mikkelson, D. C.

    1980-01-01

This paper briefly covers aspects related to propeller performance by means of a review of propeller methodologies; a presentation of wind tunnel propeller performance data taken in the NASA Lewis Research Center 10 x 10 wind tunnel; a discussion of the predominant limitations of existing propeller performance methodologies; and a brief review of airfoil developments appropriate for propeller applications.

  5. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    International Nuclear Information System (INIS)

    Aneljung, Maria; Gustafsson, Lars-Goeran

    2007-04-01

. Differences in the aquifer refilling process subsequent to dry periods, for example a too slow refill when the groundwater table rises after dry summers. This may be due to local deviations in the applied pF-curves in the unsaturated zone description. Differences in near-surface groundwater elevations. For example, the calculated groundwater level reaches the ground surface during the fall and spring at locations where the measured groundwater depth is just below the ground surface. This may be due to the presence of near-surface high-conductive layers. A sensitivity analysis has been made on calibration parameters. For parameters that have 'global' effects, such as the hydraulic conductivity in the saturated zone, the analysis was performed using the 'full' model. For parameters with more local effects, such as parameters influencing the evapotranspiration and the net recharge, the model was scaled down to a column model representing two different type areas. The most important conclusions that can be drawn from the sensitivity analysis are the following: The results indicate that the horizontal hydraulic conductivity generally should be increased at topographic highs, and reduced at local depressions in the topography. The results indicate that no changes should be made to the vertical hydraulic conductivity at locations where the horizontal conductivity has been increased, and that the vertical conductivity generally should be decreased where the horizontal conductivity has been decreased. The vegetation parameters that have the largest influence on the total groundwater recharge are the root mass distribution and the crop coefficient. The unsaturated zone parameter that has the largest influence on the total groundwater recharge is the effective porosity given in the pF-curve. In addition, the shape of the pF-curve above the water content at field capacity is also of great importance. The general conclusion is that the surrounding conditions have large effects on water

  6. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Aneljung, Maria; Gustafsson, Lars-Goeran [DHI Water and Environment AB, Goeteborg (Sweden)

    2007-04-15

. Differences in the aquifer refilling process subsequent to dry periods, for example a too slow refill when the groundwater table rises after dry summers. This may be due to local deviations in the applied pF-curves in the unsaturated zone description. Differences in near-surface groundwater elevations. For example, the calculated groundwater level reaches the ground surface during the fall and spring at locations where the measured groundwater depth is just below the ground surface. This may be due to the presence of near-surface high-conductive layers. A sensitivity analysis has been made on calibration parameters. For parameters that have 'global' effects, such as the hydraulic conductivity in the saturated zone, the analysis was performed using the 'full' model. For parameters with more local effects, such as parameters influencing the evapotranspiration and the net recharge, the model was scaled down to a column model representing two different type areas. The most important conclusions that can be drawn from the sensitivity analysis are the following: The results indicate that the horizontal hydraulic conductivity generally should be increased at topographic highs, and reduced at local depressions in the topography. The results indicate that no changes should be made to the vertical hydraulic conductivity at locations where the horizontal conductivity has been increased, and that the vertical conductivity generally should be decreased where the horizontal conductivity has been decreased. The vegetation parameters that have the largest influence on the total groundwater recharge are the root mass distribution and the crop coefficient. The unsaturated zone parameter that has the largest influence on the total groundwater recharge is the effective porosity given in the pF-curve. In addition, the shape of the pF-curve above the water content at field capacity is also of great importance. The general conclusion is that the surrounding conditions have

  7. Calibration requirements and methodology for remote sensors viewing the ocean in the visible

    Science.gov (United States)

    Gordon, Howard R.

    1987-01-01

    The calibration requirements for ocean-viewing sensors are outlined, and the present methods of effecting such calibration are described in detail. For future instruments it is suggested that provision be made for the sensor to view solar irradiance in diffuse reflection and that the moon be used as a source of diffuse light for monitoring the sensor stability.

  8. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    International Nuclear Information System (INIS)

    Stull, Christopher J.; Williams, Brian J.; Unal, Cetin

    2012-01-01

    would be reduced considerably. The authors note that the PMI is primarily intended to provide a high-level, quantitative description of year-to-year (or version-to-version) improvements in numerical models, where these descriptions can be used as a means of justifying funding requests to support further model development research. It is in this context that the present report should be considered: the availability of data from experimental tests should be viewed as a time-dependent variable, where experiments are added to the calibration suite as resources become available. For the present report, the experimental data is of course already available (permitting demonstration of the proposed methodology). Furthermore, the authors are not proposing this methodology as the answer to the question of how to allocate resources for experimental tests, and readers are directed to (5) and the references contained in Section 1 of (5) for additional information on the subject. However, the strength of this methodology is that it offers a means by which to select the sequence of experiments in a pre-arranged experimental campaign (a situation for which the methods discussed in (5) are less appropriate). The report is organized as follows. Section 2 describes the methodology employed to formulate the sequences of experiments for the calibrations performed for this study. Section 3 then presents the results associated with two sequences; supplementary results are provided in the Appendix. The report then concludes in Section 4 with a brief summary.

  9. Modelling of thermal hydraulics in a KAROLINA calorimeter for its calibration methodology validation

    Directory of Open Access Journals (Sweden)

    Luks Aleksandra

    2016-12-01

Full Text Available Results of numerical calculations of heat exchange in a nuclear heating detector for nuclear reactors are presented in this paper. Gamma radiation is generated in a nuclear reactor during fission and radiative capture reactions, as well as by the radioactive decay of their products. A single-cell calorimeter has been designed for application in the MARIA research reactor at the National Centre for Nuclear Research (NCBJ) in Świerk near Warsaw, Poland, and can also be used in the Jules Horowitz Reactor (JHR), which is under construction at the research centre in Cadarache, France. It consists of a cylindrical sample, surrounded by a gas layer, contained in a cylindrical housing. Additional calculations had to be performed before its insertion into the reactor. Within this analysis, modern computational fluid dynamics (CFD) methods have been used for assessing important parameters, for example, the mean surface temperature, the mean volume temperature, and the maximum sample (calorimeter core) temperature. Results of an experiment performed at a dedicated out-of-pile calibration bench and results of numerical modelling validation are also included in this paper.

  10. Calibration between Undergraduate Students' Prediction of and Actual Performance: The Role of Gender and Performance Attributions

    Science.gov (United States)

    Gutierrez, Antonio P.; Price, Addison F.

    2017-01-01

    This study investigated changes in male and female students' prediction and postdiction calibration accuracy and bias scores, and the predictive effects of explanatory styles on these variables beyond gender. Seventy undergraduate students rated their confidence in performance before and after a 40-item exam. There was an improvement in students'…

  11. A performance assessment methodology for low-level waste facilities

    International Nuclear Information System (INIS)

    Kozak, M.W.; Chu, M.S.Y.; Mattingly, P.A.

    1990-07-01

    A performance assessment methodology has been developed for use by the US Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. This report provides a summary of background reports on the development of the methodology and an overview of the models and codes selected for the methodology. The overview includes discussions of the philosophy and structure of the methodology and a sequential procedure for applying the methodology. Discussions are provided of models and associated assumptions that are appropriate for each phase of the methodology, the goals of each phase, data required to implement the models, significant sources of uncertainty associated with each phase, and the computer codes used to implement the appropriate models. In addition, a sample demonstration of the methodology is presented for a simple conceptual model. 64 refs., 12 figs., 15 tabs

  12. Amtrak performance tracking (APT) system : methodology summary

    Science.gov (United States)

    2017-09-15

The Volpe Center collaborated with Amtrak and the Federal Railroad Administration (FRA) to develop a cost accounting system named Amtrak Performance Tracking (APT), used by Amtrak to manage, allocate, and report its costs. APT's initial development ...

  13. Calibration methodology application of kerma area product meters in situ: Preliminary results

    Science.gov (United States)

    Costa, N. A.; Potiens, M. P. A.

    2014-11-01

    The kerma-area product (KAP) is a useful quantity for establishing reference levels in conventional X-ray examinations. It can be obtained from measurements with a KAP meter, a plane-parallel transmission ionization chamber mounted on the X-ray system. A KAP meter can be calibrated in the laboratory or in situ, where it is used. A reference KAP meter is needed in order to obtain reliable patient dose values. The Patient Dose Calibrator (PDC) is a new Radcal instrument that measures KAP. It was manufactured following the recommendations of IEC 60580, the international standard for KAP meters. The aim of this study was to calibrate KAP meters in situ using the PDC. Previous studies and the PDC quality control program have shown that it performs well in characterization tests of ionization-chamber dosimeters and has low energy dependence. Three types of KAP meters were calibrated on four different diagnostic X-ray units. The voltages used in the first two calibrations were 50 kV, 70 kV, 100 kV and 120 kV; the other two used 50 kV, 70 kV and 90 kV, owing to the limitations of the units. The field sizes used for the calibration were 10 cm, 20 cm and 30 cm. The calibrations were performed in three different cities in order to assess the reproducibility of the PDC. The results gave the calibration coefficient for each KAP meter and showed that the PDC can be used as a reference instrument to calibrate clinical KAP meters.
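    The in-situ calibration described above reduces to computing, for each beam quality, the ratio of the reference (PDC) reading to the clinical KAP meter reading. A minimal sketch with hypothetical readings (the abstract does not publish its measured values):

```python
def calibration_coefficient(reference_kap, clinical_kap):
    """Ratio of the reference instrument's reading to the clinical
    KAP meter's reading at one beam quality."""
    return reference_kap / clinical_kap

# Hypothetical paired readings in Gy*cm^2, keyed by tube voltage (kV).
readings = {
    50: (1.04, 0.98),    # (reference PDC, clinical KAP meter)
    70: (2.10, 2.02),
    100: (3.55, 3.48),
    120: (4.40, 4.31),
}
coefficients = {kv: calibration_coefficient(ref, clin)
                for kv, (ref, clin) in readings.items()}
```

A clinical reading is then corrected by multiplying it by the coefficient for the matching beam quality.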

  14. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)

    2001-07-01

    Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this, a new research contract is underway to enhance the model and verify its predictions, particularly at extreme points such as low- and high-traffic-volume pavement designs. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs, and the existing life-cycle cost analysis procedure was modified to include both agency cost and road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate. Construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model is described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, where a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and the most effective design using cumulative Equivalent Single Axle Loads (ESALs), subgrade type and layer thickness. The new mechanistic-empirical model separates the environmental and traffic effects on performance, which makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or Riding Comfort Index.
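    The cumulative ESALs mentioned above convert a mixed axle-load spectrum into equivalent passes of a standard 80 kN axle. As an illustration only, using the classical fourth-power approximation (not the OPAC or calibrated model from the study):

```python
STANDARD_AXLE_KN = 80.0  # standard single axle (~18,000 lb)

def load_equivalency(axle_load_kn, exponent=4.0):
    """Relative damage of one axle pass under the classical
    fourth-power approximation."""
    return (axle_load_kn / STANDARD_AXLE_KN) ** exponent

def cumulative_esals(axle_spectrum):
    """Total ESALs for an iterable of (axle load in kN, passes)."""
    return sum(passes * load_equivalency(load)
               for load, passes in axle_spectrum)

# Hypothetical annual axle-load spectrum.
spectrum = [(40.0, 100_000), (80.0, 20_000), (100.0, 5_000)]
total_esals = cumulative_esals(spectrum)
```

Note how the light 40 kN axles, despite five times as many passes, contribute far fewer ESALs than the 100 kN axles.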

  15. On the performance of a calibrated nanoparticle generator

    International Nuclear Information System (INIS)

    Backman, Ulrika; Lyyraenen, Jussi; Tapper, Unto; Auvinen, Ari; Jokiniemi, Jorma

    2009-01-01

    There is a need in many fields for nanoparticle generators with well-characterised properties. For instance, calibration of measurement instruments can then be done in place, reducing instrument downtime. A well-characterised particle source is also very important in nanoparticle toxicity experiments. The aim of this study was to develop a calibrated nanoparticle generator with stable particle production, whose number concentration can be regulated over many orders of magnitude and whose particle size is also adjustable. In this paper the design of the nanoparticle generator and the properties of the nanoparticles produced at one furnace temperature are presented.

  16. Calibration and performance measurements for the NASA Deep Space Network Aperture Enhancement Project (DAEP)

    Science.gov (United States)

    LaBelle, Remi C.; Rochblatt, David J.

    2018-06-01

    The NASA Deep Space Network (DSN) has recently constructed two new 34-m antennas at the Canberra Deep Space Communications Complex (CDSCC). These new antennas are part of the larger DAEP project to add six new 34-m antennas to the DSN: two in Madrid, three in Canberra and one in Goldstone (California). The DAEP project included development and implementation of several new technologies for the X- and Ka-band (32 GHz) uplink and downlink electronics. The electronics upgrades were driven by several considerations, including parts obsolescence, cost reduction, improved reliability and maintainability, and the capability to meet future performance requirements. The new antennas are required to support TT&C links for all of the NASA deep-space spacecraft, as well as for several international partners. Some of these missions, such as Voyager 1 and 2, have very limited link budgets, which results in demanding requirements for system G/T performance. These antennas are also required to support radio science missions with several spacecraft, which dictate demanding requirements for spectral purity, amplitude stability and phase stability for both the uplink and downlink electronics. After completion of these upgrades, a comprehensive campaign of tests and measurements took place to characterize the electronics and calibrate the antennas. Radiometric measurement techniques, including optical and RF high-resolution holographic and total-power radiometry techniques, were applied to characterize, calibrate, and optimize the antenna parameters. The methodology and techniques used for the measurement and calibration of the antennas are described in this paper. Lessons learned (not all discussed in this paper) from the commissioning of the first antenna (DSS-35) were applied to the commissioning of the second antenna (DSS-36), resulting in an antenna aperture efficiency of 66% for DSS-36 at Ka-band (32 GHz).

  17. Methodology for the development and calibration of the SCI-QOL item banks.

    Science.gov (United States)

    Tulsky, David S; Kisala, Pamela A; Victorson, David; Choi, Seung W; Gershon, Richard; Heinemann, Allen W; Cella, David

    2015-05-01

    To develop a comprehensive, psychometrically sound, and conceptually grounded patient-reported outcome (PRO) measurement system for individuals with spinal cord injury (SCI), individual interviews (n=44) and focus groups (n=65 individuals with SCI and n=42 SCI clinicians) were used to select key domains for inclusion and to develop PRO items. Verbatim items from other cutting-edge measurement systems (i.e., PROMIS, Neuro-QOL) were included to facilitate linkage and cross-population comparison. Items were field-tested in a large sample of individuals with traumatic SCI (n=877). Dimensionality was assessed with confirmatory factor analysis. Local item dependence and differential item functioning were assessed, and items were calibrated using the item response theory (IRT) graded response model. Finally, computer adaptive tests (CATs) and short forms were administered in a new sample (n=245) to assess test-retest reliability and stability. The calibration sample of 877 individuals with traumatic SCI, drawn from five SCI Model Systems sites and one Department of Veterans Affairs medical center, completed SCI-QOL items in interview format. We developed 14 unidimensional calibrated item banks and 3 calibrated scales across physical, emotional, and social health domains. When combined with the five Spinal Cord Injury-Functional Index physical function banks, the final SCI-QOL system consists of 22 IRT-calibrated item banks/scales. Item banks may be administered as CATs or short forms; scales may be administered in a fixed-length format only. The SCI-QOL measurement system provides SCI researchers and clinicians with a comprehensive, relevant and psychometrically robust system for measurement of physical-medical, physical-functional, emotional, and social outcomes. All SCI-QOL instruments are freely available in Assessment Center(SM).
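    For readers unfamiliar with the graded response model used for the item calibration above, here is a sketch of Samejima-style category probabilities for one polytomous item; the parameter values are made up, not SCI-QOL estimates:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def grm_category_probs(theta, a, thresholds):
    """Graded response model: category probabilities for one
    polytomous item. `thresholds` must be strictly increasing."""
    # Boundary curves P(X >= k), padded with P(X >= 0) = 1 and
    # P(X > top category) = 0.
    cum = [1.0] + [logistic(a * (theta - b)) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Made-up parameters for a 4-category item: discrimination a and
# three category thresholds, evaluated at latent trait theta = 0.5.
probs = grm_category_probs(theta=0.5, a=1.8, thresholds=[-1.0, 0.0, 1.2])
```

Because the thresholds are increasing, the boundary curves are decreasing in k, so every category probability is positive and the four probabilities sum to one.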

  18. Effect of calibration data series length on performance and optimal parameters of hydrological model

    Directory of Open Access Journals (Sweden)

    Chuan-zhe Li

    2010-12-01

    In order to assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (where data are non-continuous and fragmentary), we used non-continuous calibration periods to obtain more independent streamflow data for calibration of SIMHYD (a simple hydrology model). Nash-Sutcliffe efficiency and percentage water balance error were used as performance measures. The particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff models. Different lengths of data series, ranging from one year to ten years and randomly sampled, were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located all over Australia, with daily precipitation, potential evapotranspiration, and streamflow data, were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. It is also shown that most humid catchments require fewer calibration data to obtain good performance and stable parameter values. The model performs better in humid and semi-humid catchments than in arid catchments. Our results may have useful and interesting implications for the efficiency of using limited observation data for hydrological model calibration in different climates.
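    The two performance measures named above are straightforward to compute; a minimal sketch with synthetic flow series (not the study's data):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the model is no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def water_balance_error(observed, simulated):
    """Percentage water-balance (total volume) error."""
    return 100.0 * (sum(simulated) - sum(observed)) / sum(observed)

# Synthetic daily streamflow (mm/day), purely illustrative.
obs = [1.2, 3.4, 2.8, 0.9, 1.5]
sim = [1.0, 3.6, 2.5, 1.1, 1.4]
nse = nash_sutcliffe(obs, sim)
wbe = water_balance_error(obs, sim)
```

An optimizer such as PSO would then search the model's parameter space to maximize `nse` while keeping `wbe` near zero.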

  19. Iowa calibration of MEPDG performance prediction models : [summary].

    Science.gov (United States)

    2013-06-01

    The latest AASHTOWare DARWin-ME (now referred to as Pavement ME Design) and the Mechanistic-Empirical Pavement Design Guide (MEPDG) (AASHTO 2008) are significantly improved methodologies for the analysis and design of pavement structures. DARWin...

  20. Calibration and reconstruction performances of the KLOE electromagnetic calorimeter

    International Nuclear Information System (INIS)

    Adinolfi, M.; Aloisio, A.; Ambrosino, F.; Andryakov, A.; Antonelli, A.; Antonelli, M.; Anulli, F.; Bacci, C.; Bankamp, A.; Barbiellini, G.; Bellini, F.; Bencivenni, G.; Bertolucci, S.; Bini, C.; Bloise, C.; Bocci, V.; Bossi, F.; Branchini, P.; Bulychjov, S.A.; Cabibbo, G.; Calcaterra, A.; Caloi, R.; Campana, P.; Capon, G.; Carboni, G.; Cardini, A.; Casarsa, M.; Cataldi, G.; Ceradini, F.; Cervelli, F.; Cevenini, F.; Chiefari, G.; Ciambrone, P.; Conetti, S.; Conticelli, S.; Lucia, E. De; Robertis, G. De; Sangro, R. De; Simone, P. De; Zorzi, G. De; Dell'Agnello, S.; Denig, A.; Domenico, A. Di; Donato, C. Di; Falco, S. Di; Doria, A.; Drago, E.; Elia, V.; Erriquez, O.; Farilla, A.; Felici, G.; Ferrari, A.; Ferrer, M.L.; Finocchiaro, G.; Forti, C.; Franceschi, A.; Franzini, P.; Gao, M.L.; Gatti, C.; Gauzzi, P.; Giovannella, S.; Golovatyuk, V.; Gorini, E.; Grancagnolo, F.; Grandegger, W.; Graziani, E.; Guarnaccia, P.; Hagel, U.V.; Han, H.G.; Han, S.W.; Huang, X.; Incagli, M.; Ingrosso, L.; Jang, Y.Y.; Kim, W.; Kluge, W.; Kulikov, V.; Lacava, F.; Lanfranchi, G.; Lee-Franzini, J.; Lomtadze, F.; Luisi, C.; Mao, C.S.; Martemianov, M.; Matsyuk, M.; Mei, W.; Merola, L.; Messi, R.; Miscetti, S.; Moalem, A.; Moccia, S.; Moulson, M.; Mueller, S.; Murtas, F.; Napolitano, M.; Nedosekin, A.; Panareo, M.; Pacciani, L.; Pages, P.; Palutan, M.; Paoluzi, L.; Pasqualucci, E.; Passalacqua, L.; Passaseo, M.; Passeri, A.; Patera, V.; Petrolo, E.; Petrucci, G.; Picca, D.; Pirozzi, G.; Pistillo, C.; Pollack, M.; Pontecorvo, L.; Primavera, M.; Ruggieri, F.; Santangelo, P.; Santovetti, E.; Saracino, G.; Schamberger, R.D.; Schwick, C.; Sciascia, B.; Pirozzi, G.; Sciubba, A.; Scuri, F.; Sfiligoi, I.; Shan, J.; Silano, P.; Spadaro, T.; Spagnolo, S.; Spiriti, E.; Stanescu, C.; Tong, G.L.; Tortora, L.; Valente, E.; Valente, P.; Valeriani, B.; Venanzoni, G.; Veneziano, S.; Wu, Y.; Xie, Y.G.; Zhao, P.P.; Zhou, Y.

    2001-01-01

    The main aim of the KLOE experiment at DAΦNE, the Frascati φ-factory, is to study CP violation in the K0-anti-K0 system. Requirements on shower detection are very stringent. A hermetic, lead/scintillating-fiber sampling calorimeter has been chosen and built. A review of the methods used to calibrate and reconstruct energy and timing is reported in this paper. Emphasis is given to the calibration procedures developed using the 2.4 pb⁻¹ collected in 1999. An energy resolution of 5.7%/√(E/GeV) is achieved, together with a linearity in energy response better than 1% above 50 MeV. A time resolution of ∼54 ps/√(E/GeV) is also measured on samples of radiative Bhabha events and φ decays.

  1. Methodology for performing measurements to release material from radiological control

    International Nuclear Information System (INIS)

    Durham, J.S.; Gardner, D.L.

    1993-09-01

    This report describes the existing and proposed methodologies for performing measurements of contamination prior to releasing material for uncontrolled use at the Hanford Site. The technical basis for the proposed methodology, a modification to the existing contamination survey protocol, is also described. The modified methodology, which includes a large-area swipe followed by a statistical survey, can be used to survey material that is unlikely to be contaminated for release to controlled and uncontrolled areas. The material evaluation procedure that is used to determine the likelihood of contamination is also described

  2. Astrometric Calibration and Performance of the Dark Energy Camera

    Energy Technology Data Exchange (ETDEWEB)

    Bernstein, G. M.; Armstrong, R.; Plazas, A. A.; Walker, A. R.; Abbott, T. M. C.; Allam, S.; Bechtol, K.; Benoit-Lévy, A.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Cunha, C. E.; Costa, L. N. da; DePoy, D. L.; Desai, S.; Diehl, H. T.; Eifler, T. F.; Fernandez, E.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Krause, E.; Kuehn, K.; Kuropatkin, N.; Li, T. S.; Maia, M. A. G.; March, M.; Marshall, J. L.; Menanteau, F.; Miquel, R.; Ogando, R. L. C.; Reil, K.; Roodman, A.; Rykoff, E. S.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.

    2017-05-30

    We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of >10^7 stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results brings the RMS variation in aperture magnitudes of bright stars on cloudless nights down to 2-3 mmag, with <1 mmag of correlated photometric errors for stars separated by ≥20". On cloudless nights, any departures of the exposure zeropoints from a secant airmass law exceeding 1 mmag are plausibly attributable to spatial/temporal variations in aperture corrections. These variations can be inferred and corrected by measuring the fraction of stellar light in an annulus between 6" and 8" diameter. Key elements of this calibration include: correction of amplifier nonlinearities; distinguishing pixel-area variations and stray light from quantum-efficiency variations in the flat fields; field-dependent color corrections; and the use of an aperture-correction proxy. The DECam response pattern across the 2-degree field drifts over months by up to ±7 mmag, in a nearly wavelength-independent low-order pattern. We find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.
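    The secant airmass law mentioned above models each cloudless-night zeropoint as zp(X) = zp0 - k·X, with X the airmass and k the extinction coefficient. A toy least-squares fit on synthetic zeropoints (the values are illustrative, not DECam measurements):

```python
def fit_airmass_law(airmasses, zeropoints):
    """Ordinary least-squares fit of zp(X) = zp0 - k * X (the secant
    airmass law); returns (zp0, k)."""
    n = len(airmasses)
    mx = sum(airmasses) / n
    my = sum(zeropoints) / n
    sxx = sum((x - mx) ** 2 for x in airmasses)
    sxy = sum((x - mx) * (y - my) for x, y in zip(airmasses, zeropoints))
    slope = sxy / sxx          # equals -k for this model
    return my - slope * mx, -slope

# Synthetic per-exposure zeropoints (mag) with k = 0.10 mag/airmass.
X = [1.0, 1.2, 1.5, 2.0]
zp = [25.00 - 0.10 * x for x in X]
zp0, k = fit_airmass_law(X, zp)
```

Residuals from such a fit, exposure by exposure, are what the abstract attributes to aperture-correction variations once they exceed ~1 mmag.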

  3. Empirical component model to predict the overall performance of heating coils: Calibrations and tests based on manufacturer catalogue data

    International Nuclear Information System (INIS)

    Ruivo, Celestino R.; Angrisani, Giovanni

    2015-01-01

    Highlights: • An empirical model for predicting the performance of heating coils is presented. • Low and high heating capacity cases are used for calibration. • Versions based on several effectiveness correlations are tested. • Catalogue data are used in testing the approach. • The approach is a suitable component model for use in dynamic simulation tools. - Abstract: A simplified methodology for predicting the overall behaviour of heating coils is presented in this paper. The coil performance is predicted by the ε-NTU method. Manufacturers usually do not provide information about the overall thermal resistance or the geometric details required either for device selection or to apply known empirical correlations for estimating the thermal resistances involved. In the present work, heating capacity tables from the manufacturer catalogue are used to calibrate simplified approaches based on the classical theory of heat exchangers, namely the effectiveness method. Only two reference operating cases are required to calibrate each approach. The validity of the simplified approaches is investigated for a relatively high number of operating cases listed in the technical catalogue of a manufacturer. Four types of coils in three sizes of air handling units are considered. A comparison is conducted between the heating coil capacities provided by the methodology and the values given by the manufacturer catalogue. The results show that several of the proposed approaches are suitable component models to be integrated into dynamic simulation tools of air conditioning systems such as TRNSYS or EnergyPlus.
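    The effectiveness method referred to above predicts coil capacity as Q = ε·C_min·(T_water,in − T_air,in), with ε obtained from an NTU correlation. A sketch assuming a crossflow coil with both streams unmixed and a hypothetical UA value (the paper calibrates its parameters from two catalogue points; these numbers are illustrative):

```python
import math

def effectiveness_crossflow_unmixed(ntu, cr):
    """Standard epsilon-NTU approximation for a crossflow exchanger
    with both streams unmixed (one of several correlations such an
    approach could use)."""
    if cr == 0.0:
        return 1.0 - math.exp(-ntu)
    return 1.0 - math.exp((ntu ** 0.22 / cr)
                          * (math.exp(-cr * ntu ** 0.78) - 1.0))

def coil_capacity(ua, c_air, c_water, t_water_in, t_air_in):
    """Heating capacity in W by the effectiveness method:
    Q = eps * C_min * (T_water_in - T_air_in)."""
    c_min, c_max = min(c_air, c_water), max(c_air, c_water)
    eps = effectiveness_crossflow_unmixed(ua / c_min, c_min / c_max)
    return eps * c_min * (t_water_in - t_air_in)

# Hypothetical coil: UA (W/K) calibrated elsewhere; capacity rates
# c_air and c_water are m_dot * cp for each stream (W/K).
q = coil_capacity(ua=1500.0, c_air=1200.0, c_water=4200.0,
                  t_water_in=70.0, t_air_in=20.0)
```

With the two calibration cases fixing the model, capacity at any other catalogue operating point follows from the same two functions.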

  4. Camera calibration in a hazardous environment performed in situ with automated analysis and verification

    International Nuclear Information System (INIS)

    DePiero, F.W.; Kress, R.L.

    1993-01-01

    Camera calibration using the method of Two Planes is discussed. An implementation of the technique is described that may be performed in situ, e.g., in a hazardous or contaminated environment, thus eliminating the need for decontamination of camera systems before recalibration. Companion analysis techniques used for verifying the correctness of the calibration are presented
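    In the method of Two Planes, a mapping from pixel coordinates to world coordinates is fitted on each of two calibration planes, and a pixel's back-projected ray joins its images on the two planes. A toy sketch with synthetic, noise-free data; a bilinear per-plane map is one common choice, though the paper's exact formulation may differ:

```python
import numpy as np

def fit_plane_map(pixels, world_xy):
    """Least-squares bilinear map (u, v) -> (x, y) on one calibration
    plane: each world coordinate = a0 + a1*u + a2*v + a3*u*v."""
    u, v = pixels[:, 0], pixels[:, 1]
    A = np.column_stack([np.ones_like(u), u, v, u * v])
    coef, *_ = np.linalg.lstsq(A, world_xy, rcond=None)
    return coef  # shape (4, 2)

def back_project(pixel, map1, z1, map2, z2):
    """Ray through the pixel's world images on the two planes."""
    u, v = pixel
    feats = np.array([1.0, u, v, u * v])
    p1 = np.append(feats @ map1, z1)
    p2 = np.append(feats @ map2, z2)
    return p1, p2 - p1  # a point on the ray and its direction

# Synthetic calibration target: a 3x3 dot grid imaged on two planes
# at z = 0 and z = 10 (units arbitrary).
uu, vv = np.meshgrid([0.0, 50.0, 100.0], [0.0, 50.0, 100.0])
px = np.column_stack([uu.ravel(), vv.ravel()])
m1 = fit_plane_map(px, 0.02 * px)        # plane z = 0
m2 = fit_plane_map(px, 0.02 * px + 1.0)  # plane z = 10
origin, direction = back_project((50.0, 50.0), m1, 0.0, m2, 10.0)
```

The in-situ verification step would re-image known targets and check that their back-projected rays pass through the expected world points.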

  5. Calibration and performance of a stirred benthic chamber

    Science.gov (United States)

    Buchholtz-ten Brink, M. R.; Gust, G.; Chavis, D.

    1989-07-01

    The physical and chemical boundary layer parameters characteristic of a benthic chamber were cross-calibrated using two methods in the laboratory: (1) flush-mounted hot-film sensors, which measure the friction velocity u∗, and (2) the alabaster dissolution technique, which measures the equivalent film thickness z. Tests of five stirring devices were made, using both techniques, to improve the stirring mechanism in the MANOP Lander flux chambers. The stirring device that was finally implemented consisted of four rods and produced spatially averaged friction velocities u∗ ranging from 0.1 to 0.5 cm s⁻¹ (i.e. mean film thickness z from 500 to 180 μm) when running at speeds from 3 to 9 rpm. The friction velocity field at the sediment surface is related to the rpm of the stirring device and the penetration depth of the chamber into the sediments; combinations of both can create z and u∗ inside the chamber that duplicate those of many natural environments. The log-log calibration relationship found between u∗ and the transfer coefficient K' also provides a means to predict the mass-transfer resistance of solutes at the sediment-water interface from measurements of mean bottom stress.
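    The log-log calibration relationship mentioned at the end amounts to fitting a power law K' = c·u∗^m by linear regression in log space. A sketch on synthetic data; the paper's fitted coefficients are not reproduced in the abstract:

```python
import math

def fit_power_law(u_star, k_transfer):
    """Fit K' = c * u*^m by linear regression in log-log space;
    returns (c, m)."""
    xs = [math.log(u) for u in u_star]
    ys = [math.log(k) for k in k_transfer]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - m * mx)
    return c, m

# Synthetic data following an exact power law (illustrative only).
u = [0.1, 0.2, 0.3, 0.5]            # friction velocity, cm/s
k = [2.0e-4 * x ** 0.7 for x in u]  # transfer coefficient
c, m = fit_power_law(u, k)
```

Once c and m are known, a bottom-stress measurement (via u∗) predicts the mass-transfer resistance without a dissolution experiment.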

  6. Technological considerations in emergency instrumentation preparedness. Phase II-D. Evaluation testing and calibration methodology for emergency radiological instrumentation

    International Nuclear Information System (INIS)

    Bramson, P.E.; Andersen, B.V.; Fleming, D.M.; Kathren, R.L.; Mulhern, O.R.; Newton, C.E.; Oscarson, E.E.; Selby, J.M.

    1976-09-01

    In response to recommendations from the Advisory Committee on Reactor Safeguards, the Division of Operational Safety, U.S. ERDA, has contracted with Battelle, Pacific Northwest Laboratories to survey the adequacy of existing instrumentation at nuclear fuel cycle facilities to meet emergency requirements and to develop technical criteria for instrumentation systems to be used in assessing environmental conditions following plant emergencies. This report, the fifth in a series, provides: (1) calibration methods to assure the quality of radiological measurements and (2) testing procedures for determining whether an emergency radiological instrument meets the performance specifications. Three previous reports in this series identified the emergency instrumentation needs for power reactors, mixed oxide fuel plants, and fuel reprocessing facilities. Each of these three reports contains a Section VI, which sets forth applicable radiological instrument performance criteria and calibration requirements. Testing and calibration procedures in this report have been formatted in two parts, IV and V, each divided into three subsections: (1) Power Reactors, (2) Mixed Oxide Fuel Plants, and (3) Fuel Reprocessing Facilities. The three performance criteria subsections directly coincide with the performance criteria sections of the previous reports. These performance criteria sections have been reproduced in this report as Part III, with references to "required action" added.

  7. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran (DHI Sverige AB, Lilla Bommen 1, SE-411 04 Goeteborg (Sweden))

    2007-11-15

    This report describes modelling where the hydrological modelling system MIKE SHE has been used to describe surface hydrology, near-surface hydrogeology, advective transport mechanisms, and the contact between groundwater and surface water within the SKB site investigation area at Laxemar. In the MIKE SHE system, surface water flow is described with the one-dimensional modelling tool MIKE 11, which is fully and dynamically integrated with the groundwater flow module in MIKE SHE. In early 2008, a supplementary data set will be available and a process of updating, rebuilding and calibrating the MIKE SHE model based on this data set will start. Before the calibration on the new data begins, it is important to gather as much knowledge as possible on calibration methods, and to identify critical calibration parameters and areas within the model that require special attention. In this project, the MIKE SHE model has been further developed. The model area has been extended, and the present model also includes an updated bedrock model and a more detailed description of the surface stream network. The numerical model has been updated and optimized, especially regarding the modelling of evapotranspiration and the unsaturated zone, and the coupling between the surface stream network in MIKE 11 and the overland flow in MIKE SHE. An initial calibration has been made and a base case has been defined and evaluated. In connection with the calibration, the most important changes made in the model were the following: The evapotranspiration was reduced. The infiltration capacity was reduced. The hydraulic conductivities of the Quaternary deposits in the water-saturated part of the subsurface were reduced. Data from one surface water level monitoring station, four surface water discharge monitoring stations and 43 groundwater level monitoring stations (SSM series boreholes) have been used to evaluate and calibrate the model. The base case simulations showed a reasonable agreement

  8. Design, Performance and Calibration of the CMS Forward Calorimeter Wedges

    CERN Document Server

    Baiatian, G; Emeliantchik, Igor; Massolov, V; Shumeiko, Nikolai; Stefanovich, R; Damgov, Jordan; Dimitrov, Lubomir; Genchev, Vladimir; Piperov, Stefan; Vankov, Ivan; Litov, Leander; Bencze, Gyorgy; Laszlo, Andras; Pal, Andras; Vesztergombi, Gyorgy; Zálán, Peter; Fenyvesi, Andras; Bawa, Harinder Singh; Beri, Suman Bala; Bhatnager, V; Kaur, Manjit; Kumar, Arun; Kohli, Jatinder Mohan; Singh, Jas Bir; Acharya, Bannaje Sripathi; Chendvankar, Sanjay; Dugad, Shashikant; Kalmani, Suresh Devendrappa; Katta, S; Mazumdar, Kajari; Mondal, Naba Kumar; Nagaraj, P; Patil, Mandakini Ravindra; Reddy, L V; Satyanarayana, B; Sharma, Seema; Verma, Piyush; Hashemi, Majid; Mohammadi-Najafabadi, M; Paktinat, S; Babich, Kanstantsin; Golutvin, Igor; Kalagin, Vladimir; Kosarev, Ivan; Ladygin, Vladimir; Meshcheryakov, Gleb; Moissenz, P; Petrosian, A; Rogalev, Evgueni; Sergeyev, S; Smirnov, Vitaly; Vishnevski, A V; Volodko, Anton; Zarubin, Anatoli; Gavrilov, Vladimir; Gershtein, Yuri; Ilyina, N P; Kaftanov, Vitali; Kisselevich, I; Kolossov, V; Krokhotin, Andrey; Kuleshov, Sergey; Litvintsev, Dmitri; Oulyanov, A; Safronov, S; Semenov, Sergey; Stolin, Viatcheslav; Gribushin, Andrey; Demianov, A; Kodolova, Olga; Petrushanko, Sergey; Sarycheva, Ludmila; Teplov, Konstantin; Vardanyan, Irina; Yershov, A A; Abramov, Victor; Goncharov, Petr; Kalinin, Alexey; Korablev, Andrey; Khmelnikov, V A; Korneev, Yury; Krinitsyn, Alexander; Kryshkin, V; Lukanin, Vladimir; Pikalov, Vladimir; Ryazanov, Anton; Talov, Vladimir; Turchanovich, L K; Volkov, Alexey; Camporesi, Tiziano; De Visser, Theo; Vlassov, E; Aydin, Sezgin; Bakirci, Mustafa Numan; Cerci, Salim; Dumanoglu, Isa; Eskut, Eda; Kayis-Topaksu, A; Koylu, S; Kurt, Pelin; Kuzucu, A; Onengüt, G; Ozdes-Koca, N; Ozkurt, Halil; Sogut, Kenan; Topakli, Huseyin; Vergili, Mehmet; Yetkin, Taylan; Cankocak, Kerem; Gamsizkan, Halil; Ozkan, Cigdem; Sekmen, Sezen; Serin-Zeyrek, M; Sever, Ramazan; Yazgan, Efe; Zeyrek, Mehmet; Deliomeroglu, Mehmet; Dindar, Kamile; 
Gülmez, Erhan; Isiksal, Engin; Kaya, Mithat; Ozkorucuklu, Suat; Levchuk, Leonid; Sorokin, Pavel; Grinev, B; Lubinsky, V; Senchyshyn, Vitaliy; Anderson, E Walter; Hauptman, John M; Elias, John E; Freeman, Jim; Green, Dan; Heering, Arjan Hendrix; Lazic, Dragoslav; Los, Serguei; Ronzhin, Anatoly; Suzuki, Ichiro; Vidal, Richard; Whitmore, Juliana; Antchev, Georgy; Arcidy, M; Hazen, Eric; Lawlor, C; Machado, Emanuel; Posch, C; Rohlf, James; Sulak, Lawrence; Varela, F; Wu, Shouxiang; Adams, Mark Raymond; Burchesky, Kyle; Qiang, W; Abdullin, Salavat; Baden, Drew; Bard, Robert; Eno, Sarah Catherine; Grassi, Tullio; Jarvis, Chad; Kellogg, Richard G; Kunori, Shuichi; Mans, Jeremy; Skuja, Andris; Wang, Lei; Wetstein, Matthew; Ayan, S; Akgun, Ugur; Duru, Firdevs; Merlo, Jean-Pierre; Mestvirishvili, Alexi; Miller, Michael; Norbeck, Edwin; Olson, Jonathan; Onel, Yasar; Schmidt, Ianos; Akchurin, Nural; Carrell, Kenneth Wayne; Gumus, Kazim; Kim, Heejong; Spezziga, Mario; Thomas, Ray; Wigmans, Richard; Baarmand, Marc M; Mermerkaya, Hamit; Vodopyanov, I; Kramer, Laird; Linn, Stephan; Markowitz, Pete; Martínez, German; Cushman, Priscilla; Ma, Yousi; Sherwood, Brian; Cremaldi, Lucien Marcus; Reidy, Jim; Sanders, David A; Fisher, Wade Cameron; Tully, Christopher; Hagopian, Sharon; Hagopian, Vasken; Johnson, Kurtis F; Barnes, Virgil E; Laasanen, Alvin T; Pompos, Arnold

    2008-01-01

    We report on the test beam results and calibration methods using charged particles of the CMS Forward Calorimeter (HF). The HF calorimeter covers a large pseudorapidity region (3 ≤ |η| ≤ 5) and is essential for a large number of physics channels with missing transverse energy. It is also expected to play a prominent role in the measurement of forward tagging jets in weak boson fusion channels. The HF calorimeter is based on steel absorber with embedded fused-silica-core optical fibers, where Cherenkov radiation forms the basis of signal generation. Thus, the detector is essentially sensitive only to the electromagnetic shower core and is highly non-compensating (e/h ≈ 5). This feature is also manifest in narrow and relatively short showers compared to similar calorimeters based on ionization. The choice of fused-silica optical fibers as active material is dictated by their exceptional radiation hardness. The electromagnetic energy resolution is dominated by photoelectron statistics and can be expressed...

  9. Design methodology to enhance high impedance surfaces performances

    Directory of Open Access Journals (Sweden)

    M. Grelier

    2014-04-01

    A methodology is introduced for designing wideband, compact and ultra-thin high impedance surfaces (HIS). A parametric study is carried out to examine the effect of the periodicity on the electromagnetic properties of an HIS. This approach allows designers to reach the best trade-off in HIS performance.

  10. Human Schedule Performance, Protocol Analysis, and the "Silent Dog" Methodology

    Science.gov (United States)

    Cabello, Francisco; Luciano, Carmen; Gomez, Inmaculada; Barnes-Holmes, Dermot

    2004-01-01

    The purpose of the current experiment was to investigate the role of private verbal behavior in the operant performances of human adults, using a protocol analysis procedure with additional methodological controls (the "silent dog" method). Twelve subjects were exposed to fixed-ratio 8 and differential-reinforcement-of-low-rate 3-s schedules. For…

  11. Development of a Pattern Recognition Methodology for Determining Operationally Optimal Heat Balance Instrumentation Calibration Schedules

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Beran; John Christenson; Dragos Nica; Kenny Gross

    2002-12-15

    The goal of the project is to enable plant operators to detect, with high sensitivity and reliability, the onset of decalibration drifts in all of the instrumentation used as input to the reactor heat balance calculations. To achieve this objective, the collaborators developed and implemented at DBNPS an extension of the Multivariate State Estimation Technique (MSET) pattern recognition methodology pioneered by ANL. The extension was implemented during the second phase of the project and fully achieved the project goal.
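    MSET itself uses a nonlinear similarity operator over a memory matrix of healthy plant states; as a loose, simplified analogue of the drift-detection idea only, one can estimate a sensor from its correlated peers by least squares and watch the residual for a persistent bias:

```python
import numpy as np

def train_weights(peer_history, target_history):
    """Least-squares weights predicting one sensor from correlated
    peers; a simplified stand-in for MSET's similarity operator."""
    w, *_ = np.linalg.lstsq(peer_history, target_history, rcond=None)
    return w

def drift_residual(weights, peers_now, target_now):
    """Observed reading minus its empirical estimate; a persistent
    bias suggests onset of decalibration drift."""
    return target_now - peers_now @ weights

# Synthetic training history: 200 healthy samples of 3 peer sensors.
rng = np.random.default_rng(0)
peers = rng.normal(10.0, 1.0, size=(200, 3))
target = peers @ np.array([0.5, 0.3, 0.2])   # healthy target sensor
w = train_weights(peers, target)
healthy = drift_residual(w, np.array([10.0, 10.0, 10.0]), 10.0)
drifted = drift_residual(w, np.array([10.0, 10.0, 10.0]), 10.4)
```

In practice a sequential test (MSET pairs with SPRT) is run on the residual stream rather than thresholding single values.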

  12. Study of calibration equations of 137Cs methodology for soil erosion determination

    International Nuclear Information System (INIS)

    Santos, Elias Antunes dos

    2001-02-01

Using the 137Cs method and gamma-ray spectrometry, soil samples from two erosion plots were studied in the city of Londrina. The soil class studied was a dystrophic dark red soil (LRd), with erosion indexes measured by the Agronomic Institute of Parana State (IAPAR) using a conventional method since 1976. From the percentage reduction of 137Cs relative to the reference site, the soil losses were calculated using the proportional, mass balance and profile distribution models. By correlating the 137Cs concentrations with the erosion measured by IAPAR, two calibration equations were obtained, applied to the data set measured in the basin of the Unda river, and compared with the models from the literature. A natural forest located close to the plots was chosen as the reference region. The average 137Cs inventory there was 555 ± 16 Bq m⁻². The inventories of the erosion plots varied from 112 to 136 Bq m⁻² for samples collected down to 30 cm depth. The erosion rates estimated by the models varied from 64 to 85 ton ha⁻¹ yr⁻¹ for the proportional and profile distribution models, respectively, and from 137 to 165 ton ha⁻¹ yr⁻¹ for the mass balance model, while the measured erosion obtained by IAPAR was 86 ton ha⁻¹ yr⁻¹. Of the two calibration equations obtained, the one that takes into account the 137Cs distribution within the soil profile showed the best consistency with the erosion rates for the basin of the Unda river (same soil class), in the range from 4 to 48 ton ha⁻¹ yr⁻¹, while the proportional and profile distribution models yielded rates from 7 to 45 ton ha⁻¹ yr⁻¹ and 6 to 69 ton ha⁻¹ yr⁻¹, respectively. (author)
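The proportional model mentioned in the abstract converts the percentage reduction in 137Cs inventory directly into a soil-loss rate. The sketch below uses the commonly cited Walling-He form of that model; the form and all numerical inputs (bulk density, plough depth) are illustrative assumptions, not values taken from the study.

```python
def proportional_soil_loss(a_ref, a_site, bulk_density, plough_depth, years):
    """Proportional model (Walling-He form, assumed here):
    Y = 10 * d * B * X / (100 * T)  [ton ha^-1 yr^-1]
    a_ref, a_site: 137Cs inventories (Bq m^-2) at reference and eroded site
    bulk_density: kg m^-3; plough_depth: m; years: time since 1963 fallout peak
    """
    x = 100.0 * (a_ref - a_site) / a_ref  # percent reduction in inventory
    return 10.0 * plough_depth * bulk_density * x / (100.0 * years)

# Hypothetical inputs chosen so the result lands in the reported 64-85 range
print(round(proportional_soil_loss(555.0, 124.0, 1300.0, 0.30, 37), 1))
```

With these assumed inputs the model yields roughly 82 ton ha⁻¹ yr⁻¹, consistent in magnitude with the 64-85 range the abstract reports for the proportional model.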

  13. Calibration and Performance Testing of Sodium Iodide, NaI (Tl)

    African Journals Online (AJOL)

    Administrator

    performed by the well-established method of ... gamma-rays emitted from radionuclides in a ... 1 Radiation Protection Institute, Ghana Atomic Energy Commission, P. O. Box ... validation of the calibration showed that there were no significance ...

  14. Optimizing the accuracy of a helical diode array dosimeter: A comprehensive calibration methodology coupled with a novel virtual inclinometer

    Energy Technology Data Exchange (ETDEWEB)

    Kozelka, Jakub; Robinson, Joshua; Nelms, Benjamin; Zhang, Geoffrey; Savitskij, Dennis; Feygelman, Vladimir [Sun Nuclear Corp., Melbourne, Florida 32940 (United States); Department of Physics, University of South Florida, Tampa, Florida 33612 (United States); Canis Lupus LLC, Sauk County, Wisconsin 53561 (United States); Division of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida 33612 (United States); Sun Nuclear Corp., Melbourne, Florida 32940 (United States); Division of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2011-09-15

Purpose: The goal of any dosimeter is to be as accurate as possible when measuring absolute dose to compare with calculated dose. This limits the uncertainties associated with the dosimeter itself and allows the task of dose QA to focus on detecting errors in the treatment planning (TPS) and/or delivery systems. This work introduces enhancements to the measurement accuracy of a 3D dosimeter comprised of a helical plane of diodes in a volumetric phantom. Methods: We describe the methods and derivations of new corrections that account for repetition rate dependence, intrinsic relative sensitivity per diode, field size dependence based on the dynamic field size determination, and positional correction. Required and described is an accurate "virtual inclinometer" algorithm. The system allows for calibrating the array directly against an ion chamber signal collected with high angular resolution. These enhancements are quantitatively validated using several strategies including ion chamber measurements taken using a "blank" plastic shell mimicking the actual phantom, and comparison to high resolution dose calculations for a variety of fields: static, simple arcs, and VMAT. A number of sophisticated treatment planning algorithms were benchmarked against ion chamber measurements for their ability to handle a large air cavity in the phantom. Results: Each calibration correction is quantified and presented versus its independent variable(s). The virtual inclinometer is validated by direct comparison to the gantry angle versus time data from machine log files. The effects of the calibration are quantified and improvements are seen in the dose agreement with the ion chamber reference measurements and with the TPS calculations. These improved agreements are a result of removing prior limitations and assumptions in the calibration methodology. Average gamma analysis passing rates for VMAT plans based on the AAPM TG-119 report are 98.4 and 93

  15. Performance evaluation and calibration of the neuro-pet scanner

    International Nuclear Information System (INIS)

    Sank, V.J.; Brooks, R.A.; Cascio, H.E.; Di Chiro, G.; Friauf, W.S.; Leighton, S.B.

    1983-01-01

    The Neuro-PET is a circular ring seven-slice positron emission tomograph designed for imaging human heads and small animals. The scanner uses 512 bismuth germanate detectors 8.25 mm wide packed tightly together in four layers to achieve high spatial resolution (6-7 mm FWHM) without the use of beam blockers. Because of the small 38 cm ring diameter, the sensitivity is also very high: 70,000 c/s per true slice with medium energy threshold (375 keV) for a 20 cm diameter phantom containing 1 μCi/cc of positron-emitting activity, according to a preliminary measurement. There are three switch-selectable thresholds, and the sensitivity will be higher in the low threshold setting. The Neuro-PET is calibrated with a round or elliptical phantom that approximates a patient's head; this method eliminates the effects of scatter and self-attenuation to first order. Further software corrections for these artifacts are made in the reconstruction program, which reduce the measured scatter to zero, as determined with a 5 cm cold spot. With a 1 cm cold spot, the apparent activity at the center of the cold spot is 18% of the surrounding activity, which is clearly a consequence of the limits of spatial resolution, rather than scatter. The Neuro-PET has been in clinical operation since June 1982, and approximately 30 patients have been scanned to date

  16. Design, performance, and calibration of CMS forward calorimeter wedges

    Energy Technology Data Exchange (ETDEWEB)

    Abdullin, S. [Fermi National Accelerator Lab., Batavia, IL (United States)]|[Univ. of Maryland, College Park, MD (United States); Abramov, V.; Goncharov, P.; Kalinin, A.; Khmelnikov, A.; Korablev, A.; Korneev, Y.; Krinitsyn, A.; Kryshkin, V.; Lukanin, V.; Pikalov, V.; Ryazanov, A.; Talov, V.; Turchanovich, L.; Volkov, A. [IHEP, Protvino (Russian Federation); Acharya, B.; Banerjee, Sud.; Banerjee, Sun.; Chendvankar, S.; Dugad, S.; Kalmani, S.; Katta, S.; Mazumdar, K.; Mondal, N.; Nagaraj, P.; Patil, M.; Reddy, L.; Satyanarayana, B.; Sharma, S.; Verma, P. [Tata Inst. of Fundamental Research, Mumbai (India); Adams, M.; Burchesky, K.; Qiang, W. [Univ. of Illinois, Chicago, IL (United States); Akchurin, N.; Carrell, K.; Guemues, K.; Kim, H.; Spezziga, M.; Thomas, R.; Wigmans, R. [Texas Tech Univ., Dept. of Physics, Lubbock, TX (United States); Akgun, U.; Ayan, S.; Duru, F.; Merlo, J.P.; Mestvirishvili, A.; Miller, M.; Norbeck, E.; Olson, J.; Onel, Y.; Schmidt, I. [Univ. of Iowa, Iowa City, IA (United States); Anderson, E.W.; Hauptman, J. [Iowa State Univ., Ames, IA (United States); Antchev, G.; Arcidy, M.; Hazen, E.; Lawlor, C.; Machado, E.; Posch, C.; Rohlf, J.; Sulak, L.; Varela, F.; Wu, S.X. [Boston Univ., MA (United States); Aydin, S.; Bakirci, M.N.; Cerci, S.; Dumanoglu, I.; Eskut, E.; Kayis-Topaksu, A.; Koylu, S.; Kurt, P.; Kuzucu-Polatoz, A.; Onengut, G.; Ozdes-Koca, N.; Ozkurt, H.; Sogut, K.; Topakli, H.; Vergili, M.; Yetkin, T. [Cukurova Univ., Adana (Turkey); Baarmand, M.; Mermerkaya, H.; Vodopiyanov, I. [Florida Inst. of Tech., Melbourne, FL (United States); Babich, K.; Golutvin, I.; Kalagin, V.; Kosarev, I.; Ladygin, V.; Mescheryakov, G.; Moissenz, P.; Petrosyan, A.; Rogalev, E.; Smirnov, V.; Vishnevskiy, A.; Volodko, A.; Zarubin, A. [JINR, Dubna (Russian Federation); Baden, D.; Bard, R.; Eno, S.; Grassi, T.; Jarvis, C.; Kellogg, R.; Kunori, S.; Skuja, A.; Wang, L.; Wetstein, M. [Univ. of Maryland, College Park, MD (United States)] [and others

    2008-01-15

We report on the test beam results and calibration methods using high energy electrons, pions and muons with the CMS forward calorimeter (HF). The HF calorimeter covers a large pseudorapidity region (3 ≤ |η| ≤ 5), and is essential for a large number of physics channels with missing transverse energy. It is also expected to play a prominent role in the measurement of forward tagging jets in weak boson fusion channels in Higgs production. The HF calorimeter is based on steel absorber with embedded fused-silica-core optical fibers where Cherenkov radiation forms the basis of signal generation. Thus, the detector is essentially sensitive only to the electromagnetic shower core and is highly non-compensating (e/h ≈ 5). This feature is also manifest in narrow and relatively short showers compared to similar calorimeters based on ionization. The choice of fused-silica optical fibers as active material is dictated by its exceptional radiation hardness. The electromagnetic energy resolution is dominated by photoelectron statistics and can be expressed in the customary form as a/√E + b. The stochastic term a is 198% and the constant term b is 9%. The hadronic energy resolution is largely determined by the fluctuations in the neutral pion production in showers, and when it is expressed as in the electromagnetic case, a = 280% and b = 11%. (orig.)
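The customary resolution form quoted in the abstract is easy to evaluate numerically. The sketch below uses the linear-sum form as stated there (many calorimeters add the terms in quadrature instead; here we follow the abstract's wording), with E assumed to be in GeV:

```python
import math

def hf_resolution(E, a, b):
    """Fractional energy resolution sigma/E = a/sqrt(E) + b,
    in the form quoted in the abstract (a, b as fractions, E in GeV assumed)."""
    return a / math.sqrt(E) + b

# Electromagnetic terms: a = 198% = 1.98, b = 9% = 0.09
print(round(hf_resolution(100.0, 1.98, 0.09), 3))  # ~28.8% at 100 GeV
```

The constant term b dominates at high energy, which is why a 9% (EM) or 11% (hadronic) constant term sets the asymptotic resolution floor.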

  17. Students’ Performance Calibration in a Basketball Dibbling Task in Elementary Physical Education

    Directory of Open Access Journals (Sweden)

    Athanasios KOLOVELONIS

    2012-01-01

Full Text Available The aim of this study was to examine students’ performance calibration in physical education. One hundred fifth- and sixth-grade students provided estimations regarding their performance in a dribbling test after practicing dribbling for 16 minutes under different self-regulatory conditions (i.e., receiving feedback, setting goals, self-recording). Two calibration indices, calibration bias and calibration accuracy, were calculated. The results showed that students who practiced dribbling under different self-regulatory conditions (i.e., receiving feedback, setting goals) did not differ in calibration bias and accuracy. Regardless of the group, students were overconfident. Moreover, sixth-grade students were more accurate compared to fifth-grade students. These results were discussed with reference to the development of performance calibration and self-regulated learning in physical education.
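The two indices named in the abstract have conventional definitions in the calibration literature; the sketch below uses those conventional forms (signed mean error for bias, mean absolute error for accuracy), which are assumed here rather than taken from the paper itself:

```python
def calibration_indices(predicted, actual):
    """Conventional calibration indices (assumed definitions):
    bias     = mean(predicted - actual)  -> sign indicates over/underconfidence
    accuracy = mean(|predicted - actual|) -> magnitude of miscalibration
    """
    n = len(predicted)
    bias = sum(p - a for p, a in zip(predicted, actual)) / n
    accuracy = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
    return bias, accuracy

# A student who predicts more successful dribbles than achieved is overconfident
bias, acc = calibration_indices([8, 7, 9], [6, 7, 8])
print(bias, acc)  # positive bias -> overconfidence
```

Note that bias can be zero while accuracy is large (over- and underestimates cancel), which is why the study reports both.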

  18. New calibration methodology for calorimetric determination of isobaric thermal expansivity of liquids as a function of temperature and pressure

    Energy Technology Data Exchange (ETDEWEB)

    Navia, Paloma; Troncoso, Jacobo [Departamento de Fisica Aplicada, Facultad de Ciencias de Ourense, Campus As Lagoas, 32004 Ourense (Spain); Romani, Luis [Departamento de Fisica Aplicada, Facultad de Ciencias de Ourense, Campus As Lagoas, 32004 Ourense (Spain)], E-mail: romani@uvigo.es

    2008-11-15

A new method for determining isobaric thermal expansivity of liquids as a function of temperature and pressure through calorimetric measurements against pressure is described. It is based on a previously reported measurement technique, but due to the different kind of calorimeter and experimental setup, a new calibration procedure was developed. Two isobaric thermal expansivity standards are needed; in this work, in view of the quality of the available literature data, hexane and water are chosen. The measurements were carried out in the temperature and pressure intervals (278.15 to 348.15) K and (0.5 to 55) MPa for a set of liquids, and the experimental values are compared with the available literature data in order to evaluate the precision of the experimental procedure. The analysis of the results reveals that the proposed methodology is highly accurate for isobaric thermal expansivity determination, and it allows a precise characterisation of the temperature and pressure dependence of this thermodynamic coefficient.
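A calibration against two reference fluids of known expansivity, as described above, can be illustrated with a simple two-point scheme. The linear instrument response assumed below, and all the numbers, are hypothetical; the paper's actual calibration model is not reproduced here:

```python
def two_point_calibration(s1, alpha1, s2, alpha2):
    """Solve alpha = A + B*s from two reference fluids (e.g. hexane and water)
    whose expansivities alpha1, alpha2 are known at the working T and p.
    A linear instrument response is an assumption of this sketch."""
    B = (alpha2 - alpha1) / (s2 - s1)
    A = alpha1 - B * s1
    return A, B

# Hypothetical calorimetric signals for the two standards (arbitrary units)
A, B = two_point_calibration(1.20, 1.39e-3, 0.35, 0.30e-3)
alpha_sample = A + B * 0.80  # expansivity of an unknown sample, K^-1
```

Once A and B are fixed from the standards, any sample's expansivity follows from its measured signal at the same temperature and pressure.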

  19. New calibration methodology for calorimetric determination of isobaric thermal expansivity of liquids as a function of temperature and pressure

    International Nuclear Information System (INIS)

    Navia, Paloma; Troncoso, Jacobo; Romani, Luis

    2008-01-01

A new method for determining isobaric thermal expansivity of liquids as a function of temperature and pressure through calorimetric measurements against pressure is described. It is based on a previously reported measurement technique, but due to the different kind of calorimeter and experimental setup, a new calibration procedure was developed. Two isobaric thermal expansivity standards are needed; in this work, in view of the quality of the available literature data, hexane and water are chosen. The measurements were carried out in the temperature and pressure intervals (278.15 to 348.15) K and (0.5 to 55) MPa for a set of liquids, and the experimental values are compared with the available literature data in order to evaluate the precision of the experimental procedure. The analysis of the results reveals that the proposed methodology is highly accurate for isobaric thermal expansivity determination, and it allows a precise characterisation of the temperature and pressure dependence of this thermodynamic coefficient

  20. Evaluation of analytical performance based on partial order methodology.

    Science.gov (United States)

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze the analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparisons of objects based on their data profile (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable to the comparison of analytical methods, or simply as a tool for optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided, in order to evaluate the analytical performance taking into account all indicators simultaneously, thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which simultaneously are used for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently, (2) a "distance" to the Reference laboratory and (3) a classification due to the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
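The core of partial order analysis is the dominance relation: one laboratory is ranked above another only if it is at least as good on every indicator simultaneously. A minimal sketch (indicator profiles and lab names are invented for illustration):

```python
def dominates(p, q):
    """Profile p dominates q if p is at least as good on every indicator
    and strictly better on at least one (smaller = better here)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

# Hypothetical per-lab profiles: (|mean - true value|, std deviation, |skewness|)
labs = {"A": (0.1, 0.05, 0.2), "B": (0.3, 0.10, 0.5), "C": (0.2, 0.02, 0.6)}
pairs = [(x, y) for x in labs for y in labs if dominates(labs[x], labs[y])]
print(sorted(pairs))  # only A dominates B; B and C remain incomparable
```

The incomparability of B and C (B has the better skewness, C the better standard deviation) is exactly the information a single linear score would hide, which is the abstract's point.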

  1. General calibration methodology for a combined Horton-SCS infiltration scheme in flash flood modeling

    Science.gov (United States)

    Gabellani, S.; Silvestro, F.; Rudari, R.; Boni, G.

    2008-12-01

Flood forecasting undergoes a constant evolution, becoming more and more demanding about the models used for hydrologic simulations. The advantages of developing distributed or semi-distributed models are now clear, and the importance of using continuous distributed modeling is emerging. A proper schematization of the infiltration process is vital to these types of models. Many popular infiltration schemes, reliable and easy to implement, are too simplistic for the development of continuous hydrologic models. On the other hand, the unavailability of detailed and descriptive information on soil properties often limits the implementation of complete infiltration schemes. In this work, a combination of the Soil Conservation Service Curve Number method (SCS-CN) and a method derived from the Horton equation is proposed in order to overcome the inherent limits of the two schemes. The SCS-CN method is easily applicable on large areas, but has structural limitations. The Horton-like methods have parameters that, though measurable to a point, are difficult to estimate reliably at catchment scale. The objective of this work is to overcome these limits by proposing a calibration procedure which maintains the large applicability of the SCS-CN method as well as the continuous description of the infiltration process given by the Horton equation, suitably modified. The estimation of the parameters of the modified Horton method is carried out using a formal analogy with the SCS-CN method under specific conditions. Some applications, at catchment scale within a distributed model, are presented.
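The classical Horton equation that the paper builds on has a simple exponential-decay form. The sketch below implements that classical form only (the paper's modification is not reproduced), with illustrative parameter values:

```python
import math

def horton_infiltration(t, f0, fc, k):
    """Classical Horton infiltration capacity f(t) = fc + (f0 - fc) * exp(-k*t).
    f0: initial capacity, fc: final (saturated) capacity [mm/h], k: decay [1/h].
    The paper uses a suitably modified Horton equation, not shown here."""
    return fc + (f0 - fc) * math.exp(-k * t)

# Capacity decays from 60 toward 10 mm/h as a storm proceeds (assumed values)
for t in (0.0, 0.5, 2.0):
    print(round(horton_infiltration(t, 60.0, 10.0, 2.0), 1))
```

The three parameters f0, fc and k are exactly the quantities the abstract says are hard to estimate at catchment scale, which motivates calibrating them via the analogy with SCS-CN.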

  2. Methodology of calibration for nucleonic multiphase meter technology for SAGD extra heavy oil

    Energy Technology Data Exchange (ETDEWEB)

    Pinguet, B.; Pechard, P.; Guerra, E. [Schlumberger Canada Ltd., Calgary, AB (Canada); Arendo, V.; Shaffer, M.; Contreras, J. [Total, Paris (France)

    2008-10-15

The challenges facing bitumen metering in steam assisted gravity drainage operations were discussed with reference to high operating temperatures, steam pressure in the gas phase, foaming, emulsion and small density differences between bitumen and produced water. A metering tool that can deal with these operating constraints was presented. The multiphase meter (MFM) uses a multi-energy gamma ray (nuclear fraction) meter together with a venturi tube to provide accurate monitoring and optimization of oil, water, gas and steam production. This paper presented the specific strengths of the MFM with emphasis on its ability to correctly meter the liquid/gas phases depending on the calibration method and operating measurement range. The paper presented a study of the main parameters which could influence the measurement associated with this technology. The study was based on practical and simulated data and evaluated the impact of changes in each parameter. The purpose of the paper was to improve the understanding of this technology and how to apply it to bitumen metering, and to provide a guideline on the technology for future users in the oil industry. It described the combined venturi-nucleonic measurement parameters, such as mass flow rate; fraction meter; solution triangle of the fraction meter; primary and secondary output; fluid properties information; and the uncertainty associated with any technology. A sensitivity analysis study to identify the dependency on some key fluid parameters was also described. It was concluded that MFM can be used in a stand-alone configuration. 7 refs., 2 tabs., 22 figs.
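The venturi half of the venturi-nucleonic combination produces a mass-flow estimate from differential pressure and mixture density. The sketch below uses the generic ISO 5167-style venturi relation as an assumed stand-in for the meter's internal model; discharge coefficient, geometry and fluid values are all hypothetical:

```python
import math

def venturi_mass_flow(dp, rho, d, beta, C=0.995, eps=1.0):
    """Generic venturi mass-flow relation (ISO 5167 form, assumed here):
    qm = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho)
    dp: differential pressure [Pa], rho: mixture density [kg/m^3],
    d: throat diameter [m], beta: diameter ratio d/D, C: discharge coefficient,
    eps: expansibility factor (1.0 for a near-incompressible liquid mixture)."""
    return (C / math.sqrt(1.0 - beta**4) * eps
            * math.pi / 4.0 * d**2 * math.sqrt(2.0 * dp * rho))

# Hypothetical SAGD-like conditions: 25 kPa dp, hot bitumen/water mixture
qm = venturi_mass_flow(dp=25e3, rho=950.0, d=0.05, beta=0.6)
print(round(qm, 2))  # total mass flow in kg/s
```

In the MFM the gamma-ray fraction meter supplies the phase fractions, from which the mixture density rho fed into this relation is derived.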

  3. Methodology for Gamma cameras calibration for I-131 uptake quantification in Hyperthyroidism diseases

    International Nuclear Information System (INIS)

    Lopez Diaz, A.; Palau San Pedro, A.; Martin Escuela, J. M.; Reynosa Montejo, R.; Castillo, J.; Torres Aroche, L.

    2015-01-01

Optimization and verification of patient-specific treatment planning with unsealed I-131 sources is a desirable goal from the medical and radiation protection points of view. To obtain a practical protocol combining the estimation of the related parameters with the patient-specific treatment dose in hyperthyroidism disease, three instruments (an iodine probe, a Philips Forte camera with pin-hole collimators, and a Mediso Nucline with HEGP collimators for planar and SPECT techniques) were studied and cross-calibrated. Linear behaviour over the diagnostic and therapeutic activity range was verified, with a linear correlation fitting factor R² > 0.99. The differences between thyroid uptake determinations across all instruments were less than 6% for therapeutic activities and less than 1.1% in the diagnostic range. A combined protocol to calculate, with only one administration of I-131, all the parameters necessary for the treatment dose estimation in 2D or 3D, avoiding wasted time on the gamma cameras, was established and verified. Following this protocol, the difference between apparent and calculated activities was less than 3%. (Author)

  4. A New Calibration Methodology for Thorax and Upper Limbs Motion Capture in Children Using Magneto and Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Luca Ricci

    2014-01-01

Full Text Available Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and wireless connectivity meet the requirement of minimum obtrusiveness and give scientists the possibility to analyze children’s motion in daily-life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors’ frames of reference into useful kinematic information in the human limbs’ frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We also present a novel cost function for the Levenberg–Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
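The core task (retrieving the rotation between a sensor frame and a functional frame from paired measurements) can also be solved in closed form. The paper uses Levenberg-Marquardt with a custom cost function; the sketch below instead uses the classic Kabsch/SVD solution purely to illustrate the frame-alignment idea, with synthetic data standing in for calibration movements:

```python
import numpy as np

def fit_rotation(sensor_vecs, functional_vecs):
    """Find R minimizing sum ||R @ s_i - f_i||^2 over rotations (Kabsch/SVD).
    The paper fits R iteratively with Levenberg-Marquardt and a novel cost;
    this closed-form variant only illustrates the SF-to-FF alignment idea."""
    H = sensor_vecs.T @ functional_vecs                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # enforce det(R) = +1
    return Vt.T @ D @ U.T

# Recover a known rotation from noiseless synthetic calibration vectors
rng = np.random.default_rng(0)
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
true_R *= np.sign(np.linalg.det(true_R))                 # proper rotation
S = rng.normal(size=(10, 3))                             # sensor-frame vectors
F = S @ true_R.T                                         # functional-frame vectors
R = fit_rotation(S, F)
print(np.allclose(R, true_R, atol=1e-8))
```

With noisy real measurements and a physiologically motivated cost, an iterative solver such as Levenberg-Marquardt becomes the natural choice, which is the route the paper takes.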

  5. Guidance on the Technology Performance Level (TPL) Assessment Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Jochem [National Renewable Energy Lab. (NREL), Golden, CO (United States); Roberts, Jesse D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Babarit, Aurelien [Ecole Centrale de Nantes (France). Lab. of Research in Hydrodynamics, Energetics and Atmospheric Environment (LHEEA); Costello, Ronan [Wave Venture, Penstraze (United Kingdom); Bull, Diana L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Neilson, Kim [Ramboll, Copenhagen (Denmark); Bittencourt, Claudio [DNV GL, London (United Kingdom); Kennedy, Ben [Wave Venture, Penstraze (United Kingdom); Malins, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dykes, Katherine [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-09-01

This document presents the revised Technology Performance Level (TPL) assessment methodology. There are three parts to this revised methodology: 1) the Stakeholder Needs and Assessment Guidance (this document), 2) the Technical Submission form, and 3) the TPL scoring spreadsheet. The TPL assessment is designed to give a technology-neutral or agnostic assessment of any wave energy converter technology. The focus of the TPL is on the performance of the technology in meeting the customer’s needs. The original TPL is described in [1, 2] and those references also detail the critical differences in the nature of the TPL when compared to the more widely used technology readiness level (TRL). (Wave energy TRL is described in [3]). The revised TPL is particularly intended to be useful to investors and also to assist technology developers to conduct comprehensive assessments in a way that is meaningful and attractive to investors. The revised TPL assessment methodology has been derived through a structured Systems Engineering approach. This was a formal process which involved analyzing customer and stakeholder needs through the discipline of Systems Engineering. The results of the process confirmed the high level of completeness of the original methodology presented in [1] (as used in the Wave Energy Prize judging) and now add a significantly increased level of detail in the assessment and an improved, more investment-focused structure. The revised TPL also incorporates the feedback of the Wave Energy Prize judges.

  6. EnergiTools. A methodology for performance monitoring and diagnosis

    International Nuclear Information System (INIS)

    Ancion, P.; Bastien, R.; Ringdahl, K.

    2000-01-01

EnergiTools is a performance monitoring and diagnostic tool that combines the power of on-line process data acquisition with advanced diagnosis methodologies. Analytical models based on thermodynamic principles are combined with neural networks to validate sensor data and to estimate missing or faulty measurements. Advanced diagnostic technologies are then applied to point out potential faults and areas to be investigated further. The diagnosis methodologies are based on Bayesian belief networks. Expert knowledge is captured in the form of the fault-symptom relationships and includes historical information as the likelihood of faults and symptoms. The methodology produces the likelihood of component failure root causes using the expert knowledge base. EnergiTools is used at Ringhals nuclear power plants. It has led to the diagnosis of various performance issues. Three case studies based on this plant data and model are presented and illustrate the diagnosis support methodologies implemented in EnergiTools. In the first case, the analytical data qualification technique points out several faulty measurements. The application of a neural network for the estimation of the nuclear reactor power by interpreting several plant indicators is then illustrated. The use of the Bayesian belief networks is finally described. (author)
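The fault-symptom reasoning described above rests on Bayes' rule: a prior fault likelihood is updated when a symptom is observed. The single-node sketch below illustrates only that elementary step (all probabilities are invented); EnergiTools itself propagates evidence through full Bayesian belief networks over many faults and symptoms:

```python
def posterior_fault_prob(prior, p_symptom_given_fault, p_symptom_given_no_fault):
    """One-step Bayes update: P(fault | symptom observed).
    This is the elementary operation behind a belief network node."""
    evidence = (p_symptom_given_fault * prior
                + p_symptom_given_no_fault * (1.0 - prior))
    return p_symptom_given_fault * prior / evidence

# A rare fault (2% prior) whose symptom is observed: the symptom appears with
# probability 0.9 if the fault is present, and 0.05 as a false alarm otherwise
print(round(posterior_fault_prob(0.02, 0.9, 0.05), 3))
```

Even a strong symptom raises a rare fault's probability only moderately (here from 2% to about 27%), which is why combining several symptoms in a network is so valuable for root-cause ranking.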

  7. General calibration methodology for a combined Horton-SCS infiltration scheme in flash flood modeling

    Directory of Open Access Journals (Sweden)

    S. Gabellani

    2008-12-01

    Full Text Available Flood forecasting undergoes a constant evolution, becoming more and more demanding about the models used for hydrologic simulations. The advantages of developing distributed or semi-distributed models have currently been made clear. Now the importance of using continuous distributed modeling emerges. A proper schematization of the infiltration process is vital to these types of models. Many popular infiltration schemes, reliable and easy to implement, are too simplistic for the development of continuous hydrologic models. On the other hand, the unavailability of detailed and descriptive information on soil properties often limits the implementation of complete infiltration schemes. In this work, a combination between the Soil Conservation Service Curve Number method (SCS-CN and a method derived from Horton equation is proposed in order to overcome the inherent limits of the two schemes. The SCS-CN method is easily applicable on large areas, but has structural limitations. The Horton-like methods present parameters that, though measurable to a point, are difficult to achieve a reliable estimate at catchment scale. The objective of this work is to overcome these limits by proposing a calibration procedure which maintains the large applicability of the SCS-CN method as well as the continuous description of the infiltration process given by the Horton's equation suitably modified. The estimation of the parameters of the modified Horton method is carried out using a formal analogy with the SCS-CN method under specific conditions. Some applications, at catchment scale within a distributed model, are presented.

  8. Generic Methodology for Field Calibration of Nacelle-Based Wind Lidars

    DEFF Research Database (Denmark)

    Borraccino, Antoine; Courtney, Michael; Wagner, Rozenn

    2016-01-01

    Nacelle-based Doppler wind lidars have shown promising capabilities to assess power performance, detect yaw misalignment or perform feed-forward control. The power curve application requires uncertainty assessment. Traceable measurements and uncertainties of nacelle-based wind lidars can be obtai...

  9. Elementary Students' Metacognitive Processes and Post-Performance Calibration on Mathematical Problem-Solving Tasks

    Science.gov (United States)

    García, Trinidad; Rodríguez, Celestino; González-Castro, Paloma; González-Pienda, Julio Antonio; Torrance, Mark

    2016-01-01

    Calibration, or the correspondence between perceived performance and actual performance, is linked to students' metacognitive and self-regulatory skills. Making students more aware of the quality of their performance is important in elementary school settings, and more so when math problems are involved. However, many students seem to be poorly…

  10. Test Takers' Performance Appraisals, Appraisal Calibration, and Cognitive and Metacognitive Strategy Use

    Science.gov (United States)

    Phakiti, Aek

    2016-01-01

    The current study explores the nature and relationships among test takers' performance appraisals, appraisal calibration, and reported cognitive and metacognitive strategy use in a language test situation. Performance appraisals are executive processes of strategic competence for judging test performance (e.g., evaluating the correctness or…

  11. TRACEABILITY OF COORDINATE MEASURING MACHINES – CALIBRATION AND PERFORMANCE VERIFICATION

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Savio, Enrico; Bariani, Paolo

This document is used in connection with three exercises, each of 45 minutes duration, as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercises concern three aspects of coordinate measurement traceability: 1) Performance verification of a CMM using a ball bar; 2) Calibration of an optical coordinate measuring machine; 3) Uncertainty assessment using the ISO 15530-3 “Calibrated workpieces” procedure.

  12. Development of Testing Methodologies to Evaluate Postflight Locomotor Performance

    Science.gov (United States)

    Mulavara, A. P.; Peters, B. T.; Cohen, H. S.; Richards, J. T.; Miller, C. A.; Brady, R.; Warren, L. E.; Bloomberg, J. J.

    2006-01-01

    Crewmembers experience locomotor and postural instabilities during ambulation on Earth following their return from space flight. Gait training programs designed to facilitate recovery of locomotor function following a transition to a gravitational environment need to be accompanied by relevant assessment methodologies to evaluate their efficacy. The goal of this paper is to demonstrate the operational validity of two tests of locomotor function that were used to evaluate performance after long duration space flight missions on the International Space Station (ISS).

  13. Methodology for the preliminary design of high performance schools in hot and humid climates

    Science.gov (United States)

    Im, Piljae

A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to evaluate energy-efficient measures for K-5 schools easily and accurately, which would contribute to the accelerated dissemination of energy-efficient design. For the development of the toolkit, a survey was first performed to identify high performance measures available today that are being implemented in new K-5 school buildings. Then an existing case-study school building in a hot and humid climate was selected and analyzed to understand the energy use pattern in a school building and to serve as the basis for a calibrated simulation. Based on the information from the previous step, an as-built, calibrated simulation was then developed. To accomplish this, five calibration steps were performed to match the simulation results with the measured energy use. The five steps were: (1) using an actual 2006 weather file with measured solar radiation; (2) modifying the lighting and equipment schedules using ASHRAE's RP-1093 methods; (3) using actual equipment performance curves (i.e., scroll chiller); (4) using Winkelmann's method for the underground floor heat transfer; and (5) modifying the HVAC and room setpoint temperatures based on the measured field data. Next, the calibrated simulation of the case-study K-5 school was compared to an ASHRAE Standard 90.1-1999 code-compliant school. In the next step, the energy savings potentials from the application of several high performance measures to an equivalent ASHRAE Standard 90.1-1999 code-compliant school were evaluated. The high performance measures applied included the recommendations from the ASHRAE Advanced Energy Design Guides (AEDG) for K-12 and other high performance measures from the literature review, as well as a daylighting strategy and solar PV and thermal systems. The results show that the net

  14. Performance Testing Methodology for Safety-Critical Programmable Logic Controller

    International Nuclear Information System (INIS)

    Kim, Chang Ho; Oh, Do Young; Kim, Ji Hyeon; Kim, Sung Ho; Sohn, Se Do

    2009-01-01

    The Programmable Logic Controller (PLC) for use in Nuclear Power Plant safety-related applications is being developed and tested for the first time in Korea. This safety-related PLC is being developed to the requirements of regulatory guidelines and industry standards for safety systems. To verify that the quality of the developed PLC is sufficient for use in safety-critical systems, document reviews and various product tests were performed on the development documents for software, hardware, and V&V. This paper presents the performance testing methodology and its effectiveness for the PLC platform, as conducted by KOPEC.

  15. Evaluation of radionuclide calibrator performance with Tc-99m and I-123 in nuclear medicine centers

    International Nuclear Information System (INIS)

    Ahn, Ji Young; Kim, Gwe Ya; Yang, Hyun Kyu; Lim, Chun Il; Lee, Hyun Koo; Kim, Byung Tae; Jeong, Hee Kyo

    2004-01-01

    To minimize unnecessary radiation dose to patients, it is important to ensure that the administered radiopharmaceutical activity is accurately measured. Tc-99m is one of the most popular radionuclides used in nuclear medicine, and I-123 is also widely used. To investigate the level of measurement performance and to provide participants with a traceable standard against which to check and review their calibration factors for these particular radionuclides, the Korean Food and Drug Administration (KFDA), as a national secondary standard dosimetry laboratory, conducted a comparison program for Tc-99m and I-123 in nuclear medicine centers. 72 nuclear medicine centers (78 calibrators) participated in the comparison program for Tc-99m in 2003, and 37 centers (41 calibrators) for I-123 in 2004. For the comparison, Tc-99m and I-123 were accurately sub-divided into a series of 4 ml aliquots in 10 ml P6 vials and delivered to participants. Participants were invited to assay their P6 vial in each of their radionuclide calibrators and to report their results directly to KFDA. For the evaluation, KFDA used an NPL-CRC radionuclide calibrator that is traceable to the NPL (National Physical Laboratory) primary standard. The difference between the value reported by the hospital (A_hospital) and that of the KFDA (A_KFDA) is expressed as a percent deviation: DEV(%) = 100 (A_hospital - A_KFDA)/A_KFDA. Calibrators with deviations over 10% were checked again with the same procedure. For Tc-99m, 65% of the calibrators showed deviations within 5%, 18% were in the range 5% < |DEV| ≤ 10%, and 17% were over 10%. For I-123, 41% of the calibrators were within ±5%, 29% were in the range 5% < |DEV| ≤ 10%, and 30% were over 10%. The results show that such comparisons are necessary to improve the accuracy of measurements and to identify malfunctioning radionuclide calibrators.
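
The percent-deviation calculation and the three reporting bands used in the comparison can be sketched as follows; the activity values are invented for illustration:

```python
# Sketch of the deviation calculation used in the comparison:
#   DEV(%) = 100 * (A_hospital - A_kfda) / A_kfda,
# with calibrators grouped into |DEV| <= 5 %, 5 % < |DEV| <= 10 %,
# and |DEV| > 10 %. The activity values below are made up.

def percent_deviation(a_hospital, a_kfda):
    return 100.0 * (a_hospital - a_kfda) / a_kfda

def band(dev):
    dev = abs(dev)
    if dev <= 5.0:
        return "within 5 %"
    if dev <= 10.0:
        return "5-10 %"
    return "over 10 %"

# Hypothetical reported vs. reference activities (MBq) for a Tc-99m vial.
reports = [(148.0, 150.0), (162.0, 150.0), (139.0, 150.0), (151.0, 150.0)]
for a_h, a_k in reports:
    dev = percent_deviation(a_h, a_k)
    print(f"A_hospital={a_h:6.1f}  DEV={dev:+6.2f} %  -> {band(dev)}")
```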

  16. Landsat 8 Operational Land Imager On-Orbit Geometric Calibration and Performance

    Directory of Open Access Journals (Sweden)

    James Storey

    2014-11-01

    The Landsat 8 spacecraft was launched on 11 February 2013 carrying the Operational Land Imager (OLI) payload for moderate resolution imaging in the visible, near infrared (NIR), and short-wave infrared (SWIR) spectral bands. During the 90-day commissioning period following launch, several on-orbit geometric calibration activities were performed to refine the prelaunch calibration parameters. The results of these calibration activities were subsequently used to measure geometric performance characteristics in order to verify the OLI geometric requirements. Three types of geometric calibrations were performed: (1) updating the OLI-to-spacecraft alignment knowledge; (2) refining the alignment of the sub-images from the multiple OLI sensor chips; and (3) refining the alignment of the OLI spectral bands. The aspects of geometric performance that were measured and verified included: (1) geolocation accuracy with terrain correction, but without ground control (L1Gt); (2) Level 1 product accuracy with terrain correction and ground control (L1T); (3) band-to-band registration accuracy; and (4) multi-temporal image-to-image registration accuracy. Using the results of the on-orbit calibration update, all aspects of geometric performance were shown to meet or exceed system requirements.

  17. Design, calibration, and performance of the MINERvA detector

    Energy Technology Data Exchange (ETDEWEB)

    Aliaga, L. [Department of Physics, College of William and Mary, Williamsburg, VA 23187 (United States); Sección Física, Departamento de Ciencias, Pontificia Universidad Católica del Perú, Apartado 1761, Lima, Perú (Peru); Bagby, L.; Baldin, B. [Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Baumbaugh, A. [Sección Física, Departamento de Ciencias, Pontificia Universidad Católica del Perú, Apartado 1761, Lima, Perú (Peru); Bodek, A.; Bradford, R. [University of Rochester, Rochester, NY 14610 (United States); Brooks, W.K. [Departamento de Física, Universidad Técnica Federico Santa María, Avda. España 1680, Casilla 110-V, Valparaíso (Chile); Boehnlein, D. [Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Boyd, S. [Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Budd, H. [University of Rochester, Rochester, NY 14610 (United States); Butkevich, A. [Institute for Nuclear Research of the Russian Academy of Sciences, 117312 Moscow (Russian Federation); Martinez Caicedo, D.A.; Castromonte, C.M. [Hampton University, Department of Physics, Hampton, VA 23668 (United States); Christy, M.E. [Department of Physics, University of Minnesota – Duluth, Duluth, MN 55812 (United States); Chvojka, J. [University of Rochester, Rochester, NY 14610 (United States); Motta, H. da [Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, Urca, Rio de Janeiro, RJ 22290-180 (Brazil); and others

    2014-04-11

    The MINERvA experiment is designed to perform precision studies of neutrino-nucleus scattering using ν_μ and ν̄_μ neutrinos incident at 1–20 GeV in the NuMI beam at Fermilab. This article presents a detailed description of the MINERvA detector and describes the ex situ and in situ techniques employed to characterize the detector and monitor its performance. The detector is composed of a finely segmented scintillator-based inner tracking region surrounded by electromagnetic and hadronic sampling calorimetry. The upstream portion of the detector includes planes of graphite, iron and lead interleaved between tracking planes to facilitate the study of nuclear effects in neutrino interactions. Observations concerning the detector response over sustained periods of running are reported. The detector design and methods of operation have relevance to future neutrino experiments in which segmented scintillator tracking is utilized.

  18. Calibration and performance of the ATLAS Tile Calorimeter during the Run 2 of the LHC

    CERN Document Server

    Solovyanov, Oleg; The ATLAS collaboration

    2017-01-01

    The Tile Calorimeter (TileCal) is a hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. It is a non-compensating sampling calorimeter composed of steel and scintillating plastic tiles which are read out by photomultiplier tubes (PMTs). The TileCal is regularly monitored and calibrated by several different calibration systems: a Cs radioactive source that illuminates the scintillating tiles directly, a laser light system to directly test the PMT response, and a charge injection system (CIS) for the front-end electronics. These calibration systems, in conjunction with data collected during proton-proton collisions, provide extensive monitoring of the instrument and a means for equalising the calorimeter response at each stage of the signal propagation. The performance of the calorimeter and its calibration has been established with cosmic ray muons and the large sample of proton-proton collisions to study the energy response at the electromagnetic scale, probe of the hadron...

  19. Methodology for determining influence of organizational culture to business performance

    Directory of Open Access Journals (Sweden)

    Eva Skoumalová

    2007-01-01

    The aim of this article is to propose a possible methodology for quantitatively measuring organizational culture using a set of statistical methods. With this aim in view, we chose a procedure consisting of two major sections. The first covers the classification of organizational culture and the role of quantitative measurement of organizational culture. This part includes definitions and several methods used to classify organizational culture (Hofstede, Peters and Waterman, Deal and Kennedy, Edgar Schein, Kotter and Heskett, Lukášová), together with arguments for why a measurement perspective is worthwhile. The second major section contains the methodology for measuring organizational culture and its impact on organizational performance. We suggest using structural equation modeling for the quantitative assessment of organizational culture.

  20. Practice for characterization and performance of a high-dose radiation dosimetry calibration laboratory

    International Nuclear Information System (INIS)

    2003-01-01

    This practice addresses the specific requirements for laboratories engaged in dosimetry calibrations involving ionizing radiation, namely, gamma-radiation, electron beams or X-radiation (bremsstrahlung) beams. It specifically describes the requirements for the characterization and performance criteria to be met by a high-dose radiation dosimetry calibration laboratory. The absorbed-dose range is typically between 10 and 10^5 Gy. This practice addresses criteria for laboratories seeking accreditation for performing high-dose dosimetry calibrations, and is a supplement to the general requirements described in ISO/IEC 17025. By meeting these criteria and those in ISO/IEC 17025, the laboratory may be accredited by a recognized accreditation organization. Adherence to these criteria will help to ensure high standards of performance and instill confidence regarding the competency of the accredited laboratory with respect to the services it offers.

  1. Editorial Changes and Item Performance: Implications for Calibration and Pretesting

    Directory of Open Access Journals (Sweden)

    Heather Stoffel

    2014-11-01

    Previous research on the impact of text and formatting changes on test-item performance has produced mixed results. This matter is important because it is generally acknowledged that any change to an item requires that it be recalibrated. The present study investigated the effects of seven classes of stylistic changes on item difficulty, discrimination, and response time for a subset of 65 items from a standardized test for physician licensure completed by 31,918 examinees in 2012. One of two versions of each item (original or revised) was randomly assigned to examinees such that each examinee saw only two experimental items, with each item being administered to approximately 480 examinees. The stylistic changes had little or no effect on item difficulty or discrimination; however, one class of edits (changing an item from an open lead-in, i.e., an incomplete statement, to a closed lead-in, i.e., a direct question) did result in slightly longer response times. Data for nonnative speakers of English were analyzed separately, with nearly identical results. These findings have implications for the conventional practice of re-pretesting (or recalibrating) items that have been subjected to minor editorial changes.

  2. Methodology for quantitative evaluation of diagnostic performance. Project III

    International Nuclear Information System (INIS)

    Metz, C.E.

    1985-01-01

    Receiver Operating Characteristic (ROC) methodology is now widely recognized as the most satisfactory approach to the problem of measuring and specifying the performance of a diagnostic procedure. The primary advantage of ROC analysis over alternative methodologies is that it separates differences in diagnostic accuracy that are due to actual differences in discrimination capacity from those that are due to decision threshold effects. Our effort during the past year has been devoted to developing digital computer programs for fitting ROC curves to diagnostic data by maximum likelihood estimation and to developing meaningful and valid statistical tests for assessing the significance of apparent differences between measured ROC curves. FORTRAN programs previously written here for ROC curve fitting and statistical testing have been refined to make them easier to use and to allow them to run on a large variety of computer systems. We have also attempted to develop two new curve-fitting programs: one for conventional ROC data that assumes a different functional form for the ROC curve, and one that can be used for ''free-response'' ROC data. Finally, we have cooperated with other investigators to apply our techniques to ROC data generated in clinical studies, and we have sought to familiarize the medical community with the advantages of ROC methodology. 36 refs.
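
As a minimal illustration of what ROC analysis separates (discrimination capacity versus the choice of decision threshold), an empirical ROC curve and its trapezoidal area can be computed from hypothetical confidence-rating data. This is a nonparametric sketch, not the maximum-likelihood binormal fitting described above:

```python
# Empirical ROC points and area under the curve (AUC) for made-up
# 5-point confidence ratings; not the binormal ML fit of the abstract.

def roc_points(scores_pos, scores_neg):
    """(FPF, TPF) pairs as the decision threshold is swept downward."""
    thresholds = sorted(set(scores_pos + scores_neg), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpf = sum(s >= t for s in scores_pos) / len(scores_pos)
        fpf = sum(s >= t for s in scores_neg) / len(scores_neg)
        pts.append((fpf, tpf))
    return pts

def auc(pts):
    """Area under the ROC curve by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical ratings: diseased vs. normal cases on a 1-5 scale.
diseased = [5, 5, 4, 4, 3, 2]
normal   = [1, 1, 2, 2, 3, 4]
points = roc_points(diseased, normal)
print("AUC =", round(auc(points), 3))
```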

  3. Calibrations and verifications performed in view of the ILA reinstatement at JET

    Energy Technology Data Exchange (ETDEWEB)

    Dumortier, P., E-mail: pierre.dumortier@rma.ac.be; Durodié, F. [LPP-ERM-KMS, TEC partner, Brussels (Belgium); Helou, W. [CEA, IRFM, F-13108 St-Paul-Lez-Durance (France); Monakhov, I.; Noble, C.; Wooldridge, E.; Blackman, T.; Graham, M. [CCFE, Culham Science Centre, Abingdon (United Kingdom); Collaboration: EUROfusion Consortium

    2015-12-10

    The calibrations and verifications performed in preparation for the ITER-Like Antenna (ILA) reinstatement at JET are reviewed. A brief reminder of the ILA system layout is given. The different calibration methods and results are then discussed. They encompass the calibration of the directional couplers present in the system, the determination of the relation between the capacitor position readings and the capacitance value, the calibration of the voltage probes inside the antenna housing, the characterization of the RF cables, and the calibration of the acquisition electronics circuit. Earlier experience with the ILA has shown that accurate calibrations are essential for the control of the full ILA close-packed antenna array, its protection through the S-Matrix Arc Detection, and the new second stage matching algorithm to be implemented. Finally, the voltage stand-off of the capacitors is checked and the phase range achievable with the system is verified. The system layout is modified so as to allow dipole operation over the whole operating frequency range when operating with the 3 dB combiner-splitters.

  4. Design Methodology and Performance Evaluation of New Generation Sounding Rockets

    Directory of Open Access Journals (Sweden)

    Marco Pallone

    2018-01-01

    Sounding rockets are currently deployed to provide experimental data on the upper atmosphere, as well as for microgravity experiments. This work provides a methodology to design, model, and evaluate the performance of new sounding rockets. A general configuration composed of a rocket with four canards and four tail wings is sized and optimized, assuming different payload masses and microgravity durations. The aerodynamic forces are modeled with high fidelity using interpolation of available data. Three different guidance algorithms are used for the trajectory integration: constant attitude, near radial, and sun-pointing. The sun-pointing guidance is used to obtain the best microgravity performance while maintaining a specified attitude with respect to the sun, allowing for experiments that are temperature sensitive. Near radial guidance instead has the main purpose of reaching high altitudes, thus maximizing the microgravity duration. The results prove that the methodology at hand is straightforward to implement and capable of providing satisfactory performance in terms of microgravity duration.
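
As a rough sketch of how microgravity duration can be evaluated by trajectory integration, the toy one-dimensional model below counts the post-burnout time during which residual drag deceleration stays below 1e-4 g. All vehicle parameters are invented, gravity is held constant, and the paper's aerodynamic and guidance models are far richer than this:

```python
# Toy 1-D sounding-rocket ascent: Euler integration with an exponential
# atmosphere. "Microgravity" is counted after burnout while the drag
# acceleration is below 1e-4 g. All numbers are illustrative assumptions.
import math

G0 = 9.80665                    # m/s^2, taken constant (toy model)
RHO0, H_SCALE = 1.225, 8500.0   # sea-level air density (kg/m^3), scale height (m)

def air_density(h):
    return RHO0 * math.exp(-h / H_SCALE)

def simulate(thrust=60e3, burn_time=30.0, m0=900.0, mdot=10.0,
             cd_a=0.05, dt=0.05, micro_g=1e-4):
    """Integrate a vertical flight; return (apogee, microgravity time)."""
    h = v = t = 0.0
    m = m0
    apogee = micro = 0.0
    while h >= 0.0:
        drag = 0.5 * air_density(h) * v * abs(v) * cd_a   # opposes motion
        f = (thrust if t < burn_time else 0.0) - drag - m * G0
        if t >= burn_time and abs(drag) / m < micro_g * G0:
            micro += dt
        v += (f / m) * dt
        h += v * dt
        if t < burn_time:
            m -= mdot * dt
        t += dt
        apogee = max(apogee, h)
    return apogee, micro

apogee, micro = simulate()
print(f"apogee ~ {apogee / 1000:.0f} km, microgravity time ~ {micro:.0f} s")
```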

  5. Development and testing of the methodology for performance requirements

    International Nuclear Information System (INIS)

    Rivers, J.D.

    1989-01-01

    The U.S. Department of Energy (DOE) is in the process of implementing a set of materials control and accountability (MC&A) performance requirements. These graded requirements set a uniform level of performance for similar materials at various facilities against the threat of an insider adversary stealing special nuclear material (SNM). They are phrased in terms of detecting the theft of a goal quantity of SNM within a specified time period and with a probability greater than or equal to a specified value, and they include defense-in-depth requirements. The DOE has conducted an extensive effort over the last 2 1/2 yr to develop a practical methodology to be used in evaluating facility performance against the performance requirements specified in DOE Order 5633.3. The major participants in the development process have been the Office of Safeguards and Security (OSS), Brookhaven National Laboratory, and Los Alamos National Laboratory. The process has included careful reviews of related evaluation systems, a review of the intent of the requirements in the order, and site visits to most of the major facilities in the DOE complex. As a result of this extensive effort to develop guidance for the MC&A performance requirements, OSS was able to provide a practical method that allows facilities to evaluate the performance of their safeguards systems against the performance requirements. In addition, the evaluations can be validated by the cognizant operations offices in a systematic manner.

  6. Quality control of CT units - methodology of performance I

    International Nuclear Information System (INIS)

    Prlic, I.; Radalj, Z.

    1996-01-01

    The increasing use of x-ray computed tomography systems (CT scanners) in diagnostics requires an efficient means of evaluating their performance. This paper therefore presents a way to measure (a Quality Control, Q/C, procedure) and define CT scanner performance using a special phantom based on the recommendations of the American Association of Physicists in Medicine (AAPM). The performance parameters measurable with the phantom represent the scanner's capability, so periodic evaluation of these parameters enables users to verify the stability of the CT scanner regardless of the manufacturer, model, or software options. There are five important performance parameters to be measured: noise, contrast scale, nominal tomographic section thickness, and high- and low-contrast resolution (MTF). A sixth parameter is, of course, the dose per scan and slice, which gives the patient dose for a given diagnostic procedure. The last but not least parameter is the final image quality, which is delivered through the image processing device connected to the scanner; this is the final medical information needed for good medical practice according to Quality Assurance (Q/A) procedures in diagnostic radiology. The results of the performance evaluation must be assured against environmental influences (the measurements are to be made under specified conditions according to Q/A). This paper gives no detailed methodology recipe but shows, using one example (system noise measurements and linearity), the need for and relevant results of the measurements. The rest of the methodology is to be published. (author)
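
Two of the listed parameters, noise and linearity, can be sketched numerically. The pixel values and nominal CT numbers below are invented for illustration; a real QC procedure follows the AAPM phantom protocol:

```python
# Noise: standard deviation of CT numbers (HU) in a uniform water ROI.
# Linearity: measured mean HU for inserts of known nominal CT number
# should fall on a straight line; Pearson's r summarizes how well.
# All data here are made up.
import statistics

water_roi = [2, -3, 1, 0, 4, -2, 3, -1, 0, 2, -4, 1]
noise_hu = statistics.pstdev(water_roi)
print(f"water ROI: mean = {statistics.mean(water_roi):.2f} HU, "
      f"noise = {noise_hu:.2f} HU")

nominal  = [-1000, -100, 0, 120, 900]   # air, fat-like, water, ...
measured = [-998, -103, 2, 118, 905]
n = len(nominal)
mx, my = sum(nominal) / n, sum(measured) / n
cov = sum((x - mx) * (y - my) for x, y in zip(nominal, measured))
var_x = sum((x - mx) ** 2 for x in nominal)
var_y = sum((y - my) ** 2 for y in measured)
r = cov / (var_x * var_y) ** 0.5
print(f"CT-number linearity: r = {r:.5f}")
```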

  7. A development of containment performance analysis methodology using GOTHIC code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B. C.; Yoon, J. I. [Future and Challenge Company, Seoul (Korea, Republic of); Byun, C. S.; Lee, J. Y. [Korea Electric Power Research Institute, Taejon (Korea, Republic of); Lee, J. Y. [Seoul National University, Seoul (Korea, Republic of)

    2003-10-01

    Given that the well-established containment pressure/temperature analysis code CONTEMPT-LT treats the reactor containment as a single volume, this study introduces, as an alternative, the GOTHIC code for use in multi-compartment containment performance analysis. With the developed GOTHIC methodology, its applicability is verified through a containment performance analysis for Korean Nuclear Unit 1. The GOTHIC model for this plant is simply composed of 3 compartments, including the reactor containment and RWST. In addition, the containment spray system and containment recirculation system are simulated. As a result of the GOTHIC calculation, under the same assumptions and conditions as those in CONTEMPT-LT, the GOTHIC prediction shows very good results; the pressure and temperature transients, including their peaks, are nearly the same. It can be concluded that GOTHIC can provide reasonable containment pressure and temperature responses considering the inherent conservatism in the CONTEMPT-LT code.

  8. A development of containment performance analysis methodology using GOTHIC code

    International Nuclear Information System (INIS)

    Lee, B. C.; Yoon, J. I.; Byun, C. S.; Lee, J. Y.; Lee, J. Y.

    2003-01-01

    Given that the well-established containment pressure/temperature analysis code CONTEMPT-LT treats the reactor containment as a single volume, this study introduces, as an alternative, the GOTHIC code for use in multi-compartment containment performance analysis. With the developed GOTHIC methodology, its applicability is verified through a containment performance analysis for Korean Nuclear Unit 1. The GOTHIC model for this plant is simply composed of 3 compartments, including the reactor containment and RWST. In addition, the containment spray system and containment recirculation system are simulated. As a result of the GOTHIC calculation, under the same assumptions and conditions as those in CONTEMPT-LT, the GOTHIC prediction shows very good results; the pressure and temperature transients, including their peaks, are nearly the same. It can be concluded that GOTHIC can provide reasonable containment pressure and temperature responses considering the inherent conservatism in the CONTEMPT-LT code.

  9. Design Methodology And Performance Studies Of A Flexible Electrotextile Surface

    Directory of Open Access Journals (Sweden)

    Kayacan Ozan

    2015-09-01

    The ‘smart textiles’ concept requires developing products based not only on design, fashion, and comfort but also on function. The novel electro-textiles in the market open up new trends in smart and interactive gadgets. ‘Easy to care’ and durability properties are among the most important features of these products. On the other hand, wearable electronic knitwear has been gaining the attention of both researchers and industrial sectors. Combining knitting technology with electronics may become a dominant trend in the future because of the wide application possibilities. This research is concerned primarily with the design methodology of knitted fabrics containing electrically conductive textiles, and especially with in-use performance studies. The structural characteristics of the fabrics have been evaluated to enhance the performance properties.

  10. Performance evaluation methodology for historical document image binarization.

    Science.gov (United States)

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
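
The unweighted pixel-level counts underlying such an evaluation can be sketched as follows; the paper's contribution is a weighting scheme applied on top of these measures, which is not reproduced here:

```python
# Plain pixel-based recall, precision, and F-measure between a ground
# truth and a binarization result (1 = text pixel, 0 = background).
# The images are toy data; the paper's weighted variants are omitted.

def evaluate(gt, binarized):
    tp = sum(g == 1 and b == 1 for g, b in zip(gt, binarized))
    fp = sum(g == 0 and b == 1 for g, b in zip(gt, binarized))
    fn = sum(g == 1 and b == 0 for g, b in zip(gt, binarized))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f = (2 * recall * precision / (recall + precision)
         if recall + precision else 0.0)
    return recall, precision, f

# Toy 4x4 binary images flattened to lists.
gt        = [1, 1, 0, 0,  1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 1, 1]
binarized = [1, 1, 0, 0,  1, 0, 0, 1,  0, 0, 0, 0,  0, 0, 1, 1]
r, p, f = evaluate(gt, binarized)
print(f"recall={r:.3f} precision={p:.3f} F={f:.3f}")
```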

  11. Performance and calibration of the CHORUS scintillating fiber tracker and opto-electronics readout system

    International Nuclear Information System (INIS)

    Annis, P.; Aoki, S.; Brunner, J.; De Jong, M.; Fabre, J.P.; Ferreira, R.; Flegel, W.; Frekers, D.; Gregoire, G.; Herin, J.; Kobayashi, M.; Konijn, J.; Lemaitre, V.; Macina, D.; Meijer Drees, R.; Meinhard, H.; Michel, L.; Mommaert, C.; Nakamura, K.; Nakamura, M.; Nakano, T.; Niwa, K.; Niu, E.; Panman, J.; Riccardi, F.; Rondeshagen, D.; Sato, O.; Stefanini, G.; Vander Donckt, M.; Vilain, P.; Wilquet, G.; Winter, K.; Wong, H.T.

    1995-01-01

    An essential component of the CERN WA95/CHORUS experiment is a scintillating fiber tracker system for precise track reconstruction of particles. The tracker design and its opto-electronics readout and calibration system are discussed. The performance of the detector is presented. (orig.)

  12. Development and performance of a calibration system for a large calorimeter array

    International Nuclear Information System (INIS)

    Arenton, M.; Dawson, J.; Ditzler, W.R.

    1982-01-01

    Experiment 609 at Fermilab is a study of the properties of high-p_t collisions using a large segmented hadron calorimeter. The calibration and monitoring of such a large calorimeter array is a difficult undertaking. This paper describes the systems developed by E609 for automatic monitoring of the phototube gains and the performance of the associated electronics.

  13. Application of transient analysis methodology to heat exchanger performance monitoring

    International Nuclear Information System (INIS)

    Rampall, I.; Soler, A.I.; Singh, K.P.; Scott, B.H.

    1994-01-01

    A transient testing technique is developed to evaluate the thermal performance of industrial-scale heat exchangers. A Galerkin-based numerical method with a choice of spectral basis elements to account for spatial temperature variations in heat exchangers is developed to solve the transient heat exchanger model equations. Testing a heat exchanger in the transient state may be the only viable alternative where conventional steady-state testing procedures are impossible or infeasible. For example, this methodology is particularly suited to the determination of fouling levels in component cooling water system heat exchangers in nuclear power plants. The heat load on these so-called component coolers under steady-state conditions is too small to permit meaningful testing. An adequate heat load develops immediately after a reactor shutdown, when the exchanger inlet temperatures are highly time-dependent. The application of the analysis methodology is illustrated herein with reference to an in-situ transient test carried out at a nuclear power plant. The method, however, is applicable to any transient testing application.
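
The idea of inferring heat exchanger condition from transient data can be illustrated with a deliberately simplified lumped-capacitance model in place of the paper's Galerkin spectral solution; all numbers are invented:

```python
# Toy transient test: generate a cooldown transient from a lumped model
# with a "fouled" overall conductance UA, then recover UA by log-linear
# least squares on the normalized excess temperature. The model
# T(t) = T_inf + (T0 - T_inf) * exp(-UA * t / C) and every number below
# are illustrative assumptions, not the paper's formulation.
import math

def response(t, ua, c=2.0e6, t0=80.0, t_inf=25.0):
    """Lumped model: exponential approach to the coolant temperature."""
    return t_inf + (t0 - t_inf) * math.exp(-ua * t / c)

# "Measured" transient generated from the model with a fouled UA of 1500 W/K.
times = [0.0, 200.0, 400.0, 600.0, 800.0, 1000.0]
temps = [response(t, ua=1500.0) for t in times]

# Recover UA from the slope of ln((T - T_inf)/(T0 - T_inf)) vs. time.
t_inf, t0, c = 25.0, 80.0, 2.0e6
ys = [math.log((T - t_inf) / (t0 - t_inf)) for T in temps]
n = len(times)
slope = (n * sum(x * y for x, y in zip(times, ys)) - sum(times) * sum(ys)) \
        / (n * sum(x * x for x in times) - sum(times) ** 2)
ua_est = -slope * c
print(f"estimated UA = {ua_est:.1f} W/K")
```

Comparing the recovered UA against the clean design value would then indicate the level of fouling.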

  14. Characteristics of X ray calibration fields for performance test of radiation measuring instruments

    International Nuclear Information System (INIS)

    Shimizu, Shigeru; Takahashi, Fumiaki; Sawahata, Tadahiro; Tohnami, Kohichi; Kikuchi, Hiroshi; Murayama, Takashi

    1999-02-01

    Performance testing and calibration of radiation measuring instruments for low energy photons are made using X ray calibration fields that are monochromatically characterized by filtration of a continuous X ray spectrum. The X ray calibration field needs to be characterized by quality conditions such as the quality index and homogeneity coefficient. The present report describes the quality conditions, spectra, and some characteristics of the X ray irradiation fields in the Facility of Radiation Standard of the Japan Atomic Energy Research Institute (FRS-JAERI). Fifty-nine X ray qualities with quality indices of 0.6, 0.7, 0.8 and 0.9 were set for tube voltages between 10 kV and 350 kV. The X ray spectra measured with a Ge detector were evaluated in terms of exposure, ambient dose equivalent, and fluence for all the obtained qualities. The practical irradiation field was determined such that the dose distribution uniformity is within ±3%. The obtained results improve the quality of the X ray calibration fields and the calibration accuracy. (author)

  15. Calibration and Performance of the ATLAS Tile Calorimeter During the LHC Run 2

    CERN Document Server

    Cerda Alberich, Leonor; The ATLAS collaboration

    2017-01-01

    The Tile Calorimeter (TileCal) is the hadronic sampling calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). TileCal uses iron as absorber and scintillating tiles as active material, and it covers the central region |η| < 1.7. Jointly with the other calorimeters, it is designed for measurements of hadrons, jets, tau particles, and missing transverse energy. It also assists in muon identification. TileCal is regularly monitored and calibrated by several different calibration systems: a Cs radioactive source that illuminates the scintillating tiles directly, a laser light system to directly test the PMT response, and a charge injection system (CIS) for the front-end electronics. These calibration systems, in conjunction with data collected during proton-proton collisions, provide extensive monitoring of the instrument and a means for equalizing the calorimeter response at each stage of the signal propagation. The performance of the calorimeter has been established with cosmic ray muons and the large sa...

  16. Results of the PEP'93 intercomparison of reference cell calibrations and newer technology performance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Osterwald, C.R.; Emery, K. [National Renewable Energy Lab., Golden, CO (United States); Anevsky, S. [All-Union Research Inst. for Optophysical Measurements, Moscow (Russian Federation)] [and others]

    1996-05-01

    This paper presents the results of an international intercomparison of photovoltaic (PV) performance measurements and calibrations. The intercomparison, which was organized and operated by a group of experts representing national laboratories from across the globe (i.e., the authors of this paper), was accomplished by circulating two sample sets. One set consisted of twenty silicon reference cells that would, hopefully, form the basis of an international PV reference scale. A qualification procedure applied to the calibration results gave average calibration numbers with an overall standard deviation of less than 2% for the entire set. The second set was assembled from a wide range of newer technologies that present unique problems for PV measurements. As might be expected, these results showed much larger differences among laboratories. Methods were then identified that should be used to measure such devices, along with problems to avoid.
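
The qualification step can be sketched as a simple outlier cut followed by a relative-standard-deviation summary; the 4 % cut and the data below are assumptions for illustration, not the procedure actually used in PEP'93:

```python
# Sketch of an intercomparison qualification: drop calibration values
# deviating more than 4 % from the overall mean, then report the
# relative standard deviation of the qualified set. The cut and the
# seven laboratory values are invented.
import statistics

def qualify(values, cut=0.04):
    """Keep values within `cut` (fractional) of the overall mean."""
    mean_all = statistics.mean(values)
    return [v for v in values if abs(v - mean_all) / mean_all <= cut]

# Hypothetical calibration values reported by seven laboratories for
# one silicon reference cell (arbitrary units).
cal = [120.1, 119.6, 120.4, 121.0, 119.9, 126.3, 120.2]
kept = qualify(cal)
mean_q = statistics.mean(kept)
rel_sd = 100.0 * statistics.pstdev(kept) / mean_q
print(f"{len(kept)}/{len(cal)} values qualified; "
      f"mean = {mean_q:.2f}, relative SD = {rel_sd:.2f} %")
```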

  17. Methodology for assessing performance of waste management systems

    International Nuclear Information System (INIS)

    Meshkov, N.K.; Herzenberg, C.L.; Camasta, S.F.

    1988-01-01

    The newly revised draft DOE Order 5820.2, Chapter 3, requires that DOE low-level waste be managed on a systematic basis using the most appropriate combination of waste generation reduction, segregation, treatment, and disposal practices so that the radioactive components are contained and the overall cost effectiveness is maximized. This order expects each site to prepare and maintain an overall waste management systems performance assessment supporting the combination of waste management practices used in generation reduction, segregation, treatment, packaging, storage, and disposal. A document prepared by EG and G Idaho, Inc. for the Department of Energy, called Guidance for Conduct of Waste Management Systems Performance Assessment, is specifically intended to provide the approach necessary to meet the systems performance assessment requirement of DOE Order 5820.2, Chapter 3, and other applicable state regulations dealing with LLW (low-level radioactive wastes). Methods and procedures are needed for assessing the performance of a waste management system, and this report addresses that need. The purpose of the methodology provided in this report is to select the optimal way to manage particular sets of waste streams from generation to disposal in a safe and cost-effective manner, and thereby assist DOE LLW managers in complying with DOE Order 5820.2, Chapter 3, and the associated guidance document.

  18. Preliminary assessment of the operating performance and calibration of rectilinear scanners and dose calibrators in Rio de Janeiro - Brazil

    International Nuclear Information System (INIS)

    Mendes, L.; Wegst, A.

    1983-01-01

    Thirty rectilinear scanners and ten dose calibrators were tested under a variety of operating conditions. The tests for rectilinear scanners were based on image quality obtained with phantoms of the brain, liver (Williams phantom) and thyroid. The parameters investigated specifically for rectilinear scanners included those under direct control of the operator, such as proper setting of the focal distance, scan velocity, photopeak calibration, contrast, correct collimator, line spacing, and background count. (E.G.) [pt

  19. Development of Human Performance Analysis and Advanced HRA Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Won Dea; Park, Jin Kyun; Kim, Jae Whan; Kim, Seong Whan; Kim, Man Cheol; Ha, Je Joo

    2007-06-15

    The purpose of this project is to build a systematic framework that can evaluate the effect of human-factors-related problems on the safety of nuclear power plants (NPPs), as well as to develop technology that can be used to enhance human performance. The research goal of this project is twofold: (1) the development of a human performance database and a framework to enhance human performance, and (2) the analysis of human error, together with the construction of a technical basis for human reliability analysis. There are three main results of this study. The first is the development of a human performance database, called OPERA-I/II (Operator Performance and Reliability Analysis, Part I and Part II). In addition, a standard communication protocol was developed based on OPERA to reduce human errors caused by communication errors in the phase of event diagnosis. The task complexity (TACOM) measure and the methodology of optimizing diagnosis procedures were also finalized during this research phase. The second main result is the development of software, K-HRA, which supports the standard HRA method. Finally, an advanced HRA method named AGAPE-ET was developed by combining the MDTA (misdiagnosis tree analysis) technique with K-HRA; it can be used to analyze EOC (errors of commission) and EOO (errors of omission). These research results, such as OPERA-I/II, TACOM, the standard communication protocol, and the K-HRA and AGAPE-ET methods, will be used to improve the quality of HRA and to enhance human performance in nuclear power plants.

  20. Development of Human Performance Analysis and Advanced HRA Methodology

    International Nuclear Information System (INIS)

    Jung, Won Dea; Park, Jin Kyun; Kim, Jae Whan; Kim, Seong Whan; Kim, Man Cheol; Ha, Je Joo

    2007-06-01

    The purpose of this project is to build a systematic framework that can evaluate the effect of human-factors-related problems on the safety of nuclear power plants (NPPs), as well as to develop technology that can be used to enhance human performance. The research goal of this project is twofold: (1) the development of a human performance database and a framework to enhance human performance, and (2) the analysis of human error, together with the construction of a technical basis for human reliability analysis. There are three main results of this study. The first is the development of a human performance database, called OPERA-I/II (Operator Performance and Reliability Analysis, Part I and Part II). In addition, a standard communication protocol was developed based on OPERA to reduce human errors caused by communication errors in the phase of event diagnosis. The task complexity (TACOM) measure and the methodology of optimizing diagnosis procedures were also finalized during this research phase. The second main result is the development of software, K-HRA, which supports the standard HRA method. Finally, an advanced HRA method named AGAPE-ET was developed by combining the MDTA (misdiagnosis tree analysis) technique with K-HRA; it can be used to analyze EOC (errors of commission) and EOO (errors of omission). These research results, such as OPERA-I/II, TACOM, the standard communication protocol, and the K-HRA and AGAPE-ET methods, will be used to improve the quality of HRA and to enhance human performance in nuclear power plants.

  1. Performance and calibration of wave length shifting fibers for K2K SciBar detector

    International Nuclear Information System (INIS)

    Morita, Taichi

    2004-01-01

    Wavelength shifting (WLS) fibers (Kuraray Y11 (200) MS) are used for light collection from the scintillators in the SciBar detector. The performance of the WLS fibers was measured before installation. Because there are about 15,000 WLS fibers, it was necessary to build a system to measure their attenuation length efficiently. I report the pre-calibration method for the measurement and the performance of the WLS fibers in the SciBar detector. (author)
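
    The log-linear fit behind such an attenuation-length measurement can be sketched as follows (illustrative only: the positions, intensities, and the 350 cm value are synthetic, not from the K2K setup):

```python
import math

def attenuation_length(positions_cm, intensities):
    """Least-squares fit of ln(I) = ln(I0) - x/lam; returns lam in cm."""
    xs = positions_cm
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope

# Synthetic check: light attenuates as I(x) = 100 * exp(-x/350)
lam_true = 350.0
xs = [20.0, 60.0, 100.0, 140.0, 180.0]
meas = [100.0 * math.exp(-x / lam_true) for x in xs]
print(round(attenuation_length(xs, meas), 1))  # → 350.0
```

    In practice the light source is scanned along each fiber and the fitted lam is compared against an acceptance threshold.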

  2. Better Drumming Through Calibration: Techniques for Pre-Performance Robotic Percussion Optimization

    OpenAIRE

    Murphy, Jim; Kapur, Ajay; Carnegie, Dale

    2012-01-01

    A problem with many contemporary musical robotic percussion systems lies in the fact that solenoids fail to respond linearly to linear increases in input velocity. This nonlinearity forces performers to individually tailor their compositions to specific robotic drummers. To address this problem, we introduce a method of pre-performance calibration using metaheuristic search techniques. A variety of such techniques are introduced and evaluated and the results of the optimized solenoid-based p...
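
    The calibration idea the abstract describes, compensating a solenoid's nonlinear velocity response, can be illustrated with a simple inverse lookup table (the paper uses metaheuristic search; the table values below are hypothetical):

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation (xs must be ascending)."""
    if x <= xs[0]:
        return ys[0]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return ys[-1]

# Hypothetical calibration table: input value -> measured strike velocity
inputs   = [0, 32, 64, 96, 127]
measured = [0.0, 10.0, 35.0, 80.0, 120.0]   # nonlinear solenoid response

def linearized_input(target_velocity):
    """Invert the measured curve: which input yields the desired velocity?"""
    return interp(target_velocity, measured, inputs)

print(round(linearized_input(35.0)))   # → 64
print(round(linearized_input(57.5)))   # → 80
```

    With such an inverse map in place, a linear ramp of requested velocities produces a (nearly) linear ramp of actual strike velocities, so compositions no longer need per-robot tailoring.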

  3. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian

    2015-11-05

    Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  4. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.
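
    The prediction/correction framework mentioned in the abstract has the general shape of a Kalman-style filter; a minimal scalar sketch of that shape (this is not the TNM-CKF method itself, whose details are in the cited reference [1]):

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/correct cycle of a scalar Kalman filter."""
    # Prediction: propagate state estimate and its variance
    x_pred = F * x
    P_pred = F * P * F + Q
    # Correction: blend prediction with measurement z via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                      # uninformative initial estimate
for z in [0.9, 1.1, 1.0, 0.95]:      # noisy measurements of a value near 1
    x, P = kalman_step(x, P, z)
print(round(x, 2), round(P, 3))       # estimate converges toward 1.0
```

    In the parameter-calibration setting, the "state" is the vector of biophysiological parameters and the measurements are the fMRI signal samples.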

  5. Calibration and performance of the ATLAS Tile Calorimeter during the LHC Run 2

    Science.gov (United States)

    Cerda Alberich, L.

    2018-02-01

    The Tile Calorimeter (TileCal) is the hadronic sampling calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). TileCal uses iron as absorber and scintillating tiles as active material, and it covers the central region |η| < 1.7. Jointly with the other sub-detectors it is designed for measurements of hadrons, jets, tau particles and missing transverse energy, and it also assists in muon identification. TileCal is regularly monitored and calibrated by several different calibration systems: a Cs radioactive source, a laser light system to check the PMT response, and a charge injection system (CIS) to check the front-end electronics. These calibration systems, in conjunction with data collected during proton-proton collisions, Minimum Bias (MB) events, provide extensive monitoring of the instrument and a means for equalizing the calorimeter response at each stage of the signal propagation. The performance of the calorimeter has been established with cosmic-ray muons and the large sample of proton-proton collisions and compared to Monte Carlo (MC) simulations. The response of high-momentum isolated muons is used to study the energy response at the electromagnetic scale, and isolated hadrons are used as a probe of the hadronic response. The calorimeter time resolution is studied with multijet events. A description of the different TileCal calibration systems and the results on the calorimeter performance during the LHC Run 2 are presented. The results of the pile-up noise and response uniformity studies are also discussed.
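
    The idea of equalizing the response "at each stage of the signal propagation" can be pictured as a product of per-stage calibration factors, one from each calibration system (an illustrative factorization with invented numbers, not the exact TileCal energy formula):

```python
def calibrated_energy(adc_counts, c_adc_to_pc, c_pc_to_gev, c_cs, c_laser):
    """Channel energy as a chain of per-stage calibration factors:
    electronics gain (CIS), absolute scale, optics (Cs) and PMT (laser)
    drift corrections. Values below are placeholders for illustration."""
    return adc_counts * c_adc_to_pc * c_pc_to_gev * c_cs * c_laser

# Hypothetical per-channel factors
e = calibrated_energy(1000, 0.0012, 0.00095, 1.02, 0.99)
print(e)
```

    Each factor is maintained by its own calibration system, so a drift in one stage (e.g. PMT gain) can be corrected without re-deriving the whole chain.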

  6. A parallel calibration utility for WRF-Hydro on high performance computers

    Science.gov (United States)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool, PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.
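
    A model-independent calibration loop of the kind PEST automates can be sketched with a toy stand-in model (grid search on a synthetic recession curve; PEST itself uses the Gauss-Marquardt-Levenberg method and drives the real WRF-Hydro executable through template and instruction files):

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def model(k, t):
    """Toy linear-reservoir recession q(t) = q0 * exp(-k*t),
    standing in for a full WRF-Hydro run."""
    return [10.0 * math.exp(-k * x) for x in t]

t = [0, 1, 2, 3, 4, 5]
obs = model(0.3, t)                            # synthetic "observations"
candidates = [i / 100 for i in range(10, 60)]  # parameter grid 0.10 .. 0.59
best = min(candidates, key=lambda k: rmse(obs, model(k, t)))
print(best)  # → 0.3
```

    The structure is the same regardless of the model: perturb a parameter, rerun, score against observations, repeat — which is why each candidate evaluation parallelizes naturally on an HPC scheduler.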

  7. Impact of SPECT corrections on 3D-dosimetry for liver transarterial radioembolization using the patient relative calibration methodology

    Energy Technology Data Exchange (ETDEWEB)

    Pacilio, Massimiliano, E-mail: mpacilio@scamilloforlanini.rm.it; Basile, Chiara [Department of Medical Physics, Azienda Ospedaliera San Camillo Forlanini, Rome 00152 (Italy); Ferrari, Mahila; Botta, Francesca; Cremonesi, Marta [Department of Medical Physics, Istituto Europeo di Oncologia, Milan 20141 (Italy); Chiesa, Carlo [Department of Nuclear Medicine, Istituto Nazionale Tumori IRCCS Foundation, Milan 20133 (Italy); Lorenzon, Leda; Becci, Domenico [Postgraduate School of Medical Physics, “Sapienza” University of Rome, Rome 00185 (Italy); Mira, Marta [Post graduate Health Physics School, University of Milan, Milan 20122 (Italy); Torres, Leonel Alberto; Vergara Gil, Alex [Department of Nuclear Medicine, Clinical Research Division of the Center of Isotopes (DIC-CENTIS), Havana 11100 (Cuba); Coca Perez, Marco [Department of PET-CT and Nuclear Medicine, Imaging Center Medscan-Concepciòn, Concepciòn 4070061 (Chile); Ljungberg, Michael [Department of Medical Radiation Physics, University of Lund, Lund 22100 (Sweden); Pani, Roberto [Department of Medico-surgical Sciences and Biotecnologies, “Sapienza” University of Rome, Rome 00185 (Italy)

    2016-07-15

    Purpose: Many centers aim to plan liver transarterial radioembolization (TARE) with dosimetry, even without CT-based attenuation correction (AC), or with unoptimized scatter correction (SC) methods. This work investigates the impact of the presence vs absence of such corrections, and of limited spatial resolution, on 3D dosimetry for TARE. Methods: Three voxelized phantoms were derived from CT images of real patients with different body sizes. Simulations of {sup 99m}Tc-SPECT projections were performed with the SIMIND code, assuming three activity distributions in the liver: uniform, inside a “liver’s segment,” or distributed over multiple uptaking nodules (“nonuniform liver”), with a tumoral liver/healthy parenchyma ratio of 5:1. Projection data were reconstructed by a commercial workstation, with an OSEM protocol not specifically optimized for dosimetry (spatial resolution of 12.6 mm), with/without SC (optimized, or with parameters predefined by the manufacturer; dual energy window), and with/without AC. Activity in voxels was calculated by a relative calibration, assuming identical microspheres and {sup 99m}Tc-SPECT counts spatial distribution. 3D dose distributions were calculated by convolution with {sup 90}Y voxel S-values, assuming permanent trapping of microspheres. Cumulative dose-volume histograms in lesions and healthy parenchyma from the different reconstructions were compared with those obtained from the reference biodistribution (the “gold standard,” GS), assessing differences for D95%, D70%, and D50% (i.e., the minimum value of the absorbed dose to a percentage of the irradiated volume). γ tool analysis with a tolerance of 3%/13 mm was used to evaluate the agreement between the GS and simulated cases. The influence of deep breathing was studied by blurring the reference biodistributions with a 3D anisotropic Gaussian kernel and performing the simulations once again. Results: Differences in the dosimetric indicators were noticeable in some cases, always negative
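
    The dosimetric indicators compared in the study follow directly from the DVH definition given in the abstract; a minimal sketch on a synthetic voxel dose list (the values are invented for illustration):

```python
def d_percent(doses, pct):
    """D_pct: minimum dose received by the hottest pct% of the volume,
    e.g. D95 is the dose that at least 95% of voxels receive."""
    s = sorted(doses, reverse=True)
    k = max(1, round(len(s) * pct / 100))
    return s[k - 1]

doses = list(range(1, 101))     # 100 synthetic voxels, doses 1..100 Gy
print(d_percent(doses, 95))     # → 6
print(d_percent(doses, 50))     # → 51
```

    Comparing D95%, D70%, and D50% from each reconstruction against the same indicators from the gold-standard biodistribution quantifies how much the missing corrections bias the planned dose.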

  8. Calibration and Performance of the ATLAS Tile Calorimeter During the Run 2 of the LHC

    CERN Document Server

    Solovyanov, Oleg; The ATLAS collaboration

    2017-01-01

    The Tile Calorimeter (TileCal) is a hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. It is a non-compensating sampling calorimeter composed of steel and scintillating plastic tiles which are read out by photomultiplier tubes (PMT). The TileCal is regularly monitored and calibrated by several different calibration systems: a Cs radioactive source that illuminates the scintillating tiles directly, a laser light system to directly test the PMT response, and a charge injection system (CIS) for the front-end electronics. These calibration systems, in conjunction with data collected during proton-proton collisions, provide extensive monitoring of the instrument and a means for equalizing the calorimeter response at each stage of the signal propagation. The performance of the calorimeter and its calibration has been established with cosmic ray muons and the large sample of the proton-proton collisions to study the energy response at the electromagnetic scale, probe of the hadroni...

  9. RANS Based Methodology for Predicting the Influence of Leading Edge Erosion on Airfoil Performance

    Energy Technology Data Exchange (ETDEWEB)

    Langel, Christopher M. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; Chow, Raymond C. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; van Dam, C. P. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; Maniaci, David Charles [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Wind Energy Technologies Dept.

    2017-10-01

    The impact of surface roughness on flows over aerodynamically designed surfaces is of interest in a number of different fields. It has long been known that surface roughness will likely accelerate the laminar-turbulent transition process by creating additional disturbances in the boundary layer. However, there are very few tools available to predict the effects surface roughness will have on boundary layer flow. There are numerous implications of the premature appearance of a turbulent boundary layer. Increases in local skin friction, boundary layer thickness, and turbulent mixing can impact global flow properties, compounding the effects of surface roughness. With this motivation, an investigation into the effects of surface roughness on boundary layer transition has been conducted. The effort involved both an extensive experimental campaign and the development of a high-fidelity roughness model implemented in a RANS solver. Vast amounts of experimental data were generated at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel for the calibration and validation of the roughness model described in this work, as well as future efforts. The present work focuses on the development of the computational model, including a description of the calibration process. The primary methodology introduces a scalar field variable and associated transport equation that interacts with a correlation-based transition model. The additional equation allows non-local effects of surface roughness to be accounted for downstream of rough wall sections while maintaining a "local" formulation. The scalar field is determined through a boundary condition function that has been calibrated to flat plate cases with sand grain roughness. The model was initially tested on a NACA 0012 airfoil with roughness strips applied to the leading edge. Further calibration of the roughness model was performed using results from the companion experimental study on a NACA 633-418 airfoil.

  10. Variations in performance of LCDs are still evident after DICOM gray-scale standard display calibration.

    LENUS (Irish Health Repository)

    Lowe, Joanna M

    2010-07-01

    Quality assurance in medical imaging is directly beneficial to image quality. Diagnostic images are frequently displayed on secondary-class displays that have minimal or no regular quality assurance programs, and treatment decisions are being made from these display types. The purpose of this study is to identify the impact of calibration on physical and psychophysical performance of liquid crystal displays (LCDs) and the extent of potential variance across various types of LCDs.

  11. Caffeine and cognitive performance: persistent methodological challenges in caffeine research.

    Science.gov (United States)

    James, Jack E

    2014-09-01

    Human cognitive performance is widely perceived to be enhanced by caffeine at usual dietary doses. However, the evidence for and against this belief continues to be vigorously contested. Controversy has centred on caffeine withdrawal and withdrawal reversal as potential sources of experimental confounding. In response, some researchers have enlisted "caffeine-naïve" experimental participants (persons alleged to consume little or no caffeine) assuming that they are not subject to withdrawal. This mini-review examines relevant research to illustrate general methodological challenges that have been the cause of enduring confusion in caffeine research. At issue are the processes of caffeine withdrawal and withdrawal reversal, the definition of caffeine-naïve, the population representativeness of participants deemed to be caffeine-naïve, and confounding due to caffeine tolerance. Attention to these processes is necessary if premature conclusions are to be avoided, and if caffeine's complex effects and the mechanisms responsible for those effects are to be illuminated. Strategies are described for future caffeine research aimed at minimising confounding from withdrawal and withdrawal reversal. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Development and operational performance of a single calibration chamber for radon detectors

    International Nuclear Information System (INIS)

    Lopez-Coto, I.; Bolivar, J.P.; Mas, J.L.; Garcia-Tenorio, R.; Vargas, A.

    2007-01-01

    This work presents the design, setup and performance of a new single radon detector calibration chamber developed at the University of Huelva (Environmental Radioactivity Group). The system is based on a certified radon source and a traceable reference radon detector, which allows radon concentrations inside the chamber to be obtained in steady-state conditions within a range of 400-22 000 Bq m-3 with associated uncertainties of about 4%. In addition, the development of a new ad hoc calibration protocol (UHU-RC/01/06 'Rachel'), based on modelling the radon concentration within the chamber, allows the chamber to be used without the reference detector. To that end, a complete characterization and calibration of the different leakage constants and the flow meter reading have been performed. The accuracy and general performance of both working methods for the same chamber (i.e., with and without the reference detector) have been tested by means of their participation in an intercomparison exercise involving five active radon monitors
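
    The steady-state concentration such a chamber maintains follows from a balance between the source emanation rate and an effective removal constant (radioactive decay plus leakage plus ventilation). A minimal sketch with placeholder source and chamber values, not the UHU-RC/01/06 model itself:

```python
import math

LAMBDA_RN222 = math.log(2) / (3.82 * 24 * 3600)   # 222Rn decay constant, s^-1

def steady_state_concentration(q_bq_per_s, volume_m3,
                               lam_leak=0.0, lam_vent=0.0):
    """C_ss = q / (V * (lambda_decay + lambda_leak + lambda_vent)), Bq/m^3."""
    lam_eff = LAMBDA_RN222 + lam_leak + lam_vent
    return q_bq_per_s / (volume_m3 * lam_eff)

# Hypothetical source emanating 0.01 Bq/s into a sealed 1 m^3 chamber
c = steady_state_concentration(0.01, 1.0)
print(round(c), "Bq/m^3")
```

    Characterizing the leakage and ventilation constants is what lets the chamber predict its own concentration without the reference detector.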

  13. Intercomparison of calibration procedures of high dose rate 192 Ir sources in Brazil and a proposal of a new methodology

    International Nuclear Information System (INIS)

    Marechal, M.H.; Almeida, C.E. de

    1998-01-01

    The objective of this paper is to report the results of an intercomparison of the calibration procedures for 192 Ir sources presently in use in Brazil, and to propose a calibration procedure to derive the Nk for a Farmer-type ionization chamber at the 192 Ir energy by interpolating between the 60 Co gamma-ray and 250 kV x-ray calibration factors. The intercomparison results were all within ± 3.0%, except for one case where 4.6% was observed, later identified as a problem with the Nk value for x-rays. The method proposed by the present work makes it possible to improve the metrological coherence among the calibration laboratories and their users, since the Nk values could then be provided by any of the members of the SSDL network. (Author)
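
    The interpolation idea can be sketched as a weighted mean of the two calibration factors. The weights and Nk values below are purely illustrative; the paper derives its own interpolation for the 192 Ir spectrum:

```python
def nk_ir192(nk_co60, nk_250kv, w_co60=0.8):
    """Air-kerma calibration factor for 192Ir interpolated between the
    60Co and 250 kV x-ray factors. The 0.8/0.2 weighting is a placeholder,
    not the weighting derived in the paper."""
    return w_co60 * nk_co60 + (1.0 - w_co60) * nk_250kv

print(round(nk_ir192(0.985, 1.010), 4))  # → 0.99
```

    Because both input factors are traceable to a standards laboratory, an error in either one (as found for the x-ray Nk in the intercomparison) propagates directly into the derived 192 Ir factor.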

  14. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    KAUST Repository

    Zhao, Huaying

    2015-05-21

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.

  15. A multilaboratory comparison of calibration accuracy and the performance of external references in analytical ultracentrifugation.

    Directory of Open Access Journals (Sweden)

    Huaying Zhao

    Full Text Available Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.

  16. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    KAUST Repository

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M.D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Piszczek, Grzegorz; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Swygert, Sarah G.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
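
    The effect of applying the multiplicative correction factors can be illustrated on toy numbers spanning the reported 3.655-4.949 S range (the factors below are invented for illustration; the study derives them from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification):

```python
def corrected_s(s_raw, f_time, f_temp, f_radial):
    """Combined multiplicative correction of a raw sedimentation coefficient."""
    return s_raw * f_time * f_temp * f_radial

raw = [3.655, 4.10, 4.30, 4.50, 4.949]          # toy per-instrument s-values, S
factors = [(1.18, 1.0, 1.0), (1.05, 1.0, 1.0),  # hypothetical per-instrument
           (1.0, 1.0, 1.0), (0.96, 1.0, 1.0),   # correction factors
           (0.875, 1.0, 1.0)]
corr = [corrected_s(s, *f) for s, f in zip(raw, factors)]

spread_before = max(raw) - min(raw)
spread_after = max(corr) - min(corr)
print(round(spread_before, 3), round(spread_after, 3))
```

    The corrected values cluster tightly around a common mean, mirroring the roughly 7-fold reduction in the range of s-values reported in the study.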

  17. The CryoSat Interferometer after 6 years in orbit: calibration and achievable performance

    Science.gov (United States)

    Scagliola, Michele; Fornari, Marco; De Bartolomei, Maurizio; Bouffard, Jerome; Parrinello, Tommaso

    2016-04-01

    The main payload of CryoSat is a Ku-band pulse-width-limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter). When commanded in SARIn (synthetic aperture radar interferometry) mode, through coherent along-track processing of the returns received from two antennas, the interferometric phase related to the first arrival of the echo is used to retrieve the angle of arrival of the scattering in the across-track direction. In fact, the across-track echo direction can be derived by exploiting the precise knowledge of the baseline vector (i.e. the vector between the two antennas' centers of phase) and simple geometry. The end-to-end calibration strategy for the CryoSat interferometer consists of in-orbit calibration campaigns following the approach described in [1]. From the beginning of the CryoSat mission, interferometer calibration campaigns have been performed about once a year by rolling the spacecraft left and right by about ±0.4 deg. This abstract presents our analysis of the calibration parameters and of the achievable performance of the CryoSat interferometer over the 6 years of the mission. Additionally, further studies have been performed to assess the accuracy of the roll angle computed on ground as a function of the aberration correction (the apparent displacement of a celestial object from its true position, caused by the relative motion of the observer and the object) applied to the attitude quaternions provided by the star tracker mounted on board. Since the roll information is crucial to obtaining an accurate estimate of the angle of arrival, the data from the interferometer calibration campaigns have been used to verify how the application of the aberration correction affects the roll information and, in turn, the measured angle of arrival. [1] Galin, N.; Wingham, D.J.; Cullen, R.; Fornari, M.; Smith, W.H.F.; Abdalla, S., "Calibration of the CryoSat-2 Interferometer and Measurement of Across
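
    The "simple geometry" by which the interferometric phase yields the across-track angle of arrival can be sketched directly; the wavelength and baseline values below are nominal SIRAL figures used for illustration:

```python
import math

def angle_of_arrival(delta_phi_rad, wavelength_m=0.0221, baseline_m=1.167):
    """Across-track angle from interferometric phase:
    theta = asin(lambda * dphi / (2*pi*B))."""
    return math.asin(wavelength_m * delta_phi_rad
                     / (2 * math.pi * baseline_m))

# A 1 rad phase difference maps to a few milliradians across-track
theta = angle_of_arrival(1.0)
print(round(theta * 1000, 3), "mrad")
```

    The sensitivity of theta to the assumed baseline orientation is why an accurate roll angle (including the aberration correction to the star-tracker quaternions) matters so much.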

  18. Calibration and performance test of the Very-Front-End electronics for the CMS electromagnetic calorimeter

    International Nuclear Information System (INIS)

    Blaha, J.

    2008-05-01

    A Very-Front-End (VFE) card is an important part of the on-detector read-out electronics of the CMS (Compact Muon Solenoid) electromagnetic calorimeter, which is made of ~76,000 radiation-hard PbWO4 scintillating crystals and operates at the Large Hadron Collider (LHC) at CERN. Almost 16,000 VFE cards shape, amplify and digitize the signals that interacting particles generate in the photodetectors. Since no maintenance of any part of the calorimeter is possible during the 10-year lifetime of the experiment, an extensive screening program was employed throughout the whole manufacturing process. As part of the read-out electronics quality assurance program, systems for burn-in and precise calibration of the VFE boards were developed and successfully used at IPN Lyon. In addition to functionality tests, all relevant electrical properties of each card were measured and analyzed in detail to obtain their full characterization and to build a database of all required parameters, which will serve for the initial calibration of the whole calorimeter. In order to evaluate the calorimeter performance and also to deliver the most precise calibration constants, several fully equipped super-modules were extensively studied and calibrated during the test beam campaigns at CERN. As an important part of these tests, accurate studies of the electronics noise and relative gains, which are needed for measurements in the high-energy range, were carried out to optimize the amplitude reconstruction procedure and thus improve the precision of the calorimeter energy determination. The heart of the thesis consists of the calibration of all VFE boards, including optimization of the laboratory calibration system and precise analysis of the measured values to deliver the desired calibration constants. The second half of the thesis is focused on the accurate evaluation and optimization of the read-out electronics in real data-taking conditions.
The results obtained in the laboratory at IPN Lyon

  19. Using automatic calibration method for optimizing the performance of Pedotransfer functions of saturated hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    Ahmed M. Abdelbaki

    2016-06-01

    Full Text Available Pedotransfer functions (PTFs) are an easy way to predict saturated hydraulic conductivity (Ksat) without measurements. This study aims to auto-calibrate 22 PTFs. The PTFs were divided into three groups according to their input requirements, and the shuffled complex evolution algorithm was used in calibration. The results showed a great improvement in the performance of the functions compared to the original published functions. For group 1 PTFs, the geometric mean error ratio (GMER) and the geometric standard deviation of the error ratio (GSDER) values were modified from the ranges (1.27–6.09) and (5.2–7.01) to (0.91–1.15) and (4.88–5.85), respectively. For group 2 PTFs, the GMER and GSDER values were modified from (0.3–1.55) and (5.9–12.38) to (1.00–1.03) and (5.5–5.9), respectively. For group 3 PTFs, the GMER and GSDER values were modified from (0.11–2.06) and (5.55–16.42) to (0.82–1.01) and (5.1–6.17), respectively. The results showed that automatic calibration is an efficient and accurate method to enhance the performance of the PTFs.
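    The GMER and GSDER statistics used above are the geometric mean and geometric standard deviation of the prediction/measurement ratio. A minimal sketch of their computation (the sample values and function name are illustrative, not from the paper):

```python
import math

def gmer_gsder(measured, predicted):
    """Geometric mean error ratio and geometric standard deviation of
    the error ratio for Ksat predictions: GMER = exp(mean(ln r)),
    GSDER = exp(std(ln r)), with r_i = predicted_i / measured_i."""
    logs = [math.log(p / m) for m, p in zip(measured, predicted)]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)
    return math.exp(mean), math.exp(math.sqrt(var))

# A perfect PTF gives GMER = 1 and GSDER = 1.
gmer, gsder = gmer_gsder([1.0, 2.0, 4.0], [1.0, 2.0, 4.0])
```

    GMER below 1 indicates systematic under-prediction and above 1 over-prediction, which is why calibrated values clustering near 1 represent an improvement.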

  20. Overview of a performance assessment methodology for low-level radioactive waste disposal facilities

    International Nuclear Information System (INIS)

    Kozak, M.W.; Chu, M.S.Y.

    1991-01-01

    A performance assessment methodology has been developed for use by the US Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. This paper provides a summary and an overview of the modeling approaches selected for the methodology. The overview includes discussions of the philosophy and structure of the methodology. This performance assessment methodology is designed to provide the NRC with a tool for performing confirmatory analyses in support of license reviews related to postclosure performance. The methodology allows analyses of dose to individuals from off-site releases under normal conditions as well as on-site doses to inadvertent intruders. 24 refs., 1 tab

  1. Lean methodology for performance improvement in the trauma discharge process.

    Science.gov (United States)

    O'Mara, Michael Shaymus; Ramaniuk, Aliaksandr; Graymire, Vickie; Rozzell, Monica; Martin, Stacey

    2014-07-01

    High-volume, complex services such as trauma and acute care surgery are at risk for inefficiency. Lean process improvement can reduce health care waste. Lean allows a structured look at processes not easily amenable to analysis. We applied lean methodology to the current state of communication and discharge planning on an urban trauma service, citing areas for improvement. A lean process mapping event was held. The process map was used to identify areas for immediate analysis and intervention, defining metrics for the stakeholders. After intervention, new performance was assessed by direct data evaluation. The process was completed with an analysis of effect, and plans were made for addressing future focus areas. The primary area of concern identified was interservice communication. Changes centering on a standardized morning report structure reduced the number of unanswered consult questions from 67% to 34% (p = 0.0021). Physical therapy rework was reduced from 35% to 19% (p = 0.016). Patients admitted to units not designated to the trauma service had 1.6 times longer stays, and miscommunication exists around patient education at discharge. Lean process improvement is a viable means of health care analysis. When applied to a trauma service with 4,000 admissions annually, lean identifies areas ripe for improvement. Our inefficiencies surrounded communication and patient localization. Strategies arising from the input of all stakeholders led to real solutions for communication through a face-to-face morning report and identified areas for ongoing improvement. This focuses resource use and identifies areas for improvement of throughput in care delivery.

  2. Calibration And Performance Verification Of LSC Packard 1900TR AFTER REPAIRING

    International Nuclear Information System (INIS)

    Satrio; Evarista-Ristin; Syafalni; Alip

    2003-01-01

    The calibration and repeated verification of the LSC Packard 1900TR at the Hydrology Section, P3TIR, have been carried out. In the period from mid-1997 to July 2000, the counting system of the instrument was damaged and repaired several times. After each repair, the system was recalibrated and then verified. The calibration and verification were conducted using unquenched 3H, 14C and background standards. The calibration results show that the background count rates for 3H and 14C are 12.3 ± 0.79 cpm and 18.24 ± 0.69 cpm, respectively; the FOM for 3H and 14C is 285.03 ± 15.95 and 641.06 ± 16.45, respectively; and the 3H and 14C efficiencies are 59.13 ± 0.28% and 95.09 ± 0.31%, respectively. From the verification data, the SIS and tSIE parameters for 14C remain within their limits, the 3H and 14C efficiencies are still above the minimum limit, and the background fluctuation still shows normal conditions. It can be concluded that the LSC Packard 1900TR is still performing well and can be used for counting. (author)
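    The figure of merit (FOM) quoted above is conventionally E²/B, the squared counting efficiency over the background count rate; with the reported 3H values this reproduces the quoted FOM within its uncertainty. A sketch (the function names and the efficiency example values are ours):

```python
def counting_efficiency(gross_cpm, background_cpm, activity_dpm):
    """Counting efficiency in percent: net count rate divided by the
    known disintegration rate (dpm) of the standard."""
    return 100.0 * (gross_cpm - background_cpm) / activity_dpm

def figure_of_merit(efficiency_percent, background_cpm):
    """LSC figure of merit, FOM = E**2 / B."""
    return efficiency_percent ** 2 / background_cpm

# With the 3H values reported above (59.13 % efficiency, 12.3 cpm
# background), the FOM evaluates to about 284, consistent with the
# quoted 285.03 +/- 15.95.
fom = figure_of_merit(59.13, 12.3)
```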

  3. ACCESS, Absolute Color Calibration Experiment for Standard Stars: Integration, Test, and Ground Performance

    Science.gov (United States)

    Kaiser, Mary Elizabeth; Morris, Matthew; Aldoroty, Lauren; Kurucz, Robert; McCandliss, Stephan; Rauscher, Bernard; Kimble, Randy; Kruk, Jeffrey; Wright, Edward L.; Feldman, Paul; Riess, Adam; Gardner, Jonathon; Bohlin, Ralph; Deustua, Susana; Dixon, Van; Sahnow, David J.; Perlmutter, Saul

    2018-01-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. Systematic errors associated with astrophysical data used to probe fundamental astrophysical questions, such as SNeIa observations used to constrain dark energy theories, now exceed the statistical errors associated with merged databases of these measurements. ACCESS, “Absolute Color Calibration Experiment for Standard Stars”, is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35‑1.7μm bandpass. To achieve this goal ACCESS (1) observes HST/CALSPEC stars (2) above the atmosphere to eliminate telluric spectral contaminants (e.g. OH) (3) using a single optical path and (HgCdTe) detector (4) that is calibrated to NIST laboratory standards and (5) monitored on the ground and in flight using an on-board calibration monitor. The observations are (6) cross-checked and extended through the generation of stellar atmosphere models for the targets. The ACCESS telescope and spectrograph have been designed, fabricated, and integrated. Subsystems have been tested. Performance results for subsystems, operations testing, and the integrated spectrograph will be presented. NASA sounding rocket grant NNX17AC83G supports this work.

  4. Measuring Instruments Control Methodology Performance for Analog Electronics Remote Labs

    Directory of Open Access Journals (Sweden)

    Unai Hernandez-Jayo

    2012-12-01

    Full Text Available This paper presents the work that has been developed in parallel to the VISIR project. The objective of this paper is to present the results of the validation processes that have been carried out to check the control methodology. This method has been developed with the aim of being independent of the instruments of the labs.

  5. Development of a reference system and a methodology for the calibration of ophthalmic applicators utilized in brachytherapy

    International Nuclear Information System (INIS)

    Oliveira, Mercia Liane de

    2005-01-01

    90Sr+90Y beta radiation sources are widely utilized in brachytherapy for the treatment of superficial lesions of the eyes and skin. According to international recommendations, these applicators should be specified in terms of the absorbed dose rate to water at the reference point (1 mm from the source surface, along its axis of symmetry). Two mini-extrapolation chambers were developed with geometrical characteristics adequate for the dosimetry of plane and concave 90Sr+90Y sources. These chambers have an outer diameter of 3.0 cm and a length of 11.3 cm. Aluminized polyester foils are used as entrance windows, and the collecting electrodes were made of graphite-coated polymethylmethacrylate. The mini-chambers were tested in 90Sr+90Y radiation beams from a beta check source and from plane and concave ophthalmic applicators. All results obtained show the usefulness of these chambers as primary reference standards for the calibration of 90Sr+90Y applicators; prior calibration of the mini-chambers against a standard ionization chamber or a standard beta source is unnecessary. The mini-chamber with the plane window also proved useful for low-energy X-rays. In order to establish an alternative method for the calibration of beta radiation sources, different thermoluminescent materials were tested: LiF, CaF2:Mn, CaF2:Dy and CaSO4:Dy. For their characterization, the response reproducibility, calibration curves, TL response as a function of the source-detector distance, transmission factors and the linearity of the sample response were determined. The calibration procedures for ophthalmic applicators were established using the ionometric technique and thermoluminescence dosimetry. (author)

  6. The Principle of Pooled Calibrations and Outlier Retainment Elucidates Optimum Performance of Ion Chromatography

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov; Mikolajczak, Maria; Wojtachnio-Zawada, Katarzyna Olga

    A new principle of statistical data treatment is presented. Since the majority of scientists and customers are interested in determination of the true amount of analyte in real samples, the focus of attention should be directed towards the concept of accuracy rather than precision. By exploiting...... that the principle of pooled calibrations provides a more realistic picture of the analytical performance with the drawback however, that generally higher levels of uncertainties should be accepted, as compared to contemporary literature values. The implications to the science of analytical chemistry in general...

  7. The principle of pooled calibrations and outlier retainment elucidates optimum performance of ion chromatography

    DEFF Research Database (Denmark)

    Andersen, Jens; Mikolajczak, Maria; Wojtachnio-Zawada, Katarzyna Olga

    2012-01-01

    A principle with quality assurance of ion chromatography (IC) is presented. Since the majority of scientists and customers are interested in the determination of the true amount of analyte in real samples, the focus of attention should be directed towards the concept of accuracy rather than...... investigations of method validations where it was found that the principle of pooled calibrations provides a more realistic picture of the analytical performance with the drawback, however, that generally higher levels of uncertainties should be accepted, as compared to contemporary literature values...

  8. Calibration and performance of the MARK II drift chamber vertex detector

    International Nuclear Information System (INIS)

    Durrett, D.; Ford, W.T.; Hinshaw, D.A.; Rankin, P.; Smith, J.G.; Weber, P.

    1990-05-01

    We have calibrated and studied the performance of the MARK II drift chamber vertex detector with cosmic ray tracks collected with the chamber inside the MARK II detector at the SLC. The chamber achieves 30 μm impact parameter resolution and 500 μm track-pair resolution using CO2/C2H6 (92/8) at 2 atmospheres pressure. The chamber has successfully recorded Z0 decays at the SLC, and resolved tracks in dense hadronic jets with good efficiency and high accuracy. 5 refs., 13 figs

  9. Calibration of UFBC counters and their performance in the assay of large mass plutonium samples

    International Nuclear Information System (INIS)

    Verrecchia, G.P.D.; Smith, B.G.R.; Cranston, R.

    1991-01-01

    This paper reports on the cross-calibration of four Universal Fast Breeder reactor assembly coincidence (UFBC) counters using multi-can containers of plutonium oxide powders with masses between 2 and 12 kg of plutonium, and a parametric study of the sensitivity of the detector response to the positioning, or removal and substitution, of the material with empty cans. The paper also reports on the performance of the UFBC for routine measurements on large-mass, multi-can containers of plutonium oxide powders and compares the results to experience previously obtained in the measurement of fast-reactor-type fuel assemblies in the mass range 2 to 16 kg of plutonium

  10. The Performance and Usability of a Factory-Calibrated Flash Glucose Monitoring System

    OpenAIRE

    Bailey, Timothy; Bode, Bruce W.; Christiansen, Mark P.; Klaff, Leslie J.; Alva, Shridhara

    2015-01-01

    Abstract Introduction: The purpose of the study was to evaluate the performance and usability of the FreeStyle® Libre™ Flash glucose monitoring system (Abbott Diabetes Care, Alameda, CA) for interstitial glucose results compared with capillary blood glucose results. Materials and Methods: Seventy-two study participants with type 1 or type 2 diabetes were enrolled by four U.S. clinical sites. A sensor was inserted on the back of each upper arm for up to 14 days. Three factory-only calibrated s...

  11. CryoSat-2: Post launch performance of SIRAL-2 and its calibration/validation

    Science.gov (United States)

    Cullen, Robert; Francis, Richard; Davidson, Malcolm; Wingham, Duncan

    2010-05-01

    1. INTRODUCTION The main payload of CryoSat-2 [1], SIRAL (Synthetic interferometric radar altimeter), is a Ku-band pulse-width limited radar altimeter which transmits pulses at a high pulse repetition frequency, thus making received echoes phase coherent and suitable for azimuth processing [2]. The azimuth processing, in conjunction with correction for slant range, improves along-track resolution to about 250 meters, which is a significant improvement over traditional pulse-width limited systems such as Envisat RA-2 [3]. CryoSat-2 will be launched on 25th February 2010, and this paper describes the pre- and post-launch measures of CryoSat/SIRAL performance and the status of mission validation planning. 2. SIRAL PERFORMANCE: INTERNAL AND EXTERNAL CALIBRATION Phase-coherent pulse-width limited radar altimeters such as SIRAL-2 pose a new challenge when considering a strategy for calibration. Along with the need to generate the well understood corrections for transfer function amplitude with respect to frequency, gain and instrument path delay, there is also a need to provide corrections for transfer function phase with respect to frequency and AGC setting, and for phase variation across bursts of pulses. Furthermore, since some components of these radars are temperature sensitive, one needs to be careful when deciding how often calibrations are performed so as not to impact mission performance. Several internal calibration ground processors have been developed to model imperfections within the CryoSat-2 radar altimeter (SIRAL-2) hardware and reduce their effect on the science data stream via the use of calibration correction auxiliary products within the ground segment. We present the methods and results used to model and remove imperfections and describe the baseline for usage of SIRAL-2 calibration modes during the commissioning phase and the operational exploitation phases of the mission.
Additionally we present early results derived from external calibration of SIRAL via

  12. Description and performance of the OGSE for VNIR absolute spectroradiometric calibration of MTG-I satellites

    Science.gov (United States)

    Glastre, W.; Marque, J.; Compain, E.; Deep, A.; Durand, Y.; Aminou, D. M. A.

    2017-09-01

    respect to primary standards down to 3% (k=3), coupled with a constraining environment (vacuum), a large dynamic range (up to a factor of 100) and a high spectral resolution of 3 nm. Another main difficulty is to adapt the specific MOTA etendue (300 mm pupil, 9 mrad field) to available primary standards. Each of these constraints was addressed by specific tool design and production and a fine optimization of the calibration procedure, with a large involvement of metrology laboratories. This paper introduces the missions of the MTG satellites and particularly the FCI instrument. The requirements regarding the absolute calibration over the different spectrometric channels and the global strategy to fulfill them are described. The MOTA architecture and calibration strategy are then discussed, and final expected results are presented, showing state-of-the-art performance.

  13. Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance

    Science.gov (United States)

    Speck, Richard P.; Herz, Norman E., Jr.

    2000-06-01

    Automatic test and calibration has become a valuable feature in many consumer products, ranging from antilock braking systems to auto-tune TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in the HMDs and thereby making binocular displays a practical reality. A suitcase-sized, field-portable optical ATE unit could re-zero these errors in the Ready Room to cancel the effects of aging, minor damage and component replacement. Planning on this would yield large savings through relaxed component specifications and reduced logistic costs. Yet, the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual 'beyond visual range' operations. Some versions of the ATE described are in production and examples of high resolution optical test data will be discussed.

  14. Methodology to include a correction for offset in the calibration of a Diode-based 2D verification device; Metodologia para incluir una correccion por offset en la calibracion de un dispositivo de verificacion 2D basado en diodos

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez Ros, J. C.; Pamos Urena, M.; Jerez Sainz, M.; Lobato Munoz, M.; Jodar Lopez, C. A.; Ruiz Lopez, M. a.; Carrasco Rodriguez, J. L.

    2013-07-01

    We propose a methodology to correct the dose planes of the 2D verification device MapCHECK 2 for offset. The methodology provides an offset-correction matrix applied at dose calibration, accounting for the offset of the central diode as well as for the offset of each individual diode in each acquisition. (Author)
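    The record is terse, but a correction of this kind amounts to subtracting a per-diode offset matrix (plus the central-diode offset absorbed into the dose calibration) from each acquired plane. A minimal sketch; the array layout, function name and sign convention are our assumptions, not details from the abstract:

```python
def correct_offset(plane, offsets, central_offset=0.0):
    """Subtract a per-diode offset matrix, plus the offset of the
    central diode folded into the dose calibration, from a measured
    2D dose plane (given as a list of rows)."""
    return [[m - o - central_offset for m, o in zip(row_m, row_o)]
            for row_m, row_o in zip(plane, offsets)]

# Illustrative 2x2 plane with a uniform 0.2 diode offset and a 0.1
# central-diode offset.
corrected = correct_offset([[10.0, 11.0], [12.0, 13.0]],
                           [[0.2, 0.2], [0.2, 0.2]], central_offset=0.1)
```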

  15. Multispectral calibration to enhance the metrology performance of C-mount camera systems

    Directory of Open Access Journals (Sweden)

    S. Robson

    2014-06-01

    Full Text Available Low cost monochrome camera systems based on CMOS sensors and C-mount lenses have been successfully applied to a wide variety of metrology tasks. For high accuracy work such cameras are typically equipped with ring lights to image retro-reflective targets as high contrast image features. Whilst algorithms for target image measurement and lens modelling are highly advanced, including separate RGB channel lens distortion correction, target image circularity compensation and a wide variety of detection and centroiding approaches, less effort has been directed towards optimising physical target image quality by considering optical performance in narrow wavelength bands. This paper describes an initial investigation to assess the effect of wavelength on camera calibration parameters for two different camera bodies and the same ‘C-mount’ wide angle lens. Results demonstrate the expected strong influence on principal distance, radial and tangential distortion, and also highlight possible trends in principal point, orthogonality and affinity parameters which are close to the parameter estimation noise level from the strong convergent self-calibrating image networks.
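    The wavelength-dependent radial distortion studied above is conventionally parameterised as a polynomial in even powers of the radial distance from the principal point (the Brown model); a per-wavelength calibration then yields one coefficient set per band. A sketch with illustrative coefficients, not values from the paper:

```python
def radial_distortion(x, y, k1, k2, k3=0.0):
    """Brown radial distortion: displacement of an image point (x, y),
    measured from the principal point, as a polynomial in r^2."""
    r2 = x * x + y * y
    factor = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

# With all coefficients zero there is no displacement.
dx, dy = radial_distortion(1.0, 2.0, k1=0.0, k2=0.0)
```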

  16. Surgical Treatment of Anal Stenosis with Diamond Flap Anoplasty Performed in a Calibrated Fashion.

    Science.gov (United States)

    Gülen, Merter; Leventoğlu, Sezai; Ege, Bahadir; Menteş, B Bülent

    2016-03-01

    In anoplasty for anal stenosis, it is not clear what final anal caliber should be targeted. The aim of this study was to investigate the results of diamond-flap anoplasty performed in a calibrated manner for the treatment of severe anal stenosis due to a previous hemorrhoidectomy. Prospectively prepared standard forms were evaluated retrospectively. Anoplasty with unilateral or bilateral diamond flaps was performed for moderate or severe anal stenosis, targeting a final anal caliber of 25 to 26 mm. The demographic characteristics, causes of anal stenosis, number of previous surgeries, anal stenosis staging (Milsom and Mazier), anal calibers (millimeter), the Cleveland Clinic Incontinence Score, and the modified obstructed defecation syndrome Longo score were recorded on pre-prepared standard forms, as well as postoperative complications and the time of return to work. From January 2011 to July 2013, 18 patients (12 males, 67%) with a median age of 39 years (range, 27-70) were treated. All of the patients had a history of previous hemorrhoidectomy. The number of previous corrective interventions was 2.1 ± 1.8 (range, 0-4), and 2 patients had a history of failed anoplasty. Five patients (28%) had moderate anal stenosis and 13 (72%) had severe anal stenosis. Preoperative, intraoperative, and 12-month postoperative anal calibration values were 9 ± 3 mm (range, 5-15), 25 ± 0.75 mm (range, 24-26), and 25 ± 1 mm (range, 23-27) (p < 0.0001, for immediate postoperative and 12-month postoperative anal calibers compared with the intraoperative). Preoperative and 12-month postoperative Cleveland Clinic Incontinence Scores were 0.83 ± 1.15 (range, 0-4) and 0.39 ± 0.70 (range, 0-2) (p = 1.0). The clinical success rate was 88.9%. No severe postoperative complications were observed. This study was limited because it was a single-armed, retrospective analysis of prospectively designed data. Diamond-flap anoplasty performed in a standardized and calibrated

  17. Comparison of Performance between Genetic Algorithm and SCE-UA for Calibration of SCS-CN Surface Runoff Simulation

    OpenAIRE

    Jeon, Ji-Hong; Park, Chan-Gi; Engel, Bernard

    2014-01-01

    Global optimization methods linked with simulation models are widely used for automated calibration and serve as useful tools for searching for cost-effective alternatives for environmental management. A genetic algorithm (GA) and shuffled complex evolution (SCE-UA) algorithm were linked with the Long-Term Hydrologic Impact Assessment (L-THIA) model, which employs the curve number (SCS-CN) method. The performance of the two optimization methods was compared by automatically calibrating L-THI...

  18. Biological dosimetry of ionizing radiation: Evaluation of the dose with cytogenetic methodologies by the construction of calibration curves

    Science.gov (United States)

    Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia

    2016-09-01

    In the case of a radiation accident, it is well known that, in the absence of physical dosimetry, biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast-scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. Both in vitro methods described above require lymphocyte stimulation, and this limits their application in radiation emergency medicine, where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated (non-stimulated) human PBLs were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa-stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose-response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. Also reported are dose-response curves showing scored dicentrics and rings per cell, combining
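    Dicentric-plus-ring dose-response calibration curves of the kind described above are conventionally fitted with the linear-quadratic model Y = C + αD + βD². A least-squares sketch on synthetic yields; the coefficient values and function name are illustrative, not taken from this work:

```python
import numpy as np

def fit_linear_quadratic(doses, yields):
    """Least-squares fit of Y = C + alpha*D + beta*D**2, returning
    (C, alpha, beta); `yields` are aberrations per cell."""
    D = np.asarray(doses, dtype=float)
    A = np.column_stack([np.ones_like(D), D, D ** 2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(yields, dtype=float),
                                 rcond=None)
    return tuple(coeffs)

# Synthetic, noise-free yields generated from C=0.001, alpha=0.03,
# beta=0.06; the fit recovers the generating coefficients.
doses = [0.0, 0.5, 1.0, 2.0, 3.0, 4.0]
yields = [0.001 + 0.03 * d + 0.06 * d ** 2 for d in doses]
C, alpha, beta = fit_linear_quadratic(doses, yields)
```

    In practice the yields are scored counts, so the fit is usually done by Poisson maximum likelihood rather than ordinary least squares; the sketch only shows the shape of the model.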

  19. Calibration and Performance of the ATLAS Tile Calorimeter during the LHC Run 2

    CERN Document Server

    Faltova, Jana; The ATLAS collaboration

    2017-01-01

    The Tile Calorimeter (TileCal) covers the central part of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling hadronic calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by charged particles in the tiles is transmitted by wavelength-shifting fibres to photomultipliers, where it is converted to electric pulses and further processed by the on-detector electronics located in the outermost part of the calorimeter. The TileCal calibration system comprises Cesium radioactive sources, a laser, charge injection elements and an integrator-based readout system. Combined information from all systems allows the calorimeter response to be monitored and equalized at each stage of the signal production, from scintillation light to digitisation. The performance of the calorimeter is established with the large sample of proton-proton collisions. Isolated hadrons a...

  20. Measurement of natural radioactivity: Calibration and performance of a high-resolution gamma spectrometry facility

    DEFF Research Database (Denmark)

    Murray, A. S.; Helsted, L. M.; Autzen, M.

    2018-01-01

    Murray et al. (2015) described an international inter-comparison of dose rate measurements undertaken using a homogenised beach ridge sand from Jutland, Denmark. The measured concentrations for 226Ra, 232Th and 40K from different laboratories varied considerably, with relative standard deviations...... of 26% (n=8), 59% (n=23) and 15% (n=23), respectively. In contrast, the relative standard deviations observed internally within our laboratory were 9%, 11% and 7%, respectively (n=20), and in addition our mean values were consistent with the global 40K mean, but significantly different from the 232Th...... mean. These problems in both accuracy and precision have led us to examine both the long term performance of our analytical facility, and its calibration. Our approach to the preparation of new absolute 238U, 232Th and 40K standards is outlined and tested against international standards. We also report...

  1. Calibration and Performance of the ATLAS Tile Calorimeter During the LHC Run 2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221190; The ATLAS collaboration

    2017-01-01

    The Tile Calorimeter (TileCal) covers the central part of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling hadronic calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by charged particles in the tiles is transmitted by wavelength-shifting fibres to photomultipliers, where it is converted to electric pulses and further processed by the on-detector electronics located in the outermost part of the calorimeter. The TileCal calibration system comprises Cesium radioactive sources, a laser, charge injection elements and an integrator-based readout system. Combined information from all systems allows the calorimeter response to be monitored and equalized at each stage of the signal production, from scintillation light to digitisation. The performance of the calorimeter has been established with cosmic ray muons and the large sample of the proton-proton col...

  2. Estimation of block conductivities from hydrologically calibrated fracture networks. Description of methodology and application to Romuvaara investigation area

    International Nuclear Information System (INIS)

    Niemi, A.; Kontio, K.; Kuusela-Lahtinen, A.; Vaittinen, T.

    1999-03-01

    This study looks at heterogeneity in hydraulic conductivity at the Romuvaara site. It concentrates on the average rock outside the deterministic fracture zones, especially in the deeper parts of the bedrock. A large number of stochastic fracture networks is generated based on geometrical data on fracture geometry from the site. The hydraulic properties of the fractures are determined by calibrating the networks against well test data. The calibration is done by starting from an initial estimate of the fracture transmissivity distribution based on 2 m interval flow meter data, simulating the 10 m constant head injection test behaviour in a number of fracture network realisations and comparing the simulated well-test statistics to the measured ones. A large number of possible combinations of the mean and standard deviation of fracture transmissivities is tested, and the goodness-of-fit between the measured and simulated results is determined by means of the bootstrapping method. As a result, a range of acceptable fracture transmissivity distribution parameters is obtained. In the accepted range, the mean of log transmissivity varies between -13.9 and -15.3 and the standard deviation between 4.0 and 3.2, with an increase in standard deviation compensating for a decrease in mean. The effect of spatial autocorrelation was not simulated. The variogram analysis did, however, give indications that an autocorrelation range of the order of 10 m might be realistic for the present data. Based on the calibrated fracture networks, equivalent continuum conductivities of the calibrated 30 m x 30 m x 30 m conductivity blocks were determined. For each realisation, three sets of simulations were carried out with the main gradient in the x, y and z directions, respectively. Based on these results, the components of the conductivity tensor were determined. Such data can be used, e.g., for stochastic continuum type Monte Carlo simulations with larger scale models. 
The hydraulic conductivities in the direction of the

  3. Estimation of block conductivities from hydrologically calibrated fracture networks. Description of methodology and application to Romuvaara investigation area

    Energy Technology Data Exchange (ETDEWEB)

    Niemi, A [Royal Institute of Technology, Stockholm (Sweden); Kontio, K; Kuusela-Lahtinen, A; Vaittinen, T [VTT Communities and Infrastructure, Espoo (Finland)

    1999-03-01

    This study looks at heterogeneity in hydraulic conductivity at the Romuvaara site. It concentrates on the average rock outside the deterministic fracture zones, especially in the deeper parts of the bedrock. A large number of stochastic fracture networks is generated based on geometrical data on fracture geometry from the site. The hydraulic properties of the fractures are determined by calibrating the networks against well test data. The calibration is done by starting from an initial estimate of the fracture transmissivity distribution based on 2 m interval flow meter data, simulating the 10 m constant head injection test behaviour in a number of fracture network realisations and comparing the simulated well-test statistics to the measured ones. A large number of possible combinations of the mean and standard deviation of fracture transmissivities is tested, and the goodness-of-fit between the measured and simulated results is determined by means of the bootstrapping method. As a result, a range of acceptable fracture transmissivity distribution parameters is obtained. In the accepted range, the mean of log transmissivity varies between -13.9 and -15.3 and the standard deviation between 4.0 and 3.2, with an increase in standard deviation compensating for a decrease in mean. The effect of spatial autocorrelation was not simulated. The variogram analysis did, however, give indications that an autocorrelation range of the order of 10 m might be realistic for the present data. Based on the calibrated fracture networks, equivalent continuum conductivities of the calibrated 30 m x 30 m x 30 m conductivity blocks were determined. For each realisation, three sets of simulations were carried out with the main gradient in the x, y and z directions, respectively. Based on these results, the components of the conductivity tensor were determined. Such data can be used, e.g., for stochastic continuum type Monte Carlo simulations with larger scale models. 
The hydraulic conductivities in the direction of the
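
The calibration search described above — testing combinations of mean and standard deviation of log fracture transmissivity and scoring the fit by bootstrapping — can be sketched as follows. All numbers and the stand-in "simulator" are illustrative assumptions, not the study's network model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "measured" well-test statistics (log10 transmissivity values);
# in the study these come from 10 m constant-head injection tests.
measured = rng.normal(-14.5, 3.5, size=200)

def simulate_stats(mean_logT, sd_logT, n=200):
    """Stand-in for simulating injection tests on a fracture-network
    realisation: here it just samples the assumed log-transmissivity
    distribution (the real simulator solves flow in the network)."""
    return rng.normal(mean_logT, sd_logT, size=n)

def bootstrap_distance(a, b, n_boot=500):
    """Bootstrapped mean discrepancy between the two samples' means
    and standard deviations (smaller = better fit)."""
    d = []
    for _ in range(n_boot):
        ra = rng.choice(a, size=a.size, replace=True)
        rb = rng.choice(b, size=b.size, replace=True)
        d.append(abs(ra.mean() - rb.mean()) + abs(ra.std() - rb.std()))
    return float(np.mean(d))

# Grid of candidate (mean, sd) pairs for log fracture transmissivity; note
# the reported trade-off: a lower mean is compensated by a higher sd.
candidates = [(m, s) for m in (-13.9, -14.5, -15.3) for s in (3.2, 3.6, 4.0)]
scores = {c: bootstrap_distance(measured, simulate_stats(*c)) for c in candidates}
best = min(scores, key=scores.get)
print("best-fitting (mean, sd) of log T:", best)
```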

  4. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    Science.gov (United States)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short wave infrared cameras. The initial results show, using a third order
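
A hedged sketch of the kind of per-pixel third-order polynomial calibration-based NUC the dissertation investigates; the array size, response model and calibration levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 4, 4                                  # tiny focal plane array
levels = np.array([0.2, 0.4, 0.6, 0.8, 1.0]) # known uniform calibration levels

# Simulated mildly nonlinear, per-pixel photoresponse (invented model)
gain = rng.uniform(0.8, 1.2, (H, W))
offset = rng.uniform(-0.05, 0.05, (H, W))
raw = gain * levels[:, None, None] ** 1.1 + offset   # shape (levels, H, W)

# Fit a third-order polynomial per pixel mapping raw value -> true level
coeffs = np.empty((H, W, 4))
for i in range(H):
    for j in range(W):
        coeffs[i, j] = np.polyfit(raw[:, i, j], levels, deg=3)

# Apply the correction to one calibration frame; the residual
# non-uniformity should be far below the uncorrected spread
frame = raw[2]                               # frame taken at level 0.6
corrected = np.array([[np.polyval(coeffs[i, j], frame[i, j])
                       for j in range(W)] for i in range(H)])
print("raw spread:", float(frame.std()), "corrected spread:", float(corrected.std()))
```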

  5. Synthesizing Soft Systems Methodology and Human Performance Technology

    Science.gov (United States)

    Scott, Glen; Winiecki, Donald J.

    2012-01-01

    Human performance technology (HPT), like other concepts, models, and frameworks that we use to describe the world in which we live and the way we organize ourselves to accomplish valuable activities, is built from paradigms that were fresh and relevant at the time it was conceived and from the fields of study from which it grew. However, when the…

  6. A Methodology for Making Early Comparative Architecture Performance Evaluations

    Science.gov (United States)

    Doyle, Gerald S.

    2010-01-01

    Complex and expensive systems' development suffers from a lack of methods for making good system-architecture-selection decisions early in the development process. Failure to make a good system-architecture-selection decision increases the risk that a development effort will not meet cost, performance and schedule goals. This research provides a…

  7. Mutual fund performance: A synthesis of taxonomic and methodological issues

    Directory of Open Access Journals (Sweden)

    S.G. Badrinath

    2010-12-01

    Full Text Available This paper provides a comprehensive taxonomy of mutual funds and discusses the relative importance of these fund types. While most academic research focuses on US equity funds, we provide results for many more asset classes with this taxonomy—fixed income, balanced, global, international, sector, market-neutral and long-short funds. For each, we start by reporting statistics on the number of funds and their total net asset values at different intervals over the last four decades. We then identify short and long-term patterns in annual returns to mutual funds. We study the cross-sectional and time-series properties of the distribution of investor flows into different types of mutual funds, describe the relationship between flows and performance and discuss its implications for the strategic behaviour of managers and investors. We estimate and interpret fund performance alphas using both the single-factor and four-factor Fama-French models for each taxonomy type. Finally we describe the state of academic research on portfolio performance evaluation tilted towards an applied audience.

  8. Methodologies for Measuring Judicial Performance: The Problem of Bias

    Directory of Open Access Journals (Sweden)

    Jennifer Elek

    2014-12-01

    Full Text Available Concerns about gender and racial bias in the survey-based evaluations of judicial performance common in the United States have persisted for decades. Consistent with a large body of basic research in the psychological sciences, recent studies confirm that the results from these JPE surveys are systematically biased against women and minority judges. In this paper, we explain the insidious manner in which performance evaluations may be biased, describe some techniques that may help to reduce expressions of bias in judicial performance evaluation surveys, and discuss the potential problem such biases may pose in other common methods of performance evaluation used in the United States and elsewhere. We conclude by highlighting the potential adverse consequences of judicial performance evaluation programs that rely on biased measurements.

  9. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization.

    Science.gov (United States)

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2015-05-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is oriented toward target performance, a quick rough estimate of transformer design specifics may be inferred. Testing of the suggested approach revealed significant qualitative and quantitative agreement with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper.
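
The core of any multi-objective evolutionary optimizer is a Pareto-dominance test over competing objectives. A minimal sketch with hypothetical transformer designs scored on two objectives to minimise (the paper's actual design variables and objective functions are not reproduced here):

```python
import random

random.seed(2)

# Hypothetical candidate designs scored on two objectives to minimise:
# material cost and total (core + copper) loss. Real design vectors would
# carry winding, core and insulation parameters.
designs = [{"cost": random.uniform(10, 20), "loss": random.uniform(1, 5)}
           for _ in range(30)]

def dominates(a, b):
    """a Pareto-dominates b: no worse in both objectives, better in one."""
    return (a["cost"] <= b["cost"] and a["loss"] <= b["loss"]
            and (a["cost"] < b["cost"] or a["loss"] < b["loss"]))

# The non-dominated set is what an evolutionary optimizer refines over
# generations; here it is extracted from a single random population.
pareto_front = [d for d in designs
                if not any(dominates(o, d) for o in designs if o is not d)]
print(len(pareto_front), "non-dominated designs out of", len(designs))
```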

  10. Body composition in Nepalese children using isotope dilution: the production of ethnic-specific calibration equations and an exploration of methodological issues.

    Science.gov (United States)

    Devakumar, Delan; Grijalva-Eternod, Carlos S; Roberts, Sebastian; Chaube, Shiva Shankar; Saville, Naomi M; Manandhar, Dharma S; Costello, Anthony; Osrin, David; Wells, Jonathan C K

    2015-01-01

    Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7-9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated the potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight and sex as predictors (R² 93%). The Tanita system tended to under-estimate LM, with a mean error of 2.2%, but extending up to 25.8%. Flexing the arms to 90° increased the lower weight range, but produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weight covered. Smaller samples reduce resource requirements, but lead to large errors at the tails of the weight distribution.

  11. Body composition in Nepalese children using isotope dilution: the production of ethnic-specific calibration equations and an exploration of methodological issues

    Directory of Open Access Journals (Sweden)

    Delan Devakumar

    2015-03-01

    Full Text Available Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7–9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated the potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight and sex as predictors (R² 93%). The Tanita system tended to under-estimate LM, with a mean error of 2.2%, but extending up to 25.8%. Flexing the arms to 90° increased the lower weight range, but produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weight covered. Smaller samples reduce resource requirements, but lead to large errors at the tails of the weight distribution.
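
A sketch of how such a prediction equation can be fitted by linear regression from the predictors named in the abstract (height, impedance index, weight, sex); the data below are synthetic, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 102                                    # the study measured 102 children

# Synthetic predictor data (units: cm, ohm, kg); distributions are assumptions
height = rng.normal(120, 8, n)
impedance = rng.normal(750, 60, n)
weight = rng.normal(22, 4, n)
sex = rng.integers(0, 2, n)                # 0 = boy, 1 = girl
zindex = height**2 / impedance             # impedance index, height^2 / Z

# Synthetic "reference" lean mass standing in for the deuterium-dilution value
lean = (0.6 * zindex + 0.25 * weight + 0.02 * height - 0.4 * sex
        + rng.normal(0, 0.5, n))

# Fit the prediction equation by ordinary least squares
X = np.column_stack([np.ones(n), height, zindex, weight, sex])
beta, *_ = np.linalg.lstsq(X, lean, rcond=None)
pred = X @ beta
r2 = 1 - ((lean - pred) ** 2).sum() / ((lean - lean.mean()) ** 2).sum()
print("R^2 =", round(r2, 3))
```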

  12. Performance specification methodology: introduction and application to displays

    Science.gov (United States)

    Hopper, Darrel G.

    1998-09-01

    Acquisition reform is based on the notion that DoD must rely on the commercial marketplace insofar as possible rather than solely looking inward to a military marketplace to meet its needs. This reform forces a fundamental change in the way DoD conducts business, including a heavy reliance on private sector models of change. The key to more reliance on the commercial marketplace is the performance specification (PS). This paper introduces some PS concepts and a PS classification principle to help bring some structure to the analysis of risk (cost, schedule, capability) in weapons system development and the management of opportunities for affordable ownership (maintain/increase capability via technology insertion, reduce cost) in this new paradigm. The DoD shift toward commercial components is nowhere better exemplified than in displays. Displays are the quintessential dual-use technology and are used herein to exemplify these PS concepts and principle. The advent of flat panel displays as a successful technology is setting off an epochal shift in cockpits and other military applications. Displays are installed in every DoD weapon system, and are, thus, representative of a range of technologies where issues and concerns throughout industry and government have been raised regarding the increased DoD reliance on the commercial marketplace. Performance specifications require metrics: the overall metrics of 'information-thrust' with units of Mb/s and 'specific info-thrust' with units of Mb/s/kg are introduced to analyze the value of a display to the warfighter and affordability to the taxpayer.

  13. Photovoltaic Device Performance Evaluation Using an Open-Hardware System and Standard Calibrated Laboratory Instruments

    Directory of Open Access Journals (Sweden)

    Jesús Montes-Romero

    2017-11-01

    Full Text Available This article describes a complete characterization system for photovoltaic devices designed to acquire the current-voltage curve and to process the obtained data. The proposed system can be replicated for educational or research purposes without extensive knowledge of electronic engineering. Using standard calibrated instrumentation, commonly available in any laboratory, the accuracy of measurements is ensured. A capacitive load is used to bias the device due to its versatility and simplicity. The system includes a common part and an interchangeable part that must be designed depending on the electrical characteristics of each PV device. Control software, developed in LabVIEW, controls the equipment, runs automatic measurement campaigns, and performs additional calculations in real time. These include different procedures to extrapolate the measurements to standard test conditions and methods to obtain the intrinsic parameters of the single diode model. A deep analysis of the uncertainty of measurement is also provided. Finally, the proposed system is validated by comparing the results obtained from some commercial photovoltaic modules to the measurements given by an independently accredited laboratory.
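
As an illustration of the post-processing step, the sketch below extracts the maximum power point and fill factor from an I-V curve; the curve itself is generated from a common exponential approximation with assumed Isc and Voc values, not from the described hardware:

```python
import numpy as np

# Synthetic I-V points standing in for a capacitive-load sweep; the curve
# follows a common exponential approximation with assumed Isc, Voc and
# shape factor (not parameters of the described system).
isc, voc, a = 5.0, 22.0, 0.08              # Isc [A], Voc [V], shape factor
v = np.linspace(0.0, voc, 500)
i = isc * (1 - (np.exp(v / (a * voc)) - 1) / (np.exp(1 / a) - 1))
p = v * i

# Key performance figures extracted from the curve
k = int(np.argmax(p))
v_mp, p_max = v[k], p[k]
fill_factor = p_max / (isc * voc)
print(f"Pmax = {p_max:.1f} W at {v_mp:.1f} V, FF = {fill_factor:.2f}")
```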

  14. Calibrating the imaging and therapy performance of magneto-fluorescent gold nanoshells for breast cancer

    Science.gov (United States)

    Dowell, Adam; Chen, Wenxue; Biswal, Nrusingh; Ayala-Orozco, Ciceron; Giuliano, Mario; Schiff, Rachel; Halas, Naomi J.; Joshi, Amit

    2012-03-01

    Gold nanoshells with NIR plasmon resonance can be modified to simultaneously enhance conjugated NIR fluorescence dyes and the T2 contrast of embedded iron-oxide nanoparticles, and molecularly targeted to breast and other cancers. We calibrated the theranostic performance of magneto-fluorescent nanoshells, and contrasted the performance of molecularly targeted and untargeted nanoshells for breast cancer therapy, employing MCF-7L and their HER2-overexpressing derivative MCF-7/HER2-18 breast cancer cells as in vitro model systems. Silica core gold nanoshells with plasmon resonance at ~810 nm were doped with the NIR dye ICG and ~10 nm iron-oxide nanoparticles in a ~20 nm epilayer of silica. A subset of nanoshells was conjugated to antibodies targeting HER2. Cell viability at varying laser power levels in the presence and absence of bare and HER2-targeted nanoshells was assessed by calcein and propidium iodide staining. For MCF-7L cells, increasing power resulted in increased cell death (F=5.63, p=0.0018), and bare nanoshells caused more cell death than HER2-targeted nanoshells or laser treatment alone (F=30.13, p ...). The results demonstrate the utility of magneto-fluorescent nanocomplexes for imaging and therapy of breast cancer cells, and the advantages of targeting receptors unique to cancer cells.

  15. Development of CANDU ECCS performance evaluation methodology and guides

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Kwang Hyun; Park, Kyung Soo; Chu, Won Ho [Korea Maritime Univ., Jinhae (Korea, Republic of)

    2003-03-15

    The objectives of the present work are to carry out technical evaluation and review of CANDU safety analysis methods in order to assist development of performance evaluation methods and review guides for CANDU ECCS. The applicability of PWR ECCS analysis models is examined, and it is suggested that unique data or models for CANDU are required for the following phenomena: break characteristics and flow, frictional pressure drop, post-CHF heat transfer correlations, core flow distribution during blowdown, containment pressure, and reflux rate. For safety analysis of CANDU, conservative analysis or best estimate analysis can be used. The main advantage of BE analysis is a more realistic prediction of margins to acceptance criteria. The expectation is that margins demonstrated with BE methods would be larger than when a conservative approach is applied. Some outstanding safety analysis issues can be resolved by demonstrating that accident consequences are more benign than previously predicted. Success criteria for the analysis and review of Large LOCA can be developed by a top-down approach. The highest-level success criteria can be extracted from C-6 and, from them, the lower-level criteria can be developed step by step, in a logical fashion. The overall objective of the analysis and review is to verify that the radiological consequence and frequency criteria are met.

  16. A Performance-Based Technology Assessment Methodology to Support DoD Acquisition

    National Research Council Canada - National Science Library

    Mahafza, Sherry; Componation, Paul; Tippett, Donald

    2005-01-01

    .... This methodology is referred to as Technology Performance Risk Index (TPRI). The TPRI can track technology readiness through a life cycle, or it can be used at a specific time to support a particular system milestone decision...

  17. Performance, Calibration and Stability of the Mars InSight Mission Pressure Sensor

    Science.gov (United States)

    Banfield, Don; Banerdt, Bruce; Hurst, Ken; Grinblat, Jonny; Murray, Alex; Carpenter, Scott

    2017-10-01

    The NASA Mars InSight Discovery Mission is primarily aimed at understanding the seismic environment at Mars and in turn the interior structure of the planet. To this end, it carries a set of very sensitive seismometers to characterize fine ground movements from quakes, impacts and tides. However, to remove atmospheric perturbations that would otherwise corrupt the seismic signals, InSight also carries a pressure sensor of unprecedented sensitivity and frequency response for a Mars mission.The instrument is based on a commercial spacecraft pressure sensor built by the Tavis Corporation. Tavis heritage transducers have provided pressure measurements on several interplanetary missions, starting with a similar application on the Viking Landers. The sensor developed for the Insight mission is their most sensitive device. That same sensitivity was the root of the challenges faced in the design and development for Insight. It uses inductive sensing of a deformable membrane, and includes an internal temperature sensor to compensate for temperature effects in its overall response.The technical requirement on the pressure sensor performance is 0.01(f/0.1)^(-2/3) Pa/sqrt(Hz) between 0.01 and 0.1 Hz, and 0.01 Pa/sqrt(Hz) between 0.1 and 1 Hz. The actual noise spectrum is about 0.01(f/0.3)^(-2/3) Pa/sqrt(Hz) between 0.01 and 1 Hz, and its frequency response (including inlet plumbing) has good response up to about 10 Hz Nyquist (it will be sampled at 20 Hz).Achieving the required sensitivity proved to be a difficult engineering challenge, which necessitated extensive experimentation and prototyping of the electronics design. In addition, a late discovery of the introduction of noise by the signal processing chain into the measurement stream forced a last-minute change in the instrument’s firmware.The flight unit has been calibrated twice, separated by a time span of about 2 years due to the delay in launching the InSight mission. This has the benefit of allowing a direct
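
The requirement and measured noise spectra quoted above can be evaluated directly; the piecewise requirement and the reported 0.01(f/0.3)^(-2/3) curve are taken from the abstract:

```python
import numpy as np

def requirement(f):
    """Noise requirement [Pa/sqrt(Hz)]: 0.01*(f/0.1)^(-2/3) below 0.1 Hz,
    flat 0.01 from 0.1 to 1 Hz."""
    f = np.asarray(f, dtype=float)
    return np.where(f < 0.1, 0.01 * (f / 0.1) ** (-2.0 / 3.0), 0.01)

def measured(f):
    """Reported noise spectrum, 0.01*(f/0.3)^(-2/3) over 0.01-1 Hz."""
    return 0.01 * (np.asarray(f, dtype=float) / 0.3) ** (-2.0 / 3.0)

freqs = np.array([0.01, 0.03, 0.1, 0.3, 1.0])
for f, r, m in zip(freqs, requirement(freqs), measured(freqs)):
    print(f"f = {f:5.2f} Hz  requirement = {r:.4f}  measured = {m:.4f} Pa/sqrt(Hz)")
```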

  18. Methodology of the Integrated Analysis of Company's Financial Status and Its Performance Results

    OpenAIRE

    Mackevičius, Jonas; Valkauskas, Romualdas

    2010-01-01

    Information about company's financial status and its performance results is very important for the objective evaluation of company's position in the market and competitive possibilities in the future. Such information is provided in the financial statement. It is important to apply and investigate this information properly. The methodology of company's financial status and performance results integrated analysis is recommended in this article. This methodology consists of these three elements...

  19. Methodology for calibration of a NaI(Tl) 3'x3' detector for in vivo measurements of patients with hyperthyroidism undergoing radioiodine therapy; Metodologia para calibracao de detector de NaI(Tl) 3'x3' para medicoes in vivo em pacientes portadores de hipertireoidismo submetidos a radioiodoterapia

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Carlaine B.; Lacerda, Isabelle V.B.; Oliveira, Mercia L.; Hazin, Clovis A.; Lima, Fabiana F., E-mail: carlaine.carvalho@gmail.com, E-mail: bellelacerda@hotmail.com, E-mail: mercial@cnen.gov.br, E-mail: chazin@cnen.gov.br, E-mail: fflima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-Ne/CNEN-PE), Recife, PE (Brazil)

    2013-11-01

    The aim of this study is to establish the methodology for calibration of the detection system to be used in determining the therapeutic activity of {sup 131}I required to deliver the desired absorbed dose to the thyroid. This step is critical to the development of a protocol for individualized doses. The system consists of a NaI(Tl) 3'x3' detector coupled to the Genie 2000 software. Calibration sources of {sup 60}Co, {sup 137}Cs and {sup 133}Ba were used, and the calibration line of the system was obtained with the {sup 60}Co and {sup 137}Cs sources. Subsequently, the detector was calibrated using a thyroid-neck phantom designed and produced by the IRD/CNEN containing a known standard solution of {sup 133}Ba with an activity of 18.7 kBq (on 12/09/24) evenly distributed. It was also calibrated with another thyroid-neck phantom, Model 3108, manufactured by Searle Radigraphics Ind., containing a liquid source of {sup 131}I (7.7 MBq). Five measurements of five minutes each were performed for three different detector-phantom distances, and the respective calibration factors were calculated. The values of the calibration factors found for the phantoms manufactured by the IRD and by Searle Radigraphics Ind. for distances of 20, 25 and 30 cm were 0.35, 0.24, 0.18, and 0.15, 0.11, 0.09, respectively. With the detection system properly calibrated and the calibration factors established, the technique is suitable for the evaluation of the diagnostic activity of {sup 131}I incorporated by hyperthyroid patients.

  20. Methodology for calibration of a NaI(Tl) 3'x3' detector for in vivo measurements of patients with hyperthyroidism undergoing radioiodine therapy; Metodologia para calibracao de detector de NaI(Tl) 3'x3' para medicoes in vivo em pacientes portadores de hipertireoidismo submetidos a radioiodoterapia

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Carlaine B.; Lacerda, Isabelle V.B.; Oliveira, Mercia L.; Hazin, Clovis A., E-mail: carlaine.carvalho@gmail.com, E-mail: bellelacerda@hotmail.com, E-mail: mercial@cnen.gov.br, E-mail: chazin@cnen.gov.br [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Fabiana F., E-mail: fflima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-10-01

    The aim of this study is to establish the methodology for calibration of the detection system to be used in determining the therapeutic activity of {sup 131}I required to deliver the desired absorbed dose to the thyroid gland. This step is critical to the development of a protocol for individualized doses. The system consists of a NaI(Tl) 3'x3' detector coupled to the Genie 2000 software. We used calibration sources of {sup 60}Co, {sup 137}Cs and {sup 133}Ba, and obtained the calibration line of the system with the {sup 60}Co and {sup 137}Cs sources. Subsequently, the detector was calibrated using a thyroid-neck phantom designed and produced by the IRD/CNEN with a standard solution of {sup 133}Ba of known activity, 18.7 kBq (on 09/24/12), evenly distributed. It was also calibrated with another thyroid-neck phantom, Model 3108, manufactured by Searle Radigraphics Ind., containing a liquid source of {sup 131}I (7.7 MBq). Five measurements of 5 minutes each were performed for three different detector-phantom distances, and the corresponding calibration factors were calculated. The values of the calibration factors found for the phantoms made by the IRD and by Searle Radigraphics Ind. for distances of 20, 25 and 30 cm were 0.35, 0.24, 0.18, and 0.15, 0.11, 0.09, respectively. With the detection system properly calibrated and the calibration factors established, the technique is suitable for the evaluation of diagnostic activities of {sup 131}I incorporated by hyperthyroid patients. (author)
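
The calibration factor in both records is, in essence, the net count rate divided by the decay-corrected source activity. A minimal sketch under assumed count data; the 133Ba half-life is a standard value, and all counts and the elapsed time are hypothetical:

```python
import math

# 133Ba half-life (a standard value, ~10.55 years) in days
T_HALF_BA133_D = 10.551 * 365.25

def decay_corrected(a0_bq, elapsed_days):
    """Reference activity decay-corrected to the measurement date."""
    return a0_bq * math.exp(-math.log(2) * elapsed_days / T_HALF_BA133_D)

def calibration_factor(gross_counts, background_counts, live_time_s, activity_bq):
    """Calibration factor = net count rate / source activity (cps/Bq)."""
    net_rate = (gross_counts - background_counts) / live_time_s
    return net_rate / activity_bq

# Hypothetical measurement: 18.7 kBq reference source, 400 days after the
# reference date, counted for one 5-minute (300 s) acquisition
a = decay_corrected(18.7e3, 400)
f = calibration_factor(gross_counts=2.1e6, background_counts=5.0e4,
                       live_time_s=300, activity_bq=a)
print(f"activity = {a:.0f} Bq, calibration factor = {f:.3f} cps/Bq")
```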

  1. Calibration and performance testing of the IAEA Aquila Active Well Coincidence Counter (Unit 1)

    International Nuclear Information System (INIS)

    Menlove, H.O.; Siebelist, R.; Wenz, T.R.

    1996-01-01

    An Active Well Coincidence Counter (AWCC) and a portable shift register (PSR-B) produced by Aquila Technologies Group, Inc., have been tested and cross-calibrated with existing AWCCs used by the International Atomic Energy Agency (IAEA). This report summarizes the results of these tests and the cross-calibration of the detector. In addition, updated tables summarizing the cross-calibration of existing AWCCs and AmLi sources are also included. Using the Aquila PSR-B with existing IAEA software requires secondary software also supplied by Aquila to set up the PSR-B with the appropriate measurement parameters

  2. Performance evaluation of the reference system for calibration of IPEN activities

    International Nuclear Information System (INIS)

    Martins, Elaine Wirney; Potiens, Maria da Penha A.

    2011-01-01

    The formation of a good quality image in nuclear medicine services depends on several factors, including the radiopharmaceutical activity, which must be well determined by a specific apparatus in perfect operating condition, called an activimeter. Therefore, the establishment of a quality control program for measuring the radioactivity of radiopharmaceuticals before they are administered to the patient is crucial to the safe and effective use of the radiopharmaceuticals used in diagnostic and therapeutic procedures. Two activimeters belonging to the Laboratorio de Calibracao de Instrumentos (LCI) of the Instituto de Pesquisas Energeticas e Nucleares (IPEN) were evaluated: the secondary standard system NPL-CRC radionuclide calibrator, manufactured by Southern Scientific, and the work standard system CRC-15BT, with traceability to the National Institute of Standards and Technology (NIST). A set of standard sources of the radionuclides 133Ba, 57Co and 137Cs was used as reference to perform the quality control tests. The precision, accuracy and repeatability tests were in agreement with the limits established by the CNEN-NE 3.05 Brazilian standard, which recommends variation limits of up to 10%, 5% and 5%, respectively. The results obtained for the reproducibility tests presented a variation always below 1.5%, which is within the IEC 61674 international standard recommendation of up to 3%. The activimeters' response regarding linearity showed concordance between the measured activities and the theoretical curve for both activimeters. (author)

  3. Performance evaluation of the reference system for calibration of IPEN activities

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Elaine Wirney; Potiens, Maria da Penha A., E-mail: ewmartins@ipen.br, E-mail: mppalbu@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The formation of a good quality image in nuclear medicine services depends on several factors, including the radiopharmaceutical activity, which must be well determined by a specific apparatus in perfect operating condition, called an activimeter. Therefore, the establishment of a quality control program for measuring radiopharmaceutical radioactivity before administration to the patient is crucial to the safe and effective use of radiopharmaceuticals in diagnostic and therapeutic procedures. Two activimeters belonging to the Laboratorio de Calibracao de Instrumentos (LCI) of Instituto de Pesquisas Energeticas e Nucleares (IPEN) were evaluated: the secondary standard system NPL-CRC radionuclide calibrator, manufactured by Southern Scientific, and the work standard system CRC-15BT, with traceability to the National Institute of Standards and Technology (NIST). A set of standard sources of the radionuclides {sup 133}Ba, {sup 57}Co and {sup 137}Cs was used as reference to perform the quality control tests. The precision, accuracy and repeatability tests were in agreement with the CNEN-NE 3.05 Brazilian standard, which recommends variation limits of up to 10%, 5% and 5%, respectively. The reproducibility tests presented a variation always below 1.5%, within the IEC 61674 international standard recommendation of up to 3%. The activimeters' response regarding linearity showed agreement between the measured activities and the theoretical curve for both instruments. (author)

  4. A performance assessment methodology for low-level radioactive waste disposal

    International Nuclear Information System (INIS)

    Deering, L.R.

    1990-01-01

    To demonstrate compliance with the performance objectives governing protection of the general population in 10 CFR 61.41, applicants for land disposal of low-level radioactive waste are required to conduct a pathways analysis, or quantitative evaluation of radionuclide release, transport through environmental media, and dose to man. The Nuclear Regulatory Commission staff defined a strategy and initiated a project at Sandia National Laboratories to develop a methodology for independently evaluating an applicant's analysis of postclosure performance. This performance assessment methodology was developed in five stages: identification of environmental pathways, ranking the significance of the pathways, identification and integration of models for pathway analyses, identification and selection of computer codes and techniques for the methodology, and implementation of the codes and documentation of the methodology. This paper summarizes the NRC approach for conducting evaluations of license applications for low-level radioactive waste facilities. 23 refs

  5. Performance of the air2stream model that relates air and stream water temperatures depends on the calibration method

    Science.gov (United States)

    Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.

    2018-06-01

    A number of physical or data-driven models have been proposed to evaluate stream water temperatures from hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently, the air2stream model has been proposed as an intermediate alternative: it is based on physical heat-budget processes, but is simplified enough to be applied like a data-driven model. The price for this simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient, 20-year-old approach: Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six streams in the northern USA (states of Washington, Oregon and New York), Poland and Switzerland, located in high-mountain, hilly and lowland areas. The performance of the air2stream model is found to depend significantly on the calibration method, and two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
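
The calibration problem described above, fitting parameters that cannot be measured a priori by minimizing an error criterion, can be sketched with an off-the-shelf population-based optimizer. The model below is a deliberately toy two-parameter linear stand-in, not the real eight-parameter air2stream equations, and SciPy's differential evolution stands in for CoBiDE (both belong to the differential-evolution family):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for air2stream: water temperature as a linear response to air
# temperature (NOT the real eight-parameter model; illustration only).
def model(params, t_air):
    a, b = params
    return a + b * t_air

# Synthetic "observations" with known true parameters a=2.0, b=0.7
rng = np.random.default_rng(0)
t_air = rng.uniform(0.0, 30.0, 200)
t_water_obs = 2.0 + 0.7 * t_air + rng.normal(0.0, 0.3, 200)

def rmse(params):
    """Calibration objective: root-mean-square error of simulated water temperature."""
    return np.sqrt(np.mean((model(params, t_air) - t_water_obs) ** 2))

# Population-based global search over the parameter bounds.
result = differential_evolution(rmse, bounds=[(-10.0, 10.0), (0.0, 2.0)], seed=1)
a_fit, b_fit = result.x
print(f"a={a_fit:.2f}, b={b_fit:.2f}, RMSE={result.fun:.3f}")
```

The recovered parameters should land near the true values, with the residual RMSE close to the injected noise level.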

  6. MoDOT pavement preservation research program volume VII, re-calibration of triggers and performance models.

    Science.gov (United States)

    2015-10-01

    The objective of this task is to develop the concept and framework for a procedure to routinely create, re-calibrate, and update the : Trigger Tables and Performance Models. The scope of work for Task 6 includes a limited review of the recent pavemen...

  7. Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction.

    Science.gov (United States)

    Park, Seong Ho; Han, Kyunghwa

    2018-03-01

    The use of artificial intelligence in medicine is currently an issue of great interest, especially with regard to the diagnostic or predictive analysis of medical images. Adoption of an artificial intelligence tool in clinical practice requires careful confirmation of its clinical utility. Herein, the authors explain key methodologic points involved in a clinical evaluation of artificial intelligence technology for use in medicine, especially high-dimensional or overparameterized diagnostic or predictive models built with artificial deep neural networks, mainly from the standpoints of clinical epidemiology and biostatistics. First, statistical methods for assessing the discrimination and calibration performance of a diagnostic or predictive model are summarized. Next, the effects of disease manifestation spectrum and disease prevalence on the performance results are explained. The authors then discuss the difference between evaluating performance with internal and external datasets, the importance of using an adequate external dataset obtained from a well-defined clinical cohort to avoid overestimating clinical performance through overfitting in high-dimensional or overparameterized classification models and through spectrum bias, and the essentials for achieving a more robust clinical evaluation. Finally, the authors review the role of clinical trials and observational outcome studies for ultimate clinical verification of diagnostic or predictive artificial intelligence tools through patient outcomes, beyond performance metrics, and how to design such studies. © RSNA, 2018.
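
The two performance axes named in this abstract, discrimination and calibration, can be made concrete. A minimal sketch (toy scores, not a clinical dataset) of discrimination as ROC AUC and calibration as binned predicted-versus-observed event rates:

```python
def auc(scores, labels):
    """Discrimination: probability that a random positive case outranks a
    random negative case (equivalent to the area under the ROC curve)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_bins(scores, labels, n_bins=5):
    """Calibration: (mean predicted probability, observed event rate) per bin.
    A well-calibrated model has these two values close in every bin."""
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        bins[min(int(s * n_bins), n_bins - 1)].append((s, y))
    return [(sum(s for s, _ in b) / len(b), sum(y for _, y in b) / len(b))
            for b in bins if b]

# Toy predicted probabilities and true outcomes
scores = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
print(auc(scores, labels))
print(calibration_bins(scores, labels))
```

A model can score well on one axis and poorly on the other, which is why the abstract treats them as separate assessments.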

  8. Pre-Launch Calibration and Performance Study of the Polarcube 3u Temperature Sounding Radiometer Mission

    Science.gov (United States)

    Periasamy, L.; Gasiewski, A. J.; Sanders, B. T.; Rouw, C.; Alvarenga, G.; Gallaher, D. W.

    2016-12-01

    The positive impact of passive microwave observations of tropospheric temperature, water vapor and surface variables on short-term weather forecasts has been clearly demonstrated in recent forecast anomaly growth studies. The development of a fleet of such passive microwave sensors, especially at V-band and higher frequencies, in low Earth orbit using 3U and 6U CubeSats could help accomplish the aforementioned objectives at low system cost and risk, as well as provide for regularly updated radiometer technology. The University of Colorado's 3U CubeSat, PolarCube, is intended to serve as a demonstrator for such a fleet of passive sounders and imagers. PolarCube supports MiniRad, an eight-channel, double-sideband 118.7503 GHz passive microwave sounder. The mission is focused primarily on sounding in Arctic and Antarctic regions with the following key remote sensing science and engineering objectives: (i) collect coincident tropospheric temperature profiles above sea ice, open polar ocean, and partially open areas to develop joint sea ice concentration and lower tropospheric temperature mapping capabilities in clear and cloudy atmospheric conditions, in conjunction with data from existing passive microwave sensors operating at complementary bands; and (ii) assess the capabilities of small passive microwave satellite sensors for environmental monitoring in support of the future development of inexpensive Earth science missions. Performance data of the payload/spacecraft from pre-launch calibration will be presented, including (i) characterization of the antenna sub-system, comprising an offset 3D-printed feedhorn and spinning parabolic reflector, and the impact of the antenna efficiencies on radiometer performance, (ii) characterization of MiniRad's RF front-end and IF back-end with respect to temperature fluctuations and their impact on atmospheric temperature weighting functions and receiver sensitivity, and (iii) results from roof

  9. Design and calibration of a test facility for MLI thermal performance measurements below 80K

    International Nuclear Information System (INIS)

    Boroski, W.; Kunzelman, R.; Ruschman, M.; Schoo, C.

    1992-04-01

    The design geometry of the SSC dipole cryostat includes active thermal radiation shields operating at 80K and 20K respectively. Extensive measurements conducted in a Heat Leak Test Facility (HLTF) have been used to evaluate the thermal performance of candidate multilayer insulation (MLI) systems for the 80K thermal shield, with the present system design based upon those measurement results. With the 80K MLI geometry established, efforts have focused on measuring the performance of MLI systems near 20K. A redesign of the HLTF has produced a measurement facility capable of conducting measurements with the warm boundary fixed at 80K and the cold boundary variable from 10K to 50K. Removing the 80K shield permits measurements with a warm boundary at 300K. The 80K boundary consists of a copper shield thermally anchored to a liquid nitrogen reservoir. The cold boundary consists of a copper anchor plate whose temperature is varied through boil-off gas from a 500 liter helium supply dewar. A transfer line heat exchanger supplies the boil-off gas to the anchor plate at a constant and controlled rate. The gas, which serves as cooling gas, is routed through a copper cooling tube soldered into the anchor plate. Varying the cooling gas flow rate varies the amount of refrigeration supplied to the anchor plate, thereby determining the plate temperature. A resistance heater installed on the anchor plate is regulated by a cryogenic temperature controller to provide final temperature control. Heat leak values are measured using a heatmeter which senses heat flow as a temperature gradient across a fixed thermal impedance. Since the thermal conductivity of the thermal impedance changes with temperature, the heatmeter is calibrated at key cold boundary temperatures. Thus, the system is capable of obtaining measurement data under a variety of system conditions. 7 refs
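
The heatmeter principle described above, heat flow inferred from the temperature gradient across a fixed thermal impedance whose conductivity varies with temperature, reduces to Q = ΔT / R(T), with R interpolated from calibration points taken at key cold-boundary temperatures. A sketch with invented calibration values (the paper does not publish its impedance table):

```python
# Hypothetical calibration table: heatmeter thermal impedance R(T) in K/W
# at key anchor-plate temperatures in K (illustrative values only).
CAL_T = [10.0, 20.0, 30.0, 50.0]
CAL_R = [8.0, 5.5, 4.2, 3.0]

def impedance(t_k):
    """Linearly interpolate the calibrated thermal impedance at temperature t_k."""
    if not CAL_T[0] <= t_k <= CAL_T[-1]:
        raise ValueError("temperature outside calibrated range")
    for (t0, r0), (t1, r1) in zip(zip(CAL_T, CAL_R), zip(CAL_T[1:], CAL_R[1:])):
        if t0 <= t_k <= t1:
            return r0 + (r1 - r0) * (t_k - t0) / (t1 - t0)

def heat_flow(delta_t, t_k):
    """Heat leak in W from the measured temperature gradient across the heatmeter."""
    return delta_t / impedance(t_k)

# A 0.55 K gradient measured with the cold boundary at 20 K
print(heat_flow(0.55, 20.0))
```

Re-calibrating R at each cold-boundary setpoint is what lets the same heatmeter serve measurements from 10 K to 50 K.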

  10. M&A Performance and Economic Impact: Integration and Critical Assessment of Methodological Approach

    Directory of Open Access Journals (Sweden)

    Karolis Andriuskevicius

    2017-11-01

    Full Text Available Purpose of the article: Existing methodologies employed within the M&A performance framework are investigated and critically discussed. Methodology/methods: The research has been carried out as a structured assessment of past literature. The findings from scientific articles and studies by various scholars have been categorized, grouped and summarized to discern a meta-analytic view of the work carried out to date. Scientific aim: The research seeks to ascertain and theoretically evaluate the existing methodologies used in empirical studies, allowing a proper and critical understanding of the various findings in the holistic and global M&A area. Findings: The research elaborates on several key developments in M&A methodology and performance studies carried out in empirical works during the last two decades. The findings help to independently and objectively assess the performance of M&A from a holistic perspective. Conclusions: Each methodology measuring either M&A performance at the corporate level or the effects of M&A at the economy level should be interpreted and relied on with caution, as each has its limitations, and the application of these methodologies is subject to data availability and case specifics.

  11. A performance assessment methodology for high-level radioactive waste disposal in unsaturated, fractured tuff

    International Nuclear Information System (INIS)

    Gallegos, D.P.

    1991-07-01

    Sandia National Laboratories has developed a methodology for performance assessment of deep geologic disposal of high-level nuclear waste. The applicability of this performance assessment methodology has been demonstrated for disposal in bedded salt and basalt; it has since been modified for assessment of repositories in unsaturated, fractured tuff. Changes to the methodology are primarily in the form of new or modified ground-water flow and radionuclide transport codes. A new computer code, DCM3D, has been developed to model three-dimensional ground-water flow in unsaturated, fractured rock using a dual-continuum approach. The NEFTRAN 2 code has been developed to efficiently model radionuclide transport in time-dependent velocity fields; it can use externally calculated pore velocities and saturations, and includes the effect of saturation-dependent retardation factors. In order to use these codes together in performance-assessment-type analyses, code-coupler programs were developed to translate DCM3D output into NEFTRAN 2 input. Other portions of the performance assessment methodology were evaluated as part of modifying the methodology for tuff. The scenario methodology developed under the bedded salt program has been applied to tuff. An investigation of the applicability of uncertainty and sensitivity analysis techniques to non-linear models indicates that Monte Carlo simulation remains the most robust technique for these analyses. No changes have been recommended for the dose and health effects models, nor for the biosphere transport models. 52 refs., 1 fig

  12. Comparison of Performance between Genetic Algorithm and SCE-UA for Calibration of SCS-CN Surface Runoff Simulation

    Directory of Open Access Journals (Sweden)

    Ji-Hong Jeon

    2014-11-01

    Full Text Available Global optimization methods linked with simulation models are widely used for automated calibration and serve as useful tools for searching for cost-effective alternatives for environmental management. A genetic algorithm (GA) and the shuffled complex evolution (SCE-UA) algorithm were linked with the Long-Term Hydrologic Impact Assessment (L-THIA) model, which employs the curve number (SCS-CN) method. The performance of the two optimization methods was compared by automatically calibrating L-THIA for monthly runoff from 10 watersheds in Indiana, with areas ranging from 32.7 to 5844.1 km2. The SCS-CN values and the total five-day rainfall for antecedent moisture condition (AMC) adjustment were optimized, with the Nash-Sutcliffe value (NS value) as the objective function. The GA method rapidly reached the optimal space by the 10th generated population (generation); after the 10th generation, solutions increased their dispersion around the optimal space in a cross-hair pattern because of the increased mutation rate. The number of loop executions influenced the calibration performance of both methods, and the GA method performed better when fewer loop executions were allowed. For most watersheds, calibration performance using GA exceeded that of SCE-UA until the 50th generation, at around 5150 model loop executions (one generation has 100 individuals). After the 50th generation, however, the SCE-UA method calibrated monthly runoff better than the GA method. Optimized SCS-CN values for primary land use types were nearly the same for the two methods, but those for minor land use types and the total five-day rainfall for AMC adjustment differed somewhat, because those parameters did not significantly influence the objective function. The GA method is recommended for cases when model simulation takes a long time and the model user does not have sufficient time
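
The objective function used throughout this study, the Nash-Sutcliffe value, is straightforward to compute: one minus the ratio of residual variance to the variance of the observations. A minimal sketch with illustrative runoff values (not the Indiana data):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - (residual sum of squares / total sum of squares).
    1.0 is a perfect fit; 0.0 means the model is no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [12.0, 30.0, 18.0, 25.0, 9.0]   # observed monthly runoff, illustrative values
sim = [10.0, 28.0, 20.0, 24.0, 11.0]  # simulated monthly runoff
print(nash_sutcliffe(obs, sim))
```

Because the metric normalizes by observed variance, it lets calibration quality be compared across watersheds whose runoff magnitudes differ by orders of magnitude.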

  13. The Performance and Usability of a Factory-Calibrated Flash Glucose Monitoring System.

    Science.gov (United States)

    Bailey, Timothy; Bode, Bruce W; Christiansen, Mark P; Klaff, Leslie J; Alva, Shridhara

    2015-11-01

    The purpose of the study was to evaluate the performance and usability of the FreeStyle(®) Libre™ Flash glucose monitoring system (Abbott Diabetes Care, Alameda, CA) for interstitial glucose results compared with capillary blood glucose results. Seventy-two study participants with type 1 or type 2 diabetes were enrolled by four U.S. clinical sites. A sensor was inserted on the back of each upper arm for up to 14 days. Three factory-only calibrated sensor lots were used in the study. Sensor glucose measurements were compared with capillary blood glucose (BG) results (approximately eight per day) obtained using the BG meter built into the reader (BG reference) and with the YSI analyzer (Yellow Springs Instrument, Yellow Springs, OH) reference tests at three clinic visits (32 samples per visit). Sensor readings were masked to the participants. The accuracy of the results was demonstrated against capillary BG reference values, with 86.7% of sensor results within Consensus Error Grid Zone A. The percentage of readings within Consensus Error Grid Zone A on Days 2, 7, and 14 was 88.4%, 89.2%, and 85.2%, respectively. The overall mean absolute relative difference was 11.4%. The mean lag time between sensor and YSI reference values was 4.5±4.8 min. Sensor accuracy was not affected by factors such as body mass index, age, type of diabetes, clinical site, insulin administration, or hemoglobin A1c. Interstitial glucose measurements with the FreeStyle Libre system were found to be accurate compared with capillary BG reference values, with accuracy remaining stable over 14 days of wear and unaffected by patient characteristics.
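
The headline accuracy figure reported here, the overall mean absolute relative difference (MARD) between sensor and reference glucose, is a simple paired computation. A sketch with invented readings (not the study data):

```python
def mard(sensor, reference):
    """Mean absolute relative difference (%) between paired sensor and reference values."""
    ards = [abs(s - r) / r * 100.0 for s, r in zip(sensor, reference)]
    return sum(ards) / len(ards)

# Paired sensor / capillary blood glucose readings in mg/dL (illustrative values)
sensor = [102.0, 148.0, 90.0, 210.0]
reference = [100.0, 160.0, 95.0, 200.0]
print(f"MARD = {mard(sensor, reference):.1f}%")
```

Lower is better; the 11.4% reported in the abstract is the average of such per-pair relative deviations over all paired readings.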

  14. A Methodology for Equitable Performance Assessment and Presentation of Wave Energy Converters Based on Sea Trials

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Pecher, Arthur; Margheritini, Lucia

    2013-01-01

    This paper provides a methodology for the analysis and presentation of data obtained from sea trials of wave energy converters (WEC). The equitable aspect of this methodology lies in its wide application, as any WEC at any scale or stage of development can be considered as long as the tests are performed [...]. [...] leads to testing campaigns that are not as extensive as desired. Therefore, the performance analysis should be robust enough to allow for not fully complete sea trials and suboptimal performance data. In other words, this methodology is focused on retrieving the maximum amount of useful information out of the available data. How various parameters influence the performance of the WEC can also be investigated using this methodology.

  15. The Effect of Soft Skills and Training Methodology on Employee Performance

    Science.gov (United States)

    Ibrahim, Rosli; Boerhannoeddin, Ali; Bakare, Kazeem Kayode

    2017-01-01

    Purpose: The purpose of this paper is to investigate the effect of soft skill acquisition and the training methodology adopted on employee work performance. In this study, the authors study the trends of research in training and work performance in organisations that focus on the acquisition of technical or "hard skills" for employee…

  16. Strategy for reduced calibration sets to develop quantitative structure-retention relationships in high-performance liquid chromatography

    Energy Technology Data Exchange (ETDEWEB)

    Andries, Jan P.M. [University of Professional Education, Department of Life Sciences, P.O. Box 90116, 4800 RA Breda (Netherlands); Claessens, Henk A. [University of Professional Education, Department of Life Sciences, P.O. Box 90116, 4800 RA Breda (Netherlands); Eindhoven University of Technology, Department of Chemical Engineering and Chemistry, Laboratory of Polymer Chemistry, P.O. Box 513 (Helix, STW 1.35), 5600 MB Eindhoven (Netherlands); Heyden, Yvan Vander [Department of Analytical Chemistry and Pharmaceutical Technology, Vrije Universiteit Brussel-VUB, Laarbeeklaan 103, B-1090 Brussels (Belgium); Buydens, Lutgarde M.C., E-mail: L.Buydens@science.ru.nl [Institute for Molecules and Materials, Radboud University Nijmegen, Toernooiveld 1, 6525 ED Nijmegen (Netherlands)

    2009-10-12

    In high-performance liquid chromatography, quantitative structure-retention relationships (QSRRs) are applied to model the relation between chromatographic retention and quantities derived from the molecular structure of analytes. Classically, a substantial number of test analytes is used to build QSRR models, which makes their application laborious and time consuming. In this work a strategy is presented to build QSRR models based on selected reduced calibration sets. The analytes in the reduced calibration sets are selected from larger sets of analytes by applying the algorithm of Kennard and Stone to the molecular descriptors used in the QSRR concerned. The strategy was applied to three QSRR models of different complexity, relating log k{sub w} or log k with either: (i) log P, the n-octanol-water partition coefficient, (ii) calculated quantum chemical indices (QCI), or (iii) descriptors from the linear solvation energy relationship (LSER). Models were developed and validated for 76 reversed-phase high-performance liquid chromatography systems. From the results we can conclude that it is possible to develop log P models suitable for the future prediction of retentions with as few as seven analytes. For the QCI and LSER models we derived the rule that three selected analytes per descriptor are sufficient. Both the dependent variable space, formed by the retention values, and the independent variable space, formed by the descriptors, are covered well by the reduced calibration sets. Finally, guidelines to construct small calibration sets are formulated.
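
The Kennard-Stone algorithm at the heart of this strategy picks calibration analytes that evenly span descriptor space: it seeds with the two most distant points, then repeatedly adds the candidate farthest from everything already selected. A minimal pure-Python sketch (toy two-descriptor matrix, not the 76-system dataset):

```python
def kennard_stone(X, k):
    """Select indices of k rows of descriptor matrix X that evenly span the space."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    n = len(X)
    # Seed with the pair of points farthest apart.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: dist2(X[p[0]], X[p[1]]))
    selected = [i0, j0]
    while len(selected) < k:
        remaining = [i for i in range(n) if i not in selected]
        # Add the candidate whose nearest selected neighbour is farthest away.
        nxt = max(remaining,
                  key=lambda i: min(dist2(X[i], X[j]) for j in selected))
        selected.append(nxt)
    return selected

# Toy descriptor matrix (rows = analytes, columns = e.g. log P and a polarity index)
X = [[0.0, 0.0], [0.1, 0.1], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
print(kennard_stone(X, 4))
```

Note how the near-duplicate analyte at index 1 is left out: the selection covers the corners of descriptor space first, which is exactly why the reduced sets cover both variable spaces well.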

  17. An integrated methodology for the dynamic performance and reliability evaluation of fault-tolerant systems

    International Nuclear Information System (INIS)

    Dominguez-Garcia, Alejandro D.; Kassakian, John G.; Schindall, Joel E.; Zinchuk, Jeffrey J.

    2008-01-01

    We propose an integrated methodology for the reliability and dynamic performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers to design the control system, but also incorporates artifacts to model the failure behavior of each component. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. This methodology not only provides an integrated framework for evaluating the dynamic performance and reliability of fault-tolerant systems, but also a method for guiding the system design process and further optimization. To illustrate the methodology, we present a case study of a lateral-directional flight control system for a fighter aircraft.
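
The Markov-chain layer of such a methodology can be illustrated with a deliberately small example: a duplex system whose configuration probabilities over time are obtained by integrating the Kolmogorov forward equations. All rates below are assumed for illustration and are not taken from the paper:

```python
# States of a duplex fault-tolerant system:
# 0 = both channels up, 1 = one channel failed (degraded), 2 = system failed.
LAM = 1e-3   # per-hour failure rate of one channel (assumed value)
MU = 1e-1    # per-hour repair rate of the degraded state (assumed value)

# Generator matrix Q of the continuous-time Markov chain; state 2 is absorbing.
Q = [[-2 * LAM, 2 * LAM, 0.0],
     [MU, -(MU + LAM), LAM],
     [0.0, 0.0, 0.0]]

def state_probs(t_end, dt=0.01):
    """Integrate dp/dt = p·Q with explicit Euler steps from p(0) = (1, 0, 0)."""
    p = [1.0, 0.0, 0.0]
    for _ in range(int(t_end / dt)):
        p = [p_i + dt * sum(p[k] * Q[k][i] for k in range(3))
             for i, p_i in enumerate(p)]
    return p

p = state_probs(1000.0)
print(f"P(up)={p[0]:.4f}, P(degraded)={p[1]:.4f}, P(failed)={p[2]:.4f}")
```

Each configuration's probability would then be weighted by the pass/fail outcome of its dynamic-performance evaluation (accuracy, overshoot, settling time) to score the overall design.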

  18. A performance assessment methodology for low-level radioactive waste disposal

    International Nuclear Information System (INIS)

    Deering, L.R.; Kozak, M.W.

    1990-01-01

    To demonstrate compliance with the performance objectives governing protection of the general population in 10 CFR 61.41, applicants for land disposal of low-level radioactive waste are required to conduct a pathways analysis, or quantitative evaluation of radionuclide release, transport through environmental media, and dose to man. The Nuclear Regulatory Commission staff defined a strategy and initiated a project at Sandia National Laboratories to develop a methodology for independently evaluating an applicant's analysis of postclosure performance. This performance assessment methodology was developed in five stages: (1) identification of environmental pathways, (2) ranking the significance of the pathways, (3) identification and integration of models for pathway analyses, (4) identification and selection of computer codes and techniques for the methodology, and (5) implementation of the codes and documentation of the methodology. The final methodology implements analytical and simple numerical solutions for source term, ground-water flow and transport, surface water transport, air transport, food chain, and dosimetry analyses, as well as more complex numerical solutions for multidimensional or transient analyses when more detailed assessments are needed. The capability to perform both simple and complex analyses is accomplished through modular modeling, which permits substitution of various models and codes to analyze system components.

  19. Performance and Model Calibration of R-D-N Processes in Pilot Plant

    DEFF Research Database (Denmark)

    de la Sota, A.; Larrea, L.; Novak, L.

    1994-01-01

    This paper deals with the first part of an experimental programme in a pilot plant configured for advanced biological nutrient removal processes treating domestic wastewater of Bilbao. The IAWPRC Model No.1 was calibrated in order to optimize the design of the full-scale plant. In this first phas...

  20. TRACEABILITY OF PRECISION MEASUREMENTS ON COORDINATE MEASURING MACHINES – TRACEABILITY, CALIBRATION AND PERFORMANCE VERIFICATION

    DEFF Research Database (Denmark)

    Bariani, Paolo; De Chiffre, Leonardo; Tosello, Guido

    This document is used in connection with an exercise of 1 hour duration as a part of the course VISION ONLINE – One week course on Precision & Nanometrology. The exercise concerns establishment of traceability of measurements with an optical coordinate measuring machine by means of two different calibrated...

  1. A methodology to quantify the aerobic and anaerobic sludge digestion performance for nutrient recycling in aquaponics

    Directory of Open Access Journals (Sweden)

    Delaide, B.

    2018-01-01

    Full Text Available Description of the subject. This research note presents a methodology to quantify tilapia sludge digestion performance in aerobic and anaerobic reactors for aquaponic purposes, addressing both organic reduction and macro- and microelement mineralization performance. Objectives. To set up an appropriate methodology to quantify sludge digestion performance in aquaponics, to describe the methodology, and to illustrate it with example results. Method. Equations were adapted to quantify (1) the organic reduction performance in terms of chemical oxygen demand (COD) and total suspended solids (TSS) reduction, and (2) the nutrient recycling performance in terms of macro- and microelement mineralization. Results. As an example, the equations were applied to data obtained from experimental aerobic and anaerobic reactors. Reactors were able to remove at least 50% of the TSS and COD input. The nutrient mineralization fell within a 10-60% range for all macro- and micronutrients. Conclusions. The methodology provides explicit indicators of sludge treatment performance for aquaponics. Treating aquaponic sludge on site is promising for avoiding sludge spillage, improving nutrient recycling and saving water.

  2. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    Science.gov (United States)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a Cf-252 source arranged in a cube, with three different distances in each dimension. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor to determine a detector's location also limits the possible locations, and it does not allow for room dependence (facility-dependent deviation) to generate a detector pseudo-location to be used for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location, where calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average
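
The calibration-difference metric defined in this abstract is just the Euclidean distance between the algorithm-predicted and hand-measured 3D detector locations. A sketch with invented coordinates (not values from the experiment):

```python
import math

def calibration_difference(predicted, measured):
    """Euclidean distance (same units as the inputs) between the predicted
    and hand-measured 3D detector locations."""
    return math.dist(predicted, measured)

# Illustrative coordinates in metres
predicted = (1.02, 0.48, 0.31)
measured = (0.90, 0.60, 0.25)
diff_cm = 100.0 * calibration_difference(predicted, measured)
print(f"calibration-difference = {diff_cm:.1f} cm")
```

Averaging this distance over the 27 static source positions gives the per-sensor figures (20 cm, 35 cm, ...) quoted in the abstract.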

  3. Accuracy of subcutaneous continuous glucose monitoring in critically ill adults: improved sensor performance with enhanced calibrations.

    Science.gov (United States)

    Leelarathna, Lalantha; English, Shane W; Thabit, Hood; Caldwell, Karen; Allen, Janet M; Kumareswaran, Kavita; Wilinska, Malgorzata E; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L; Burnstein, Rowan; Hovorka, Roman

    2014-02-01

    Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) measurements were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m²; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6-19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1-6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (at 1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. In total, 1,060 CGM-ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. Using enhanced calibration at a median (interquartile range) interval of 169 (122-213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in Zone A of the Clarke error grid was higher (87.8% vs. 70.2%). Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non-critical care settings. Further significant improvements in accuracy may be obtained by frequent calibration with ABG measurements.
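
    The absolute relative deviation reported above is computed per CGM-reference pair as |CGM − ABG| / ABG. A small sketch with hypothetical paired readings (not the trial data):

```python
import statistics

def absolute_relative_deviation(cgm, reference):
    """Absolute relative deviation (%) of each CGM reading vs. its
    paired reference (arterial blood glucose) value."""
    return [abs(c - r) / r * 100 for c, r in zip(cgm, reference)]

# Hypothetical paired readings in mmol/L.
cgm = [5.2, 7.8, 10.1, 6.0]
abg = [5.0, 8.0, 11.0, 6.5]
ard = absolute_relative_deviation(cgm, abg)
print(round(statistics.median(ard), 1))  # median ARD in percent
```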

  4. HST/WFC3: new capabilities, improved IR detector calibrations, and long-term performance stability

    Science.gov (United States)

    MacKenty, John W.; Baggett, Sylvia M.; Brammer, Gabriel; Hilbert, Bryan; Long, Knox S.; McCullough, Peter; Riess, Adam G.

    2014-08-01

    Wide Field Camera 3 (WFC3) is the most used instrument on board the Hubble Space Telescope. Providing a broad range of high-quality imaging capabilities from 200 to 1700 nm using silicon CCD and HgCdTe IR detectors, WFC3 is fulfilling both our expectations and its formal requirements. With the re-establishment of the observatory-level "spatial scan" capability, we have extended the scientific potential of WFC3 in multiple directions. These controlled scans, often in combination with low-resolution slitless spectroscopy, enable extremely high precision differential photometric measurements of transiting exo-planets and direct measurement of sources considerably brighter than originally anticipated. In addition, long scans permit the measurement of the separation of star images to accuracies approaching 25 micro-arcseconds (a factor of 10 better than prior FGS or imaging measurements), enabling direct parallax observations out to 4 kilo-parsecs. We have also employed this spatial scan capability to both assess and improve the mid-spatial-frequency flat field calibrations. WFC3 uses a Teledyne HgCdTe 1014x1014 pixel HAWAII-1R infrared detector array developed for this mission. One aspect of this detector with implications for many types of science observations is the localized trapping of charge. This manifests itself both as image persistence lasting several hours and as an apparent response variation with photon arrival rate over a large dynamic range. Beyond a generally adopted observing strategy of obtaining multiple observations with small spatial offsets, we have developed a multi-parameter model that accounts for source flux, accumulated signal level, and decay time to predict image persistence at the pixel level. Using a running window through the entirety of the acquired data, we now provide observers with predictions for each individual exposure within several days of its acquisition. 
Ongoing characterization of the sources of infrared background and

  5. Impact of missing attenuation and scatter corrections on 99mTc-MAA SPECT 3D dosimetry for liver radioembolization using the patient relative calibration methodology: A retrospective investigation on clinical images.

    Science.gov (United States)

    Botta, Francesca; Ferrari, Mahila; Chiesa, Carlo; Vitali, Sara; Guerriero, Francesco; Nile, Maria Chiara De; Mira, Marta; Lorenzon, Leda; Pacilio, Massimiliano; Cremonesi, Marta

    2018-04-01

    To investigate the clinical implications of performing pre-treatment dosimetry for 90Y-microsphere liver radioembolization on 99mTc-MAA SPECT images reconstructed without attenuation or scatter correction and quantified with the patient relative calibration methodology. Twenty-five patients treated with SIR-Spheres® at Istituto Europeo di Oncologia and 31 patients treated with TheraSphere® at Istituto Nazionale Tumori were considered. For each acquired 99mTc-MAA SPECT, four reconstructions were performed: with attenuation and scatter correction (AC_SC), attenuation correction only (AC_NoSC), scatter correction only (NoAC_SC) and without corrections (NoAC_NoSC). Absorbed dose maps were calculated from the activity maps, quantified by applying the patient relative calibration to the SPECT images. Whole Liver (WL) and Tumor (T) regions were drawn on CT images. The Injected Liver (IL) region was defined as the voxels receiving absorbed dose >3.8 Gy/GBq. Whole Healthy Liver (WHL) and Healthy Injected Liver (HIL) regions were obtained as WHL = WL - T and HIL = IL - T. Average absorbed doses to WHL and HIL were calculated, and the injection activity was derived following each Institute's procedure. The values obtained from AC_NoSC, NoAC_SC and NoAC_NoSC images were compared to the reference value suggested by AC_SC images using Bland-Altman analysis and the Wilcoxon paired test (5% significance threshold). Absorbed-dose maps were compared to the reference map (AC_SC) in global terms using the Voxel Normalized Mean Square Error (%VNMSE), and at voxel level by calculating for each voxel the normalized difference with the reference value. The uncertainty affecting absorbed dose at voxel level was accounted for in the comparison; to this purpose, the voxel count fluctuation due to Poisson and reconstruction noise was estimated from SPECT images of a water phantom acquired and reconstructed as patient images. NoAC_SC images lead to activity prescriptions not significantly different from the
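
    The patient relative calibration used above sidesteps an absolute camera calibration factor by rescaling the reconstructed counts so that their total corresponds to the activity to be administered. A minimal sketch with a hypothetical counts map (array values are illustrative):

```python
import numpy as np

def relative_calibration(counts, injected_activity_gbq):
    """Patient relative calibration: convert a SPECT counts map to an
    activity map (GBq per voxel) by normalising the total counts to the
    known injected activity."""
    return counts / counts.sum() * injected_activity_gbq

counts = np.array([[10.0, 30.0],
                   [40.0, 20.0]])  # hypothetical counts map
activity = relative_calibration(counts, injected_activity_gbq=1.5)
print(activity.sum())  # total activity is preserved by construction
```

One consequence, visible in the study design, is that missing attenuation or scatter corrections redistribute activity between regions rather than changing the total.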

  6. Calibration and field performance of membrane-enclosed sorptive coating for integrative passive sampling of persistent organic pollutants in water

    International Nuclear Information System (INIS)

    Vrana, Branislav; Paschke, Albrecht; Popp, Peter

    2006-01-01

    Membrane-enclosed sorptive coating (MESCO) is a miniaturised monitoring device that enables integrative passive sampling of persistent, hydrophobic organic pollutants in water. The system combines the passive sampling with solventless preconcentration of organic pollutants from water and subsequent desorption of analytes on-line into a chromatographic system. Exchange kinetics of chemicals between water and MESCO was studied at different flow rates of water, in order to characterize the effect of variable environmental conditions on the sampler performance, and to identify a method for in situ correction of the laboratory-derived calibration data. It was found that the desorption of chemicals from MESCO into water is isotropic to the absorption of the analytes onto the sampler under the same exposure conditions. This allows for the in situ calibration of the uptake of pollutants using elimination kinetics of performance reference compounds and more accurate estimates of target analyte concentrations. A field study was conducted to test the sampler performance alongside spot sampling. A good agreement of contaminant patterns and water concentrations was obtained by the two sampling techniques. - A robust calibration method of a passive sampling device for monitoring of persistent organic pollutants in water is described
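
    The in-situ correction relies on first-order, isotropic exchange kinetics: the fraction of a performance reference compound (PRC) retained after exposure time t is exp(-ke*t), and the fitted elimination rate rescales laboratory-derived sampling rates. A hedged sketch (all numbers are illustrative, not MESCO calibration data):

```python
import math

def elimination_rate(frac_retained, t_days):
    """First-order elimination rate constant k_e (1/day) inferred from the
    fraction of a performance reference compound retained after exposure."""
    return -math.log(frac_retained) / t_days

# Hypothetical PRC retaining 60 % of its initial amount after a 14-day exposure.
ke_insitu = elimination_rate(0.60, 14.0)

# Isotropic exchange lets the lab-derived sampling rate be rescaled in situ.
rs_lab, ke_lab = 0.10, 0.050  # hypothetical lab calibration (L/day, 1/day)
rs_insitu = rs_lab * ke_insitu / ke_lab
print(round(ke_insitu, 4), round(rs_insitu, 4))
```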

  7. SQUID (superconducting quantum interference device) arrays for simultaneous magnetic measurements: Calibration and source localization performance

    Science.gov (United States)

    Kaufman, Lloyd; Williamson, Samuel J.; Costaribeiro, P.

    1988-02-01

    Recently developed small arrays of SQUID-based magnetic sensors can, if appropriately placed, locate the position of a confined biomagnetic source without moving the array. The authors present a technique with a relative accuracy of about 2 percent for calibrating such sensors having detection coils with the geometry of a second-order gradiometer. The effects of calibration error and magnetic noise on the accuracy of locating an equivalent current dipole source in the human brain are investigated for 5- and 7-sensor probes and for a pair of 7-sensor probes. With a noise level of 5 percent of peak signal, uncertainties of about 20 percent in source strength and depth for a 5-sensor probe are reduced to 8 percent for a pair of 7-sensor probes, and uncertainties of about 15 mm in lateral position are reduced to 1 mm, for the configuration considered.

  8. Performance-based methodology for assessing seismic vulnerability and capacity of buildings

    Science.gov (United States)

    Shibin, Lin; Lili, Xie; Maosheng, Gong; Ming, Li

    2010-06-01

    This paper presents a performance-based methodology for the assessment of seismic vulnerability and capacity of buildings. The vulnerability assessment methodology is based on the HAZUS methodology and the improved capacity-demand-diagram method. The spectral displacement (Sd) of performance points on a capacity curve is used to estimate the damage level of a building. The relationship between Sd and peak ground acceleration (PGA) is established, and then a new vulnerability function is expressed in terms of PGA. Furthermore, the expected value of the seismic capacity index (SCev) is provided to estimate the seismic capacity of buildings based on the probability distribution of damage levels and the corresponding seismic capacity index. The results indicate that the proposed vulnerability methodology is able to assess seismic damage to a large building stock directly and quickly following an earthquake. The SCev provides an effective index to measure the seismic capacity of buildings and illustrates the relationship between the seismic capacity of buildings and seismic action. The estimated result is compared with damage surveys of the cities of Dujiangyan and Jiangyou in the M8.0 Wenchuan earthquake, revealing that the methodology is acceptable for seismic risk assessment and decision making. The primary reasons for discrepancies between the estimated results and the damage surveys are discussed.
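
    The expected seismic capacity index described above is a probability-weighted average over damage levels. A minimal sketch, with hypothetical damage-state probabilities and per-state capacity indices (neither taken from the paper):

```python
def expected_capacity_index(damage_probs, capacity_indices):
    """Expected seismic capacity index SCev: the per-damage-level capacity
    index weighted by the damage-level probability distribution."""
    assert abs(sum(damage_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * c for p, c in zip(damage_probs, capacity_indices))

# Hypothetical distribution over five damage states (none ... collapse).
probs = [0.40, 0.30, 0.20, 0.08, 0.02]
indices = [1.0, 0.8, 0.5, 0.2, 0.0]
scev = expected_capacity_index(probs, indices)
print(round(scev, 3))
```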

  9. Performance assessment methodology (PAM) for low level radioactive waste (LLRW) disposal facilities

    International Nuclear Information System (INIS)

    Selander, W.N.

    1992-01-01

    An overview is given for Performance Assessment Methodology (PAM) for Low Level Radioactive Waste (LLRW) disposal technologies, as required for licensing and safety studies. This is a multi-disciplinary activity, emphasizing applied mathematics, mass transfer, geohydrology and radiotoxicity effects on humans. (author). 2 refs

  10. Evaluating electronic performance support systems: A methodology focused on future use-in-practice

    NARCIS (Netherlands)

    Collis, Betty; Verwijs, C.A.

    1995-01-01

    Electronic performance support systems, as an emerging type of software environment, present many new challenges in relation to effective evaluation. In this paper, a global approach to a 'usage-orientated' evaluation methodology for software product is presented, followed by a specific example of

  11. Rethinking Fragile Landscapes during the Greek Crisis: Precarious Aesthetics and Methodologies in Athenian Dance Performances

    Science.gov (United States)

    Zervou, Natalie

    2017-01-01

    The financial crisis in Greece brought about significant changes in the sociopolitical and financial landscape of the country. Severe budget cuts imposed on the arts and performing practices have given rise to a new aesthetic which has impacted the themes and methodologies of contemporary productions. To unpack this aesthetic, I explore the ways…

  12. Using a False Biofeedback Methodology to Explore Relationships between Learners' Affect, Metacognition, and Performance

    Science.gov (United States)

    Strain, Amber Chauncey; Azevedo, Roger; D'Mello, Sidney K.

    2013-01-01

    We used a false-biofeedback methodology to manipulate physiological arousal in order to induce affective states that would influence learners' metacognitive judgments and learning performance. False-biofeedback is a method used to induce physiological arousal (and resultant affective states) by presenting learners with audio stimuli of false heart…

  13. Design methodology for flexible energy conversion systems accounting for dynamic performance

    DEFF Research Database (Denmark)

    Pierobon, Leonardo; Casati, Emiliano; Casella, Francesco

    2014-01-01

    This article presents a methodology to help in the definition of the optimal design of power generation systems. The innovative element is the integration of requirements on dynamic performance into the system design procedure. Operational flexibility is an increasingly important specification...

  14. "Found Performance": Towards a Musical Methodology for Exploring the Aesthetics of Care.

    Science.gov (United States)

    Wood, Stuart

    2017-09-18

    Concepts of performance in fine art reflect key processes in music therapy. Music therapy enables practitioners to reframe patients as performers, producing new meanings around the clinical knowledge attached to medical histories and constructs. In this paper, music therapy practices are considered in the wider context of art history, with reference to allied theories from social research. Tracing a century in art that has revised the performativity of found objects (starting with Duchamp's "Fountain") and of found sound (crystallised by Cage's 4'33"), this paper proposes that music therapy might be a pioneer methodology of "found performance". Examples from music therapy and contemporary socially engaged art practices are brought as potential links between artistic methodologies and medical humanities research, with specific reference to notions of the Aesthetics of Care.

  15. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh

    2016-09-08

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus is on methodologies appropriate for the characterization, at the molecular level, of the morphology in blend systems consisting of an electron donor and electron acceptor, of importance for understanding the performance properties of bulk-heterojunction organic solar cells. The protocol is formulated as an introductory manual for investigators who aim to study the bulk-heterojunction morphology in molecular detail, thereby facilitating the development of structure-morphology-performance relationships when used in tandem with experimental results.

  16. Assessment of an isolation condenser performance in case of a LOHS using the RMPS+ Methodology

    International Nuclear Information System (INIS)

    Giménez, M; Mezio, F.; Zanocco, P.; Lorenzo, G.

    2011-01-01

    Conclusions: • It has been observed that in the original RMPS proposal the response surface may be poorly built, and therefore the system reliability poorly estimated. • The methodology was improved by means of an iterative process that rebuilds the response surface with new performance indicator values near the failure domain boundary, obtained through the plant model. • The proposed methodology was useful: – To identify “odd events” efficiently; – To rank the influence of the parameters with uncertainties; – To estimate the CAREM-like PRHRS “functional reliability” to verify a design criterion
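
    The functional-reliability estimate discussed in the conclusions can be illustrated with a toy Monte Carlo run: sample the uncertain parameters, evaluate a surrogate (response-surface) performance indicator in place of the plant code, and count how often the failure criterion is met. Everything below (surrogate, distributions, criterion) is illustrative, not the CAREM model:

```python
import random

random.seed(1)

def performance_indicator(p1, p2):
    """Surrogate response surface standing in for the full plant model."""
    return 1.0 - 0.3 * p1 + 0.2 * p2 ** 2

n = 100_000
failures = 0
for _ in range(n):
    p1 = random.gauss(1.0, 0.2)  # assumed parameter distributions
    p2 = random.gauss(0.5, 0.3)
    if performance_indicator(p1, p2) < 0.6:  # illustrative failure criterion
        failures += 1

reliability = 1 - failures / n
print(f"functional reliability ~ {reliability:.3f}")
```

The iterative refinement the authors propose amounts to re-running the plant model at sampled points near the failure boundary (indicator close to 0.6 here) and updating the surrogate there.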

  17. Integrated cost estimation methodology to support high-performance building design

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, Prasad; Greden, Lara; Eijadi, David; McDougall, Tom [The Weidt Group, Minnetonka (United States); Cole, Ray [Axiom Engineers, Monterey (United States)

    2007-07-01

    Design teams evaluating the performance of energy conservation measures (ECMs) calculate energy savings rigorously with established modelling protocols, accounting for the interaction between various measures. However, incremental cost calculations do not have a similar rigor. Often there is no recognition of cost reductions with integrated design, nor is there assessment of cost interactions amongst measures. This lack of rigor feeds the notion that high-performance buildings cost more, creating a barrier for design teams pursuing aggressive high-performance outcomes. This study proposes an alternative integrated methodology to arrive at a lower perceived incremental cost for improved energy performance. The methodology is based on the use of energy simulations as a means towards integrated design and cost estimation. Various points along the spectrum of integration are identified and characterized by the amount of design effort invested, the scheduling of effort, and relative energy performance of the resultant design. It includes a study of the interactions between building system parameters as they relate to capital costs. Several cost interactions amongst energy measures are found to be significant. The value of this approach is demonstrated with alternatives in a case study that shows the differences between perceived costs for energy measures along various points on the integration spectrum. These alternatives show design tradeoffs and identify how decisions would have been different with a standard costing approach. Areas of further research to make the methodology more robust are identified. Policy measures to encourage the integrated approach and reduce the barriers towards improved energy performance are discussed.

  18. Calibration of CORSIM models under saturated traffic flow conditions.

    Science.gov (United States)

    2013-09-01

    This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to calibrate simultaneously all the calibration parameters as well as demand patterns for any network topology....

  19. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    Science.gov (United States)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. 
A key strength of the RF approach is that
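
    The random-forest calibration can be sketched with scikit-learn on synthetic collocation data; the confounders (temperature, humidity) and response coefficients below are invented for illustration and are not the RAMP sensor response:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic collocation data: a raw CO signal confounded by temperature and
# relative humidity, mimicking the cross-sensitivities the study addresses.
n = 500
true_co = rng.uniform(0.1, 2.0, n)  # reference monitor, ppm
temp = rng.uniform(0.0, 35.0, n)    # deg C
rh = rng.uniform(20.0, 90.0, n)     # %
raw = true_co * (1 + 0.01 * temp) + 0.005 * rh + rng.normal(0.0, 0.05, n)

X = np.column_stack([raw, temp, rh])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], true_co[:400])   # training window

# Mean absolute error on the held-out testing window.
mae = float(np.mean(np.abs(model.predict(X[400:]) - true_co[400:])))
print(f"testing-window MAE: {mae:.3f} ppm")
```

In practice each monitor gets its own model, trained on a collocation window against reference instruments and validated on later weeks, precisely to capture the unit-specific environmental response.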

  20. Performance of Different Light Sources for the Absolute Calibration of Radiation Thermometers

    Science.gov (United States)

    Martín, M. J.; Mantilla, J. M.; del Campo, D.; Hernanz, M. L.; Pons, A.; Campos, J.

    2017-09-01

    The evolving mise en pratique for the definition of the kelvin (MeP-K) [1, 2] will, in its forthcoming edition, encourage the realization and dissemination of the thermodynamic temperature either directly (primary thermometry) or indirectly (relative primary thermometry) via fixed points with assigned reference thermodynamic temperatures. In recent years, the Centro Español de Metrología (CEM), in collaboration with the Instituto de Óptica of the Consejo Superior de Investigaciones Científicas (IO-CSIC), has developed several setups for the absolute calibration of standard radiation thermometers using the radiance method, to allow CEM the direct dissemination of the thermodynamic temperature and the assignment of thermodynamic temperatures to several fixed points. Different calibration facilities based on a monochromator and/or a laser and an integrating sphere have been developed to calibrate CEM's standard radiation thermometers (KE-LP2 and KE-LP4) and filter radiometer (FIRA2). This system is based on the one described in [3] located at IO-CSIC. Different light sources have been tried and tested for measuring absolute spectral radiance responsivity: a 500 W Xe-Hg lamp, an NKT SuperK-EXR20 supercontinuum laser and a diode laser emitting at 647 nm with a typical maximum power of 120 mW. Their advantages and disadvantages have been studied, such as sensitivity to interferences generated by the laser inside the filter, the stability of the flux generated by the radiant sources and so forth. This paper describes the setups used, the uncertainty budgets and the results obtained for the absolute temperatures of the Cu, Co-C, Pt-C and Re-C fixed points, measured with the three thermometers with central wavelengths around 650 nm.
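
    In the radiance method, once the thermometer's absolute spectral radiance responsivity is known, the thermodynamic temperature of a fixed point follows by inverting Planck's law at the working wavelength. A monochromatic sketch at 650 nm (a real thermometer integrates over its spectral band; the Cu-point temperature used for the round trip is the ITS-90 assignment):

```python
import math

# Physical constants (exact in the SI since 2019)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance L (W m^-3 sr^-1) from Planck's law."""
    x = h * c / (wavelength_m * kB * temp_k)
    return 2 * h * c**2 / wavelength_m**5 / (math.exp(x) - 1)

def temperature_from_radiance(wavelength_m, radiance):
    """Invert Planck's law for T by bisection (radiance grows with T)."""
    lo, hi = 100.0, 10000.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if planck_radiance(wavelength_m, mid) < radiance:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

wl = 650e-9
t_cu = 1357.77                  # Cu freezing point, K (ITS-90)
L = planck_radiance(wl, t_cu)
print(round(temperature_from_radiance(wl, L), 2))  # recovers 1357.77
```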

  1. Design, Performance, and Calibration of the CMS Hadron-Outer Calorimeter

    CERN Document Server

    Abdullin, Salavat; Acharya, Bannaje Sripathi; Adam, Nadia; Adams, Mark Raymond; Akchurin, Nural; Akgun, Ugur; Albayrak, Elif Asli; Anderson, E Walter; Antchev, Georgy; Arcidy, M; Ayan, S; Aydin, Sezgin; Aziz, Tariq; Baarmand, Marc M; Babich, Kanstantsin; Baden, Drew; Bakirci, Mustafa Numan; Banerjee, Sunanda; Banerjee, Sudeshna; Bard, Robert; Barnes, Virgil E; Bawa, Harinder Singh; Baiatian, G; Bencze, Gyorgy; Beri, Suman Bala; Berntzon, Lisa; Bhatnagar, Vipin; Bhatti, Anwar; Bodek, Arie; Bose, Suvadeep; Bose, Tulika; Budd, Howard; Burchesky, Kyle; Camporesi, Tiziano; Cankocak, Kerem; Carrell, Kenneth Wayne; Cerci, Salim; Chendvankar, Sanjay; Chung, Yeon Sei; Clarida, Warren; Cremaldi, Lucien Marcus; Cushman, Priscilla; Damgov, Jordan; De Barbaro, Pawel; Debbins, Paul; Deliomeroglu, Mehmet; Demianov, A; de Visser, Theo; Deshpande, Pandurang Vishnu; Díaz, Jonathan; Dimitrov, Lubomir; Dugad, Shashikant; Dumanoglu, Isa; Duru, Firdevs; Efthymiopoulos, I; Elias, John E; Elvira, D; Emeliantchik, Igor; Eno, Sarah Catherine; Ershov, Alexander; Erturk, Sefa; Esen, Selda; Eskut, Eda; Fenyvesi, Andras; Fisher, Wade Cameron; Freeman, Jim; Ganguli, Som N; Gaultney, Vanessa; Gamsizkan, Halil; Gavrilov, Vladimir; Genchev, Vladimir; Gleyzer, Sergei V; Golutvin, Igor; Goncharov, Petr; Grassi, Tullio; Green, Dan; Gribushin, Andrey; Grinev, B; Gurtu, Atul; Murat Güler, A; Gülmez, Erhan; Gümüs, K; Haelen, T; Hagopian, Sharon; Hagopian, Vasken; Halyo, Valerie; Hashemi, Majid; Hauptman, John M; Hazen, Eric; Heering, Arjan Hendrix; Heister, Arno; Hunt, Adam; Ilyina, N; Ingram, D; Isiksal, Engin; Jarvis, Chad; Jeong, Chiyoung; Johnson, Kurtis F; Jones, John; Kaftanov, Vitali; Kalagin, Vladimir; Kalinin, Alexey; Kalmani, Suresh Devendrappa; Karmgard, Daniel John; Kaur, Manjit; Kaya, Mithat; Kaya, Ozlem; Kayis-Topaksu, A; Kellogg, Richard G; Khmelnikov, Alexander; Kim, Heejong; Kisselevich, I; Kodolova, Olga; Kohli, Jatinder Mohan; Kolossov, V; Korablev, Andrey; Korneev, Yury; Kosarev, 
Ivan; Kramer, Laird; Krinitsyn, Alexander; Krishnaswamy, Marthi Ramaswamy; Krokhotin, Andrey; Kryshkin, V; Kuleshov, Sergey; Kumar, Arun; Kunori, Shuichi; Laasanen, Alvin T; Ladygin, Vladimir; Laird, Edward; Landsberg, Greg; Laszlo, Andras; Lawlor, C; Lazic, Dragoslav; Lee, Sang Joon; Levchuk, Leonid; Linn, Stephan; Litvintsev, Dmitri; Lobolo, L; Los, Serguei; Lubinsky, V; Lukanin, Vladimir; Ma, Yousi; Machado, Emanuel; Maity, Manas; Majumder, Gobinda; Mans, Jeremy; Marlow, Daniel; Markowitz, Pete; Martínez, German; Mazumdar, Kajari; Merlo, Jean-Pierre; Mermerkaya, Hamit; Mescheryakov, G; Mestvirishvili, Alexi; Miller, Michael; Möller, A; Mohammadi-Najafabadi, M; Moissenz, P; Mondal, Naba Kumar; Mossolov, Vladimir; Nagaraj, P; Narasimham, Vemuri Syamala; Norbeck, Edwin; Olson, Jonathan; Onel, Yasar; Onengüt, G; Ozkan, Cigdem; Ozkurt, Halil; Ozkorucuklu, Suat; Ozok, Ferhat; Paktinat, S; Pal, Andras; Patil, Mandakini Ravindra; Penzo, Aldo; Petrushanko, Sergey; Petrosian, A; Pikalov, Vladimir; Piperov, Stefan; Podrasky, V; Polatoz, A; Pompos, Arnold; Popescu, Sorina; Posch, C; Pozdnyakov, Andrey; Qian, Weiming; Ralich, Robert; Reddy, L; Reidy, Jim; Rogalev, Evgueni; Roh, Youn; Rohlf, James; Ronzhin, Anatoly; Ruchti, Randy; Ryazanov, Anton; Safronov, Grigory; Sanders, David A; Sanzeni, Christopher; Sarycheva, Ludmila; Satyanarayana, B; Schmidt, Ianos; Sekmen, Sezen; Semenov, Sergey; Senchishin, V; Sergeyev, S; Serin, Meltem; Sever, Ramazan; Singh, B; Singh, Jas Bir; Sirunyan, Albert M; Skuja, Andris; Sharma, Seema; Sherwood, Brian; Shumeiko, Nikolai; Smirnov, Vitaly; Sogut, Kenan; Sonmez, Nasuf; Sorokin, Pavel; Spezziga, Mario; Stefanovich, R; Stolin, Viatcheslav; Sudhakar, Katta; Sulak, Lawrence; Suzuki, Ichiro; Talov, Vladimir; Teplov, Konstantin; Thomas, Ray; Tonwar, Suresh C; Topakli, Huseyin; Tully, Christopher; Turchanovich, L; Ulyanov, A; Vanini, A; Vankov, Ivan; Vardanyan, Irina; Varela, F; Vergili, Mehmet; Verma, Piyush; Vesztergombi, Gyorgy; Vidal, Richard; 
Vishnevskiy, Alexander; Vlassov, E; Vodopiyanov, Igor; Volobouev, Igor; Volkov, Alexey; Volodko, Anton; Wang, Lei; Werner, Jeremy Scott; Wetstein, Matthew; Winn, Dave; Wigmans, Richard; Whitmore, Juliana; Wu, Shouxiang; Yazgan, Efe; Yetkin, Taylan; Zálán, Peter; Zarubin, Anatoli; Zeyrek, Mehmet

    2008-01-01

    The CMS hadron calorimeter is a sampling calorimeter with brass absorber and plastic scintillator tiles with wavelength-shifting fibres carrying the light to the readout device. The barrel hadron calorimeter is complemented with an outer calorimeter to ensure high-energy shower containment in the calorimeter. Fabrication, testing and calibration of the outer hadron calorimeter are carried out keeping in mind its importance for the energy measurement of jets in view of linearity and resolution. It will provide a net improvement in missing $E_T$ measurements at LHC energies. The outer hadron calorimeter will also be used for the muon trigger in coincidence with other muon chambers in CMS.

  2. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    DEFF Research Database (Denmark)

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish...... coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time...

  3. Skeletal keratan sulfate chain molecular weight calibration by high-performance gel-permeation chromatography

    International Nuclear Information System (INIS)

    Dickenson, J.M.; Morris, H.G.; Nieduszynski, I.A.; Huckerby, T.N.

    1990-01-01

    A method has been developed for the molecular sizing of skeletal keratan sulfate chains using an HPLC gel-permeation chromatography system. Keratan sulfate chains and keratanase-derived oligosaccharides were prepared from the nucleus pulposus of bovine intervertebral disc (6-year-old animals). A Bio-Gel TSK 30 XL column, eluted in 0.2 M NaCl at 30 °C, was calibrated with keratan sulfate oligosaccharides of known size as well as 3H-end-labeled keratan sulfate chains to yield the relationship

  4. Low-level waste disposal site performance assessment with the RQ/PQ methodology. Final report

    International Nuclear Information System (INIS)

    Rogers, V.C.; Grant, M.W.; Sutherland, A.A.

    1982-12-01

    A methodology called RQ/PQ (retention quotient/performance quotient) has been developed for relating the potential hazard of radioactive waste to the natural and man-made barriers provided by a disposal facility. The methodology utilizes a systems approach to quantify the safety of low-level waste disposed in a near-surface facility. The main advantages of the RQ/PQ methodology are its simplicity of analysis and clarity of presentation while still allowing a comprehensive set of nuclides and pathways to be treated. Site performance and facility designs for low-level waste disposal can be easily investigated with relatively few parameters needed to define the problem. Application of the methodology has revealed that the key factor affecting the safety of low-level waste disposal in near surface facilities is the potential for intrusion events. Food, inhalation and well water pathways dominate in the analysis of such events. While the food and inhalation pathways are not strongly site-dependent, the well water pathway is. Finally, burial at depths of 5 m or more was shown to reduce the impacts from intrusion events

  5. Performance of the radionuclide calibrators used at Division of Radiopharmaceuticals Production of the CRCN-NE, Recife, PE, Brazil

    International Nuclear Information System (INIS)

    Fragoso, Maria Conceicao de Farias; Albuquerque, Antonio Morais de Sa; Oliveira, Mercia L.; Lima, Fernando Roberto de Andrade

    2011-01-01

    Radionuclide calibrators are essential instruments in nuclear medicine services for determining the activity of the radiopharmaceuticals to be administered to patients. Essentially, a radionuclide calibrator consists of a well-type ionization chamber coupled to electronic circuitry that displays the instrument response in activity units. Inadequate performance of this equipment may lead to underestimation or overestimation of the activity, compromising the success of diagnosis or therapy. Quality control describes the procedures by which the quality of activity measurements can be assured, providing efficacy of nuclear medicine procedures that employ unsealed sources of radioactivity. Several guides from national and international organizations summarize the recommended tests for the quality control of radionuclide calibrators: accuracy, precision, reproducibility, linearity and geometry. The aim of this work was to establish a quality control program for the radionuclide calibrators of the Divisao de Producao de Radiofarmacos (DIPRA) of the Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE), Brazil, used as reference instruments in comparisons of activity measurements. The results were determined and compared to the reference values recommended by national and international guides. In addition, the geometry test provided the correction factors to be applied to activity measurements in different containers, with different volumes and in different positions. (author)
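
    The accuracy test mentioned above compares the calibrator reading against the decay-corrected certified activity of a long-lived reference source. A minimal sketch (the source, dates and readings are illustrative assumptions):

```python
def decay_corrected(a0_mbq, half_life_days, elapsed_days):
    """Certified activity decay-corrected to the measurement date."""
    return a0_mbq * 2 ** (-elapsed_days / half_life_days)

def accuracy_pct(reading_mbq, expected_mbq):
    """Relative deviation (%) of the calibrator reading from expected activity."""
    return (reading_mbq - expected_mbq) / expected_mbq * 100

# Hypothetical Cs-137 check source (half-life ~30.08 y): 100 MBq certified
# one year before the measurement, read today as 97.0 MBq.
expected = decay_corrected(100.0, half_life_days=30.08 * 365.25, elapsed_days=365.0)
deviation = accuracy_pct(97.0, expected)
print(round(deviation, 1), "% deviation")
```

The deviation is then checked against the tolerance adopted by the guide in use (a few percent to ±10 % is typical for field instruments).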

  6. Accuracy, calibration and clinical performance of the EuroSCORE: can we reduce the number of variables?

    Science.gov (United States)

    Ranucci, Marco; Castelvecchio, Serenella; Menicanti, Lorenzo; Frigiola, Alessandro; Pelissero, Gabriele

    2010-03-01

    The European system for cardiac operative risk evaluation (EuroSCORE) is currently used in many institutions and is considered a reference tool in many countries. We hypothesised that too many variables were included in the EuroSCORE using limited patient series. We tested different models using a limited number of variables. A total of 11150 adult patients undergoing cardiac operations at our institution (2001-2007) were retrospectively analysed. The 17 risk factors composing the EuroSCORE were separately analysed and ranked for accuracy of prediction of hospital mortality. Seventeen models were created by progressively including one factor at a time. The models were compared for accuracy with a receiver operating characteristics (ROC) analysis and area under the curve (AUC) evaluation. Calibration was tested with Hosmer-Lemeshow statistics. Clinical performance was assessed by comparing the predicted with the observed mortality rates. The best accuracy (AUC 0.76) was obtained using a model including only age, left ventricular ejection fraction, serum creatinine, emergency operation and non-isolated coronary operation. The EuroSCORE AUC (0.75) was not significantly different. Calibration and clinical performance were better in the five-factor model than in the EuroSCORE. Only in high-risk patients were 12 factors needed to achieve a good performance. Including many factors in multivariable logistic models increases the risk for overfitting, multicollinearity and human error. A five-factor model offers the same level of accuracy but demonstrated better calibration and clinical performance. Models with a limited number of factors may work better than complex models when applied to a limited number of patients. Copyright (c) 2009 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
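
    The accuracy ranking described above rests on the area under the ROC curve (AUC), which equals the Mann-Whitney probability that a randomly chosen death receives a higher risk score than a randomly chosen survivor. A stdlib sketch of that statistic; the scores and outcomes are invented, not EuroSCORE data:

```python
def roc_auc(scores, outcomes):
    """AUC by pairwise comparison; outcomes are 1 (died) / 0 (survived)."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores   = [0.9, 0.4, 0.7, 0.2, 0.6, 0.1]  # predicted mortality risk (invented)
outcomes = [1,   0,   1,   0,   0,   0]    # observed hospital mortality
auc = roc_auc(scores, outcomes)            # 1.0 here: every death outranks every survivor
```

    In practice the pairwise sum is replaced by a rank-based computation for large series, but the value is the same.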

  7. On self-propagating methodological flaws in performance normalization for strength and power sports.

    Science.gov (United States)

    Arandjelović, Ognjen

    2013-06-01

    Performance in strength and power sports is greatly affected by a variety of anthropometric factors. The goal of performance normalization is to factor out the effects of confounding factors and compute a canonical (normalized) performance measure from the observed absolute performance. Performance normalization is applied in the ranking of elite athletes, as well as in the early stages of youth talent selection. Consequently, it is crucial that the process is principled and fair. The corpus of previous work on this topic, which is significant, is uniform in the methodology adopted. Performance normalization is universally reduced to a regression task: the collected performance data are used to fit a regression function that is then used to scale future performances. The present article demonstrates that this approach is fundamentally flawed. It inherently creates a bias that unfairly penalizes athletes with certain allometric characteristics, and, by virtue of its adoption in the ranking and selection of elite athletes, propagates and strengthens this bias over time. The main flaws are shown to originate in the criteria for selecting the data used for regression, as well as in the manner in which the regression model is applied in normalization. This analysis brings into light the aforesaid methodological flaws and motivates further work on the development of principled methods, the foundations of which are also laid out in this work.
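
    The regression-based normalization the article critiques can be illustrated with the common allometric model performance ≈ a·mass^b, fitted by log-log least squares and then used to divide out the body-mass effect. All data and names below are invented for illustration:

```python
import math

def fit_allometric(masses, perfs):
    """Least-squares fit of log(perf) = log(a) + b*log(mass); returns (a, b)."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(p) for p in perfs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

def normalized(perf, mass, b):
    """Canonical score with the fitted body-mass effect divided out."""
    return perf / mass ** b

masses = [56.0, 69.0, 85.0, 105.0]      # kg body mass (invented)
totals = [230.0, 280.0, 330.0, 380.0]   # kg lifted (invented)
a, b = fit_allometric(masses, totals)
norm = [normalized(p, m, b) for p, m in zip(totals, masses)]
```

    The article's point is that the bias enters through which athletes supply the fitted data and how the resulting curve is applied, not through the arithmetic itself.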

  8. Performance Assessment of the Wave Dragon Wave Energy Converter Based on the EquiMar Methodology

    DEFF Research Database (Denmark)

    Parmeggiani, Stefano; Chozas, Julia Fernandez; Pecher, Arthur

    2011-01-01

    At the present pre-commercial phase of the wave energy sector, device developers are called to provide reliable estimates on power performance and production at possible deployment locations. The EU EquiMar project has proposed a novel approach, where the performance assessment is based mainly...... on experimental data deriving from sea trials rather than solely on numerical predictions. The study applies this methodology to evaluate the performance of Wave Dragon at two locations in the North Sea, based on the data acquired during the sea trials of a 1:4.5 scale prototype. Indications about power...
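
    A performance assessment of this kind, weighting a sea-trial power matrix by the sea-state occurrence at the deployment site, can be sketched as a scatter-diagram sum. The power figures and probabilities below are invented, not Wave Dragon data:

```python
def mean_power(power_matrix_kw, occurrence):
    """Average power: per-sea-state power weighted by annual occurrence."""
    return sum(power_matrix_kw[s] * p for s, p in occurrence.items())

power_kw   = {"low": 50.0, "mid": 1500.0, "high": 4000.0}  # invented power matrix
occurrence = {"low": 0.55, "mid": 0.35, "high": 0.10}      # invented wave climate
p_mean_kw = mean_power(power_kw, occurrence)
aep_mwh = p_mean_kw * 8766.0 / 1000.0   # mean kW x hours/year -> MWh/year
```

    A real assessment bins the matrix in (Hs, Tp) and applies availability factors, but the weighted sum is the core of the estimate.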

  9. A global fouling factor methodology for analyzing steam generator thermal performance degradation

    International Nuclear Information System (INIS)

    Kreider, M.A.; White, G.A.; Varrin, R.D. Jr.

    1998-06-01

    Over the past few years, steam generator (SG) thermal performance degradation has led to decreased plant efficiency and power output at numerous PWR nuclear power plants with recirculating-type SGs. The authors have developed and implemented methodologies for quantitatively evaluating the various sources of SG performance degradation, both internal and external to the SG pressure boundary. These methodologies include computation of the global fouling factor history, evaluation of secondary deposit thermal resistance using deposit characterization data, and consideration of pressure loss causes unrelated to the tube bundle, such as hot-leg temperature streaming and SG moisture separator fouling. In order to evaluate the utility of the global fouling factor methodology, the authors performed case studies for a number of PWR SG designs. Key results from two of these studies are presented here. In tandem with the fouling-factor analyses, a study evaluated the potential causes of pressure loss for each plant. The combined results of the global fouling factor calculations and the pressure-loss evaluations demonstrated two key points: (1) that the available thermal margin against fouling, which can vary substantially from plant to plant, has an important bearing on whether a given plant exhibits losses in electrical generating capacity, and (2) that a wide variety of causes can result in SG thermal performance degradation.
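
    The global fouling factor idea can be sketched by treating the SG as one lumped heat exchanger: any drop in the overall heat-transfer coefficient U relative to the clean design value appears as an added resistance Rf = 1/U_measured − 1/U_design. The duty, area and LMTD values below are invented, not from the paper's case studies:

```python
def overall_U(duty_w, area_m2, lmtd_k):
    """Overall heat-transfer coefficient U = Q / (A * LMTD), in W/m^2-K."""
    return duty_w / (area_m2 * lmtd_k)

def global_fouling_factor(U_measured, U_design):
    """Lumped thermal resistance added since the clean condition, in m^2-K/W."""
    return 1.0 / U_measured - 1.0 / U_design

# Same thermal duty delivered across a larger temperature difference,
# i.e. degraded heat transfer (invented values):
U_design = overall_U(900e6, 10000.0, 30.0)   # clean design point
U_now = overall_U(900e6, 10000.0, 33.0)      # fouled condition
rf = global_fouling_factor(U_now, U_design)  # positive => fouling present
```

    In plant practice U is inferred from steam pressure, feedwater flow and temperatures, which is why the uncertainty analyses mentioned in the abstract matter.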

  10. Assessment of the performance of containment and surveillance equipment part 1: methodology

    International Nuclear Information System (INIS)

    Rezniczek, A.; Richter, B.

    2009-01-01

    Equipment performance assessment aims at the creation of relevant data. As Containment and Surveillance (C/S) is playing an ever-increasing role in safeguards systems, the issue of how to assess the performance of C/S equipment is being addressed by the ESARDA Working Group on C/S. The issue is important not only for the development of appropriate safeguards approaches but also for the review of existing approaches with regard to the implementation of the Additional Protocol (AP) and Integrated Safeguards. It is expected that the selection process for appropriate equipment, especially for unattended operation, will be facilitated by the availability of methods to determine the performance of such equipment. Apart from EURATOM, the users of assessment methodologies would be the International Atomic Energy Agency (IAEA), plant operators, and instrument developers. The paper describes a non-quantitative performance assessment methodology. A structured procedure is outlined that allows the suitability of different C/S instrumentation to be assessed against the objectives of its application. The principle for determining the performance of C/S equipment is to define, based on safeguards requirements, a task profile and to check the performance profile against that task profile. The performance profile of C/S equipment can be derived from the functional specifications and design-basis tolerances provided by the equipment manufacturers.
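
    The profile-matching principle can be sketched as a per-criterion comparison of a task profile against an equipment performance profile. The criteria, scores and equipment names below are hypothetical, not taken from the ESARDA methodology:

```python
def meets_task_profile(task, performance):
    """Check every task requirement against the equipment's capability scores."""
    failures = [c for c, required in task.items()
                if performance.get(c, 0) < required]
    return not failures, failures

# Hypothetical requirements and capabilities (ordinal scores and months):
task   = {"tamper_indication": 3, "data_review_support": 2, "unattended_months": 12}
camera = {"tamper_indication": 3, "data_review_support": 3, "unattended_months": 18}
seal   = {"tamper_indication": 3, "data_review_support": 1, "unattended_months": 24}
camera_ok, _ = meets_task_profile(task, camera)
seal_ok, seal_gaps = meets_task_profile(task, seal)
```

    The real methodology is non-quantitative, so such scores would only summarize a structured qualitative judgement, not replace it.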

  11. Methodologies for predicting the part-load performance of aero-derivative gas turbines

    DEFF Research Database (Denmark)

    Haglind, Fredrik; Elmegaard, Brian

    2009-01-01

    Prediction of the part-load performance of gas turbines is advantageous in various applications. Sometimes reasonable part-load performance is sufficient, while in other cases complete agreement with the performance of an existing machine is desirable. This paper is aimed at providing some guidance...... on methodologies for predicting part-load performance of aero-derivative gas turbines. Two different design models – one simple and one more complex – are created. Subsequently, for each of these models, the part-load performance is predicted using component maps and turbine constants, respectively. Comparisons...... with manufacturer data are made. With respect to the design models, the simple model, featuring a compressor, combustor and turbines, results in equally good performance prediction in terms of thermal efficiency and exhaust temperature as does a more complex model. As for part-load predictions, the results suggest...
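
    The "turbine constants" approach mentioned above is commonly based on Stodola's ellipse law, under which the corrected mass flow m·√T/p of a choked turbine stays approximately constant off-design. A sketch under that assumption, with invented design-point numbers:

```python
import math

def turbine_constant(m_dot, t_in, p_in):
    """Design-point constant k = m*sqrt(T)/p (Stodola's ellipse, choked flow)."""
    return m_dot * math.sqrt(t_in) / p_in

def part_load_mass_flow(k, t_in, p_in):
    """Off-design mass flow implied by the same constant."""
    return k * p_in / math.sqrt(t_in)

k = turbine_constant(m_dot=100.0, t_in=1400.0, p_in=15e5)  # invented design point
m_part = part_load_mass_flow(k, t_in=1250.0, p_in=11e5)    # invented part-load point
```

    Component maps, the alternative the paper compares against, replace this single constant with measured compressor and turbine characteristics.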

  12. Thermal performance of a concrete cask: Methodology to model helium leakage from the steel canister

    International Nuclear Information System (INIS)

    Penalva, J.; Feria, F.; Herranz, L.E.

    2017-01-01

    Highlights: • A thermal analysis of the canister during a loss of leaktightness has been performed. • Methodologies that predict fuel temperatures and heat-up rates have been developed. • Casks with heat loads below 20 kW would never exceed the thermal threshold. - Abstract: Concrete cask storage systems used in dry storage allocate spent fuel within containers that are usually filled with helium at a certain pressure. Potential leaks from the container would result in a cooling degradation of the fuel that might jeopardize fuel integrity if the temperature exceeded a threshold value. According to ISG-11, temperatures below 673 K ensure preservation of fuel integrity. Therefore, the container's thermal response to a loss of leaktightness is of utmost importance in terms of safety. In this work, a thermo-fluid dynamic analysis of the canister during a loss of leaktightness has been performed. To do so, steady-state and transient Computational Fluid Dynamics (CFD) simulations have been carried out. Likewise, two methodologies capable of estimating peak fuel temperatures and heat-up rates resulting from a postulated depressurization in a dry storage cask have been developed. One methodology is based on control theory and transfer functions, and the other on a linear relationship between the inner pressure and the maximum temperature. Both methodologies have been verified through comparisons with CFD calculations. The time to reach the temperature threshold (673 K) is a function of the pressure loss rate and the decay heat of the fuel stored in the container; for a fuel canister with 30 kW, it takes between half a day (fast pressure loss) and one week (slow pressure loss). For a 15% reduction of the decay heat, the time to reach the thermal limit increases to a few weeks. The results highlight that casks with heat loads below 20 kW would never exceed the thermal threshold (673 K).
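
    The second methodology above, a linear relation between inner pressure and peak temperature, can be combined with a pressure-loss rate to estimate the time to the 673 K limit. All coefficients below are invented for illustration, not taken from the paper:

```python
def peak_temperature(p_bar, t_full=600.0, t_empty=700.0, p_full=5.0):
    """Hypothetical linear map: peak clad temperature rises as helium is lost."""
    return t_empty - (t_empty - t_full) * min(p_bar, p_full) / p_full

def time_to_limit(p0_bar, loss_rate_bar_per_day, t_limit=673.0):
    """March the pressure down in small steps until the limit is crossed."""
    t, p, dt = 0.0, p0_bar, 0.01
    while peak_temperature(p) < t_limit and p > 0.0:
        p -= loss_rate_bar_per_day * dt
        t += dt
    return t  # days

fast = time_to_limit(5.0, loss_rate_bar_per_day=10.0)  # fast leak: well under a day
slow = time_to_limit(5.0, loss_rate_bar_per_day=0.5)   # slow leak: about a week
```

    With these invented coefficients the fast and slow leaks bracket the half-day-to-one-week window the abstract reports for a 30 kW canister.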

  13. Calibration and Performance of the ATLAS Tile Calorimeter During the LHC Run 2

    CERN Document Server

    Klimek, Pawel; The ATLAS collaboration

    2018-01-01

    The Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. It also assists in muon identification. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs). The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. TileCal exploits several calibration systems: a Cs radioactive source that illuminates the scintillating tiles directly, a laser light system to directly test the PMT response, and a charge injection system (CIS) for the front-end electronics. These systems together with data collected during proton-proton collisions provide extensive monitoring of the instrument and a means...

  14. Design, performance, and calibration of the CMS hadron-outer calorimeter

    International Nuclear Information System (INIS)

    Abdullin, S.; Gavrilov, V.; Ilyina, N.; Kaftanov, V.; Kisselevich, I.; Kolossov, V.; Krokhotin, A.; Kuleshov, S.; Pozdnyakov, A.; Safronov, G.; Semenov, S.; Stolin, V.; Ulyanov, A.; Abramov, V.; Goncharov, P.; Kalinin, A.; Khmelnikov, A.; Korablev, A.; Korneev, Y.; Krinitsyn, A.; Kryshkin, V.; Lukanin, V.; Pikalov, V.; Ryazanov, A.; Talov, V.; Turchanovich, L.; Volkov, A.; Acharya, B.; Aziz, T.; Banerjee, Sudeshna; Banerjee, Sunanda; Bose, S.; Chendvankar, S.; Deshpande, P.V.; Dugad, S.; Ganguli, S.N.; Guchait, M.; Gurtu, A.; Kalmani, S.; Krishnaswamy, M.R.; Maity, M.; Majumder, G.; Mazumdar, K.; Mondal, N.; Nagaraj, P.; Narasimham, V.S.; Patil, M.; Reddy, L.; Satyanarayana, B.; Sharma, S.; Sudhakar, K.; Tonwar, S.; Verma, P.; Adam, N.; Fisher, W.; Halyo, V.; Hunt, A.; Jones, J.; Laird, E.; Landsberg, G.; Marlow, D.; Tully, C.; Werner, J.; Adams, M.; Bard, R.; Burchesky, K.; Qian, W.; Akchurin, N.; Berntzon, L.; Carrell, K.; Guemues, K.; Jeong, C.; Kim, H.; Lee, S.W.; Popescu, S.; Roh, Y.; Spezziga, M.; Thomas, R.; Volobouev, I.; Wigmans, R.; Yazgan, E.; Akgun, U.; Albayrak, E.; Ayan, S.; Clarida, W.; Debbins, P.; Duru, F.; Ingram, D.; Merlo, J.P.; Mestvirishvili, A.; Miller, M.; Moeller, A.; Norbeck, E.; Olson, J.; Onel, Y.; Ozok, F.; Schmidt, I.; Yetkin, T.; Anderson, E.W.; Hauptman, J.; Antchev, G.; Arcidy, M.; Hazen, E.; Heister, A.; Lawlor, C.; Lazic, D.; Machado, E.; Posch, C.; Rohlf, J.; Sulak, L.; Varela, F.; Wu, S.X.; Aydin, S.; Bakirci, M.N.; Cerci, S.; Dumanoglu, I.; Erturk, S.; Eskut, E.; Kayis-Topaksu, A.; Onengut, G.; Ozkurt, H.; Polatoz, A.; Sogut, K.; Topakli, H.; Vergili, M.; Baarmand, M.; Mermerkaya, H.; Ralich, R.M.; Vodopiyanov, I.; Babich, K.; Golutvin, I.; Kalagin, V.; Kosarev, I.; Ladygin, V.; Mescheryakov, G.; Moissenz, P.; Petrosyan, A.; Rogalev, E.; Smirnov, V.; Vishnevskiy, A.; Volodko, A.; Zarubin, A.; Baden, D.; Eno, S.; Grassi, T.; Jarvis, C.; Kellogg, R.; Kunori, S.; Skuja, A.; Wang, L.; Wetstein, M.; Barnes, V.; Laasanen, A.; 
Pompos, A.; Bawa, H.; Beri, S.; Bhandari, V.; Bhatnagar, V.; Kaur, M.; Kohli, J.; Kumar, A.; Singh, B.; Singh, J.B.; Baiatian, G.; Sirunyan, A.; Bencze, G.; Laszlo, A.; Pal, A.; Vesztergombi, G.; Zalan, P.; Bhatti, A.; Bodek, A.; Budd, H.; Chung, Y.; Barbaro, P. de; Haelen, T.; Bose, T.; Esen, S.; Vanini, A.; Camporesi, T.; Visser, T. de; Efthymiopoulos, I.; Cankocak, K.; Cremaldi, L.; Reidy, J.; Sanders, D.A.; Cushman, P.; Ma, Y.; Sherwood, B.; Damgov, J.; Piperov, S.; Deliomeroglu, M.; Guelmez, E.; Isiksal, E.; Kaya, M.; Kaya, O.; Ozkorucuklu, S.; Sonmez, N.; Demianov, A.; Ershov, A.; Gribushin, A.; Kodolova, O.; Petrushanko, S.; Sarycheva, L.; Teplov, K.; Vardanyan, I.; Diaz, J.; Gaultney, V.; Kramer, L.; Linn, S.; Lobolo, L.; Markowitz, P.; Martinez, G.; Dimitrov, L.; Genchev, V.; Vankov, I.; Elias, J.; Elvira, D.; Freeman, J.; Green, D.; Los, S.; Ronzhin, A.; Sergeyev, S.; Suzuki, I.; Vidal, R.; Whitmore, J.; Emeliantchik, I.; Mossolov, V.; Shumeiko, N.; Stefanovich, R.; Fenyvesi, A.; Gamsizkan, H.; Murat Gueler, A.; Ozkan, C.; Sekmen, S.; Serin, M.; Sever, R.; Zeyrek, M.; Gleyzer, S.; Hagopian, S.; Hagopian, V.; Johnson, K.; Grinev, B.; Lubinsky, V.; Senchishin, V.; Hashemi, M.; Mohammadi-Najafabadi, M.; Paktinat, S.; Heering, A.; Karmgard, D.; Ruchti, R.; Levchuk, L.; Sorokin, P.; Litvintsev, D.; Mans, J.; Penzo, A.; Podrasky, V.; Sanzeni, C.; Winn, D.; Vlassov, E.

    2008-01-01

    The Outer Hadron Calorimeter (HCAL HO) of the CMS detector is designed to measure the energy that is not contained by the barrel (HCAL HB) and electromagnetic (ECAL EB) calorimeters. Due to space limitations the barrel calorimeters do not completely contain the hadronic shower, and an outer calorimeter (HO) was designed, constructed and inserted in the muon system of CMS to measure the energy leakage. Testing and calibration of the HO were carried out in a 300 GeV/c test beam, which improved the linearity and resolution. HO will provide a net improvement in missing E_T measurements at LHC energies. Information from HO will also be used for the muon trigger in CMS. (orig.)

  15. Design, Performance, and Calibration of CMS Hadron-Barrel Calorimeter Wedges

    CERN Document Server

    Baiatian, G; Emeliantchik, Igor; Massolov, V; Shumeiko, Nikolai; Stefanovich, R; Damgov, Jordan; Dimitrov, Lubomir; Genchev, Vladimir; Piperov, Stefan; Vankov, Ivan; Litov, Leander; Bencze, Gyorgy; Vesztergombi, Gyorgy; Zálán, Peter; Bawa, Harinder Singh; Beri, Suman Bala; Bhatnagar, Vipin; Kaur, Manjit; Kohli, Jatinder Mohan; Kumar, Arun; Singh, Jas Bir; Acharya, Bannaje Sripathi; Banerjee, Sunanda; Banerjee, Sudeshna; Chendvankar, Sanjay; Dugad, Shashikant; Kalmani, Suresh Devendrappa; Katta, S; Mazumdar, Kajari; Mondal, Naba Kumar; Nagaraj, P; Patil, Mandakini Ravindra; Reddy, L; Satyanarayana, B; Sudhakar, Katta; Verma, Piyush; Paktinat, S; Golutvin, Igor; Kalagin, Vladimir; Kosarev, Ivan; Mescheryakov, G; Sergeyev, S; Smirnov, Vitaly; Volodko, Anton; Zarubin, Anatoli; Gavrilov, Vladimir; Gershtein, Yuri; Kaftanov, Vitali; Kisselevich, I; Kolossov, V; Krokhotin, Andrey; Kuleshov, Sergey; Litvintsev, Dmitri; Stolin, Viatcheslav; Ulyanov, A; Demianov, A; Gribushin, Andrey; Kodolova, Olga; Petrushanko, Sergey; Sarycheva, Ludmila; Vardanyan, Irina; Yershov, A; Abramov, Victor; Goncharov, Petr; Khmelnikov, Alexander; Korablev, Andrey; Korneev, Yury; Krinitsyn, Alexander; Kryshkin, V; Lukanin, Vladimir; Pikalov, Vladimir; Ryazanov, Anton; Talov, Vladimir; Turchanovich, L; Volkov, Alexey; Camporesi, Tiziano; De Visser, Theo; Vlassov, E; Aydin, Sezgin; Dumanoglu, Isa; Eskut, Eda; Kayis-Topaksu, A; Kuzucu-Polatoz, A; Onengüt, G; Ozdes-Koca, N; Cankocak, Kerem; Ozok, Ferhat; Serin-Zeyrek, M; Sever, Ramazan; Zeyrek, Mehmet; Gülmez, Erhan; Isiksal, Engin; Kaya, Mithat; Ozkorucuklu, Suat; Levchuk, Leonid; Sorokin, Pavel; Grinev, B; Lubinsky, V; Senchishin, V; Anderson, E Walter; Hauptman, John M; Elias, John E; Elvira, D; Freeman, Jim; Green, Dan; Lazic, Dragoslav; Los, Serguei; O'Dell, Vivian; Ronzhin, Anatoly; Suzuki, Ichiro; Vidal, Richard; Whitmore, Juliana; Antchev, Georgy; Hazen, Eric; Lawlor, C; Machado, Emanuel; Posch, C; Rohlf, James; Wu, Shouxiang; Adams, Mark 
Raymond; Burchesky, Kyle; Qiang, W; Abdullin, Salavat; Baden, Drew; Bard, Robert; Eno, Sarah Catherine; Grassi, Tullio; Jarvis, Chad; Kellogg, Richard G; Kunori, Shuichi; Skuja, Andris; Podrasky, V; Sanzeni, Christopher; Winn, Dave; Akgun, Ugur; Ayan, S; Duru, Firdevs; Merlo, Jean-Pierre; Mestvirishvili, Alexi; Miller, Michael; Norbeck, Edwin; Olson, Jonathan; Onel, Yasar; Schmidt, Ianos; Akchurin, Nural; Carrell, Kenneth Wayne; Gumu, K; Thomas, Ray; Baarmand, Marc M; Ralich, Robert; Vodopiyanov, Igor; Cushman, Priscilla; Heering, Arjan Hendrix; Sherwood, Brian; Cremaldi, Lucien Marcus; Reidy, Jim; Sanders, David A; Karmgard, Daniel John; Ruchti, Randy; Fisher, Wade Cameron; Mans, Jeremy; Tully, Christopher; De Barbaro, Pawel; Bodek, Arie; Budd, Howard; Chung, Yeon Sei; Haelen, T; Imboden, Matthias; Hagopian, Sharon; Hagopian, Vasken; Johnson, Kurtis F; Barnes, Virgil E; Laasanen, Alvin T; Pompos, Arnold

    2007-01-01

    Extensive measurements have been made with pions, electrons and muons on four production wedges of the Compact Muon Solenoid (CMS) hadron barrel (HB) calorimeter in the H2 beam line at CERN with particle momenta varying from 20 to 300 GeV/c. Data were taken both with and without a prototype electromagnetic lead tungstate crystal calorimeter (EB) in front of the hadron calorimeter. The time structure of the events was measured with the full chain of preproduction front-end electronics running at 34 MHz. Moving-wire radioactive source data were also collected for all scintillator layers in the HB. These measurements set the absolute calibration of the HB prior to first pp collisions to approximately 4%.

  16. Design, performance, and calibration of the CMS hadron-outer calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Abdullin, S.; Gavrilov, V.; Ilyina, N.; Kaftanov, V.; Kisselevich, I.; Kolossov, V.; Krokhotin, A.; Kuleshov, S.; Pozdnyakov, A.; Safronov, G.; Semenov, S.; Stolin, V.; Ulyanov, A. [ITEP, Moscow (Russian Federation); Abramov, V.; Goncharov, P.; Kalinin, A.; Khmelnikov, A.; Korablev, A.; Korneev, Y.; Krinitsyn, A.; Kryshkin, V.; Lukanin, V.; Pikalov, V.; Ryazanov, A.; Talov, V.; Turchanovich, L.; Volkov, A. [IHEP, Protvino (Russian Federation); Acharya, B.; Aziz, T.; Banerjee, Sudeshna; Banerjee, Sunanda; Bose, S.; Chendvankar, S.; Deshpande, P.V.; Dugad, S.; Ganguli, S.N.; Guchait, M.; Gurtu, A.; Kalmani, S.; Krishnaswamy, M.R.; Maity, M.; Majumder, G.; Mazumdar, K.; Mondal, N.; Nagaraj, P.; Narasimham, V.S.; Patil, M.; Reddy, L.; Satyanarayana, B.; Sharma, S.; Sudhakar, K.; Tonwar, S.; Verma, P. [Tata Inst. of Fundamental Research, Mumbai (India); Adam, N.; Fisher, W.; Halyo, V.; Hunt, A.; Jones, J.; Laird, E.; Landsberg, G.; Marlow, D.; Tully, C.; Werner, J. [Princeton Univ., NJ (United States); Adams, M.; Bard, R.; Burchesky, K.; Qian, W. [Univ. of Illinois, Chicago, IL (United States); Akchurin, N.; Berntzon, L.; Carrell, K.; Guemues, K.; Jeong, C.; Kim, H.; Lee, S.W.; Popescu, S.; Roh, Y.; Spezziga, M.; Thomas, R.; Volobouev, I.; Wigmans, R.; Yazgan, E. [Texas Tech Univ., Lubbock, TX (United States); Akgun, U.; Albayrak, E.; Ayan, S.; Clarida, W.; Debbins, P.; Duru, F.; Ingram, D.; Merlo, J.P.; Mestvirishvili, A.; Miller, M.; Moeller, A.; Norbeck, E.; Olson, J.; Onel, Y.; Ozok, F.; Schmidt, I.; Yetkin, T. [Univ. of Iowa, Iowa City, IA (United States); Anderson, E.W.; Hauptman, J. [Iowa State Univ., Ames, IA (United States); Antchev, G.; Arcidy, M.; Hazen, E.; Heister, A.; Lawlor, C.; Lazic, D.; Machado, E.; Posch, C.; Rohlf, J.; Sulak, L.; Varela, F.; Wu, S.X. [Boston Univ., MA (United States); Aydin, S.; Bakirci, M.N.; Cerci, S.; Dumanoglu, I.; Erturk, S.; Eskut, E.; Kayis-Topaksu, A.; Onengut, G.; Ozkurt, H.; Polatoz, A.; Sogut, K. [and others

    2008-10-15

    The Outer Hadron Calorimeter (HCAL HO) of the CMS detector is designed to measure the energy that is not contained by the barrel (HCAL HB) and electromagnetic (ECAL EB) calorimeters. Due to space limitations the barrel calorimeters do not completely contain the hadronic shower, and an outer calorimeter (HO) was designed, constructed and inserted in the muon system of CMS to measure the energy leakage. Testing and calibration of the HO were carried out in a 300 GeV/c test beam, which improved the linearity and resolution. HO will provide a net improvement in missing E{sub T} measurements at LHC energies. Information from HO will also be used for the muon trigger in CMS. (orig.)

  17. EGRET Unidentified Source Radio Observations and Performance of Receiver Gain Calibration

    International Nuclear Information System (INIS)

    Niinuma, Kotaro; Asuma, Kuniyuki; Kuniyoshi, Masaya; Matsumura, Nobuo; Takefuji, Kazuhiro; Kida, Sumiko; Takeuchi, Akihiko; Ichikawa, Hajime; Sawano, Akihiro; Yoshimura, Naoya; Suzuki, Shigehiro; Nakamura, Ryosuke; Nakayama, Yu; Daishido, Tsuneaki

    2006-01-01

    Last year, we developed a receiver gain calibration system based on Johnson-Nyquist noise for accurate flux measurement, as part of the radio identification program for transient radio sources, blazars and radio counterparts of Energetic Gamma Ray Experiment Telescope (EGRET) unidentified γ-ray sources started at the Waseda Nasu Pulsar Observatory. Because the system monitors the receiver gain and the ambient temperature around the receiver throughout the day, periods in which the two are poorly correlated can be identified and attributed to receiver trouble. The estimated fluctuations in the daily data of steady sources decrease when such low-correlation data are removed before analysis. Using this system, the radio counterpart of an EGRET identified source showed a fading light curve over a week
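
    The calibration principle can be sketched from the Johnson-Nyquist relation P = kTB: a matched load at known temperature injects a calculable noise power, and the ratio of the digitized output power to kTB gives the receiver gain. The values below are illustrative, not those of the Waseda system:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_power(t_kelvin, bandwidth_hz):
    """Available thermal noise power P = k*T*B from a matched resistor."""
    return K_B * t_kelvin * bandwidth_hz

def receiver_gain_db(measured_power_w, t_kelvin, bandwidth_hz):
    """Gain inferred by comparing digitized output power with k*T*B."""
    return 10.0 * math.log10(measured_power_w / johnson_noise_power(t_kelvin, bandwidth_hz))

p_ref = johnson_noise_power(290.0, 20e6)   # ~8e-14 W for a 290 K load over 20 MHz
```

    Tracking the gain this way throughout the day is what allows gain drift to be separated from genuine source variability.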

  18. Enhancing the Benefit of the Chemical Mixture Methodology: A Report on Methodology Testing and Potential Approaches for Improving Performance

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xiao-Ying; Yao, Juan; He, Hua; Glantz, Clifford S.; Booth, Alexander E.

    2012-01-01

    Extensive testing shows that the current version of the Chemical Mixture Methodology (CMM) is meeting its intended mission to provide conservative estimates of the health effects of exposure to airborne chemical mixtures. However, the current version of the CMM could benefit from several enhancements designed to improve its application of Health Code Numbers (HCNs) and to employ weighting factors to reduce over-conservatism.

  19. Acceleration-based methodology to assess the blast mitigation performance of explosive ordnance disposal helmets

    Science.gov (United States)

    Dionne, J. P.; Levine, J.; Makris, A.

    2018-01-01

    To design the next generation of blast mitigation helmets that offer increasing levels of protection against explosive devices, manufacturers must be able to rely on appropriate test methodologies and human surrogates that will differentiate the performance level of various helmet solutions and ensure user safety. Ideally, such test methodologies and associated injury thresholds should be based on widely accepted injury criteria relevant within the context of blast. Unfortunately, even though significant research has taken place over the last decade in the area of blast neurotrauma, there currently exists no agreement in terms of injury mechanisms for blast-induced traumatic brain injury. In absence of such widely accepted test methods and injury criteria, the current study presents a specific blast test methodology focusing on explosive ordnance disposal protective equipment, involving the readily available Hybrid III mannequin, initially developed for the automotive industry. The unlikely applicability of the associated brain injury criteria (based on both linear and rotational head acceleration) is discussed in the context of blast. Test results encompassing a large number of blast configurations and personal protective equipment are presented, emphasizing the possibility to develop useful correlations between blast parameters, such as the scaled distance, and mannequin engineering measurements (head acceleration). Suggestions are put forward for a practical standardized blast testing methodology taking into account limitations in the applicability of acceleration-based injury criteria as well as the inherent variability in blast testing results.
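
    The "scaled distance" referred to above is conventionally the Hopkinson-Cranz quantity Z = R/W^(1/3), against which mannequin head accelerations can be correlated. A minimal sketch, with an invented standoff and charge mass:

```python
def scaled_distance(standoff_m, charge_kg):
    """Hopkinson-Cranz scaled distance Z = R / W**(1/3), in m/kg^(1/3)."""
    return standoff_m / charge_kg ** (1.0 / 3.0)

z = scaled_distance(4.0, 8.0)  # 4 m from an 8 kg TNT-equivalent charge
```

    Equal scaled distances produce similar blast-wave parameters, which is what makes Z a useful abscissa for the acceleration correlations the study proposes.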

  20. Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays

    National Research Council Canada - National Science Library

    Yang, Kyoung

    2005-01-01

    This final report summarizes the progress during the Phase I SBIR project entitled "Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays...

  1. The geological model calibration - Learnings from integration of reservoir geology and field performance - Example from the upper carboniferous reservoirs of the Southern North Sea

    NARCIS (Netherlands)

    Moscariello, A.; Hoof, T.B. van; Kunakbayeva, G.; Veen, J.H. ten; Belt, F. van den; Twerda, A.; Peters, L.; Davis, P.; Williams, H.

    2013-01-01

    The Geological Model Calibration - Learnings from Integration of Reservoir Geology and Field Performance: example from the Upper Carboniferous Reservoirs of the Southern North Sea. Copyright © (2012) by the European Association of Geoscientists & Engineers All rights reserved.

  2. Demonstration of a performance assessment methodology for high-level radioactive waste disposal in basalt formations

    International Nuclear Information System (INIS)

    Bonano, E.J.; Davis, P.A.; Shipers, L.R.; Brinster, K.F.; Beyler, W.E.; Updegraff, C.D.; Shepherd, E.R.; Tilton, L.M.; Wahi, K.K.

    1989-06-01

    This document describes a performance assessment methodology developed for a high-level radioactive waste repository mined in deep basalt formations. This methodology is an extension of an earlier one applicable to bedded salt. The differences between the two methodologies arise primarily in the modeling of ground-water flow and radionuclide transport. Bedded salt was assumed to be a porous medium, whereas basalt formations contain fractured zones. Therefore, mathematical models and associated computer codes were developed to simulate the aforementioned phenomena in fractured media. The use of the methodology is demonstrated at a hypothetical basalt site by analyzing seven scenarios: (1) thermohydrological effects caused by heat released from the repository, (2) mechanohydrological effects caused by an advancing and receding glacier, (3) normal ground-water flow, (4) pumping of ground water from a confined aquifer, (5) rerouting of a river near the repository, (6) drilling of a borehole through the repository, and (7) formation of a new fault intersecting the repository. The normal ground-water flow was considered the base-case scenario. This scenario was used to perform uncertainty and sensitivity analyses and to demonstrate the existing capabilities for assessing compliance with the ground-water travel time criterion and the containment requirements. Most of the other scenarios were considered perturbations of the base case, and a few were studied in terms of changes with respect to initial conditions. The potential impact of these scenarios on the long-term performance of the disposal system was ascertained through comparison with the base-case scenario or the undisturbed initial conditions. 66 refs., 106 figs., 27 tabs

  3. A global fouling factor methodology for analyzing steam generator thermal performance degradation

    International Nuclear Information System (INIS)

    Kreider, M.A.; White, G.A.; Varrin, R.D.

    1998-01-01

    Over the past few years, steam generator (SG) thermal performance degradation has led to decreased plant efficiency and power output at numerous PWR nuclear power plants with recirculating-type SGs. The authors have developed and implemented methodologies for quantitatively evaluating the various sources of SG performance degradation, both internal and external to the SG pressure boundary. These methodologies include computation of the global fouling factor history, evaluation of secondary deposit thermal resistance using deposit characterization data, and consideration of pressure loss causes unrelated to the tube bundle, such as hot-leg temperature streaming and SG moisture separator performance. In order to evaluate the utility of the global fouling factor methodology, the authors performed case studies for a number of PWR SG designs. Key results from two of these studies are presented here. Uncertainty analyses were performed to determine whether the calculated fouling factor for each plant represented significant fouling or whether uncertainty in key variables (e.g., steam pressure or feedwater flow rate) could be responsible for calculated fouling. The methodology was validated using two methods: by predicting the SG pressure following chemical cleaning at San Onofre 2 and also by performing a sensitivity study with the industry-standard thermal-hydraulics code ATHOS to investigate the effects of spatially varying tube scale distributions. This study indicated that the average scale thickness has a greater impact on fouling than the spatial distribution, showing that the assumption of uniform resistance inherent to the global fouling factor is reasonable. In tandem with the fouling-factor analyses, a study evaluated the potential causes of pressure loss for each plant. The combined results of the global fouling factor calculations and the pressure loss evaluations demonstrated two key points: 1) that the available thermal margin against fouling, which can vary substantially from plant to plant, has an important bearing on whether a given plant exhibits losses in electrical generating capacity, and 2) that a wide variety of causes can result in SG thermal performance degradation.

  4. Application of the Biosphere Assessment Methodology to the ENRESA, 1997 Performance and Safety Assessment

    International Nuclear Information System (INIS)

    Pinedo, P.; Simon, I.; Aguero, A.

    1998-01-01

    For several years CIEMAT has been developing for ENRESA knowledge and tools to support the modelling of the migration and accumulation of radionuclides within the biosphere once those radionuclides are released into, or reach, one or more parts of the biosphere (atmosphere, water bodies or soils). The model development also includes evaluation of the radiological impacts arising from the resulting distribution of radionuclides in the biosphere. In 1996, a Methodology to analyse the biosphere in this context was proposed to ENRESA. The level of development of the different aspects proposed within the Methodology was quite heterogeneous: while aspects of radionuclide transport modelling were already well developed in theoretical and practical terms, other aspects, such as the procedure for conceptual model development and the description of biosphere systems representative of the long term, needed further development. At present, the International Atomic Energy Agency (IAEA) Programme on Biosphere Modelling and Assessment (BIOMASS), in collaboration with several national organizations, ENRESA and CIEMAT among them, is working to complete and augment the Reference Biosphere Methodology and to produce practical descriptions of Reference Systems. The overall purpose of this document is to apply the Methodology, taking account of on-going developments in biosphere modelling, to the last performance assessment (PA) exercise made by ENRESA (ENRESA, 1997), using from it the general and particular information about the assessment context, radionuclide information, and geosphere and geosphere-biosphere interface data. There are three particular objectives to this work: (a) to determine the practicability of the Methodology in an application to a realistic assessment situation; (b) to compare and contrast it with previous biosphere modelling in HLW PA; and (c) to test software developments related to data management and modelling. (Author) 42 refs

  5. Assessing the Impact of Clothing and Individual Equipment (CIE) on Soldier Physical, Biomechanical, and Cognitive Performance Part 1: Test Methodology

    Science.gov (United States)

    2018-02-01


  6. Quick Green Scan: A Methodology for Improving Green Performance in Terms of Manufacturing Processes

    Directory of Open Access Journals (Sweden)

    Aldona Kluczek

    2017-01-01

    Full Text Available The heating sector has begun implementing technologies and practices to tackle the environmental and socio-economic problems caused by its production processes. The purpose of this paper is to develop a methodology, the "Quick-Green-Scan", that caters to decision-makers' need for a quick assessment when improving green manufacturing performance in companies that produce heating devices. The study uses a structured approach that integrates Life Cycle Assessment-based indicators, a framework and linguistic scales (fuzzy numbers) to evaluate the extent of greening of the enterprise. The evaluation criteria and indicators are closely related to the current state of technology, which can be improved. The proposed methodology has been created to answer the question of whether a company acts on the opportunity to be green and whether these actions contribute towards greening, maintain the status quo or move away from a green outcome. Results show that applying the proposed process improvements helps move the facility towards being a green enterprise. Moreover, the methodology, being particularly quick and simple, is a practical tool for benchmarking, not only in the heating industry, but also for comparing facility performance in other manufacturing sectors.

  7. Demonstration of a performance assessment methodology for nuclear waste isolation in basalt formations

    International Nuclear Information System (INIS)

    Bonano, E.J.; Davis, P.A.

    1988-01-01

    This paper summarizes the results of the demonstration of a performance assessment methodology developed by Sandia National Laboratories, Albuquerque for the US Nuclear Regulatory Commission for use in the analysis of high-level radioactive waste disposal in deep basalts. Seven scenarios that could affect the performance of a repository in basalts were analyzed. One of these scenarios, normal ground-water flow, was called the base-case scenario. This was used to demonstrate the modeling capabilities in the methodology necessary to assess compliance with the ground-water travel time criterion. The scenario analysis consisted of both scenario screening and consequence modeling. Preliminary analyses of scenarios considering heat released from the waste and the alteration of the hydraulic properties of the rock mass due to loads created by a glacier suggested that these effects would not be significant. The analysis of other scenarios indicated that those changing the flow field in the vicinity of the repository would have an impact on radionuclide discharges, while changes far from the repository may not be significant. The analysis of the base-case scenario was used to show the importance of matrix diffusion as a radionuclide retardation mechanism in fractured media. The demonstration of the methodology also included an overall sensitivity analysis to identify important parameters and/or processes. 15 refs., 13 figs., 2 tabs

  8. THE PERFORMANCE ANALYSIS OF A PACKED COLUMN : CALIBRATION OF AN ORIFICE

    Directory of Open Access Journals (Sweden)

    Aynur ŞENOL

    2003-01-01

    Full Text Available Investigations to develop data for this study were made using a pilot-scale glass column of 9 cm inside diameter, randomly filled to a depth of 1.90 cm with Raschig-type rings of slightly modified geometry. The geometrical characteristics of the packing are: total area of a single particle ad = 2.3 cm2; specific area ap = 10.37 cm2/cm3; voidage ε = 0.545 m3/m3. The efficiency tests were run using the trichloroethylene/n-heptane system under total reflux conditions. Using modified versions of the Eckert flooding model and the Bravo effective area (ae) approach, as well as the Onda wetted area (aw) and individual mass transfer coefficient models, the packing efficiency has been estimated theoretically. This article also deals with design strategies for a randomly packed column. Emphasis is mainly placed on formulating an algorithm for designing a pilot-scale column through models attributed to the film theory. Using the dry pressure drop properties of the column with air flowing through it, a generalized flow rate approach has been achieved for calibrating an orifice through which the air passes.
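
    The orifice calibration rests on the standard incompressible orifice equation relating volumetric flow to pressure drop. A hedged sketch (the discharge coefficient and the numbers in the test are illustrative, not the paper's data):

```python
import math

def orifice_flow(discharge_coeff, area_m2, dp_pa, density_kg_m3):
    """Volumetric flow through an orifice, Q = Cd * A * sqrt(2*dP/rho),
    in m^3/s for SI inputs."""
    return discharge_coeff * area_m2 * math.sqrt(2.0 * dp_pa / density_kg_m3)
```

    Calibrating the orifice then amounts to fitting the effective discharge coefficient Cd from measured (flow, pressure drop) pairs.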

  9. Methodology to extract of humic substances of lombricompost and evaluation of their performance

    International Nuclear Information System (INIS)

    Torrente Trujillo, Armando; Gomez Zambrano, Jairo

    1995-01-01

    The present work was developed at the Facultad de Ciencias Agropecuarias of the Universidad Nacional de Colombia, located in Palmira, Valle del Cauca. The research consisted in developing an appropriate methodology to extract the humic substances contained in lombricompost and, on the other hand, in evaluating the yield in organic carbon of the fulvic and humic acids. The lombricompost sources consisted of organic matter such as cow dung, filter press cake, coffee pulp and Paspalum notatum, with and without application of lime. The proposed methodology comprises sixteen steps, which are completely described in the work. The method also showed that the lombricompost is richer in humic acids than in fulvic ones; besides, among the four sources used in the experiment, filter press cake was different from, and higher in carbon yield than, coffee pulp and Paspalum notatum

  10. Reference Performance Test Methodology for Degradation Assessment of Lithium-Sulfur Batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Stroe, Daniel-Ioan; Purkayastha, Rajlakshmi

    2018-01-01

    Lithium-Sulfur (Li-S) is an emerging battery technology receiving a growing amount of attention due to its potentially high gravimetric energy density, safety, and low production cost. However, there are still some obstacles preventing its swift commercialization. Li-S batteries are driven by different electrochemical processes than commonly used Lithium-ion batteries, which often results in very different behavior. Therefore, the testing and modeling of these systems have to be adjusted to reflect their unique behavior and to prevent possible bias. A methodology for a Reference Performance Test (RPT) for Li-S batteries is proposed in this study to point out Li-S battery features, provide guidance to users on how to deal with them, and possibly feed into standardization. The proposed test methodology is demonstrated for 3.4 Ah Li-S cells aged under different conditions.

  11. Development of a methodology for the evaluation of radiation protection performance and management in nuclear power plants

    International Nuclear Information System (INIS)

    Schieber, Caroline; Bataille, Celine; Cordier, Gerard; Delabre, Herve; Jeannin, Bernard

    2008-01-01

    This paper describes a specific methodology adopted by Electricite de France to perform the evaluation of radiation protection performance and management within its 19 nuclear power plants. The results obtained in 2007 are summed up. (author)

  12. Generalized Characterization Methodology for Performance Modelling of Lithium-Ion Batteries

    DEFF Research Database (Denmark)

    Stroe, Daniel Loan; Swierczynski, Maciej Jozef; Stroe, Ana-Irina

    2016-01-01

    Lithium-ion (Li-ion) batteries are complex energy storage devices whose performance behavior is highly dependent on the operating conditions (i.e., temperature, load current, and state-of-charge (SOC)). Thus, in order to evaluate their techno-economic viability for a certain application, detailed information about Li-ion battery performance behavior becomes necessary. This paper proposes a comprehensive seven-step methodology for laboratory characterization of Li-ion batteries, in which the battery's performance parameters (i.e., capacity, open-circuit voltage (OCV), and impedance) are determined and their dependence on the operating conditions is obtained. Furthermore, this paper proposes a novel hybrid procedure for parameterizing the batteries' equivalent electrical circuit (EEC), which is used to emulate the batteries' dynamic behavior. Based on this novel parameterization procedure, the performance model...
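
    An equivalent electrical circuit of the kind the paper parameterizes is commonly a Thevenin model: an OCV source, a series resistance, and one or more RC branches. A minimal first-order sketch (parameter values are illustrative; the paper's actual EEC topology and hybrid parameterization procedure are not reproduced here):

```python
def simulate_thevenin(ocv, r0, r1, c1, current, dt, steps):
    """Terminal-voltage response of a first-order Thevenin EEC to a
    constant discharge current (positive current = discharge)."""
    v_rc = 0.0            # voltage across the RC branch
    tau = r1 * c1         # RC time constant
    voltages = []
    for _ in range(steps):
        # RC branch relaxes toward i*R1 with time constant tau (explicit Euler)
        v_rc += dt * (current * r1 - v_rc) / tau
        voltages.append(ocv - current * r0 - v_rc)
    return voltages
```

    Fitting r0, r1 and c1 to measured pulse responses at each temperature and SOC is the kind of parameterization step the seven-step methodology systematizes.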

  13. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    Full Text Available The Scheimpflug camera offers a wide range of applications in the fields of typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because its depth of field can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case, because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, and PCIM are basically equal, while the accuracy of the GNIM is slightly lower compared with the other three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  14. Field calibration of cup anemometers

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten

    2007-01-01

    A field calibration method and the results obtained are described, along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10-m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited...
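
    Calibration against a reference anemometer typically reduces to fitting a linear transfer function (gain and offset) between the test and reference wind speeds. A sketch of that fit (this is generic least squares, not necessarily the exact procedure of [1]):

```python
def linear_calibration(reference, test):
    """Least-squares gain and offset so that reference ≈ gain*test + offset."""
    n = len(reference)
    mean_x = sum(test) / n
    mean_y = sum(reference) / n
    sxx = sum((x - mean_x) ** 2 for x in test)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(test, reference))
    gain = sxy / sxx
    offset = mean_y - gain * mean_x
    return gain, offset
```

    The residual scatter of this fit is one way to quantify the statistical uncertainty the abstract refers to.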

  15. A methodology for energy performance classification of residential building stock of Hamirpur

    Directory of Open Access Journals (Sweden)

    Aniket Sharma

    2017-12-01

    Full Text Available In India, various codes, standards, guidelines and rating systems have been launched to make energy-intensive and large-sized buildings energy efficient, whereas independent residential buildings, which account for most of the total housing stock, are not covered. This paper presents a case study methodology for energy performance assessment of the existing residential stock of Hamirpur that can be used to develop suitable energy efficiency regulations. The paper discusses the trend of residential development in Hamirpur, followed by a classification based on usage, condition, predominant material use, ownership, size and number of rooms, source of lighting, assets available, number of storeys and plot sizes, using primary and secondary data. This results in the identification of the predominant materials used and other characteristics in each of the urban and rural areas. Further, the cradle-to-site embodied energy index of the dominant building materials and of their commercially available alternatives is calculated from secondary literature and by computing transportation energy. One representative existing building is selected in each of the urban and rural areas, and its energy performance is evaluated for material embodied energy and operational energy using simulation. Alternatives are then developed based on the other dominant materials in each area and evaluated for the change in embodied and operational energy. This paper identifies the energy performance of representative houses for both areas and in no way advocates the preference of one type over another. The paper demonstrates a methodology by which the energy performance assessment of houses can be done and also highlights further research.
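
    The cradle-to-site index described above combines the production (embodied) energy of each material with the energy of hauling it to the site. A minimal sketch of that accounting (the coefficients in the test are placeholders, not the paper's data):

```python
def cradle_to_site_energy(materials, transport_mj_per_kg_km, distance_km):
    """Cradle-to-site energy in MJ: material embodied energy plus transport.
    materials: list of (mass_kg, embodied_mj_per_kg) tuples."""
    production = sum(mass * index for mass, index in materials)
    total_mass = sum(mass for mass, _ in materials)
    transport = total_mass * transport_mj_per_kg_km * distance_km
    return production + transport
```

    Comparing this total for the as-built house against each material alternative is the substitution exercise the paper performs.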

  16. Wind Energy Development in India and a Methodology for Evaluating Performance of Wind Farm Clusters

    Directory of Open Access Journals (Sweden)

    Sanjeev H. Kulkarni

    2016-01-01

    Full Text Available With the maturity of advanced technologies and the urgent requirement of maintaining a healthy environment at a reasonable price, India is moving towards generating electricity from renewable resources. Wind energy production, with its relatively safe and positive environmental characteristics, has evolved from a marginal activity into a multibillion dollar industry today. Wind energy power plants, also known as wind farms, comprise multiple wind turbines. Though there are several wind farm clusters producing energy in different geographical locations across the world, evaluating their performance is a complex task and an important focus for stakeholders. In this work an attempt is made to estimate the performance of wind clusters employing a multicriteria approach. Multiple factors that affect wind farm operations are analyzed by taking experts' opinions, and a performance ranking of the wind farms is generated. The weights of the selection criteria are determined by pairwise comparison matrices of the Analytic Hierarchy Process (AHP). The proposed methodology evaluates wind farm performance based on technical, economic, environmental, and sociological indicators. Both qualitative and quantitative parameters were considered. Empirical data were collected through questionnaires from the selected wind farms of Belagavi district in the Indian state of Karnataka. The proposed methodology is a useful tool for cluster analysis.
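
    The AHP step above derives criterion weights from a reciprocal pairwise comparison matrix. A common approximation normalizes the row geometric means (a sketch; the paper may instead use the exact principal-eigenvector method):

```python
import math

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix,
    approximated by normalized row geometric means."""
    n = len(pairwise)
    geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]
```

    For a consistent matrix the geometric-mean approximation coincides with the eigenvector weights, as in the two-criterion example below.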

  17. Calibration of detectors type CR-39 for methodology implementation for Radon-222 determination in CRCN-NE, Brazil; Calibração de detectores do tipo CR-39 para implementação da metodologia de determinação de radônio-222 no CRCN-NE

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Karolayne E.M. da; Santos, Mariana L. de O.; Amaral, Déric S. do; Vilela, Eldice C.; França, Elvis J. de; Hazin, Clovis A.; Farias, Emerson E.G. de, E-mail: keesthefany@gmail.com, E-mail: marianasantos_ufpe@hotmail.com, E-mail: dericsoares@gmail.com, E-mail: ecvilela@cnen.gov.br, E-mail: ejfrana@cnen.gov.br, E-mail: chazin@cnen.gov.br, E-mail: emersonemiliamo@yahoo.com.br [Centro Regional de Ciências Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2017-07-01

    Radon-222 is a radioactive gas, a product of the decay of uranium-238, which emits alpha particles and accounts for more than 50% of the dose of natural radiation received by the population. Therefore, monitoring of this gas is essential. For indoor measurements, solid state detectors can be used, the most common of which is CR-39. For monitoring using CR-39, alpha particles generated by radon-222 and its daughter radionuclides strike the surface of the detector and generate tracks. To relate the track density per exposed area to environments with unknown activity concentration, it is necessary to determine the calibration factor. The objective of this study was to calibrate CR-39 type detectors for the implementation of the radon determination methodology at the Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE) of the Brazilian Nuclear Energy Commission (CNEN). In order to determine the CR-39 calibration factor, 19 exposures of the detectors were performed in the CRCN-NE calibration chamber (RN1-CRCN) at an activity concentration of 5.00 kBq m^-3, with the exposure time varying from 24 to 850 hours. For the etching of the detectors, sodium hydroxide was used in a thermostatic bath at 90 °C for 5 hours. The count of the number of tracks per field was performed with the aid of optical microscopy at a magnification of 100 times, 30 fields being read per dosimeter. As a result, the calibration factor was obtained, and a linear response of the track density as a function of exposure was observed. The results allow the use of CR-39 in the determination of radon-222 by CRCN-NE.
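
    The calibration factor sought here is the slope of the linear track-density-versus-exposure response. A sketch of that regression forced through the origin (the numbers in the test are illustrative, not the CRCN-NE results):

```python
def calibration_factor(track_densities, exposures):
    """Slope k of track density (tracks/cm^2) versus exposure (kBq*h/m^3),
    forced through the origin: least squares k = sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in zip(exposures, track_densities))
    sxx = sum(x * x for x in exposures)
    return sxy / sxx
```

    With k known, an unknown indoor radon concentration follows from a measured track density and the exposure time.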

  18. Polydiagnostic calibration performed on a low pressure surface wave sustained argon plasma

    International Nuclear Information System (INIS)

    Vries, N de; Iordanova, E I; Van Veldhuizen, E M; Mullen, J J A M van der; Palomares, J M

    2008-01-01

    The electron density and electron temperature of a low pressure surface wave sustained argon plasma have been determined using passive and active (laser) spectroscopic methods simultaneously. In this way the validity of the various techniques is established while the plasma properties are determined more precisely. The electron density, n_e, is determined with Thomson scattering (TS), absolute continuum measurements, Stark broadening and an extrapolation of the atomic state distribution function (ASDF). The electron temperature, T_e, is obtained using TS and absolute line intensity (ALI) measurements combined with a collisional-radiative (CR) model for argon. At an argon pressure of 15 mbar, the n_e values obtained with TS and Stark broadening agree with each other within the error bars and are equal to (4 ± 0.5) x 10^19 m^-3, whereas the n_e value of (2 ± 0.5) x 10^19 m^-3 obtained from the continuum is about 30% lower. This suggests that the formula and cross-section values used for the continuum method have to be reconsidered. The electron density determined by means of extrapolation of the ASDF to the continuum is too high (~10^20 m^-3). This is most probably related to the fact that the plasma is strongly ionizing, so that the extrapolation method is not justified. At 15 mbar, the T_e values obtained with TS are equal to 13 400 ± 1100 K, while the ALI/CR-model yields an electron temperature that is about 10% lower. It can be concluded that the passive results are in good or fair agreement with the active results. Therefore, the calibrated passive methods can be applied to other plasmas in a similar regime for which active diagnostic techniques cannot be used.

  19. Polydiagnostic calibration performed on a low pressure surface wave sustained argon plasma

    Energy Technology Data Exchange (ETDEWEB)

    Vries, N de; Iordanova, E I; Van Veldhuizen, E M; Mullen, J J A M van der [Department of Applied Physics, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven (Netherlands); Palomares, J M [Departamento de Fisica, Universidad de Cordoba, Campus de Rabanales, ed. C-2, 14071 Cordoba (Spain)], E-mail: j.j.a.m.v.d.Mullen@tue.nl

    2008-10-21

    The electron density and electron temperature of a low pressure surface wave sustained argon plasma have been determined using passive and active (laser) spectroscopic methods simultaneously. In this way the validity of the various techniques is established while the plasma properties are determined more precisely. The electron density, n_e, is determined with Thomson scattering (TS), absolute continuum measurements, Stark broadening and an extrapolation of the atomic state distribution function (ASDF). The electron temperature, T_e, is obtained using TS and absolute line intensity (ALI) measurements combined with a collisional-radiative (CR) model for argon. At an argon pressure of 15 mbar, the n_e values obtained with TS and Stark broadening agree with each other within the error bars and are equal to (4 ± 0.5) x 10^19 m^-3, whereas the n_e value of (2 ± 0.5) x 10^19 m^-3 obtained from the continuum is about 30% lower. This suggests that the formula and cross-section values used for the continuum method have to be reconsidered. The electron density determined by means of extrapolation of the ASDF to the continuum is too high (~10^20 m^-3). This is most probably related to the fact that the plasma is strongly ionizing, so that the extrapolation method is not justified. At 15 mbar, the T_e values obtained with TS are equal to 13 400 ± 1100 K, while the ALI/CR-model yields an electron temperature that is about 10% lower. It can be concluded that the passive results are in good or fair agreement with the active results. Therefore, the calibrated passive methods can be applied to other plasmas in a similar regime for which active diagnostic techniques cannot be used.

  20. FPGA hardware acceleration for high performance neutron transport computation based on agent methodology - 318

    International Nuclear Information System (INIS)

    Shanjie, Xiao; Tatjana, Jevremovic

    2010-01-01

    The accurate, detailed and 3D neutron transport analysis for Gen-IV reactors is still time-consuming regardless of the advanced computational hardware available in developed countries. This paper introduces a new concept in addressing the computational time while preserving the detailed and accurate modeling: a specifically designed FPGA co-processor accelerates the robust AGENT methodology for complex reactor geometries. For the first time this approach is applied to accelerate neutronics analysis. The AGENT methodology solves the neutron transport equation using the method of characteristics. The performance of the AGENT methodology was carefully analyzed before the hardware design based on the FPGA co-processor was adopted. The most time-consuming kernel part was then transplanted into the FPGA co-processor. The FPGA co-processor is designed with a data-flow-driven, non-von Neumann architecture and has much higher efficiency than the conventional computer architecture. Details of the FPGA co-processor design are introduced, and the design is benchmarked using two different examples. The advanced chip architecture helps the FPGA co-processor obtain a speedup of more than 20 times, with its working frequency much lower than the CPU frequency. (authors)
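
    The method-of-characteristics kernel that was moved onto the FPGA sweeps angular flux along ray segments; for a flat source within a segment, the per-segment update has a closed form. A sketch of that textbook relation (this is not AGENT's actual kernel code):

```python
import math

def moc_segment(psi_in, sigma_t, source, length):
    """Angular flux leaving a flat-source segment of a characteristic:
    psi_out = psi_in * exp(-sigma_t*L) + (q/sigma_t) * (1 - exp(-sigma_t*L))."""
    attenuation = math.exp(-sigma_t * length)
    return psi_in * attenuation + (source / sigma_t) * (1.0 - attenuation)
```

    It is precisely this short, repetitive update, evaluated billions of times across rays and angles, that makes a data-flow-driven co-processor attractive.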

  1. Methodology for thermal hydraulic conceptual design and performance analysis of KALIMER core

    International Nuclear Information System (INIS)

    Young-Gyun Kim; Won-Seok Kim; Young-Jin Kim; Chang-Kue Park

    2000-01-01

    This paper summarizes in detail the methodology for thermal hydraulic conceptual design and performance analysis used for the KALIMER core, especially the preliminary methodology for flow grouping and peak pin temperature calculation. The major technical results of the conceptual design for the KALIMER 98.03 core are shown and compared with those of the KALIMER 97.07 design core. The KALIMER 98.03 design core proved to be more optimized than the 97.07 design core: the number of flow groups was reduced from 16 to 11, and the equalized peak cladding midwall temperature from 654 deg. C to 628 deg. C. This was achieved through the nuclear and thermal hydraulic design optimization study, i.e. core power flattening and an increase of the radial blanket power fraction. Coolant flow distribution to the assemblies and core coolant/component temperatures should be determined in core thermal hydraulic analysis. Sodium flow is distributed to core assemblies with the overall goal of equalizing the peak cladding midwall temperatures for the peak temperature pin of each bundle, and thus pin cladding damage accumulation and pin reliability. The flow grouping and the peak pin temperature calculation for the preliminary conceptual design are performed with the modules ORFCE-F60 and ORFCE-T60, respectively. The basic subchannel analysis will be performed with the SLTHEN code, and the detailed subchannel analysis will be done with the MATRA-LMR code, which is under development for the K-Core system. This methodology proved practical for KALIMER core thermal hydraulic design in the related benchmark calculation studies, and it is used for the KALIMER core thermal hydraulic conceptual design. (author)

  2. Providing hierarchical approach for measuring supply chain performance using AHP and DEMATEL methodologies

    Directory of Open Access Journals (Sweden)

    Ali Najmi

    2010-06-01

    Full Text Available Measuring the performance of a supply chain is normally a function of various parameters. Such a problem often involves a multiple criteria decision making (MCDM) problem where different criteria need to be defined and calculated properly. During the past two decades, the Analytic Hierarchy Process (AHP) and DEMATEL have been among the most popular MCDM approaches for prioritizing various attributes. This paper uses a new methodology, a combination of AHP and DEMATEL, to rank the various parameters affecting the performance of the supply chain. DEMATEL is used for understanding the relationships between the comparison metrics, and AHP is used for their integration to provide a value for the overall performance.
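
    DEMATEL captures both direct and indirect influence between criteria: the direct-influence matrix is normalized and the total-relation matrix T = D(I - D)^-1 is formed, which equals the power series D + D^2 + D^3 + ... A dependency-free sketch using that series (the small matrix in the test is an illustrative example, not the paper's data):

```python
def matmul(a, b):
    """Plain list-of-lists matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def dematel_total_relation(direct, terms=200):
    """Total-relation matrix T = D + D^2 + ..., where D is the direct-
    influence matrix normalized by its largest row sum (series converges
    when the spectral radius of D is below 1, the usual DEMATEL case)."""
    n = len(direct)
    scale = max(sum(row) for row in direct)
    d = [[x / scale for x in row] for row in direct]
    total = [row[:] for row in d]
    power = [row[:] for row in d]
    for _ in range(terms):
        power = matmul(power, d)
        total = [[total[i][j] + power[i][j] for j in range(n)] for i in range(n)]
    return total
```

    Row and column sums of T then yield the prominence and cause/effect indicators used to structure the AHP hierarchy.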

  3. PROCAL: A Set of 40 Peptide Standards for Retention Time Indexing, Column Performance Monitoring, and Collision Energy Calibration.

    Science.gov (United States)

    Zolg, Daniel Paul; Wilhelm, Mathias; Yu, Peng; Knaute, Tobias; Zerweck, Johannes; Wenschuh, Holger; Reimer, Ulf; Schnatbaum, Karsten; Kuster, Bernhard

    2017-11-01

    Beyond specific applications, such as the relative or absolute quantification of peptides in targeted proteomic experiments, synthetic spike-in peptides are not yet systematically used as internal standards in bottom-up proteomics. A number of retention time standards have been reported that enable chromatographic alignment of multiple LC-MS/MS experiments. However, only a few peptides are typically included in such sets, limiting the analytical parameters that can be monitored. Here, we describe PROCAL (ProteomeTools Calibration Standard), a set of 40 synthetic peptides that span the entire hydrophobicity range of tryptic digests, enabling not only accurate determination of retention time indices but also monitoring of chromatographic separation performance over time. The fragmentation characteristics of the peptides can also be used to calibrate and compare collision energies between mass spectrometers. The sequences of the selected peptides do not occur in any natural protein, thus eliminating the need for stable isotope labeling. We anticipate that this set of peptides will be useful for multiple purposes in individual laboratories and will also aid the transfer of data acquisition and analysis methods between laboratories, notably the use of spectral libraries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
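
    Retention time indexing with a standard set like PROCAL typically maps an analyte's observed retention time onto an index scale by interpolating between the bracketing standards. A generic sketch (the index values in the test are made up for illustration; PROCAL defines its own reference values):

```python
def retention_index(rt, standards):
    """Retention index of an analyte by linear interpolation between the
    two bracketing standards; standards = list of (rt, index) sorted by rt."""
    for (rt_lo, idx_lo), (rt_hi, idx_hi) in zip(standards, standards[1:]):
        if rt_lo <= rt <= rt_hi:
            frac = (rt - rt_lo) / (rt_hi - rt_lo)
            return idx_lo + frac * (idx_hi - idx_lo)
    raise ValueError("retention time outside the calibrated range")
```

    Because the indices are anchored to the spiked-in standards, they stay comparable across runs even when gradients or columns drift.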

  4. Exercise of laboratory comparison for contamination monitor calibration between LNMRI/IRD and LCR/UERJ - 2016

    International Nuclear Information System (INIS)

    Cabral, T.S.; David, M.

    2016-01-01

    This work was motivated by the need to decide on the best methodology to be applied in the next contamination monitor calibration comparisons within the Brazilian network of radiation monitor calibration laboratories. The calibration factor was chosen as the response quantity for the calibrations performed on the four monitors used in this comparison, because it does not require the area of the detector or probe, thereby eliminating an important variable. It was observed that variation of the positioning system may have an influence of up to 10% on the calibration. The results obtained for the calibration factor showed a difference of up to 31.2%. (author)

  5. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Reda, Ibrahim; Robinson, Justin

    2016-07-01

    Accurate solar radiation data sets are critical to reducing the expenses associated with mitigating performance risk for solar energy conversion systems, and they help utility planners and grid system operators understand the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of calibration methodologies and the resulting calibration responsivities provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these radiometers are calibrated indoors, and some are calibrated outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The reference radiometer calibrations are traceable to the World Radiometric Reference. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately assist in determining the uncertainties of the radiometer data and will assist in developing consensus on a standard for calibration.

  6. Evaluation of the energy dependence of pencil-type ionization chambers calibrated in standard computed tomography beams

    International Nuclear Information System (INIS)

    Fontes, Ladyjane Pereira; Potiens, Maria da Penha A.

    2015-01-01

    The Instrument Calibration Laboratory of IPEN (LCI - IPEN) performs calibrations of the pencil-type ionization chambers (ICs) used in dosimetric survey measurements on clinical Computed Tomography (CT) systems. Many users make mistakes when using a calibrated ionization chamber in their CT dosimetry systems. In this work, a methodology was established for determining beam-quality correction factors (Kq) from the calibration curve specific to each ionization chamber. Furthermore, it was possible to demonstrate the energy dependence of a pencil-type ionization chamber (IC) calibrated at the LCI - IPEN. (author)
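
    A minimal sketch of how a beam-quality correction factor might be read off a chamber-specific calibration curve, assuming simple linear interpolation between calibration points; the energies and Kq values below are hypothetical, not LCI-IPEN data.

```python
# Sketch: deriving a beam-quality correction factor Kq for a pencil-type
# ionization chamber by linear interpolation on its calibration curve.
# The energies and factors below are hypothetical, not LCI-IPEN data.

def interpolate_kq(curve, energy_keV):
    """Linearly interpolate Kq from (effective energy, Kq) calibration points."""
    pts = sorted(curve)
    for (e0, k0), (e1, k1) in zip(pts, pts[1:]):
        if e0 <= energy_keV <= e1:
            return k0 + (k1 - k0) * (energy_keV - e0) / (e1 - e0)
    raise ValueError("energy outside calibrated range")

# Hypothetical calibration curve: effective energy (keV) -> Kq
curve = [(40.0, 1.05), (60.0, 1.00), (80.0, 0.97), (100.0, 0.95)]
print(round(interpolate_kq(curve, 70.0), 3))
```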

  7. Application of the BEPU methodology to assess fuel performance in dry storage

    International Nuclear Information System (INIS)

    Feria, F.; Herranz, L.E.

    2017-01-01

    Highlights: • Application of the BEPU methodology to estimate the cladding stress in dry storage. • The stress predicted is notably affected by the irradiation history. • Improvements of FGR modelling would significantly enhance the stress estimates. • The prediction uncertainty should not be disregarded when assessing clad integrity. - Abstract: The stress to which fuel cladding is subjected in dry storage is the driving force of the main degradation mechanisms postulated (i.e., embrittlement due to radial hydride reorientation, and creep). A sound assessment is therefore mandatory to reliably predict fuel performance under the conditions prevailing in dry storage. Best-estimate calculations can be conducted with fuel rod thermo-mechanical codes. The precision of the predictions depends on the uncertainties affecting the calculation of the stress, so an upper bound of the stress can be determined by uncertainty analysis and compared to the safety limits set. The present work shows the application of the BEPU (Best Estimate Plus Uncertainty) methodology in this field. Concretely, radial hydride reorientation has been assessed based on stress predictions under challenging thermal conditions (400 °C) and a stress limit of 90 MPa. The computational tools used are FRAPCON-3xt (best estimate) and Dakota (uncertainty analysis). The methodology has been applied to a typical highly irradiated PWR fuel rod (65 GWd/tU) with different power histories. The study leads to the conclusion that neither the power history nor the prediction uncertainty should be disregarded when fuel rod integrity is evaluated in dry storage. On a probabilistic basis, a burnup of 60 GWd/tU is found to be an acceptable threshold even under the most challenging irradiation conditions considered.
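
    The BEPU idea of an upper-bound stress estimate can be sketched with a toy Monte Carlo. The stress model here is a thin-wall hoop-stress stand-in, not FRAPCON-3xt, and the parameter ranges are purely illustrative; the 59-run maximum is the classic first-order one-sided 95/95 Wilks bound commonly used in BEPU analyses.

```python
# Sketch of the BEPU idea: propagate input uncertainties through a
# (here, toy) stress model and take a one-sided 95/95 tolerance bound
# (Wilks' formula: with 59 runs, the sample maximum is a 95/95 bound).
# The stress model and parameter ranges are illustrative, not FRAPCON's.

import random

def toy_clad_stress(internal_pressure_MPa, radius_mm, thickness_mm):
    """Thin-wall hoop stress, sigma = p * r / t (a stand-in for the code's model)."""
    return internal_pressure_MPa * radius_mm / thickness_mm

random.seed(1)
samples = []
for _ in range(59):  # 59 runs -> first-order one-sided 95/95 bound (Wilks)
    p = random.uniform(8.0, 11.0)    # rod internal pressure at 400 C, MPa
    r = random.uniform(4.1, 4.2)     # cladding mean radius, mm
    t = random.uniform(0.55, 0.60)   # cladding thickness, mm
    samples.append(toy_clad_stress(p, r, t))

upper_bound = max(samples)
print(upper_bound < 90.0)  # compare against the 90 MPa reorientation limit; prints True
```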

  8. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    Science.gov (United States)

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
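
    The recommended meta-analysis on the logit scale can be sketched as follows, using a simple DerSimonian-Laird random-effects pooling; the study C-statistics and their logit-scale variances are hypothetical.

```python
# Sketch: pooling study-specific C-statistics on the logit scale, as the
# abstract recommends, with a simple DerSimonian-Laird random-effects
# model. Inputs (C-statistics and their variances) are hypothetical.

import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate on the analysis (logit) scale."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)

c_stats = [0.72, 0.68, 0.75, 0.70]        # hypothetical validation C-statistics
var_logit = [0.010, 0.012, 0.008, 0.015]  # hypothetical variances on the logit scale

pooled = inv_logit(dersimonian_laird([logit(c) for c in c_stats], var_logit))
print(round(pooled, 3))  # pooled C-statistic, back-transformed
```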

  9. Development of performance assessment methodology for establishment of quantitative acceptance criteria of near-surface radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Kim, C. R.; Lee, E. Y.; Park, J. W.; Chang, G. M.; Park, H. Y.; Yeom, Y. S. [Korea Hydro and Nuclear Power Co., Ltd., Seoul (Korea, Republic of)

    2002-03-15

    The contents and scope of this study are as follows: a review of the state of the art in the establishment of waste acceptance criteria at foreign near-surface radioactive waste disposal facilities; an investigation of radiological assessment methodologies and scenarios; an investigation of existing models and computer codes used in performance/safety assessment; the development of a draft performance assessment methodology to derive quantitative radionuclide acceptance criteria for a domestic near-surface disposal facility; and a preliminary performance/safety assessment in accordance with the developed methodology.

  10. Causality analysis in business performance measurement system using system dynamics methodology

    Science.gov (United States)

    Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah

    2014-07-01

    One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map with its unidirectional causality feature. Despite its apparent popularity, this causality assumption has been rigorously criticised by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using an econometric analysis, the Granger causality test, on 45 data points. However, the causality models were found to be insufficiently established, as only 40% of the causal linkages were supported by the data. Expert knowledge has been suggested for use where historical data are insufficient; the Delphi method was therefore selected and conducted to obtain consensus on the existence of causality among 15 selected experts through 3 rounds of questionnaires. This revealed that only 20% of the propositions were not supported. Both methods identified bidirectional causality, which demonstrates significant dynamic environmental complexity arising from interaction among the measures. A computer model and simulation using the System Dynamics (SD) methodology was therefore developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity and extreme-condition tests were conducted on the developed SD model to ensure its ability to mimic reality, its robustness, and its validity as a platform for causality analysis. This study applied a theoretical service management model within the BSC domain to a practical situation using the SD methodology, where very limited work has been done.

  11. Preliminary Evaluation Methodology of ECCS Performance for Design Basis LOCA Redefinition

    International Nuclear Information System (INIS)

    Kang, Dong Gu; Ahn, Seung Hoon; Seul, Kwang Won

    2010-01-01

    To improve its existing regulations, the USNRC has made efforts to develop risk-informed and performance-based regulation (RIPBR) approaches. As part of these efforts, a revision of 10CFR50.46 (ECCS Acceptance Criteria) is underway, considering options in four categories: the spectrum of break sizes, ECCS functional reliability, the ECCS evaluation model, and the ECCS acceptance criteria. Since the potential for safety benefits and reduction of unnecessary burden from design basis LOCA redefinition is high relative to the other options, the USNRC is proceeding with rulemaking for design basis LOCA redefinition. An instantaneous break with a flow rate equivalent to a double-ended guillotine break (DEGB) of the largest primary piping system in the plant is widely recognized as an extremely unlikely event, while redefinition of the design basis LOCA can affect existing regulatory practices and approaches. In this study, the status of the design basis LOCA redefinition and the OECD/NEA SMAP (Safety Margin Action Plan) methodology are introduced. A preliminary evaluation methodology of ECCS performance for LOCA is developed and discussed for design basis LOCA redefinition.

  12. Methodology for evaluating gloves in relation to the effects on hand performance capabilities: a literature review.

    Science.gov (United States)

    Dianat, Iman; Haslegrave, Christine M; Stedmon, Alex W

    2012-01-01

    The present study was conducted to review the literature on the methods that have been considered appropriate for evaluation of the effects of gloves on different aspects of hand performance, to make recommendations for the testing and assessment of gloves, and to identify where further research is needed to improve the evaluation protocols. Eighty-five papers meeting the criteria for inclusion were reviewed. Many studies show that gloves may have negative effects on manual dexterity, tactile sensitivity, handgrip strength, muscle activity and fatigue and comfort, while further research is needed to determine glove effects on pinch strength, forearm torque strength and range of finger and wrist movements. The review also highlights several methodological issues (including consideration of both task type and duration of glove use by workers, guidance on the selection and allocation of suitable glove(s) for particular tasks/jobs, and glove design features) that need to be considered in future research. Practitioner Summary: The relevant literature on the effects of protective gloves on different aspects of hand performance was reviewed to make recommendations for the testing and assessment of gloves, and to improve evaluation protocols. The review highlights research areas and methodological issues that need to be considered in future research.

  13. In-flight calibration and performance evaluation of the fixed head star trackers for the solar maximum mission

    Science.gov (United States)

    Thompson, R. H.; Gambardella, P. J.

    1980-01-01

    The Solar Maximum Mission (SMM) spacecraft provides an excellent opportunity for evaluating the attitude determination accuracies achievable with tracking instruments such as fixed head star trackers (FHSTs). As part of its payload, SMM carries a highly accurate fine pointing Sun sensor (FPSS). The FPSS provides an independent check of the pitch and yaw parameters computed from observations of stars in the FHST field of view. A method to determine the alignment of the FHSTs relative to the FPSS using spacecraft data is applied. Two methods that were used to determine distortions in the 8 degree by 8 degree field of view of the FHSTs using spacecraft data are also presented. The attitude determination accuracy of the in-flight calibrated FHSTs is evaluated.

  14. Use of Balanced Scorecard Methodology for Performance Measurement of the Health Extension Program in Ethiopia.

    Science.gov (United States)

    Teklehaimanot, Hailay D; Teklehaimanot, Awash; Tedella, Aregawi A; Abdella, Mustofa

    2016-05-04

    In 2004, Ethiopia introduced a community-based Health Extension Program to deliver basic and essential health services. We developed a comprehensive performance scoring methodology to assess the performance of the program. A balanced scorecard with six domains and 32 indicators was developed. Data collected from 1,014 service providers, 433 health facilities, and 10,068 community members sampled from 298 villages were used to generate weighted national, regional, and agroecological zone scores for each indicator. The national median indicator scores ranged from 37% to 98% with poor performance in commodity availability, workforce motivation, referral linkage, infection prevention, and quality of care. Indicator scores showed significant difference by region (P < 0.001). Regional performance varied across indicators suggesting that each region had specific areas of strength and deficiency, with Tigray and the Southern Nations, Nationalities and Peoples Region being the best performers while the mainly pastoral regions of Gambela, Afar, and Benishangul-Gumuz were the worst. The findings of this study suggest the need for strategies aimed at improving specific elements of the program and its performance in specific regions to achieve quality and equitable health services. © The American Society of Tropical Medicine and Hygiene.
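
    The aggregation of indicator scores into weighted regional or national scores can be sketched as a weighted mean; the scores and weights below are hypothetical, not values from the Ethiopian survey.

```python
# Sketch: a weighted indicator score of the kind used to aggregate
# facility/provider data into regional and national balanced-scorecard
# scores. The weights and indicator values below are hypothetical.

def weighted_score(values, weights):
    """Weighted mean of indicator scores (values in %, weights sum to 1 or to N)."""
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

# Hypothetical regional scores (%) for one indicator and region weights:
regional_scores = [82.0, 64.0, 91.0, 45.0]
region_weights = [0.35, 0.25, 0.30, 0.10]
print(round(weighted_score(regional_scores, region_weights), 1))  # prints 76.5
```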

  15. Development of performance assessment methodology for nuclear waste isolation in geologic media

    International Nuclear Information System (INIS)

    Bonano, E.J.; Chu, M.S.Y.; Cranwell, R.M.; Davis, P.A.

    1986-01-01

    The analysis of the processes involved in the burial of nuclear wastes can be performed only with reliable mathematical models and computer codes, as opposed to experiments, because the associated time scales are on the order of tens of thousands of years. These analyses are concerned primarily with the migration of radioactive contaminants from the repository to the environment accessible to humans. Modeling of this phenomenon depends on a large number of other phenomena taking place in the geologic porous and/or fractured medium: ground-water flow, physicochemical interactions of the contaminants with the rock, heat transfer, and mass transport. Once the radionuclides have reached the accessible environment, the pathways to humans and the resulting health effects are estimated. A performance assessment methodology for a potential high-level waste repository emplaced in a basalt formation has been developed for the US Nuclear Regulatory Commission

  16. Methodological aspects of fuel performance system analysis at raw hydrocarbon processing plants

    Science.gov (United States)

    Kulbjakina, A. V.; Dolotovskij, I. V.

    2018-01-01

    The article discusses the methodological aspects of fuel performance system analysis at raw hydrocarbon (RH) processing plants. Modern RH processing facilities are major consumers of energy resources (ER) for their own needs. Reducing ER consumption, including fuel consumption, and developing a rational fuel system structure are complex and relevant scientific tasks that can only be accomplished using system analysis and complex system synthesis. In accordance with the principles of system analysis, the main stages of the study develop the hierarchical structure of the fuel system, a block scheme for synthesising the most efficient fuel system alternative using mathematical models, and a set of performance criteria. Results from the introduction of specific engineering solutions for developing on-site energy supply sources at RH processing facilities are provided.

  17. Catalytic Reforming: Methodology and Process Development for a Constant Optimisation and Performance Enhancement

    Directory of Open Access Journals (Sweden)

    Avenier Priscilla

    2016-05-01

    The catalytic reforming process has been used to produce high-octane gasoline since the 1940s. It might appear to be an old, well-established process for which nothing new could be done. This is, however, not the case, and constant improvements are proposed at IFP Energies nouvelles. With a global R&D approach using new concepts and forefront methodology, IFPEN is able to: propose a patented new reactor concept, increasing capacity; ensure the efficiency and safety of the reactor's mechanical design using structural modelling; develop new catalysts that increase process performance, thanks to a deep understanding of the catalytic mechanism obtained through an experimental and innovative analytical approach (119Sn Mössbauer and X-ray absorption spectroscopies) together with Density Functional Theory (DFT) calculations; and operate efficient, reliable and adapted pilots to validate catalyst performance.

  18. ATLAS tile calorimeter data quality assessment and performance with calibration, cosmic and first beam data

    International Nuclear Information System (INIS)

    Volpi, Matteo

    2010-01-01

    The commissioning of the barrel hadronic calorimeter (Tile) of the ATLAS detector at the Large Hadron Collider (LHC) has been the focus of an extensive project over the last several years. Work with Tile has resulted in a fully operational detector before the first LHC beam test on 10 September 2008. A set of tools has been developed spanning from the hardware and software systems of the detector and online monitoring to the offline reconstruction. This set of tools constitutes the final Tile data quality system and is highly integrated with all ATLAS online and offline frameworks. A review of the final data quality system of the Tile hadronic calorimeter will be presented together with selected results on hardware reliability. This will be followed by the detector performance checks performed on cosmic data and on the first LHC beam data taken on 10 September 2008.

  19. Introduction on performance analysis and profiling methodologies for KVM on ARM virtualization

    Science.gov (United States)

    Motakis, Antonios; Spyridakis, Alexander; Raho, Daniel

    2013-05-01

    The introduction of hardware virtualization extensions on ARM Cortex-A15 processors has enabled the implementation of full virtualization solutions for this architecture, such as KVM on ARM. This trend motivates the need to quantify and understand the performance impact of applying this technology. In this work we examine some key performance metrics of KVM on ARM processors, which can provide useful insight and may lead to potential improvements in the future. This includes measurements such as interrupt latency and guest exit cost, performed on ARM Versatile Express and Samsung Exynos 5250 hardware platforms. Furthermore, we discuss additional methodologies that could provide a deeper understanding of the performance footprint of KVM. We identify some of the most interesting approaches in this field and perform a tentative analysis of how they might be implemented in the KVM on ARM port. These take into consideration hardware- and software-based counters for profiling, and issues related to the limitations of the simulators that are often used, such as the ARM Fast Models platform.

  20. A new methodology for assessment of the performance of heartbeat classification systems

    Directory of Open Access Journals (Sweden)

    Hool Livia C

    2008-01-01

    Background: The literature presents many different algorithms for classifying heartbeats from ECG signals. The performance of a classifier is normally presented in terms of sensitivity, specificity or other metrics describing the proportion of correct versus incorrect beat classifications. From the clinician's point of view, such metrics are insufficient to rate the performance of a classifier. Methods: We propose a new methodology for the presentation of classifier performance, based on Bayesian classification theory. Our proposal lets investigators report their findings in terms of beat-by-beat comparisons and defers the role of assessing the utility of the classifier to the statistician. Evaluation of the classifier's utility must be undertaken in conjunction with the set of relative costs applicable to the clinical application. Such evaluation produces a metric more tuned to the specific application, whilst preserving the information in the results. Results: By way of demonstration, we propose a set of costs based on clinical data from the literature and examine the results of two published classifiers using our method. We make recommendations for reporting classifier performance so that this method can be used for subsequent evaluation. Conclusion: The proportion of misclassified beats contains insufficient information to fully evaluate a classifier. Performance reports should include a table of beat-by-beat comparisons, showing not only the number of misclassifications, but also the identity of the classes involved in each inaccurate classification.
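
    The cost-based evaluation proposed above can be sketched by combining a beat-by-beat comparison table with a matrix of relative misclassification costs; the confusion counts and costs below are hypothetical.

```python
# Sketch of the cost-based evaluation the authors propose: combine a
# beat-by-beat comparison table (confusion matrix) with a matrix of
# relative misclassification costs. The counts and costs are hypothetical.

def expected_cost(confusion, costs):
    """Average cost per beat: sum over true/assigned classes of count * cost."""
    total_beats = sum(sum(row) for row in confusion)
    total_cost = sum(confusion[i][j] * costs[i][j]
                     for i in range(len(confusion))
                     for j in range(len(confusion[i])))
    return total_cost / total_beats

# Rows: true class (normal, ectopic); columns: assigned class.
confusion = [[9500, 100],   # normal beats classified as normal / ectopic
             [50, 350]]     # ectopic beats classified as normal / ectopic
costs = [[0.0, 1.0],        # false-alarm cost
         [10.0, 0.0]]       # missed-ectopic cost (10x a false alarm)

print(round(expected_cost(confusion, costs), 3))  # prints 0.06
```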

  1. How Is Working Memory Training Likely to Influence Academic Performance? Current Evidence and Methodological Considerations.

    Science.gov (United States)

    Bergman Nutley, Sissela; Söderqvist, Stina

    2017-01-01

    Working memory (WM) is one of our core cognitive functions, allowing us to keep information in mind for shorter periods of time and then work with this information. It is the gateway that information has to pass in order to be processed consciously. A well-functioning WM is therefore crucial for a number of everyday activities including learning and academic performance (Gathercole et al., 2003; Bull et al., 2008), which is the focus of this review. Specifically, we will review the research investigating whether improving WM capacity using Cogmed WM training can lead to improvements on academic performance. Emphasis is given to reviewing the theoretical principles upon which such investigations rely, in particular the complex relation between WM and mathematical and reading abilities during development and how these are likely to be influenced by training. We suggest two possible routes in which training can influence academic performance, one through an effect on learning capacity which would thus be evident with time and education, and one through an immediate effect on performance on reading and mathematical tasks. Based on the theoretical complexity described we highlight some methodological issues that are important to take into consideration when designing and interpreting research on WM training and academic performance, but that are nonetheless often overlooked in the current research literature. Finally, we will provide some suggestions for future research for advancing the understanding of WM training and its potential role in supporting academic attainment.

  2. Design, performance, and calibration of CMS hadron-barrel calorimeter wedges

    International Nuclear Information System (INIS)

    Abdullin, S.; Abramov, V.; Goncharov, P.; Khmelnikov, A.; Korablev, A.; Korneev, Y.; Krinitsyn, A.; Kryshkin, V.; Lukanin, V.; Pikalov, V.; Ryazanov, A.; Talov, V.; Turchanovich, L.; Volkov, A.; Acharya, B.; Banerjee, S.; Banerjee, S.; Chendvankar, S.; Dugad, S.; Kalmani, S.; Katta, S.; Mazumdar, K.; Mondal, N.; Nagaraj, P.; Patil, M.; Reddy, L.; Satyanarayana, B.; Sudhakar, K.; Verma, P.; Adams, M.; Burchesky, K.; Qian, W.; Akchurin, N.; Carrell, K.; Guemues, K.; Thomas, R.; Akgun, U.; Ayan, S.; Duru, F.; Merlo, J.P.; Mestvirishvili, A.; Miller, M.; Norbeck, E.; Olson, J.; Onel, Y.; Schmidt, I.; Anderson, E.W.; Hauptman, J.; Antchev, G.; Hazen, E.; Lawlor, C.; Machado, E.; Posch, C.; Rohlf, J.; Wu, S.X.; Aydin, S.; Dumanoglu, I.; Eskut, E.; Kayis-Topaksu, A.; Polatoz, A.; Onengut, G.; Ozdes-Koca, N.; Baarmand, M.; Ralich, R.; Vodopiyanov, I.; Baden, D.; Bard, R.; Eno, S.; Grassi, T.; Jarvis, C.; Kellogg, R.; Kunori, S.; Skuja, A.; Barnes, V.; Laasanen, A.; Pompos, A.; Bawa, H.; Beri, S.; Bhatnagar, V.; Kaur, M.; Kohli, J.; Kumar, A.; Singh, J.; Baiatian, G.; Sirunyan, A.; Bencze, G.; Vesztergombi, G.; Zalan, P.; Bodek, A.; Budd, H.; Chung, Y.; De Barbaro, P.; Haelen, T.; Camporesi, T.; Visser, T. 
de; Cankocak, K.; Cremaldi, L.; Reidy, J.; Sanders, D.A.; Cushman, P.; Sherwood, B.; Damgov, J.; Dimitrov, L.; Genchev, V.; Piperov, S.; Vankov, I.; Demianov, A.; Ershov, A.; Gribushin, A.; Kodolova, O.; Petrushanko, S.; Sarycheva, L.; Vardanyan, I.; Elias, J.; Elvira, D.; Freeman, J.; Green, D.; Los, S.; O'Dell, V.; Ronzhin, A.; Sergeyev, S.; Suzuki, I.; Vidal, R.; Whitmore, J.; Emeliantchik, I.; Massolov, V.; Shumeiko, N.; Stefanovich, R.; Fisher, W.; Tully, C.; Gavrilov, V.; Kaftanov, V.; Kisselevich, I.; Kolossov, V.; Krokhotin, A.; Kuleshov, S.; Stolin, V.; Ulyanov, A.; Gershtein, Y.; Golutvin, I.; Kalagin, V.; Kosarev, I.; Mescheryakov, G.; Smirnov, V.; Volodko, A.; Zarubin, A.; Grinev, B.; Lubinsky, V.; Senchishin, V.; Guelmez, E.; Hagopian, S.; Hagopian, V.; Johnson, K.; Heering, A.; Imboden, M.; Isiksal, E.; Karmgard, D.; Ruchti, R.; Kaya, M.; Lazic, D.; Levchuk, L.; Sorokin, P.; Litvintsev, D.; Litov, L.; Mans, J.; Ozkorucuklu, S.; Ozok, F.; Serin-Zeyrek, M.; Sever, R.; Zeyrek, M.; Paktinat, S.; Podrasky, V.; Sanzeni, C.; Winn, D.; Vlassov, E.

    2008-01-01

    Extensive measurements have been made with pions, electrons and muons on four production wedges of the compact muon solenoid (CMS) hadron barrel (HB) calorimeter in the H2 beam line at CERN with particle momenta varying from 20 to 300 GeV/c. The time structure of the events was measured with the full chain of preproduction front-end electronics running at 34 MHz. Moving-wire radioactive source data were also collected for all scintillator layers in the HB. The energy dependent time slewing effect was measured and tuned for optimal performance. (orig.)

  3. Design, performance, and calibration of CMS hadron-barrel calorimeter wedges

    Energy Technology Data Exchange (ETDEWEB)

    Abdullin, S. [Fermi National Accelerator Lab., Batavia, IL (United States)]|[Univ. of Maryland, College Park, MD (United States); Abramov, V.; Goncharov, P.; Khmelnikov, A.; Korablev, A.; Korneev, Y.; Krinitsyn, A.; Kryshkin, V.; Lukanin, V.; Pikalov, V.; Ryazanov, A.; Talov, V.; Turchanovich, L.; Volkov, A. [IHEP, Protvino (Russian Federation); Acharya, B.; Banerjee, S.; Banerjee, S.; Chendvankar, S.; Dugad, S.; Kalmani, S.; Katta, S.; Mazumdar, K.; Mondal, N.; Nagaraj, P.; Patil, M.; Reddy, L.; Satyanarayana, B.; Sudhakar, K.; Verma, P. [Tata Inst. of Fundamental Research, Mumbai (India); Adams, M.; Burchesky, K.; Qian, W. [Univ. of Illinois at Chicago, Chicago, IL (United States); Akchurin, N.; Carrell, K.; Guemues, K.; Thomas, R. [Texas Tech Univ., Dept. of Physics, Lubbock, TX (United States); Akgun, U.; Ayan, S.; Duru, F.; Merlo, J.P.; Mestvirishvili, A.; Miller, M.; Norbeck, E.; Olson, J.; Onel, Y.; Schmidt, I. [Univ. of Iowa, Iowa City, IA (United States); Anderson, E.W.; Hauptman, J. [Iowa State Univ., Ames, IA (United States); Antchev, G.; Hazen, E.; Lawlor, C.; Machado, E.; Posch, C.; Rohlf, J.; Wu, S.X. [Boston Univ., Boston, MA (United States); Aydin, S.; Dumanoglu, I.; Eskut, E.; Kayis-Topaksu, A.; Polatoz, A.; Onengut, G.; Ozdes-Koca, N. [Cukurova Univ., Adana (Turkey); Baarmand, M.; Ralich, R.; Vodopiyanov, I. [Florida Inst. of Technology, Melbourne, FL (United States); Baden, D.; Bard, R.; Eno, S.; Grassi, T.; Jarvis, C.; Kellogg, R.; Kunori, S.; Skuja, A. [Univ. of Maryland, College Park, MD (United States); Barnes, V.; Laasanen, A.; Pompos, A. [Purdue Univ., West Lafayette, IN (United States); Bawa, H.; Beri, S.; Bhatnagar, V.; Kaur, M.; Kohli, J.; Kumar, A.; Singh, J. [Panjab Univ., Chandigarh (India); Baiatian, G.; Sirunyan, A. [Yerevan Physics Inst., Yerevan (Armenia); Bencze, G.; Vesztergombi, G.; Zalan, P. [KFKI-RMKI, Research Inst. for Particle and Nuclear Physics, Budapest (Hungary)] [and others

    2008-05-15

    Extensive measurements have been made with pions, electrons and muons on four production wedges of the compact muon solenoid (CMS) hadron barrel (HB) calorimeter in the H2 beam line at CERN with particle momenta varying from 20 to 300 GeV/c. The time structure of the events was measured with the full chain of preproduction front-end electronics running at 34 MHz. Moving-wire radioactive source data were also collected for all scintillator layers in the HB. The energy dependent time slewing effect was measured and tuned for optimal performance. (orig.)

  4. A methodology to determine the power performance of wave energy converters at a particular coastal location

    International Nuclear Information System (INIS)

    Carballo, R.; Iglesias, G.

    2012-01-01

    Highlights: ► We develop a method to accurately compute the power output of a WEC at a site. ► The analysis of the wave resource is integrated seamlessly with the WEC efficiency. ► The intra-annual variability of the resource is considered. ► The method is illustrated with a case study: a WEC projected to be built in Spain. - Abstract: The assessment of the power performance of a wave energy converter (WEC) at a given site involves two tasks: (i) the characterisation of the wave resource at the site in question, and (ii) the computation of its power performance. These tasks are generally seen as disconnected and tackled as such; they are, however, deeply interrelated – so much so that they should be treated as two phases of the same procedure. Indeed, beyond the characterisation of the wave resource of a certain area lies a crucial question: how much power would a WEC installed in that area deliver to the network? This work has two main objectives. The first is to develop a methodology that integrates both tasks seamlessly and guarantees the accurate computation of the power performance of a WEC installed at a site of interest; it involves a large dataset of deepwater records and the implementation of a high-resolution, nested spectral model, which is used to propagate 95% of the total offshore wave energy to the WEC site. The second objective is to illustrate this methodology with a case study: an Oscillating Water Column (OWC) projected to be constructed at the breakwater of A Guarda (NW Spain). It is found that the presented approach makes it possible to accurately determine the power that the WEC will deliver to the network, and that this power exhibits significant monthly variability, so an estimate of energy production based on mean annual values may be misleading.
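
    The final step of such a methodology, combining the sea-state occurrence at the WEC site with the converter's power matrix, can be sketched as follows; the two small matrices below are hypothetical, not the A Guarda OWC data.

```python
# Sketch of the final step of the methodology: combining the occurrence
# of sea states at the WEC site with the converter's power matrix to get
# the mean power output. All matrices below are hypothetical.

def mean_power_output(occurrence, power_matrix):
    """Sum over sea states (Hs, Te bins) of occurrence fraction * WEC power (kW)."""
    return sum(occurrence[i][j] * power_matrix[i][j]
               for i in range(len(occurrence))
               for j in range(len(occurrence[i])))

# Rows: significant wave height bins; columns: energy period bins.
occurrence = [[0.30, 0.20],   # fraction of the year in each sea state
              [0.35, 0.15]]
power_kw = [[20.0, 35.0],     # hypothetical OWC power matrix, kW
            [60.0, 90.0]]

p_mean = mean_power_output(occurrence, power_kw)       # mean power, kW
annual_energy_MWh = p_mean * 8760.0 / 1000.0           # hours per year / kWh->MWh
print(round(p_mean, 1), round(annual_energy_MWh, 1))
```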

  5. Calibration of alpha surface contamination monitor

    International Nuclear Information System (INIS)

    Freitas, I.S.M. de; Goncalez, O.L.

    1990-01-01

    In this work, the results, as well as the methodology, of the calibration of an alpha surface contamination monitor are presented. The calibration factors are obtained by least-squares fitting with effective variance. (author)
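
    A least-squares fit with effective variance can be sketched as an iteration in which the x uncertainty is folded into an effective y variance through the current slope; the data points and uncertainties below are hypothetical.

```python
# Sketch of least-squares fitting with effective variance (errors in both
# coordinates): the x uncertainty is propagated into an effective y
# variance via the current slope, and the weighted fit is iterated.

def effective_variance_fit(x, y, sx, sy, iterations=20):
    """Weighted linear fit y = a + b*x with effective variances."""
    b = (y[-1] - y[0]) / (x[-1] - x[0])  # initial slope estimate
    a = 0.0
    for _ in range(iterations):
        w = [1.0 / (syi ** 2 + (b * sxi) ** 2) for sxi, syi in zip(sx, sy)]
        sw = sum(w)
        xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
        b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
        a = ybar - b * xbar
    return a, b

# Hypothetical monitor readings vs. certified surface emission rates:
x = [100.0, 200.0, 400.0, 800.0]   # source emission rate, s^-1
y = [41.0, 83.0, 160.0, 323.0]     # monitor reading, s^-1
sx = [2.0, 3.0, 5.0, 8.0]          # uncertainties on x
sy = [1.5, 2.0, 3.0, 4.0]          # uncertainties on y
a, b = effective_variance_fit(x, y, sx, sy)
print(round(b, 3))   # fitted calibration slope (efficiency)
```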

  6. Evaluation of frother performance in coal flotation: A critical review of existing methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, S.; Dey, S. [Indian School for Mines, Dhanbad (India). Dept. for Fuel & Mineral Engineering

    2008-07-01

    Separation efficiency in flotation depends, to a considerable extent, on the efficiency of the frother used. A successful frother must achieve a delicate balance between froth stability and non-persistency. Ideally, the frother is not supposed to influence the state of the surface of the coal and minerals. In practice, however, interaction does occur between the frother, other reagents, and solid surfaces. Various commercially available frothers can differ slightly or significantly in their influence on the flotation results. Therefore, a plant operator is in a dilemma when it comes to selecting a frother to be used in his plant. This article attempts to critically review the different methodologies, which are available to compare the performance of two or more frothers in order to decide which would best serve the purpose of the plant operator.

  7. A methodology for performing virtual measurements in a nuclear reactor system

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Uhrig, R.E.; Tsoukalas, L.H.

    1992-01-01

    A novel methodology is presented for monitoring non-physically-measurable variables in an experimental nuclear reactor. It is based on the employment of artificial neural networks to generate fuzzy values. Neural networks map spatiotemporal information (in the form of time series) to algebraically defined membership functions. The entire process can be thought of as a virtual measurement. Through such virtual measurements the values of parameters with operational significance that are not directly monitored, e.g., transient type, valve position, or performance, can be determined. Generating membership functions is a crucial step in the development and practical utilization of fuzzy reasoning, a computational approach that offers the advantage of describing the state of the system in a condensed, linguistic form, convenient for monitoring, diagnostics, and control algorithms.
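
    The idea of a network mapping time-series information to membership degrees can be caricatured as below; the architecture, sizes, and linguistic categories are hypothetical stand-ins (an untrained random network), shown only to illustrate the shape of such a "virtual measurement":

```python
import numpy as np

# Hypothetical sketch: a small feed-forward network maps a window of
# sensor readings to membership degrees in three linguistic categories
# ("low", "normal", "high"). Weights are random, i.e. untrained; in the
# methodology above they would be learned from reactor data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def membership(window):
    h = np.tanh(window @ W1 + b1)       # hidden layer
    z = h @ W2 + b2
    e = np.exp(z - z.max())             # softmax: degrees sum to 1,
    return e / e.sum()                  # like a fuzzy partition of unity

mu = membership(rng.normal(size=8))     # degrees for one time window
```
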

  8. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  9. Relative efficiency calibration between two silicon drift detectors performed with a monochromatized X-ray generator over the 0.1-1.5 keV range

    Science.gov (United States)

    Hubert, S.; Boubault, F.

    2018-03-01

    In this article, we present the first X-ray calibration performed over the 0.1-1.5 keV spectral range by means of a soft X-ray Manson source and the SYMPAX monochromator. This monochromator, based on a classical Rowland geometry, has the novel capability of carrying two detectors simultaneously and moving them under vacuum in front of the exit slit of the monochromatizing stage. This offers the great advantage of performing radiometric measurements of the monochromatic X-ray photon flux with one reference detector while calibrating another X-ray detector. To achieve this, at least one secondary standard must be operated with SYMPAX. This paper thereby presents an efficiency transfer experiment between a secondary-standard silicon drift detector (SDD), previously calibrated at the BESSY II synchrotron facility, and another ("unknown") SDD, intended for permanent use with SYMPAX. The associated calibration process is described, as well as the corresponding results. Comparison with calibrated measurements performed at the Physikalisch-Technische Bundesanstalt (PTB) Radiometric Laboratory shows very good agreement between the secondary standard and the unknown SDD.

  10. Test Methodologies for Hydrogen Sensor Performance Assessment: Chamber vs. Flow Through Test Apparatus: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Buttner, William J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hartmann, Kevin S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Schmidt, Kara [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cebolla, Rafeal O [Joint Research Centre, Petten, the Netherlands; Weidner, Eveline [Joint Research Centre, Petten, the Netherlands; Bonato, Christian [Joint Research Centre, Petten, the Netherlands

    2017-11-06

    Certification of hydrogen sensors to standards often prescribes using large-volume test chambers [1, 2]. However, feedback from stakeholders such as sensor manufacturers and end-users indicates that chamber test methods are often viewed as too slow and expensive for routine assessment. Flow through test methods are potentially an efficient, cost-effective alternative for sensor performance assessment. A large number of sensors can be tested simultaneously, in series or in parallel, with an appropriate flow through test fixture. The recent development of sensors with response times of less than 1 s mandates improvements in equipment and methodology to properly capture the performance of this new generation of fast sensors; flow methods are a viable approach for accurate response and recovery time determinations, but there are potential drawbacks. According to ISO 26142 [1], flow through test methods may not properly simulate ambient applications. In chamber test methods, gas transport to the sensor can be dominated by diffusion, which is viewed by some users as mimicking deployment in rooms and other confined spaces. Alternatively, in flow through methods, forced flow transports the gas to the sensing element. The advective flow dynamics may induce changes in the sensor behaviour relative to the quasi-quiescent condition that may prevail in chamber test methods. One goal of the current activity in the JRC and NREL sensor laboratories [3, 4] is to develop a validated flow through apparatus and methods for hydrogen sensor performance testing. In addition to minimizing the impact on sensor behaviour induced by differences in flow dynamics, challenges associated with flow through methods include the ability to control environmental parameters (humidity, pressure and temperature) during the test and changes in the test gas composition induced by chemical reactions with upstream sensors. Guidelines on flow through test apparatus design and protocols for the evaluation of

  11. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    Science.gov (United States)

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes.
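
    One standard building block of such an analysis, random-effects pooling of per-study estimates with the DerSimonian-Laird estimator, can be sketched as follows; the study values are invented for illustration:

```python
import numpy as np

# Illustrative sketch (invented data): DerSimonian-Laird random-effects
# pooling of a performance metric (e.g. a log repeatability coefficient)
# reported by several small test-retest studies.
def dersimonian_laird(estimates, variances):
    w = 1.0 / variances
    fixed = np.sum(w * estimates) / np.sum(w)     # fixed-effect mean
    q = np.sum(w * (estimates - fixed) ** 2)      # Cochran's Q
    df = len(estimates) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (variances + tau2)             # random-effects weights
    pooled = np.sum(w_star * estimates) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

est = np.array([0.20, 0.35, 0.28, 0.15])          # per-study estimates
var = np.array([0.002, 0.004, 0.003, 0.0025])     # per-study variances
pooled, se, tau2 = dersimonian_laird(est, var)
```

    The review's point about small studies is visible even here: with only four studies, Q-based estimates of tau2 are themselves highly uncertain, which is what motivates the alternative approaches the paper evaluates.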

  12. Study on dynamic team performance evaluation methodology based on team situation awareness model

    International Nuclear Information System (INIS)

    Kim, Suk Chul

    2005-02-01

    The purpose of this thesis is to provide a theoretical framework and an evaluation methodology for the dynamic task performance of an operating team at a nuclear power plant under dynamic and tactical conditions such as a radiological accident. The thesis proposes a team dynamic task performance evaluation model, the so-called team crystallization model, which stems from Endsley's situation awareness model and comprises four elements: state, information, organization, and orientation. Its quantification methods use a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioural dynamics for a tactical and dynamic task at a nuclear power plant. The model provides a systematic measure of time-dependent team effectiveness or performance as affected by multiple agents, such as plant states, communication quality (in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time), and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope plant simulator of a 1,000 MWe pressurized water reactor, with four on-the-job operating groups and one expert group familiar with the accident sequences. The simulated team dynamic task performance, together with the behaviour of key plant parameters and the team-specific organizational centre of gravity and cue-and-response matrix, showed good agreement with the observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness, for example when recruiting new operating teams for a new plant in a cost-benefit manner. Also, this model can be utilized as a systematic analysis tool for

  14. Analysis of calibrated seafloor backscatter for habitat classification methodology and case study of 158 spots in the Bay of Biscay and Celtic Sea

    Science.gov (United States)

    Fezzani, Ridha; Berger, Laurent

    2018-06-01

    An automated signal-based method was developed to analyse seafloor backscatter data logged by a calibrated multibeam echosounder. The processing consists, first, of clustering each survey sub-area into a small number of homogeneous sediment types, based on the average backscatter level at one or several incidence angles. Second, it uses their local average angular response to extract discriminant descriptors, obtained by fitting the field data to the Generic Seafloor Acoustic Backscatter parametric model. Third, the descriptors are used for seafloor-type classification. The method was tested on multi-year data recorded by a calibrated 90-kHz Simrad ME70 multibeam sonar operated in the Bay of Biscay, France, and the Celtic Sea, Ireland. It was applied, for seafloor-type classification into 12 classes, to a dataset of 158 spots surveyed for demersal and benthic fauna study and monitoring. Qualitative analyses and clusters classified using the extracted parameters show good discriminatory potential, indicating the robustness of this approach.
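
    The first processing step, clustering sub-areas into a few sediment types from a backscatter level, can be illustrated with a toy one-dimensional k-means; the levels and class count are invented, and the real method additionally fits the Generic Seafloor Acoustic Backscatter model to the angular responses:

```python
import numpy as np

# Toy sketch of the clustering step: group survey cells into a few
# sediment types from their mean backscatter level (dB) at a reference
# incidence angle. Data, labels, and k are invented for illustration.
def kmeans_1d(levels, k=3, iters=20):
    centers = np.linspace(levels.min(), levels.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(levels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = levels[labels == j].mean()
    return labels, centers

rng = np.random.default_rng(1)
levels = np.concatenate([rng.normal(-32, 1, 40),   # soft, mud-like
                         rng.normal(-24, 1, 40),   # sand-like
                         rng.normal(-16, 1, 40)])  # hard, gravel-like
labels, centers = kmeans_1d(levels)
```
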

  15. Optimization of procedure for calibration with radiometer/photometer

    International Nuclear Information System (INIS)

    Detilly, Isabelle

    2009-01-01

    A test procedure for the calibration of International Light radiometers/photometers at the Laboratorio de Fotometria y Tecnologia Laser (LAFTA) of the Escuela de Ingenieria Electrica of the Universidad de Costa Rica is established. Two photometric benches are used as the experimental setup, and two calibrations of the International Light instrument were performed. A basic procedure established in the laboratory is used for calibration from measurements of illuminance and luminous intensity. The results showed some variations that depend on the photometric bench used in the calibration process, on the programming of the radiometer/photometer, and on the applied methodology. The calibration procedure can be improved by optimizing the programming of the measuring instrument, and possible errors can be minimized by following the recommended procedure. (author)
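
    The photometric-bench relation underlying such calibrations is the inverse-square law, by which luminous intensity follows from illuminance readings at known distances; a minimal sketch with invented readings:

```python
# Minimal sketch of the photometric-bench relation: luminous intensity
# I (cd) inferred from illuminance E (lx) readings at distances d (m)
# via the inverse-square law, I = E * d**2. Readings are invented.
readings = [(1.0, 100.4), (1.5, 44.3), (2.0, 25.1)]   # (d, E) pairs

intensities = [e * d**2 for d, e in readings]
i_mean = sum(intensities) / len(intensities)          # bench estimate of I
spread = max(intensities) - min(intensities)          # consistency check
```

    A large spread across distances would flag alignment or stray-light problems on the bench before any instrument error is blamed.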

  16. A new scaling methodology for NO(x) emissions performance of gas burners and furnaces

    Science.gov (United States)

    Hsieh, Tse-Chih

    1997-11-01

    A general burner and furnace scaling methodology is presented, together with the resulting scaling model for NOx emissions performance of a broad class of swirl-stabilized industrial gas burners. The model is based on results from a set of novel burner scaling experiments on a generic gas burner and furnace design at five different scales having near-uniform geometric, aerodynamic, and thermal similarity and uniform measurement protocols. These provide the first NOx scaling data over the range of thermal scales from 30 kW to 12 MW, including input-output measurements as well as detailed in-flame measurements of NO, NOx, CO, O2, unburned hydrocarbons, temperature, and velocities at each scale. The in-flame measurements allow identification of key sources of NOx production. The underlying physics of these NOx sources lead to scaling laws for their respective contributions to the overall NOx emissions performance. It is found that the relative importance of each source depends on the burner scale and operating conditions. Simple furnace residence time scaling is shown to be largely irrelevant, with NOx emissions instead being largely controlled by scaling of the near-burner region. The scalings for these NOx sources are combined in a comprehensive scaling model for NOx emissions performance. Results from the scaling model show good agreement with experimental data at all burner scales and over the entire range of turndown, staging, preheat, and excess air dilution, with correlations generally exceeding 90%. The scaling model permits design trade-off assessments for a broad class of burners and furnaces, and allows performance of full industrial scale burners and furnaces of this type to be inferred from results of small scale tests.

  17. Site Calibration report

    DEFF Research Database (Denmark)

    Yordanova, Ginka; Vesth, Allan

    The report describes site calibration measurements carried out on a site in Denmark. The measurements are carried out in accordance with Ref. [1]. The site calibration is carried out before a power performance measurement on a given turbine to clarify the influence from the terrain on the ratio...

  18. Performance-based, cost- and time-effective PCB analytical methodology

    International Nuclear Information System (INIS)

    Alvarado, J. S.

    1998-01-01

    Laboratory applications for the analysis of PCBs (polychlorinated biphenyls) in environmental matrices such as soil/sediment/sludge and oil/waste oil were evaluated for potential reduction in waste, source reduction, and alternative techniques for final determination. As a consequence, new procedures were studied for solvent substitution, miniaturization of extraction and cleanups, minimization of reagent consumption, reduction of cost per analysis, and reduction of time. These new procedures provide adequate data that meet all the performance requirements for the determination of PCBs. Use of the new procedures reduced costs for all sample preparation techniques. Time and cost were also reduced by combining the new sample preparation procedures with the power of fast gas chromatography. Separation of Aroclor 1254 was achieved in less than 6 min by using DB-1 and SPB-608 columns. With the greatly shortened run times, reproducibility can be tested quickly and consequently at low cost. With performance-based methodology, the applications presented here can be applied now, without waiting for regulatory approval.

  19. Development of performance assessment methodology for nuclear waste isolation in geologic media

    International Nuclear Information System (INIS)

    Bonano, E.J.; Chu, M.S.Y.; Cranwell, R.M.; Davis, P.A.

    1985-01-01

    The burial of nuclear wastes in deep geologic formations as a means for their disposal is an issue of significant technical and social impact. The analysis of the processes involved can be performed only with reliable mathematical models and computer codes as opposed to conducting experiments because the time scales associated are on the order of tens of thousands of years. These analyses are concerned primarily with the migration of radioactive contaminants from the repository to the environment accessible to humans. Modeling of this phenomenon depends on a large number of other phenomena taking place in the geologic porous and/or fractured medium. These are ground-water flow, physicochemical interactions of the contaminants with the rock, heat transfer, and mass transport. Once the radionuclides have reached the accessible environment, the pathways to humans and health effects are estimated. A performance assessment methodology for a potential high-level waste repository emplaced in a basalt formation has been developed for the US Nuclear Regulatory Commission. The approach followed consists of a description of the overall system (waste, facility, and site), scenario selection and screening, consequence modeling (source term, ground-water flow, radionuclide transport, biosphere transport, and health effects), and uncertainty and sensitivity analysis.

  20. Development of performance assessment methodology for nuclear waste isolation in geologic media

    Science.gov (United States)

    Bonano, E. J.; Chu, M. S. Y.; Cranwell, R. M.; Davis, P. A.

    The burial of nuclear wastes in deep geologic formations as a means for their disposal is an issue of significant technical and social impact. The analysis of the processes involved can be performed only with reliable mathematical models and computer codes as opposed to conducting experiments because the time scales associated are on the order of tens of thousands of years. These analyses are concerned primarily with the migration of radioactive contaminants from the repository to the environment accessible to humans. Modeling of this phenomenon depends on a large number of other phenomena taking place in the geologic porous and/or fractured medium. These are ground-water flow, physicochemical interactions of the contaminants with the rock, heat transfer, and mass transport. Once the radionuclides have reached the accessible environment, the pathways to humans and health effects are estimated. A performance assessment methodology for a potential high-level waste repository emplaced in a basalt formation has been developed for the U.S. Nuclear Regulatory Commission.

  1. Calibration factor or calibration coefficient?

    International Nuclear Information System (INIS)

    Meghzifene, A.; Shortt, K.R.

    2002-01-01

    Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument for the well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient for k when it links two quantities A and B, A = k.B, that have different dimensions; the term factor should be used for k only when the quantities A and B it links have the same dimensions. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In this case, the quantities linked have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)

  2. The GERDA calibration system

    Energy Technology Data Exchange (ETDEWEB)

    Baudis, Laura; Froborg, Francis; Tarka, Michael; Bruch, Tobias; Ferella, Alfredo [Physik-Institut, Universitaet Zuerich (Switzerland); Collaboration: GERDA-Collaboration

    2012-07-01

    A system with three identical custom-made units is used for the energy calibration of the GERDA Ge diodes. To perform a calibration, the 228Th sources are lowered from their parking positions at the top of the cryostat. Their positions are measured by two independent modules: one, the incremental encoder, counts the holes in the perforated steel band holding the sources; the other measures the drive shaft's angular position even when not powered. The system can be controlled remotely by a LabVIEW program. The calibration data are analyzed by an iterative calibration algorithm that determines the calibration functions for different energy reconstruction algorithms, and the resolution of several peaks in the 228Th spectrum is determined. A Monte Carlo simulation using the GERDA simulation software MaGe has been performed to determine the background induced by the sources in the parking positions.
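
    One core step of such an iterative calibration algorithm, fitting a linear energy scale to known lines of the 228Th decay chain, can be sketched as follows. The line energies are the genuine 208Tl full-energy, single-escape and double-escape peaks, but the peak channel positions are invented:

```python
import numpy as np

# Sketch of a basic calibration step: fit E = a + b*channel to known
# 228Th-chain lines. Energies (keV): 208Tl 583.2 and 860.6, the 2614.5
# full-energy peak and its double-/single-escape peaks 1592.5 / 2103.5.
# The measured peak channel positions are invented for illustration.
lines_kev = np.array([583.2, 860.6, 1592.5, 2103.5, 2614.5])
channels  = np.array([1166.1, 1721.0, 3184.7, 4206.6, 5228.4])

b, a = np.polyfit(channels, lines_kev, 1)   # slope (keV/ch), intercept
residuals = lines_kev - (a + b * channels)  # fit quality per line
```

    In practice such a fit is repeated per energy-reconstruction algorithm, and the residuals feed the resolution and linearity checks mentioned above.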

  3. Methodological guide for the follow-up and elaboration of performance assessments of a methanization plant

    International Nuclear Information System (INIS)

    Bastide, Guillaume

    2014-06-01

    This guide aims to give the guidance required for the sound implementation of operational follow-up of a methanization plant. More precisely, it aims to anticipate the equipment necessary for follow-up during plant construction, to prepare the operator for monitoring and controlling the installation, to draw up operating assessments and interpret performance, and to propose solutions and/or improvements. The follow-up process described can be applied to all process stages (from receipt of inputs to by-product valorization), and addresses technical as well as economic aspects. Thus, four types of assessment are made: technical, energetic, environmental, and socio-economic. This guide comprises five parts: a presentation of the follow-up objectives (information to be sought, benefits and drawbacks, level of follow-up to be implemented), the follow-up methodology, follow-up assessments (what they are and how to draw them up), practical sheets (practical presentation of techniques, model Excel spreadsheets), and a glossary explaining the main technical terms.

  4. A High-Performance Embedded Hybrid Methodology for Uncertainty Quantification With Applications

    Energy Technology Data Exchange (ETDEWEB)

    Iaccarino, Gianluca

    2014-04-01

    Multiphysics processes modeled by a system of unsteady differential equations are naturally suited for partitioned (modular) solution strategies. We consider such a model where probabilistic uncertainties are present in each module of the system and represented as a set of random input parameters. A straightforward approach to quantifying uncertainties in the predicted solution would be to sample all the input parameters into a single set and treat the full system as a black box. Although this method is easily parallelizable and requires minimal modifications to a deterministic solver, it is blind to the modular structure of the underlying multiphysical model. On the other hand, spectral representations using polynomial chaos expansions (PCE) can provide richer structural information regarding the dynamics of these uncertainties as they propagate from the inputs to the predicted output, but can be prohibitively expensive to implement in the high-dimensional global space of uncertain parameters. Therefore, we investigated hybrid methodologies wherein each module has the flexibility of using sampling- or PCE-based methods of capturing local uncertainties while maintaining accuracy in the global uncertainty analysis. For the latter case, we use a conditional PCE model which mitigates the curse of dimension associated with intrusive Galerkin or semi-intrusive Pseudospectral methods. After formalizing the theoretical framework, we demonstrate our proposed method using a numerical viscous flow simulation and benchmark the performance against a solely Monte-Carlo method and a solely spectral method.
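
    The contrast between sampling and spectral representations can be illustrated in one dimension; the model f below is an invented stand-in, not the report's multiphysics system. A Monte Carlo mean is compared with the mean obtained from probabilists' Gauss-Hermite quadrature, the quadrature underlying a Hermite PCE:

```python
import numpy as np

# Toy illustration: propagate one Gaussian parameter xi ~ N(0,1)
# through a stand-in model f, comparing plain Monte Carlo with a
# spectral (Gauss-Hermite) evaluation of the mean response.
f = lambda xi: np.exp(0.3 * xi)

# Monte Carlo: many random samples, slowly converging mean
rng = np.random.default_rng(0)
mc_mean = f(rng.standard_normal(200_000)).mean()

# Spectral: 12 quadrature nodes under the Gaussian weight suffice;
# hermegauss gives probabilists' Hermite nodes/weights (weight e^{-x^2/2})
nodes, weights = np.polynomial.hermite_e.hermegauss(12)
pce_mean = np.sum(weights * f(nodes)) / np.sum(weights)
```

    The 12-node spectral estimate is essentially exact here, while the Monte Carlo estimate still carries sampling noise after 200,000 evaluations, which is the cost/accuracy trade-off the hybrid methodology exploits module by module.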

  5. Experience and benefits from using the EPRI MOV Performance Prediction Methodology in nuclear power plants

    International Nuclear Information System (INIS)

    Walker, T.; Damerell, P.S.

    1999-01-01

    The EPRI MOV Performance Prediction Methodology (PPM) is an effective tool for evaluating design basis thrust and torque requirements for MOVs. Use of the PPM has become more widespread in US nuclear power plants as they close out their Generic Letter (GL) 89-10 programs and address MOV periodic verification per GL 96-05. The PPM has also been used at plants outside the US, many of which are implementing programs similar to US plants' GL 89-10 programs. The USNRC Safety Evaluation of the PPM and the USNRC's discussion of the PPM in GL 96-05 make the PPM an attractive alternative to differential pressure (DP) testing, which can be costly and time-consuming. Significant experience and benefits, which are summarized in this paper, have been gained using the PPM. Although use of the PPM requires a commitment of resources, the benefits of a solidly justified approach and a reduced need for DP testing provide a substantial safety and economic benefit. (author)

  6. Model development and optimization of operating conditions to maximize PEMFC performance by response surface methodology

    International Nuclear Information System (INIS)

    Kanani, Homayoon; Shams, Mehrzad; Hasheminasab, Mohammadreza; Bozorgnezhad, Ali

    2015-01-01

    Highlights: • The optimization of the operating parameters in a serpentine PEMFC is done using RSM. • The RSM model can predict the cell power over the wide range of operating conditions. • St-An, St-Ca and RH-Ca have an optimum value to obtain the best performance. • The interactions of the operating conditions affect the output power significantly. • The cathode and anode stoichiometry are the most effective parameters on the power. - Abstract: Optimization of operating conditions to obtain maximum power in PEMFCs could play a significant role in reducing the costs of this emerging technology. In the present experimental study, a single serpentine PEMFC is used to investigate the effects of operating conditions on the electrical power production of the cell. Four significant parameters, including cathode stoichiometry, anode stoichiometry, gas inlet temperature, and cathode relative humidity, are studied using Design of Experiments (DOE) to obtain an optimal power. Central composite second-order Response Surface Methodology (RSM) is used to model the relationship between the goal function (power) and the considered input parameters (operating conditions). Using this statistical-mathematical method leads to a second-order equation for the cell power. This model considers interactions and quadratic effects of the different operating conditions and predicts the maximum or minimum power production over the entire working range of the parameters. In this range, high cathode stoichiometry combined with low anode stoichiometry results in the minimum cell power, whereas the medium range of fuel and oxidant stoichiometry leads to the maximum power. Results show that there is an optimum value of the anode stoichiometry, cathode stoichiometry and relative humidity to reach the best performance. The predictions of the model are evaluated by experimental tests, and they are in good agreement over different ranges of the parameters.
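
    The core of the response-surface approach, a second-order polynomial fitted to a designed set of runs and then driven to its stationary point, can be sketched with synthetic data; the two coded factors and the response function below are invented, not the study's measurements:

```python
import numpy as np

# Sketch of second-order RSM with two coded factors x1, x2 (e.g. the
# two stoichiometries): fit P = b0 + b1*x1 + b2*x2 + b12*x1*x2
# + b11*x1^2 + b22*x2^2 to a central composite design, then solve
# grad P = 0 for the stationary (here maximum) point.
def fit_quadratic(X, y):
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                        # b0, b1, b2, b12, b11, b22

# face-centred central composite design in coded units
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], float)
true = lambda x1, x2: 5.0 - (x1 - 0.2)**2 - 0.5 * (x2 + 0.1)**2
y = true(X[:, 0], X[:, 1])             # synthetic "measured" power
b = fit_quadratic(X, y)

H = np.array([[2 * b[4], b[3]],        # Hessian of the fitted surface
              [b[3], 2 * b[5]]])
x_opt = np.linalg.solve(H, -b[1:3])    # stationary point: grad = 0
```
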

  7. Genetic algorithm for building envelope calibration

    International Nuclear Information System (INIS)

    Ramos Ruiz, Germán; Fernández Bandera, Carlos; Gómez-Acebo Temes, Tomás; Sánchez-Ostiz Gutierrez, Ana

    2016-01-01

    Highlights: • Calibration methodology using Multi-Objective Genetic Algorithm (NSGA-II). • Uncertainty analysis formulas implemented directly in EnergyPlus. • The methodology captures the heat dynamic of the building with a high level of accuracy. • Reduction in the number of parameters involved due to sensitivity analysis. • Cost-effective methodology using temperature sensors only. - Abstract: Buildings today represent 40% of world primary energy consumption and 24% of greenhouse gas emissions. In our society there is growing interest in knowing precisely when and how energy consumption occurs. This means that consumption measurement and verification plans are well-advanced. International agencies such as the Efficiency Valuation Organization (EVO) and the International Performance Measurement and Verification Protocol (IPMVP) have developed methodologies to quantify savings. This paper presents a methodology to accurately perform automated envelope calibration under option D (calibrated simulation) of IPMVP - vol. 1. This option is frequently ignored because of its complexity, despite being more flexible and accurate in assessing the energy performance of a building. A detailed baseline energy model is used, and by means of a metaheuristic technique a highly reliable and accurate Building Energy Simulation (BES) model is achieved, suitable for detailed analysis of saving strategies. To find this BES model, a Genetic Algorithm (NSGA-II) is used, together with a highly efficient engine to evaluate the objective, thus permitting rapid achievement of the goal. The result is a BES model that broadly captures the heat dynamic behaviour of the building. The model amply fulfils the parameters demanded by ASHRAE and EVO under option D.
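
    The calibration idea can be caricatured with a minimal single-objective genetic algorithm; the paper itself uses the multi-objective NSGA-II with EnergyPlus, so the toy thermal model, parameter, and GA settings below are all invented stand-ins:

```python
import random

# Minimal single-objective GA sketch of envelope calibration: search a
# wall U-value so a toy steady-state model reproduces "measured" indoor
# temperatures. Everything here is an invented stand-in for the paper's
# NSGA-II + EnergyPlus setup.
random.seed(0)

def model(u):                          # toy indoor-temperature model
    t_out, gains = 0.0, 90.0
    return t_out + gains / (10.0 + 5.0 * u)

measured = model(1.2)                  # pretend 1.2 is the true U-value

def fitness(u):
    return -abs(model(u) - measured)   # error metric, negated

pop = [random.uniform(0.1, 3.0) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                 # elitist selection
    children = []
    for _ in range(20):
        p1, p2 = random.sample(parents, 2)
        child = (p1 + p2) / 2 + random.gauss(0, 0.05)  # crossover + mutation
        children.append(min(3.0, max(0.1, child)))
    pop = parents + children
best = max(pop, key=fitness)
```

    NSGA-II replaces the single fitness with Pareto ranking over several error metrics (e.g. per-zone temperature errors), but the select/crossover/mutate loop has the same shape.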

  8. Use of calibration methodology of gamma cameras for the workers surveillance using a thyroid simulator; Uso de una metodologia de calibracion de camaras gamma para la vigilancia de trabajadores usando un simulador de tiroides

    Energy Technology Data Exchange (ETDEWEB)

    Alfaro, M.; Molina, G.; Vazquez, R.; Garcia, O., E-mail: mercedes.alfaro@inin.gob.m [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2010-09-15

    In Mexico a significant number of nuclear medicine centers are in operation, so a risk of accidents related to the transport and handling of the open sources used in nuclear medicine exists. The National Institute of Nuclear Research (ININ) aims to establish a simple and feasible methodology for the surveillance of workers in the field of nuclear medicine. This radiological surveillance can also be applied to the public in the event of a radiological accident. To achieve this, it is proposed to use the equipment available in nuclear medicine centers, together with the neck-thyroid simulators made by ININ, to calibrate the gamma cameras. Gamma cameras include among their components elements that form spectrometric systems like those employed in the evaluation of internal incorporation by direct measurements; therefore, besides their use for diagnostic imaging, they can be calibrated with anthropomorphic simulators, and also with point sources, for the quantification of the activity of radionuclides distributed homogeneously in the human body or located in specific organs. Within the project IAEA-ARCAL-RLA/9/049-LXXVIII -Harmonization of internal dosimetry procedures- in which 9 countries participated (Argentina, Brazil, Colombia, Cuba, Chile, Mexico, Peru, Uruguay and Spain), a gamma camera calibration protocol for the in vivo determination of radionuclides was developed. The protocol is the basis for establishing an integrated network in Latin America to respond to emergencies, using the nuclear medicine centers of public hospitals in the region. The objective is to achieve the appropriate radiological protection of workers, essential for the safe and acceptable use of radiation, radioactive materials and nuclear energy. (Author)

  9. Noninvasive method for the calibration of the peak voltage (kVp) meters

    International Nuclear Information System (INIS)

    Macedo, E.M.; Navarro, M.V.T.; Pereira, L.; Garcia, I.F.M.; Navarro, V.C.C.

    2015-01-01

    Quality control in diagnostic radiology is one of the mechanisms that minimize radiation exposure, and the measurement of tube voltage is one of the main tests in these procedures. The calibration of non-invasive tube voltage meters is therefore essential to maintain the metrological reliability of quality control tests. This work describes the implementation of a methodology for calibrating the quantity tube peak voltage by the substitution method, using a non-invasive standard meter, at LABPROSAUD-IFBA. The results showed good performance; when compared with calibrations by invasive methods, they showed a maximum difference of 4%, which is covered by the uncertainty ranges of the calibrations. (author)

  10. Towards better environmental performance of wastewater sludge treatment using endpoint approach in LCA methodology

    Directory of Open Access Journals (Sweden)

    Isam Alyaseri

    2017-03-01

    Full Text Available The aim of this study is to use the life cycle assessment (LCA) method to measure the environmental performance of the sludge incineration process in a wastewater treatment plant and to propose an alternative that can reduce the environmental impact. To show the damages caused by the treatment processes, the study used an endpoint approach to evaluate impacts on human health, ecosystem quality, and resources. A case study was conducted at the Bissell Point Wastewater Treatment Plant in Saint Louis, Missouri, U.S. Plant-specific data along with literature data from technical publications were used to build an inventory and then analyze the environmental burdens from the sludge handling unit in the year 2011. The impact assessment method chosen was ReCiPe 2008. The existing scenario (dewatering, multiple hearth incineration, ash to landfill) was evaluated, and three alternative scenarios (fluid bed incineration, and anaerobic digestion with and without land application, with energy recovery from heat or biogas) were proposed and analyzed to find the one with the least environmental impact. The existing scenario shows that the most significant impacts relate to depletion of resources and damage to human health. These impacts come mainly from the operation phase (electricity and fuel consumption) and emissions related to combustion. The alternatives showed better performance than the existing scenario. Using the ReCiPe endpoint methodology, and among the three alternatives tested, anaerobic digestion had the best overall environmental performance. It is recommended to convert to fluid bed incineration if the concerns are more about human health, or to anaerobic digestion if the concerns are more about depletion of resources. The endpoint approach may simplify the outcomes of this study as follows: if the plant is converted to fluid bed incineration, it could prevent an average of 43.2 DALYs in human life, save 0.059 species in the area

  11. Discovering the Effects-Endstate Linkage: Using Soft Systems Methodology to Perform EBO Mission Analysis

    National Research Council Canada - National Science Library

    Young, Jr, William E

    2005-01-01

    .... EBO mission analysis is shown to be more problem structuring than problem solving. A new mission analysis process is proposed using a modified version of Soft Systems Methodology to meet these challenges...

  12. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh; Ravva, Mahesh Kumar; Wang, Tonghui; Bredas, Jean-Luc

    2016-01-01

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus

  13. Least square methods and covariance matrix applied to the relative efficiency calibration of a Ge(Li) detector

    International Nuclear Information System (INIS)

    Geraldo, L.P.; Smith, D.L.

    1989-01-01

    Covariance matrix and least-squares methodology have been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices, which serve to properly represent the uncertainties of experimental data, are discussed. Calibration data fitting using least-squares methods has been performed for a particular experimental data set. (author) [pt
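The covariance-matrix least-squares fit this record describes can be sketched with generalized least squares. The efficiency points, the log-linear model, and the uncertainty structure (3% uncorrelated plus 1% fully correlated) below are all invented for illustration, not the paper's data.

```python
import numpy as np

# Hypothetical relative-efficiency points for a Ge(Li)-type detector;
# ln(efficiency) vs ln(energy) is modeled here as a straight line.
E = np.array([121.8, 344.3, 778.9, 964.1, 1408.0])   # gamma energy (keV)
eff = np.array([1.00, 0.45, 0.24, 0.20, 0.148])      # relative efficiency

# Assumed covariance of ln(eff): 3% uncorrelated plus 1% fully correlated.
V = np.diag((0.03 * np.ones_like(eff)) ** 2) + (0.01 ** 2) * np.ones((5, 5))

A = np.column_stack([np.ones_like(E), np.log(E)])    # ln eff = a + b ln E
y = np.log(eff)
Vinv = np.linalg.inv(V)
cov_p = np.linalg.inv(A.T @ Vinv @ A)                # parameter covariance
p = cov_p @ (A.T @ Vinv @ y)                         # GLS parameter estimate
print("a =", p[0], " b =", p[1])
```

A chi-square test of the residuals against V would then check whether the covariance matrix properly represents the data, in the spirit of the testing procedures the record mentions.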

  14. Sandia WIPP calibration traceability

    Energy Technology Data Exchange (ETDEWEB)

    Schuhen, M.D. [Sandia National Labs., Albuquerque, NM (United States); Dean, T.A. [RE/SPEC, Inc., Albuquerque, NM (United States)

    1996-05-01

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.

  15. Sandia WIPP calibration traceability

    International Nuclear Information System (INIS)

    Schuhen, M.D.; Dean, T.A.

    1996-05-01

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities

  16. Methods to produce calibration mixtures for anesthetic gas monitors and how to perform volumetric calculations on anesthetic gases.

    Science.gov (United States)

    Christensen, P L; Nielsen, J; Kann, T

    1992-10-01

    A simple procedure for making calibration mixtures of oxygen and the anesthetic gases isoflurane, enflurane, and halothane is described. One to ten grams of the anesthetic substance is evaporated in a closed, 11,361-cc glass bottle filled with oxygen gas at atmospheric pressure. The carefully mixed gas is used to calibrate anesthetic gas monitors. By comparison of calculated and measured volumetric results it is shown that at atmospheric conditions the volumetric behavior of anesthetic gas mixtures can be described with reasonable accuracy using the ideal gas law. A procedure is described for calculating the deviation from ideal gas behavior in cases in which this is needed.
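Under the ideal gas law, the volumetric calculation this record describes reduces to a mole-fraction computation. The sketch below assumes 20 °C, 1 atm, and nominal molar masses (isoflurane ≈ 184.5 g/mol, halothane ≈ 197.4 g/mol); it illustrates the principle only, not the paper's exact procedure.

```python
# Ideal-gas estimate of the anesthetic concentration obtained by evaporating
# a weighed mass of agent inside the O2-filled 11,361 cc bottle at 1 atm.
# Temperature and molar masses are illustrative assumptions.
R = 8.314          # gas constant, J/(mol K)
T = 293.15         # assumed room temperature, K
P = 101325.0       # atmospheric pressure, Pa
V = 11.361e-3      # bottle volume, m^3

def vol_percent(mass_g, molar_mass_g_mol):
    """Mole (= ideal-gas volume) fraction of the evaporated agent, in %."""
    n_agent = mass_g / molar_mass_g_mol    # mol of anesthetic vapour
    n_oxygen = P * V / (R * T)             # mol of O2 initially in the bottle
    return 100.0 * n_agent / (n_agent + n_oxygen)

for name, molar_mass, mass in [("isoflurane", 184.5, 5.0),
                               ("halothane", 197.4, 5.0)]:
    print(f"{mass:.0f} g {name}: {vol_percent(mass, molar_mass):.2f} vol%")
```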

  17. Development of evaluation methodology to assess the sodium fire suppression performance of leak collection tray

    International Nuclear Information System (INIS)

    Parida, F.C.; Rao, P.M.; Ramesh, S.S.; Somayajulu, P.A.; Malarvizhi, B.; Kannan, S.E.

    2005-01-01

    Full text of publication follows: Leakage of hot liquid sodium and its subsequent combustion in the form of a pool cannot be completely ruled out in a Fast Breeder Reactor (FBR) plant, in spite of the provision of adequate safety measures. To protect the plant systems from the hazardous effects of flame, heat and smoke, one of the passive protection devices used in FBR plants is the Leak Collection Tray (LCT). The design of the LCT is based on immediate channeling of burning liquid sodium from the funnel-shaped sloping cover tray (SCT) to the bottom sodium hold-up vessel (SHV), in which self-extinction of the fire occurs due to oxygen starvation. The SCT has one or three drain pipes and air vent pipes, depending on the type of design. In each experiment, a known amount of hot liquid sodium, ranging from 30 to 40 kg at 550 deg. C, was discharged on the LCT in the open air. Continuous on-line monitoring of temperature at strategic locations (∼ 28 points) was carried out. Colour videography was employed to take motion pictures of various time-dependent events such as sodium dumping, appearance of flame and release of smoke through the vent pipes. After self-extinction of the sodium fire, the LCT was allowed to cool overnight in an argon atmosphere. Solid samples of the sodium debris in the SCT and SHV were collected with a manual core drilling machine. The samples were subjected to chemical analysis to determine unburnt and burnt sodium. The sodium debris removed from the SCT and SHV was weighed separately. To assess the performance of the LCT, two different geometrical configurations of SCT, one made of stainless steel and the other of carbon steel, were used. Three broad phenomena are identified as the basis of the evaluation methodology: (a) thermal transients, i.e. heating and cooling of the bulk sodium in the SCT and SHV respectively; (b) post-test sodium debris distribution between the SCT and SHV; and (c) sodium combustion and smoke release behaviour. Under each category

  18. Development of evaluation methodology to assess the sodium fire suppression performance of leak collection tray

    Energy Technology Data Exchange (ETDEWEB)

    Parida, F.C.; Rao, P.M.; Ramesh, S.S.; Somayajulu, P.A.; Malarvizhi, B.; Kannan, S.E. [Engineering Safety Division, Safety Group, Indira Gandhi Centre for Atomic Research, Kalpakkam - 603102, Tamilnadu (India)

    2005-07-01

    Full text of publication follows: Leakage of hot liquid sodium and its subsequent combustion in the form of a pool cannot be completely ruled out in a Fast Breeder Reactor (FBR) plant, in spite of the provision of adequate safety measures. To protect the plant systems from the hazardous effects of flame, heat and smoke, one of the passive protection devices used in FBR plants is the Leak Collection Tray (LCT). The design of the LCT is based on immediate channeling of burning liquid sodium from the funnel-shaped sloping cover tray (SCT) to the bottom sodium hold-up vessel (SHV), in which self-extinction of the fire occurs due to oxygen starvation. The SCT has one or three drain pipes and air vent pipes, depending on the type of design. In each experiment, a known amount of hot liquid sodium, ranging from 30 to 40 kg at 550 deg. C, was discharged on the LCT in the open air. Continuous on-line monitoring of temperature at strategic locations (∼ 28 points) was carried out. Colour videography was employed to take motion pictures of various time-dependent events such as sodium dumping, appearance of flame and release of smoke through the vent pipes. After self-extinction of the sodium fire, the LCT was allowed to cool overnight in an argon atmosphere. Solid samples of the sodium debris in the SCT and SHV were collected with a manual core drilling machine. The samples were subjected to chemical analysis to determine unburnt and burnt sodium. The sodium debris removed from the SCT and SHV was weighed separately. To assess the performance of the LCT, two different geometrical configurations of SCT, one made of stainless steel and the other of carbon steel, were used. Three broad phenomena are identified as the basis of the evaluation methodology: (a) thermal transients, i.e. heating and cooling of the bulk sodium in the SCT and SHV respectively; (b) post-test sodium debris distribution between the SCT and SHV; and (c) sodium combustion and smoke release behaviour. Under each category

  19. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.F.; Liu, H.H.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as for Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  20. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.; Liu, H.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as for Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  1. Calibration Lessons Learned from Hyperion Experience

    Science.gov (United States)

    Casement, S.; Ho, K.; Sandor-Leahy, S.; Biggar, S.; Czapla-Myers, J.; McCorkel, J.; Thome, K.

    2009-12-01

    The use of hyperspectral imagers to provide climate-quality data sets, such as those expected from the solar reflective sensor on the Climate Absolute Radiance and Refractivity Observatory (CLARREO), imposes stringent radiometric calibration requirements. These requirements have been nearly met with broadband radiometers such as CERES, but high resolution spectrometers pose additional challenges. A review of the calibration processes for past space-based HSIs provides guidance on the calibration processes that will be needed for future sensors. In November 2000, the Earth Observer-1 (EO-1) platform was launched onboard a Boeing Delta II launch vehicle. The primary purpose of the EO-1 mission was to provide a technological testbed for spaceborne components. The platform has three sensors onboard, of which the hyperspectral imager (HSI) Hyperion is discussed here. The Hyperion sensor at the time had no comparable sensor in earth orbit, being the first grating-based, hyperspectral, civilian sensor in earth orbit. Ground and on-orbit calibration procedures, including all cross-calibration activities, have achieved an estimated instrument absolute radiometric error of 2.9% in the visible channel (0.4 - 1.0 microns) and 3.4% in the shortwave infrared (SWIR, 0.9 - 2.5 microns) channel (EO-1/Hyperion Early Orbit Checkout Report Part II On-Orbit Performance Verification and Calibration). This paper describes the key components of the Hyperion calibration process that are applicable to future HSI missions. The pre-launch methods relied on then newly-developed, detector-based methods. Subsequent vicarious methods, including cross-calibration with other sensors and the reflectance-based method, showed significant differences from the prelaunch calibration. Such a difference demonstrated the importance of the vicarious methods as well as pointing to areas for improvement in the prelaunch methods. We also identify areas where lessons learned from Hyperion regarding

  2. Lidar to lidar calibration

    DEFF Research Database (Denmark)

    Fernandez Garcia, Sergio; Villanueva, Héctor

    This report presents the result of the lidar-to-lidar calibration performed for a ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements, with measurement uncertainties provided by a measurement standard, and the corresponding lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10-minute mean wind speed measurements. The comparison of the lidar measurements of the wind direction with those from the reference lidar measurements is given for information only.
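The lidar-to-lidar relation this record describes can be sketched as a least-squares comparison of concurrent 10-minute mean wind speeds. The data below are synthetic, and the simple gain/offset model is an illustrative assumption, not the report's exact procedure.

```python
import numpy as np

# Synthetic 10-minute mean wind speeds for a reference lidar and a device
# under test; the gain, offset and scatter values are invented.
rng = np.random.default_rng(1)
ref = rng.uniform(4.0, 16.0, 200)                      # reference means, m/s
dut = 0.98 * ref + 0.10 + rng.normal(0.0, 0.15, 200)   # lidar under test

slope, offset = np.polyfit(ref, dut, 1)                # least-squares line
resid = dut - (slope * ref + offset)
print(f"gain = {slope:.3f}, offset = {offset:.3f} m/s, "
      f"residual std = {resid.std(ddof=2):.3f} m/s")
```

The residual scatter feeds the measurement uncertainty budget of the calibrated lidar.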

  3. Bridging the gap between neurocognitive processing theory and performance validity assessment among the cognitively impaired: a review and methodological approach.

    Science.gov (United States)

    Leighton, Angela; Weinborn, Michael; Maybery, Murray

    2014-10-01

    Bigler (2012) and Larrabee (2012) recently addressed the state of the science surrounding performance validity tests (PVTs) in a dialogue highlighting evidence for the valid and increased use of PVTs, but also for unresolved problems. Specifically, Bigler criticized the lack of guidance from neurocognitive processing theory in the PVT literature. For example, individual PVTs have applied the simultaneous forced-choice methodology using a variety of test characteristics (e.g., word vs. picture stimuli) with known neurocognitive processing implications (e.g., the "picture superiority effect"). However, the influence of such variations on classification accuracy has been inadequately evaluated, particularly among cognitively impaired individuals. The current review places the PVT literature in the context of neurocognitive processing theory, and identifies potential methodological factors to account for the significant variability we identified in classification accuracy across current PVTs. We subsequently evaluated the utility of a well-known cognitive manipulation to provide a Clinical Analogue Methodology (CAM), that is, to alter the PVT performance of healthy individuals to be similar to that of a cognitively impaired group. Initial support was found, suggesting the CAM may be useful alongside other approaches (analogue malingering methodology) for the systematic evaluation of PVTs, particularly the influence of specific neurocognitive processing components on performance.

  4. Combining soft system methodology and pareto analysis in safety management performance assessment : an aviation case

    NARCIS (Netherlands)

    Karanikas, Nektarios

    2016-01-01

    Although reengineering is strategically advantageous for organisations in order to keep functional and sustainable, safety must remain a priority and respective efforts need to be maintained. This paper suggests the combination of soft system methodology (SSM) and Pareto analysis on the scope of

  5. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    role in the calibration while wavelength selection plays a marginal role and the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.

  6. One step geometrical calibration method for optical coherence tomography

    International Nuclear Information System (INIS)

    Díaz, Jesús Díaz; Ortmaier, Tobias; Stritzel, Jenny; Rahlves, Maik; Reithmeier, Eduard; Roth, Bernhard; Majdani, Omid

    2016-01-01

    We present a novel one-step calibration methodology for geometrical distortion correction in optical coherence tomography (OCT). A calibration standard especially designed for OCT is introduced, which consists of an array of inverse pyramidal structures. The use of multiple landmarks situated on four different height levels on the pyramids allows a 3D geometrical calibration to be performed. The calibration procedure itself is based on a parametric model of the OCT beam propagation. It is validated by experimental results and enables the reduction of systematic errors by more than one order of magnitude. In the future, our results can improve OCT image reconstruction and interpretation for medical applications such as real-time monitoring of surgery. (paper)

  7. New calibration method using low cost MEM IMUs to verify the performance of UAV-borne MMS payloads.

    Science.gov (United States)

    Chiang, Kai-Wei; Tsai, Meng-Lun; Naser, El-Sheimy; Habib, Ayman; Chu, Chien-Hsun

    2015-03-19

    Spatial information plays a critical role in remote sensing and mapping applications such as environment surveying and disaster monitoring. An Unmanned Aerial Vehicle (UAV)-borne mobile mapping system (MMS) can accomplish rapid spatial information acquisition under limited sky conditions with better mobility and flexibility than other means. This study proposes a long endurance Direct Geo-referencing (DG)-based fixed-wing UAV photogrammetric platform and two DG modules that each use different commercial Micro-Electro Mechanical Systems' (MEMS) tactical grade Inertial Measurement Units (IMUs). Furthermore, this study develops a novel kinematic calibration method which includes lever arms, boresight angles and camera shutter delay to improve positioning accuracy. The new calibration method is then compared with the traditional calibration approach. The results show that the accuracy of the DG can be significantly improved by flying at a lower altitude using the new higher specification hardware. The new proposed method improves the accuracy of DG by about 20%. The preliminary results show that two-dimensional (2D) horizontal DG positioning accuracy is around 5.8 m at a flight height of 300 m using the newly designed tactical grade integrated Positioning and Orientation System (POS). The positioning accuracy in three-dimensions (3D) is less than 8 m.
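The direct geo-referencing chain that the lever-arm and boresight calibration refines can be sketched as a frame transformation: ground position = GNSS/INS position + attitude-rotated (lever arm + boresight-rotated camera ray). All numbers below (positions, angles, offsets) are invented, and only a yaw rotation is used for brevity.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z-axis (yaw only, for brevity)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

r_gnss = np.array([1000.0, 2000.0, 300.0])   # platform position (m), invented
R_body = rot_z(30.0)                          # body-to-mapping-frame attitude
lever_arm = np.array([0.15, 0.05, -0.20])     # IMU-to-camera offset (m)
boresight = rot_z(0.5)                        # small camera misalignment
ray_cam = np.array([0.0, 0.0, -50.0])         # scaled image ray (m)

# Direct geo-referencing: rotate the boresight-corrected ray plus lever arm
# into the mapping frame and add the GNSS/INS position.
r_ground = r_gnss + R_body @ (lever_arm + boresight @ ray_cam)
print(r_ground.round(3))
```

Kinematic calibration estimates the lever arm, boresight angles, and shutter delay so that this chain reproduces surveyed ground control points.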

  8. New Calibration Method Using Low Cost MEM IMUs to Verify the Performance of UAV-Borne MMS Payloads

    Directory of Open Access Journals (Sweden)

    Kai-Wei Chiang

    2015-03-01

    Full Text Available Spatial information plays a critical role in remote sensing and mapping applications such as environment surveying and disaster monitoring. An Unmanned Aerial Vehicle (UAV)-borne mobile mapping system (MMS) can accomplish rapid spatial information acquisition under limited sky conditions with better mobility and flexibility than other means. This study proposes a long endurance Direct Geo-referencing (DG)-based fixed-wing UAV photogrammetric platform and two DG modules that each use different commercial Micro-Electro Mechanical Systems’ (MEMS) tactical grade Inertial Measurement Units (IMUs). Furthermore, this study develops a novel kinematic calibration method which includes lever arms, boresight angles and camera shutter delay to improve positioning accuracy. The new calibration method is then compared with the traditional calibration approach. The results show that the accuracy of the DG can be significantly improved by flying at a lower altitude using the new higher specification hardware. The new proposed method improves the accuracy of DG by about 20%. The preliminary results show that two-dimensional (2D) horizontal DG positioning accuracy is around 5.8 m at a flight height of 300 m using the newly designed tactical grade integrated Positioning and Orientation System (POS). The positioning accuracy in three-dimensions (3D) is less than 8 m.

  9. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment
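The error metrics at the heart of the calibration problem described above, the Nash-Sutcliffe (NS) coefficient and bias, can be computed directly. The streamflow values below are invented; splitting the record by flow regime shows how high-flow and whole-record NS can differ, which is why objectives conflict.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Total simulated volume error relative to observed, in %."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

# Invented daily streamflow values (m^3/s)
obs = np.array([3.1, 4.8, 10.2, 22.5, 14.0, 6.3, 4.1])
sim = np.array([2.9, 5.1, 9.0, 20.1, 15.2, 6.8, 3.8])
high = obs > 10.0                      # split the record by flow regime
print("NS (all flows): ", round(nash_sutcliffe(obs, sim), 3))
print("NS (high flows):", round(nash_sutcliffe(obs[high], sim[high]), 3))
print("PBIAS (%):      ", round(percent_bias(obs, sim), 2))
```

A bi-objective calibration would treat, e.g., high-flow NS and low-flow NS as separate axes and estimate the Pareto tradeoff between them rather than a weighted sum.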

  10. Performance analysis for disposal of mixed low-level waste. 1: Methodology

    International Nuclear Information System (INIS)

    Waters, R.D.; Gruebel, M.M.

    1999-01-01

    A simple methodology has been developed for evaluating the technical capabilities of potential sites for disposal of mixed low-level radioactive waste. The results of the evaluation are expressed as permissible radionuclide concentrations in disposed waste. The methodology includes an analysis of three separate pathways: (1) releases of radionuclides to groundwater; (2) releases of potentially volatile radionuclides to the atmosphere; and (3) the consequences of inadvertent intrusion into a disposal facility. For each radionuclide, its limiting permissible concentration in disposed waste is the lowest of the permissible concentrations determined from each of the three pathways. These permissible concentrations in waste at an evaluated site can be used to assess the capability of the site to dispose of waste streams containing multiple radionuclides
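The limiting-concentration rule in this record, taking the lowest of the three pathway-specific permissible concentrations for each radionuclide, can be sketched in a few lines; all nuclides, units and values below are invented.

```python
# Permissible disposal concentration per pathway for each radionuclide;
# nuclides, units (e.g., Bq/m^3) and values are invented for illustration.
pathway_limits = {
    "C-14": {"groundwater": 1.2e4, "atmospheric": 8.0e5, "intruder": 3.5e4},
    "H-3":  {"groundwater": 5.0e6, "atmospheric": 2.0e5, "intruder": 9.0e7},
}

# The limiting permissible concentration is the minimum over the pathways.
limiting = {nuclide: min(limits.values())
            for nuclide, limits in pathway_limits.items()}
for nuclide, value in limiting.items():
    print(f"{nuclide}: {value:.1e}")
```

For a waste stream containing multiple radionuclides, a sum-of-fractions check against these limiting values would typically follow.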

  11. Time, Non-representational Theory and the "Performative Turn"—Towards a New Methodology in Qualitative Social Research

    Directory of Open Access Journals (Sweden)

    Peter Dirksmeier

    2008-05-01

    Full Text Available Because of their constitution, the use of performative techniques in qualitative social research must deal with a paradox. Acting as performance takes place in the present, and it takes place just once. One result of this is that every representation of a performance, be it as text, discussion or film, refers to the past. Performative social research solves this paradox by conceptualising performance as a kind of liminal phase of a ritual. Our thesis is that by simply outsourcing the problem of the present to the theory of ritual, performative techniques commit the logical mistake of genetic fallacy, i.e., the mistake of forgetting that the primary value or meaning of an event has no necessary connection with its genesis in history. Therefore, a new methodology for qualitative social research after the performative turn requires a theoretical position which does not fall back into a notion of causality as the temporal sequence of cause and effect, as maintained by ritual theory. In this essay we suggest a "non-representational theory" for this venture, and point out how a methodology for qualitative research could be constituted "after" the performative turn. URN: urn:nbn:de:0114-fqs0802558

  12. A proposed methodology for performing risk analysis of state radiation control programs

    International Nuclear Information System (INIS)

    Dornsife, W.P.

    1996-01-01

    This paper is comprised of viewgraphs from a conference presentation. Topics discussed include barriers to effective risk assessment and management, and real versus perceived risk for various radiation programs in the state of Pennsylvania. Calculation results for Pennsylvania are provided for low-level radioactive waste transportation risks, indoor radon risk, and cancer morbidity risk from x-rays. A methodology for prioritizing radiation regulatory programs based on risk is presented with calculations for various Pennsylvania programs

  13. Squaring the Project Management Circle: Updating the Cost, Schedule, and Performance Methodology

    Science.gov (United States)

    2016-04-30

MIT Sloan Management Review. Retrieved from http://sloanreview.mit.edu/article/the-new-industrial-engineering-information-technology-and-business-process-redesign/ ...critical variable that must be addressed by project managers. The research methodology consists of a system-focused approach based on an extensive review ...doi.org/10.1016/j.ijproman.2007.01.004 Baccarini, D. (1996). The concept of project complexity–A review.

  14. Performance Evaluation and Measurement of the Organization in Strategic Analysis and Control: Methodological Aspects

    OpenAIRE

    Živan Ristić; Neđo Balaban

    2006-01-01

Information acquired by measuring and evaluation is a necessary condition for good decision-making in strategic management. This work deals with: (a) methodological aspects of evaluation (kinds of evaluation, metaevaluation) and measurement (the supposition of isomorphism in measurement, kinds and levels of measurement, errors in measurement and the basic characteristics of measurement); (b) evaluation and measurement of the potential and accomplishments of the organization in Kaplan-Norton perspect...

  15. Scanner calibration revisited

    Directory of Open Access Journals (Sweden)

    Pozhitkov Alexander E

    2010-07-01

Full Text Available Abstract Background Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results We found that the initial analysis performed by Shi et al. did not take into account the autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which explicitly accounts for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
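The power-law-plus-autofluorescence response described above can be sketched numerically. Assuming the slide autofluorescence S0 has been measured separately (e.g., from blank spots), subtracting it linearizes the model S = S0 + a·q^b in log-log space, where an ordinary least-squares fit recovers a and b (the abstract's weighted fit is simplified here to plain OLS, and all data are synthetic):

```python
import math

def fit_power_law(quantities, signals, autofluorescence):
    """Fit S = S0 + a * q**b by ordinary least squares in log-log space,
    after subtracting the separately measured autofluorescence S0."""
    xs = [math.log(q) for q in quantities]
    ys = [math.log(s - autofluorescence) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_a = my - b * mx
    return math.exp(log_a), b

# Synthetic scanner response: 50 counts autofluorescence, a = 2.0, b = 0.95
quantities = [10, 30, 100, 300, 1000, 3000]
signals = [50 + 2.0 * q ** 0.95 for q in quantities]
a, b = fit_power_law(quantities, signals, 50.0)
```

With noise-free synthetic data the fit recovers the generating parameters; the point is that omitting the S0 subtraction would bend the log-log relationship and distort the apparent scanner response.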

  16. Laboratory Performance of Five Selected Soil Moisture Sensors Applying Factory and Own Calibration Equations for Two Soil Media of Different Bulk Density and Salinity Levels

    Science.gov (United States)

    Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma

    2016-01-01

Non-destructive soil water content determination is a fundamental component of many agricultural and environmental applications. The accuracy and cost of the sensors define the measurement scheme and the ability to fit natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors, usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH2O EC-10, ECH2O EC-20, ECH2O EC-5, and ECH2O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out at nine consecutive soil water contents, from dry to saturated conditions (pure water and saline water). The gravimetric method was used as the reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis ranked the factors contributing to the total variation as follows: calibration (42%), sensor type (29%), material (18%), and dry bulk density (11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH2O EC-5 and ECH2O TE, which also performed surprisingly well in saline conditions. PMID:27854263
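The gain from an "own" calibration over a factory equation can be illustrated with a toy example: fit a linear calibration from sensor raw output to gravimetrically determined volumetric water content, then compare its RMSE with that of an assumed factory equation (all numbers below are hypothetical, not the study's data):

```python
import math

# Hypothetical paired data: sensor raw output (mV) versus gravimetric
# volumetric water content (m3/m3) for one sensor in one soil medium.
raw = [120, 260, 410, 540, 660, 790, 900, 1005, 1100]
vwc_ref = [0.02, 0.07, 0.13, 0.18, 0.24, 0.29, 0.34, 0.39, 0.43]

def fit_linear(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def rmse(pred, ref):
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

slope, intercept = fit_linear(raw, vwc_ref)
own = [slope * r + intercept for r in raw]          # own calibration
factory = [0.00045 * r - 0.04 for r in raw]          # assumed factory equation
```

Here the soil-specific fit tracks the gravimetric reference more closely than the generic equation, which is the pattern the variance component analysis above attributes mostly to the calibration factor.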

  17. The effects of overtime work and task complexity on the performance of nuclear plant operators: A proposed methodology

    International Nuclear Information System (INIS)

    Banks, W.W.; Potash, L.

    1985-01-01

This document presents a very general methodology for determining the effect of overtime work and task complexity on operator performance in response to simulated out-of-limit nuclear plant conditions. The independent variables consist of three levels of overtime work and three levels of task complexity. Multiple dependent performance measures are proposed for use and discussion. Overtime work is operationally defined in terms of the number of hours worked by nuclear plant operators beyond the traditional 8 hours per shift. Task complexity is operationalized in terms of the number of operator tasks required to remedy a given plant anomalous condition and bring the plant back to a "within limits" or "normal" steady-state condition. The proposed methodology would employ a two-factor repeated-measures design along with the analysis of variance (linear) model.

  18. SPOTS Calibration Example

    Directory of Open Access Journals (Sweden)

    Patterson E.

    2010-06-01

Full Text Available The results are presented using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement (SPOTS) to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.

  19. Calibrating nacelle lidars

    Energy Technology Data Exchange (ETDEWEB)

    Courtney, M.

    2013-01-15

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)
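For a two-beam nacelle lidar, the line-of-sight calibration rests on the projection of the horizontal wind onto each beam. A minimal sketch of that geometry, assuming purely horizontal flow and beams at azimuths ±φ about the centerline (the standard two-beam reconstruction, not the report's calibration procedure):

```python
import math

def radial_speed(u, wind_dir_rad, beam_azimuth_rad):
    """Line-of-sight (radial) speed seen by one beam for a horizontal wind
    of speed u blowing from direction wind_dir_rad (relative to centerline)."""
    return u * math.cos(wind_dir_rad - beam_azimuth_rad)

def reconstruct(u_r1, u_r2, half_angle_rad):
    """Recover the horizontal wind speed from the two radial speeds of
    beams at azimuths +half_angle and -half_angle."""
    u_along = (u_r1 + u_r2) / (2 * math.cos(half_angle_rad))   # u*cos(dir)
    u_across = (u_r1 - u_r2) / (2 * math.sin(half_angle_rad))  # u*sin(dir)
    return math.hypot(u_along, u_across)

# Example: 10 m/s wind, 0.3 rad off the centerline, beams at +/-15 degrees
u_r1 = radial_speed(10.0, 0.3, math.radians(15))
u_r2 = radial_speed(10.0, 0.3, -math.radians(15))
u_est = reconstruct(u_r1, u_r2, math.radians(15))
```

The line-of-sight method calibrates each radial speed u_r individually against a reference sensor; any bias there propagates through this reconstruction into the reported wind speed.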

  20. “How many sums can I do”? : Performative strategies and diffractive thinking as methodological tools for rethinking mathematical subjectivity

    OpenAIRE

    Palmer, Anna

    2011-01-01

The aim of this article is to illustrate how the understanding of mathematical subjectivity changes when moving theoretically and methodologically from discursive and performative thinking, as suggested by Judith Butler (1990, 1993, 1997), to agential realist and diffractive thinking, inspired by Karen Barad's theories (2007, 2008). To show this I have examined narrative memory stories about mathematics written by students participating in Teacher Education maths courses. I pro...

  1. Performance evaluation of CT measurements made on step gauges using statistical methodologies

    DEFF Research Database (Denmark)

    Angel, J.; De Chiffre, L.; Kruth, J.P.

    2015-01-01

In this paper, a study is presented in which statistical methodologies were applied to evaluate the measurement of step gauges on an X-ray computed tomography (CT) system. In particular, the effects of step gauge material density and orientation were investigated. The step gauges consist of uni- and bidirectional lengths. By confirming the repeatability of measurements made on the test system, the number of required scans in the design of experiment (DOE) was reduced. The statistical model was checked using model adequacy principles; model adequacy checking is an important step in validating

  2. Performance of neutron activation analysis in the evaluation of bismuth iodide purification methodology

    International Nuclear Information System (INIS)

    Armelin, Maria Jose A.; Ferraz, Caue de Mello; Hamada, Margarida M.

    2015-01-01

Bismuth tri-iodide (BiI3) is an attractive material for use as a semiconductor. In this paper, BiI3 crystals have been grown by the vertical Bridgman technique using commercially available powder. The impurities were evaluated by instrumental neutron activation analysis (INAA). The results show that INAA is an analytical method appropriate for monitoring the impurities Ag, As, Br, Cr, K, Mo, Na and Sb in the various stages of the BiI3 purification methodology. (author)

  3. Effects and detection of raw material variability on the performance of near-infrared calibration models for pharmaceutical products.

    Science.gov (United States)

    Igne, Benoit; Shi, Zhenqi; Drennen, James K; Anderson, Carl A

    2014-02-01

The impact of raw material variability on the prediction ability of a near-infrared calibration model was studied. Calibrations, developed from a quaternary mixture design comprising theophylline anhydrous, lactose monohydrate, microcrystalline cellulose, and soluble starch, were challenged by intentional variation of raw material properties. A design with two theophylline physical forms, three lactose particle sizes, and two starch manufacturers was created to test model robustness. Further challenges to the models were accomplished through environmental conditions. Along with full-spectrum partial least squares (PLS) modeling, variable selection by dynamic backward PLS and genetic algorithms was utilized in an effort to mitigate the effects of raw material variability. In addition to evaluating models based on their prediction statistics, prediction residuals were analyzed by analyses of variance and model diagnostics (Hotelling's T² and Q residuals). Full-spectrum models were significantly affected by lactose particle size. Models developed by selecting variables gave lower prediction errors and proved to be a good approach to limit the effect of changing raw material characteristics. Hotelling's T² and Q residuals provided valuable information that was not detectable when studying only prediction trends. Diagnostic statistics were demonstrated to be critical in the appropriate interpretation of the prediction of quality parameters. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
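Hotelling's T² and Q residual diagnostics of the kind used above can be sketched with a principal component model of the calibration spectra: T² measures distance within the model plane, Q the residual distance off it. The spectra below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration spectra": 50 samples x 20 wavelengths, rank-2 structure
scores_true = rng.normal(size=(50, 2))
loadings_true = rng.normal(size=(2, 20))
X = scores_true @ loadings_true + 0.01 * rng.normal(size=(50, 20))

# PCA on mean-centered calibration data
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
P = Vt[:k].T                             # retained loadings
lam = (S[:k] ** 2) / (Xc.shape[0] - 1)   # variance of each score

def diagnostics(x):
    """Hotelling T^2 (distance inside the model plane) and Q residual
    (distance off the model plane) for one new spectrum x."""
    xc = x - mean
    t = xc @ P                           # scores of the new spectrum
    t2 = float(np.sum(t ** 2 / lam))
    resid = xc - t @ P.T
    q = float(resid @ resid)
    return t2, q

t2_mean, q_mean = diagnostics(mean)        # the calibration mean itself
t2_out, q_out = diagnostics(mean + 5.0)    # an off-model (shifted) spectrum
```

A new raw material lot that shifts the spectra off the calibration plane shows up as an elevated Q even when the predicted value still looks plausible, which is exactly the information the abstract says prediction trends alone miss.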

  4. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William; Dietiker, Jean-François; Li, Tingwen; Sarkar, Avik; Sun, Xin

    2017-05-01

A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
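The "target efficiency with stated confidence" calculation can be sketched with a toy Monte Carlo ensemble: propagate an uncertain, lab-calibrated rate parameter through a stand-in efficiency model, then scan gas flow rates for those meeting 90% capture in at least 95% of the ensemble. The first-order model and every number below are assumptions for illustration, not the study's multiphase CFD model or data:

```python
import math
import random

random.seed(42)

def capture_efficiency(gas_flow, rate_const):
    """Toy first-order adsorption model: efficiency falls as gas flow rises
    (shorter residence time). A stand-in, not the study's reactive flow model."""
    return 1.0 - math.exp(-rate_const / gas_flow)

# Rate-constant uncertainty carried over from (hypothetical) lab calibration
rate_samples = [random.gauss(3.0, 0.3) for _ in range(5000)]

def confidence_of_target(gas_flow, target=0.90):
    """Fraction of the ensemble meeting the capture target at this flow."""
    hits = sum(1 for k in rate_samples
               if capture_efficiency(gas_flow, k) >= target)
    return hits / len(rate_samples)

# Scan flows and keep those meeting 90% capture with 95% confidence
flows = [f / 10 for f in range(5, 21)]   # 0.5 .. 2.0, arbitrary units
feasible = [f for f in flows if confidence_of_target(f) >= 0.95]
max_flow = max(feasible)
```

The same pattern, with the ensemble produced by the upscaled simulation rather than a closed-form model, yields the design statement quoted in the abstract.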

  5. Instrumentation calibration

    International Nuclear Information System (INIS)

    Mack, D.A.

    1976-08-01

Procedures for the calibration of different types of laboratory equipment are described. Provisions for maintaining the integrity of reference and working standards, traceable back to a national standard, are discussed. Methods of validation and certification are included. An appendix lists available publications and services of national standardizing agencies.

  6. Performance Assessment of the Pico OWC Power Plant Following the Equimar Methodology

    DEFF Research Database (Denmark)

    Pecher, Arthur; Crom, I. Le; Kofoed, Jens Peter

    2011-01-01

    This paper presents the power performance of the Oscillating Water Column (OWC) wave energy converter installed on the Island of Pico. The performance assessment of the device is based on real performance data gathered over the last years during normal operation. In addition to the estimation...

  7. Methodology for Selection of Economic Performance Factors in the Area of Information and Communication Activities

    Directory of Open Access Journals (Sweden)

    Jana Hornungová

    2015-01-01

Full Text Available The article presents one part of the author's research, focused on business performance. The aim of this paper is to identify and introduce economic factors of a corporate performance system that are an important part of performance, because they can help an organization define and measure progress toward its goals. The aim also included the determination of Key Performance Indicators (KPIs). The first step in the evaluation of performance is a projective approach: the future development of the company can be inferred from its current and ongoing activities. In relation to this idea, economic indicators are fundamental to the performance scale. To find these factors, theoretical information from the area of KPIs and data from primary research were used. These data were tested through mathematical-statistical analysis, in this case factor analysis.

  8. The professional methodological teaching performance of the professor of Physical education. Set of parameters for its measurement

    Directory of Open Access Journals (Sweden)

    Orlando Pedro Suárez Pérez

    2017-07-01

Full Text Available This work was developed due to the need to address the difficulties found among the Physical Education teachers of the municipality of San Juan and Martínez during the teaching-learning process of Basketball, difficulties which threaten the quality of the classes, sports results and the preparation of schoolchildren for life. The objective is to propose parameters that allow measuring the professional methodological teaching performance of these teachers. The personalized approach of the research made possible the diagnosis of the 26 teachers taken as a sample, revealing the traits that distinguish their efficiency and determining their potentialities and deficiencies. During the research process, theoretical, empirical and statistical methods were used, which made it possible to corroborate the real existence of the problem and to evaluate its impact, revealing a positive transformation in pedagogical practice. The results provide a concrete and viable answer for improving the evaluation of the teaching-methodological component of the Physical Education teacher, and constitute an important guidance material for methodologists and managers concerned with cognitive, procedural and attitudinal performance, in order to build new knowledge on preceding knowledge, lead a formative process with a contemporary vision, and offer methodological resources to control the quality of Physical Education lessons.

  9. The definitive analysis of the Bendandi's methodology performed with a specific software

    Science.gov (United States)

    Ballabene, Adriano; Pescerelli Lagorio, Paola; Georgiadis, Teodoro

    2015-04-01

    The presentation aims to clarify the "Method Bendandi" supposed, in the past, to be able to forecast earthquakes and never let expressly resolved by the geophysicist from Faenza to posterity. The geoethics implications of the Bendandi's forecasts, and those that arise around the speculation of possible earthquakes inferred from suppositories "Bendandiane" methodologies, rose up in previous years caused by social alarms during supposed occurrences of earthquakes which never happened but where widely spread by media following some 'well informed' non conventional scientists. The analysis was conducted through an extensive literature search of the archive 'Raffaele Bendandi' at Geophy sical Observatory of Faenza and the forecasts analyzed utilising a specially developed software, called "Bendandiano Dashboard", that can reproduce the planetary configurations reported in the graphs made by Italian geophysicist. This analysis should serve to clarify 'definitively' what are the basis of the Bendandi's calculations as well as to prevent future unwarranted warnings issued on the basis of supposed prophecies and illusory legacy documents.

  10. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code is important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
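The calibration/validation distinction in the first two sentences can be made concrete in a few lines: adjust a parameter to maximize agreement with one experimental data set, then quantify predictive capability against data withheld from the fit. All numbers below are hypothetical:

```python
# Calibration data ("experimental" observations used to tune the model)
calib_x = [1.0, 2.0, 3.0, 4.0]
calib_y = [2.1, 3.9, 6.2, 7.8]

# Independent data reserved for validation, never seen during calibration
valid_x = [5.0, 6.0]
valid_y = [10.1, 11.8]

# Calibration: choose k in the model y = k*x minimizing squared error
# (closed-form least-squares solution for a one-parameter linear model)
k = sum(x * y for x, y in zip(calib_x, calib_y)) / sum(x * x for x in calib_x)

# Validation: quantify predictive capability on the held-out data
rmse = (sum((k * x - y) ** 2
            for x, y in zip(valid_x, valid_y)) / len(valid_x)) ** 0.5
```

A sensitivity analysis would then ask how strongly the validation metric responds to perturbations of k and of the data, which is why the paper treats the three activities as intertwined.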

  11. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    Science.gov (United States)

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following a description of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  12. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Reda, Ibrahim; Robinson, Justin

    2016-06-02

This study addresses the effect of calibration methodologies on calibration responsivities and the resulting impact on radiometric measurements. The calibration responsivities used in this study are provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle and responsivity as a function of solar zenith angle determined by clear-sky comparisons to reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison of the test radiometer under calibration to a reference radiometer of the same type. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately enable a reduction in radiometric measurement uncertainties and assist in developing consensus on a standard for calibration.

  13. The performance of the INER improved free-air ionization chamber in the comparison of air kerma calibration coefficients for medium-energy X-rays

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.-H. E-mail: jhlee@iner.gov.tw; Kotler, L.H.; Bueermann, Ludwig; Hwang, W.-S.; Chiu, J.-H.; Wang, C.-F

    2005-01-01

    This paper describes modifications to an original design, correction factors and uncertainty evaluations for an improved free-air ionization chamber constructed at the Institute of Nuclear Energy Research (INER, Taiwan). In addition, a comparison of secondary standard air kerma calibration coefficients for 100-250 kV medium-energy X-rays was performed to verify the experimental accuracy and measurement consistency of the improved chamber. The comparison results showed a satisfactory agreement in the measurements which were within the combined expanded uncertainties (k=2)

  14. A Probabilistic Design Methodology for a Turboshaft Engine Overall Performance Analysis

    Directory of Open Access Journals (Sweden)

    Min Chen

    2014-05-01

Full Text Available In reality, the cumulative effect of the many uncertainties in engine component performance may stack up to affect the engine overall performance. This paper aims to quantify the impact of uncertainty in engine component performance on the overall performance of a turboshaft engine based on the Monte-Carlo probabilistic design method. A novel probabilistic model of a turboshaft engine, consisting of a Monte-Carlo simulation generator, a traditional nonlinear turboshaft engine model, and a probability statistical model, was implemented to predict this impact. One of the fundamental results shown herein is that uncertainty in component performance has a significant impact on the engine overall performance prediction. This paper also shows that, taking into consideration the uncertainties in component performance, the turbine entry temperature and overall pressure ratio based on the probabilistic design method should increase by 0.76% and 8.33%, respectively, compared with those of the deterministic design method. The comparison shows that the probabilistic approach provides a more credible and reliable way to assign the design space for a target engine overall performance.
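The Monte-Carlo layer of such a probabilistic model can be sketched as follows: sample component efficiencies from assumed uncertainty distributions, push each draw through a surrogate for the engine model (here deliberately trivial), and read off statistics of the overall performance. The surrogate and all numbers are illustrative, not the paper's turboshaft cycle model:

```python
import random
import statistics

random.seed(1)

def overall_performance(comp_eff, turb_eff):
    """Toy surrogate for engine overall performance: product of component
    efficiencies scaled to a nominal specific power. Not a real cycle model."""
    return 1000.0 * comp_eff * turb_eff   # nominal kW scale

# Component-level uncertainty (assumed means and standard deviations)
samples = [
    overall_performance(random.gauss(0.85, 0.01), random.gauss(0.90, 0.01))
    for _ in range(10000)
]

mean_power = statistics.mean(samples)
p5 = sorted(samples)[int(0.05 * len(samples))]   # 5th percentile of the ensemble
```

In the paper's setting, the ensemble statistics (rather than a single deterministic run) are what justify margins such as the quoted increases in turbine entry temperature and overall pressure ratio.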

  15. Stepwise-refinement for performance: a methodology for many-core programming

    NARCIS (Netherlands)

    Hijma, P.; van Nieuwpoort, R.V.; Jacobs, C.J.H.; Bal, H.E.

    2015-01-01

    Many-core hardware is targeted specifically at obtaining high performance, but reaching high performance is often challenging because hardware-specific details have to be taken into account. Although there are many programming systems that try to alleviate many-core programming, some providing a

  16. Towards a performance assessment methodology using computational simulation for air distribution system designs in operating rooms

    NARCIS (Netherlands)

    Melhado, M.D.A.

    2012-01-01

    One of the important performance requirements for an air distribution system for an operating room (OR) is to provide good indoor environmental conditions in which to perform operations. Important conditions in this respect relate to the air quality and to the thermal conditions for the surgical

  17. Expert Performance Transfer - Making Knowledge Transfer Count with ExPerT Methodology

    International Nuclear Information System (INIS)

    Turner, C.L.; Braudt, T.E.

    2011-01-01

    'Knowledge Transfer' is a high-priority imperative as the nuclear industry faces the combined effects of an aging workforce and economic pressures to do more with less. Knowledge Transfer is only a part of the solution to these challenges, however. The more compelling and immediate need faced by industry is Accomplishment Transfer, or the transference of the applied knowledge necessary to assure optimal performance transfer from experienced, high-performing staff to inexperienced staff. A great deal of industry knowledge and required performance information has been documented in the form of procedures. Often under-appreciated either as knowledge stores or as drivers of human performance, procedures, coupled with tightly-focused and effective training, are arguably the most effective influences on human and plant performance. (author)

  18. The Influence of Sub-Block Position on Performing Integrated Sensor Orientation Using In Situ Camera Calibration and Lidar Control Points

    Directory of Open Access Journals (Sweden)

    Felipe A. L. Costa

    2018-02-01

Full Text Available The accuracy of photogrammetric and Lidar dataset integration depends on the quality of a group of parameters that accurately models the conditions of the system at the moment of the survey. In this sense, this paper aims to study the effect of the sub-block position in the entire image block when estimating the interior orientation parameters (IOP) in flight conditions to be used in integrated sensor orientation (ISO). For this purpose, five sub-blocks were extracted in different regions of the entire block. Then, in situ camera calibrations were performed using the sub-blocks and sets of Lidar control points (LCPs), each computed by intersecting three planes extracted from the Lidar point cloud on building roofs. The ISO experiments were performed using IOPs from the in situ calibrations, the entire image block, and the exterior orientation parameters (EOP) from direct sensor orientation (DSO). Analysis of the results obtained from the ISO experiments shows that the IOP from the sub-block positioned at the center of the entire image block can be recommended.
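A Lidar control point defined by the intersection of three roof planes, as described above, reduces to solving a 3x3 linear system. A minimal sketch using Cramer's rule, with hypothetical plane coefficients (in practice each plane would first be fitted to the Lidar point cloud):

```python
def plane_intersection(p1, p2, p3):
    """Intersection point of three planes, each given as (a, b, c, d) with
    a*x + b*y + c*z = d, solved by Cramer's rule."""
    (a1, b1, c1, d1), (a2, b2, c2, d2), (a3, b3, c3, d3) = p1, p2, p3

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3([[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]])
    x = det3([[d1, b1, c1], [d2, b2, c2], [d3, b3, c3]]) / D
    y = det3([[a1, d1, c1], [a2, d2, c2], [a3, d3, c3]]) / D
    z = det3([[a1, b1, d1], [a2, b2, d2], [a3, b3, d3]]) / D
    return x, y, z

# Two roof faces and a gable wall (hypothetical coefficients):
# z = 10, x + z = 14, y + z = 13 intersect at a single roof corner
corner = plane_intersection((0, 0, 1, 10), (1, 0, 1, 14), (0, 1, 1, 13))
```

Cramer's rule fails (D = 0) when the three planes are near-parallel, which is why roof corners with well-separated plane orientations make the most reliable control points.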

  19. Methodologies for assessing long-term performance of high-level radioactive waste packages

    International Nuclear Information System (INIS)

    Stephens, K.; Boesch, L.; Crane, B.; Johnson, R.; Moler, R.; Smith, S.; Zaremba, L.

    1986-01-01

    Several methods the Nuclear Regulatory Commission (NRC) can use to independently assess Department of Energy (DOE) waste package performance were identified by The Aerospace Corporation. The report includes an overview of the necessary attributes of performance assessment, followed by discussions of DOE methods, probabilistic methods capable of predicting waste package lifetime and radionuclide releases, process modeling of waste package barriers, sufficiency of the necessary input data, and the applicability of probability density functions. It is recommended that the initial NRC performance assessment (for the basalt conceptual waste package design) should apply modular simulation, using available process models and data, to demonstrate this assessment method

  20. Performing Ecosystem Services at Mud Flats in Seocheon, Korea: Using Q Methodology for Cooperative Decision Making

    Directory of Open Access Journals (Sweden)

    Jae-Hyuck Lee

    2017-05-01

    Full Text Available The concept of ecosystem services, which are the direct and indirect benefits of nature to humans, has been established as a supporting tool to increase the efficiency in decision-making regarding environmental planning. However, preceding studies on decision-making in relation to ecosystem services have been limited to identifying differences in perception, whereas few studies have reported cooperative alternatives. Therefore, this study aimed to present a method for cooperative decision-making among ecosystem service stakeholders using Q methodology. The results showed three perspectives on ecosystem services of small mud flat areas: ecological function, ecotourism, and human activity. The perspectives on cultural services and regulating services were diverse, whereas those on supporting services were similar. Thus, supporting services were considered crucial for the cooperative assessment and management of small mud flat ecosystems as well as for the scientific evaluation of regulating services. Furthermore, this study identified practical implementation measures to increase production through land management, to manufacture related souvenirs, and to link them to ecotourism. Overall, our results demonstrated the ideal process of cooperative decision-making to improve ecosystem services.

  1. Enhancing plant performance in newer CANDU plants utilizing PLiM methodologies

    International Nuclear Information System (INIS)

    Azeez, S.; Krishnan, V.S.; Nickerson, J.H.; Kakaria, B.

    2002-01-01

    Over the past 5 years, Atomic Energy of Canada Ltd. (AECL) has been working with CANDU utilities on comprehensive and integrated CANDU PLiM programs for successful and reliable operation through design life and beyond. Considerable progress has been made in the development of CANDU PLiM methodologies and implementation of the outcomes at the plants. The basis of CANDU PLiM programs is to understand the ageing degradation mechanisms, prevent or minimize the effects of these phenomena in the Critical Structures, Systems and Components (CSSCs), and maintain the CSSCs as close as possible to their best operating condition. Effective plant practices in surveillance, maintenance, and operations are the primary means of managing ageing. From the experience to date, the CANDU PLiM program will modify and enhance, but not likely replace, existing plant programs that address ageing. However, a successful PLiM program will provide assurance that these existing plant programs both are effective and can be shown to be effective in managing ageing. This requires a structured and managed approach to both the assessment and implementation processes.

  2. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable fell below recommended levels in some models. Only 2 models presented recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. Most studies developed prediction models with sufficient discriminative ability, although few were externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.
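    The review's headline metric, the C-statistic, can be computed directly from predicted risks and observed outcomes; a minimal sketch with made-up numbers (not data from the review):

```python
def c_statistic(risks, outcomes):
    """Concordance index (C-statistic): the probability that a randomly chosen
    case with the event (outcome 1) was assigned a higher predicted risk than a
    randomly chosen case without it; ties count one half."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    non_events = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e in events for n in non_events
    )
    return concordant / (len(events) * len(non_events))

# Hypothetical predicted HF risks and observed outcomes
risks = [0.9, 0.7, 0.7, 0.4, 0.2, 0.1]
outcomes = [1, 1, 0, 0, 0, 0]
print(c_statistic(risks, outcomes))  # → 0.9375
```

    A value above 0.70, as reported for most of the reviewed models, indicates acceptable discrimination; 0.5 corresponds to chance.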

  3. Response surface methodology for sensitivity and uncertainty analysis: performance and perspectives

    International Nuclear Information System (INIS)

    Olivi, L.; Brunelli, F.; Cacciabue, P.C.; Parisi, P.

    1985-01-01

    Two main aspects have to be taken into account in studying a nuclear accident scenario when using nuclear safety codes as an information source. The first one concerns the behavior of the code response and the set of assumptions to be introduced for its modelling. The second one is connected with the uncertainty features of the code input, often modelled as a probability density function (pdf). The analyst can apply two well-defined approaches depending on which of the two aspects is to receive the major emphasis. Response Surface Methodology uses polynomial and inverse polynomial models together with the theory of experimental design, expressly developed for the identification procedure. It constitutes a well-established body of techniques able to cover a wide spectrum of requirements when the first aspect plays the crucial role in the definition of the objectives. Other techniques such as Latin hypercube sampling, stratified sampling or even random sampling can fit better when the second aspect affects the reliability of the analysis. The ultimate goals for both approaches are variable selection, i.e. the identification of the code input variables most effective on the output, and uncertainty propagation, i.e. the assessment of the pdf to be attributed to the code response. The main aim of this work is to present a sensitivity analysis method, already tested on a real case, sufficiently flexible to be applied in both approaches mentioned.
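    The sampling techniques mentioned for the second approach can be illustrated with Latin hypercube sampling; a minimal pure-Python sketch in which a cheap toy function stands in for the safety-code response (the function and sample sizes are illustrative only):

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Latin hypercube sample on [0, 1]^n_vars: each variable's range is cut
    into n_samples equal strata, one point is drawn per stratum, and the
    strata are shuffled independently for every variable."""
    rng = random.Random(seed)
    sample = [[0.0] * n_vars for _ in range(n_samples)]
    for j in range(n_vars):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for i, value in enumerate(strata):
            sample[i][j] = value
    return sample

def propagate(code_response, sample):
    """Run the code (here a cheap stand-in) on every design point."""
    return [code_response(x) for x in sample]

# Toy stand-in for a safety-code response; a real study would call the code itself
response = lambda x: 1.0 + 2.0 * x[0] - 0.5 * x[1]
outputs = propagate(response, latin_hypercube(200, 2))
mean = sum(outputs) / len(outputs)
print(round(mean, 2))  # close to the analytic mean 1.75
```

    The stratification guarantees that every variable's range is covered evenly, which is why LHS typically propagates input uncertainty with far fewer code runs than plain random sampling.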

  4. Evaluating the methodology and performance of jetting and flooding of granular backfill materials.

    Science.gov (United States)

    2014-11-01

    Compaction of backfill in confined spaces on highway projects is often performed with small vibratory plates, based solely on the experience of the contractor, leading to inadequate compaction. As a result, the backfill is prone to erosion and of...

  5. A New Methodology for the Integration of Performance Materials into the Clothing Curriculum

    OpenAIRE

    Power, Jess

    2014-01-01

    This paper presents a model for integrating the study of performance materials into the clothing curriculum. In recent years there has been an increase in demand for stylish, functional and versatile sports apparel. Analysts predict this will reach US$126.30 billion by 2015. This growth is attributed to dramatic lifestyle changes and increasing participation in sports/leisurely pursuits, particularly by women. The desire to own performance clothing for specific outdoor pursuits is increasing a...

  6. Implied Volatility Surface: Construction Methodologies and Characteristics

    OpenAIRE

    Cristian Homescu

    2011-01-01

    The implied volatility surface (IVS) is a fundamental building block in computational finance. We provide a survey of methodologies for constructing such surfaces. We also discuss various topics which can influence the successful construction of IVS in practice: arbitrage-free conditions in both strike and time, how to perform extrapolation outside the core region, choice of calibrating functional and selection of numerical optimization algorithms, volatility surface dynamics and asymptotics.

  7. A Compact Methodology to Understand, Evaluate, and Predict the Performance of Automatic Target Recognition

    Science.gov (United States)

    Li, Yanpeng; Li, Xiang; Wang, Hongqiang; Chen, Yiping; Zhuang, Zhaowen; Cheng, Yongqiang; Deng, Bin; Wang, Liandong; Zeng, Yonghu; Gao, Lei

    2014-01-01

    This paper offers a compacted mechanism to carry out the performance evaluation work for an automatic target recognition (ATR) system: (a) a standard description of the ATR system's output is suggested, a quantity to indicate the operating condition is presented based on the principle of feature extraction in pattern recognition, and a series of indexes to assess the output in different aspects are developed with the application of statistics; (b) performance of the ATR system is interpreted by a quality factor based on knowledge of engineering mathematics; (c) through a novel utility called “context-probability” estimation proposed based on probability, performance prediction for an ATR system is realized. The simulation result shows that the performance of an ATR system can be accounted for and forecasted by the above-mentioned measures. Compared to existing technologies, the novel method can offer more objective performance conclusions for an ATR system. These conclusions may be helpful in knowing the practical capability of the tested ATR system. At the same time, the generalization performance of the proposed method is good. PMID:24967605

  8. Cost optimal building performance requirements. Calculation methodology for reporting on national energy performance requirements on the basis of cost optimality within the framework of the EPBD

    Energy Technology Data Exchange (ETDEWEB)

    Boermans, T.; Bettgenhaeuser, K.; Hermelink, A.; Schimschar, S. [Ecofys, Utrecht (Netherlands)

    2011-05-15

    On the European level, the principles for the requirements for the energy performance of buildings are set by the Energy Performance of Buildings Directive (EPBD). Dating from December 2002, the EPBD has set a common framework from which the individual Member States in the EU developed or adapted their individual national regulations. The EPBD in 2008 and 2009 underwent a recast procedure, with final political agreement having been reached in November 2009. The new Directive was then formally adopted on May 19, 2010. Among other clarifications and new provisions, the EPBD recast introduces a benchmarking mechanism for national energy performance requirements for the purpose of determining cost-optimal levels to be used by Member States for comparing and setting these requirements. The previous EPBD set out a general framework to assess the energy performance of buildings and required Member States to define maximum values for energy delivered to meet the energy demand associated with the standardised use of the building. However it did not contain requirements or guidance related to the ambition level of such requirements. As a consequence, building regulations in the various Member States have been developed by the use of different approaches (influenced by different building traditions, political processes and individual market conditions) and resulted in different ambition levels where in many cases cost optimality principles could justify higher ambitions. The EPBD recast now requests that Member States shall ensure that minimum energy performance requirements for buildings are set 'with a view to achieving cost-optimal levels'. The cost optimum level shall be calculated in accordance with a comparative methodology. The objective of this report is to contribute to the ongoing discussion in Europe around the details of such a methodology by describing possible details on how to calculate cost optimal levels and pointing towards important factors and

  9. Going beyond a First Reader: A Machine Learning Methodology for Optimizing Cost and Performance in Breast Ultrasound Diagnosis.

    Science.gov (United States)

    Venkatesh, Santosh S; Levenback, Benjamin J; Sultan, Laith R; Bouzghar, Ghizlane; Sehgal, Chandra M

    2015-12-01

    The goal of this study was to devise a machine learning methodology as a viable low-cost alternative to a second reader to help augment physicians' interpretations of breast ultrasound images in differentiating benign and malignant masses. Two independent feature sets consisting of visual features based on a radiologist's interpretation of images and computer-extracted features when used as first and second readers and combined by adaptive boosting (AdaBoost) and a pruning classifier resulted in a very high level of diagnostic performance (area under the receiver operating characteristic curve = 0.98) at a cost of pruning a fraction (20%) of the cases for further evaluation by independent methods. AdaBoost also improved the diagnostic performance of the individual human observers and increased the agreement between their analyses. Pairing AdaBoost with selective pruning is a principled methodology for achieving high diagnostic performance without the added cost of an additional reader for differentiating solid breast masses by ultrasound. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
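    The pairing of a second machine reader with selective pruning can be sketched as a score-combination rule that defers borderline cases; the weights, thresholds and cases below are invented for illustration (the study itself combined richer feature sets with AdaBoost):

```python
def combined_score(visual, computed, w_visual=0.5):
    """Blend the radiologist's malignancy score with the machine score (both in [0, 1])."""
    return w_visual * visual + (1.0 - w_visual) * computed

def classify_with_pruning(cases, defer_low=0.4, defer_high=0.6):
    """Auto-classify clear-cut cases; 'prune' borderline ones for independent review."""
    decided, pruned = [], []
    for case_id, visual, computed in cases:
        score = combined_score(visual, computed)
        if defer_low <= score <= defer_high:
            pruned.append(case_id)  # too close to call: defer to further evaluation
        else:
            decided.append((case_id, "malignant" if score > defer_high else "benign"))
    return decided, pruned

cases = [("a", 0.9, 0.8), ("b", 0.2, 0.1), ("c", 0.5, 0.6)]
print(classify_with_pruning(cases))  # → ([('a', 'malignant'), ('b', 'benign')], ['c'])
```

    Widening the deferral band trades a larger pruned fraction (the study reports about 20%) for higher accuracy on the automatically decided cases.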

  10. Application of Response Surface Methodology (RSM for Optimization of Operating Parameters and Performance Evaluation of Cooling Tower Cold Water Temperature

    Directory of Open Access Journals (Sweden)

    Ramkumar RAMAKRISHNAN

    2012-01-01

    Full Text Available The performance of a cooling tower was analyzed with various operating parameters to find the minimum cold water temperature. In this study, optimization of operating parameters was investigated. An experimental design was carried out based on central composite design (CCD) with response surface methodology (RSM). This paper presents the optimum operating parameters and the minimum cold water temperature using the RSM method. The RSM was used to evaluate the effects of operating variables and their interaction towards the attainment of their optimum conditions. Based on the analysis, air flow, hot water temperature and packing height had a highly significant effect on cold water temperature. The optimum operating parameters were predicted using the RSM method and confirmed through experiment.
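    The core RSM step, fitting a second-order response surface and locating its stationary point, can be sketched for a single factor; the design points below are invented, not the paper's data:

```python
def fit_quadratic(p0, p1, p2):
    """Exact quadratic y = a + b*x + c*x**2 through three (x, y) points (Newton form)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d01 = (y1 - y0) / (x1 - x0)        # first divided difference
    d12 = (y2 - y1) / (x2 - x1)
    c = (d12 - d01) / (x2 - x0)        # second divided difference = curvature term
    b = d01 - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 * x0
    return a, b, c

def stationary_point(b, c):
    """Factor setting where the fitted response is extremal (dy/dx = 0)."""
    return -b / (2.0 * c)

# Toy design: cold water temperature (degrees C) measured at three air-flow settings
a, b, c = fit_quadratic((1.0, 30.0), (2.0, 27.0), (3.0, 28.0))
print(stationary_point(b, c))  # → 2.25
```

    A full CCD fit does the same thing in several factors at once, with cross terms capturing the interactions the abstract mentions.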

  11. Calibration Uncertainties in the Droplet Measurement Technologies Cloud Condensation Nuclei Counter

    Science.gov (United States)

    Hibert, Kurt James

    average surface pressure at Grand Forks, ND. The supersaturation calibration uncertainty is 2.3, 3.1, and 4.4 % for calibrations done at 700, 840, and 980 hPa respectively. The supersaturation calibration change with pressure is on average 0.047 % supersaturation per 100 hPa. The supersaturation calibrations done at UND are 42-45 % lower than supersaturation calibrations done at DMT approximately 1 year previously. Performance checks confirmed that all major leaks developed during shipping were fixed before conducting the supersaturation calibrations. Multiply-charged particles passing through the Electrostatic Classifier may have influenced DMT's activation curves, which is likely part of the supersaturation calibration difference. Furthermore, the fitting method used to calculate the activation size and the limited calibration points are likely significant sources of error in DMT's supersaturation calibration. While the DMT CCN counter's calibration uncertainties are relatively small, and the pressure dependence is easily accounted for, the calibration methodology used by different groups can be very important. The insights gained from the careful calibration of the DMT CCN counter indicate that calibration of scientific instruments using complex methodology is not trivial.

  12. Methodology for performing RF reliability experiments on a generic test structure

    NARCIS (Netherlands)

    Sasse, G.T.; de Vries, Rein J.; Schmitz, Jurriaan

    2007-01-01

    This paper discusses a new technique developed for generating well defined RF large voltage swing signals for on wafer experiments. This technique can be employed for performing a broad range of different RF reliability experiments on one generic test structure. The frequency dependence of a

  13. Performance characterization of night vision equipment based on Triangle Orientation Discrimination (TOD) methodology

    NARCIS (Netherlands)

    Laurent, N.; Lejard, C.; Deltel, G.; Bijl, P.

    2013-01-01

    Night vision equipment is crucial in order to accomplish supremacy and safety of the troops on the battlefield. Evidently, system integrators, MODs and end-users need access to reliable quantitative characterization of the expected field performance when using night vision equipment. The Image

  14. Calibration bench of flowmeters

    International Nuclear Information System (INIS)

    Bremond, J.; Da Costa, D.; Calvet, A.; Vieuxmaire, C.

    1966-01-01

    This equipment is devoted to the comparison of signals from two turbines installed in the Cabri experimental loop. The signal is compared to that of a standard turbine. The characteristics and the performance of the calibration bench are presented. (A.L.B.)

  15. Evaluating long-term performance of in situ vitrified waste forms: Methodology and results

    International Nuclear Information System (INIS)

    McGrail, B.P.; Olson, K.M.

    1992-11-01

    In situ vitrification (ISV) is an emerging technology for the remediation of hazardous and radioactive waste sites. The concept relies on the principle of Joule heating to raise the temperature of a soil between an array of electrodes above the melting temperature. After cooling, the melt solidifies into a massive glass and crystalline block similar to naturally occurring obsidian. Determining the long-term performance of ISV products in a changing regulatory environment requires a fundamental understanding of the mechanisms controlling the dissolution behavior of the material. A series of experiments was performed to determine the dissolution behavior of samples produced from the ISV processing of typical soils from the Idaho National Engineering Laboratory subsurface disposal area. Dissolution rate constant measurements were completed at 90 degrees C over the pH range 2 to 11 for one sample obtained from a field test of the ISV process

  16. A Performance Measurement and Implementation Methodology in a Department of Defense CIM (Computer Integrated Manufacturing) Environment

    Science.gov (United States)

    1988-01-24

    vanes. The new facility is currently being called the Engine Blade/Vane Facility (EB/VF). There are three primary goals in automating this proc...e... earlier, the search led primarily into the areas of CIM Justification, Automation Strategies, Performance Measurement, and Integration issues. Of... of living, has been steadily eroding. One dangerous trend that has developed in keenly competitive world markets, says Rohan [33], has been for U.S

  17. The Plumbing of Land Surface Models: Is Poor Performance a Result of Methodology or Data Quality?

    Science.gov (United States)

    Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.; Or, Dani; Best, Martin J.; Johnson, Helen R.; Balsamo, Gianpaolo; Boone, Aaron; Cuntz, Matthais; Decharme, Bertrand

    2016-01-01

    The PALS Land sUrface Model Benchmarking Evaluation pRoject (PLUMBER) illustrated the value of prescribing a priori performance targets in model intercomparisons. It showed that the performance of turbulent energy flux predictions from different land surface models, at a broad range of flux tower sites using common evaluation metrics, was on average worse than relatively simple empirical models. For sensible heat fluxes, all land surface models were outperformed by a linear regression against downward shortwave radiation. For latent heat flux, all land surface models were outperformed by a regression against downward shortwave, surface air temperature and relative humidity. These results are explored here in greater detail and possible causes are investigated. We examine whether particular metrics or sites unduly influence the collated results, whether results change according to time-scale aggregation and whether a lack of energy conservation in flux tower data gives the empirical models an unfair advantage in the intercomparison. We demonstrate that energy conservation in the observational data is not responsible for these results. We also show that the partitioning between sensible and latent heat fluxes in LSMs, rather than the calculation of available energy, is the cause of the original findings. Finally, we present evidence suggesting that the nature of this partitioning problem is likely shared among all contributing LSMs. While we do not find a single candidate explanation for why land surface models perform poorly relative to empirical benchmarks in PLUMBER, we do exclude multiple possible explanations and provide guidance on where future research should focus.
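    The empirical benchmarks referred to above are plain least-squares regressions against the meteorological forcing; a one-predictor sketch (the sample values are illustrative, not flux tower data):

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x, e.g. sensible heat flux (y)
    regressed on downward shortwave radiation (x), PLUMBER-benchmark style."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def predict(a, b, x):
    """Benchmark prediction at each forcing value."""
    return [a + b * xi for xi in x]

# Toy half-hourly samples: shortwave (W/m^2) vs sensible heat flux (W/m^2)
sw = [0.0, 200.0, 400.0, 600.0]
qh = [-20.0, 40.0, 100.0, 160.0]
a, b = ols_fit(sw, qh)
print(a, b)  # intercept near -20, slope near 0.3
```

    The point of PLUMBER is that a model this simple, fitted out of sample, set a performance floor that the physically based LSMs did not consistently beat.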

  18. Vibration transducer calibration techniques

    Science.gov (United States)

    Brinkley, D. J.

    1980-09-01

    Techniques for the calibration of vibration transducers used in the Aeronautical Quality Assurance Directorate of the British Ministry of Defence are presented. Following a review of the types of measurements necessary in the calibration of vibration transducers, the performance requirements of vibration transducers, which can be used to measure acceleration, velocity or vibration amplitude, are discussed, with particular attention given to the piezoelectric accelerometer. Techniques for the accurate measurement of sinusoidal vibration amplitude in reference-grade transducers are then considered, including the use of a position sensitive photocell and the use of a Michelson laser interferometer. Means of comparing the output of working-grade accelerometers with that of previously calibrated reference-grade devices are then outlined, with attention given to a method employing a capacitance bridge technique and a method to be used at temperatures between -50 and 200 C. Automatic calibration procedures developed to speed up the calibration process are outlined, and future possible extensions of system software are indicated.

  19. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
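    The squared-difference formulation described above can be sketched as a one-parameter grid scan; the toy model and data are illustrative, and CUU would go further by modelling the error in both the code output and the measurements:

```python
def sse(theta, model, xs, ys):
    """Squared difference between model-computed and experimental data."""
    return sum((model(theta, x) - y) ** 2 for x, y in zip(xs, ys))

def calibrate(model, xs, ys, lo, hi, n=10001):
    """Deterministic calibration: grid-scan for the parameter minimizing SSE.
    (This treats the model as the 'true' representation; CUU would instead
    place a statistical model, e.g. a Bayesian posterior, over theta.)"""
    grid = (lo + (hi - lo) * i / (n - 1) for i in range(n))
    return min(grid, key=lambda t: sse(t, model, xs, ys))

# Toy 'computer model': linear response with unknown gain theta
model = lambda theta, x: theta * x
xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]  # noise-free data generated with gain 3
print(calibrate(model, xs, ys, 0.0, 10.0))  # → 3.0
```

    With noisy data the same scan returns a point estimate only; the report's argument is that the spread of plausible parameters, not just the minimizer, matters for predictability.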

  20. Sandia National Laboratories performance assessment methodology for long-term environmental programs : the history of nuclear waste management.

    Energy Technology Data Exchange (ETDEWEB)

    Marietta, Melvin Gary; Anderson, D. Richard; Bonano, Evaristo J.; Meacham, Paul Gregory (Raytheon Ktech, Albuquerque, NM)

    2011-11-01

    Sandia National Laboratories (SNL) is the world leader in the development of the detailed science underpinning the application of a probabilistic risk assessment methodology, referred to in this report as performance assessment (PA), for (1) understanding and forecasting the long-term behavior of a radioactive waste disposal system, (2) estimating the ability of the disposal system and its various components to isolate the waste, (3) developing regulations, (4) implementing programs to estimate the safety that the system can afford to individuals and to the environment, and (5) demonstrating compliance with the attendant regulatory requirements. This report documents the evolution of the SNL PA methodology from inception in the mid-1970s, summarizing major SNL PA applications including: the Subseabed Disposal Project PAs for high-level radioactive waste; the Waste Isolation Pilot Plant PAs for disposal of defense transuranic waste; the Yucca Mountain Project total system PAs for deep geologic disposal of spent nuclear fuel and high-level radioactive waste; PAs for the Greater Confinement Borehole Disposal boreholes at the Nevada National Security Site; and PA evaluations for disposal of high-level wastes and Department of Energy spent nuclear fuels stored at Idaho National Laboratory. In addition, the report summarizes smaller PA programs for long-term cover systems implemented for the Monticello, Utah, mill-tailings repository; a PA for the SNL Mixed Waste Landfill in support of environmental restoration; PA support for radioactive waste management efforts in Egypt, Iraq, and Taiwan; and, most recently, PAs for analysis of alternative high-level radioactive waste disposal strategies, including deep borehole disposal and geologic repositories in shale and granite. Finally, this report summarizes the extension of the PA methodology for radioactive waste disposal toward development of an enhanced PA system for carbon sequestration and storage systems.

  1. Calibration of sources for alpha spectroscopy systems

    International Nuclear Information System (INIS)

    Freitas, I.S.M.; Goncalez, O.L.

    1992-01-01

    This paper describes the calibration methodology for measuring the total alpha activity of plane and thin sources with the Alpha Spectrometer for Silicon Detector in the Nuclear Measures and Dosimetry laboratory at IEAv/CTA. (author)

  2. Development of a High Performance PES Ultrafiltration Hollow Fiber Membrane for Oily Wastewater Treatment Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Noor Adila Aluwi Shakir

    2015-12-01

    Full Text Available This study attempts to optimize the spinning process used for fabricating hollow fiber membranes using the response surface methodology (RSM). The spinning factors considered for the experimental design are the dope extrusion rate (DER), air gap length (AGL), coagulation bath temperature (CBT), bore fluid ratio (BFR), and post-treatment time (PT), whilst the response investigated is rejection. The optimal spinning conditions promising high rejection performance of polyethersulfone (PES) ultrafiltration hollow fiber membranes for oily wastewater treatment are a dope extrusion rate of 2.13 cm3/min, an air gap length of 0 cm, a coagulation bath temperature of 30 °C, and a bore fluid ratio (NMP/H2O) of 0.01/99.99 wt %. This study will ultimately enable membrane fabricators to produce high-performance membranes that contribute towards the availability of a more sustainable water supply system.

  3. A Methodology to Reduce the Computational Effort in the Evaluation of the Lightning Performance of Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ilaria Bendato

    2016-11-01

    Full Text Available The estimation of the lightning performance of a power distribution network is of great importance to design its protection system against lightning. An accurate evaluation of the number of lightning events that can create dangerous overvoltages requires a huge computational effort, as it implies the adoption of a Monte Carlo procedure. Such a procedure consists of generating many different random lightning events and calculating the corresponding overvoltages. The paper proposes a methodology to deal with the problem in two computationally efficient ways: (i) finding out the minimum number of Monte Carlo runs that lead to reliable results; and (ii) setting up a procedure that bypasses the lightning field-to-line coupling problem for each Monte Carlo run. The proposed approach is shown to provide results consistent with existing approaches while exhibiting superior Central Processing Unit (CPU) time performances.
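    Point (i), finding the minimum number of Monte Carlo runs, amounts to stopping once the relative standard error of the estimated probability is small enough; a sketch in which an invented random test stands in for the full overvoltage computation:

```python
import random

def mc_estimate(is_dangerous, rel_err_target=0.05, batch=1000,
                max_runs=1_000_000, seed=1):
    """Draw random lightning events until the relative standard error of the
    estimated probability of a dangerous overvoltage drops below the target."""
    rng = random.Random(seed)
    hits = runs = 0
    p = 0.0
    while runs < max_runs:
        hits += sum(is_dangerous(rng) for _ in range(batch))
        runs += batch
        p = hits / runs
        # binomial standard error relative to the estimate itself
        if hits and ((p * (1.0 - p) / runs) ** 0.5) / p < rel_err_target:
            break
    return p, runs

# Invented stand-in: roughly 20% of random strokes produce a dangerous overvoltage
p_hat, n_runs = mc_estimate(lambda rng: rng.random() < 0.2)
print(p_hat, n_runs)
```

    Rarer events need proportionally more runs, which is exactly why bypassing the field-to-line coupling computation per run (point ii) pays off.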

  4. Methodology to determine the technical performance and value proposition for grid-scale energy storage systems :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Loose, Verne William; Donnelly, Matthew K.; Trudnowski, Daniel J.

    2012-12-01

    As the amount of renewable generation increases, the inherent variability of wind and photovoltaic systems must be addressed in order to ensure the continued safe and reliable operation of the nation's electricity grid. Grid-scale energy storage systems are uniquely suited to address the variability of renewable generation and to provide other valuable grid services. The goal of this report is to quantify the technical performance required to provide different grid benefits and to specify the proper techniques for estimating the value of grid-scale energy storage systems.

  5. Calibration and validation of a model for simulating thermal and electric performance of an internal combustion engine-based micro-cogeneration device

    International Nuclear Information System (INIS)

    Rosato, A.; Sibilio, S.

    2012-01-01

    The growing worldwide demand for more efficient and less polluting forms of energy production has led to a renewed interest in the use of micro-cogeneration technologies in the residential sector. Among other technologies, internal combustion engine-based micro-cogeneration devices are a market-ready technology gaining increasing appeal thanks to their high efficiency, fuel flexibility, low emissions, and low noise and vibration. In order to explore and assess the feasibility of using internal combustion engine-based cogeneration systems in the residential sector, an accurate and practical simulation model that can be used to conduct sensitivity and what-if analyses is needed. A residential cogeneration device model has been developed within IEA/ECBCS Annex 42 and implemented into a number of building simulation programs. This model is potentially able to accurately predict the thermal and electrical outputs of residential cogeneration devices, but it relies almost entirely on empirical data because the model specification uses experimental measurements contained within a performance map to represent the device-specific performance characteristics, coupled with thermally massive elements to characterize the device's dynamic thermal performance. At the Built Environment Control Laboratory of Seconda Università degli studi di Napoli, an AISIN SEIKI micro-cogeneration device based on a natural gas fuelled reciprocating internal combustion engine is available. This unit has been intensively tested in order to calibrate and validate the Annex 42 model. This paper shows in detail the series of experiments conducted for the calibration activity and examines the validity of this model by contrasting simulation predictions to measurements derived by operating the system in an electric load following control strategy. The statistical comparison was made both for the whole database and for the data segregated by system operation mode. The good agreement found in the predictions of
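    The performance-map idea at the heart of the Annex 42 model can be sketched as a piecewise-linear lookup from an operating point to measured efficiencies; the map values below are invented, not AISIN SEIKI data:

```python
from bisect import bisect_right

def map_lookup(perf_map, load):
    """Linearly interpolate a measured performance map: rows of (part-load
    fraction, electrical efficiency, thermal efficiency) sorted by load."""
    loads = [row[0] for row in perf_map]
    if load <= loads[0]:
        return perf_map[0][1:]          # clamp below the measured range
    if load >= loads[-1]:
        return perf_map[-1][1:]         # clamp above the measured range
    i = bisect_right(loads, load)
    (x0, e0, t0), (x1, e1, t1) = perf_map[i - 1], perf_map[i]
    w = (load - x0) / (x1 - x0)
    return (e0 + w * (e1 - e0), t0 + w * (t1 - t0))

# Invented calibration map: efficiencies rise toward full load
perf_map = [(0.25, 0.18, 0.55), (0.50, 0.22, 0.58), (1.00, 0.26, 0.60)]
print(map_lookup(perf_map, 0.75))
```

    Calibration then amounts to populating the map from bench measurements, while the thermally massive elements handle the transients the steady-state map cannot.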

  6. Overview of hypersonic CFD code calibration studies

    Science.gov (United States)

    Miller, Charles G.

    1987-01-01

    The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics and at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.

  7. Calibration of well-type ionization chambers

    International Nuclear Information System (INIS)

    Alves, C.F.E.; Leite, S.P.; Pires, E.J.; Magalhaes, L.A.G.; David, M.G.; Almeida, C.E. de

    2015-01-01

    This paper presents the methodology developed by the Laboratorio de Ciencias Radiologicas and presently in use for determining the calibration coefficient for well-type chambers used in the dosimetry of 192 Ir high dose rate sources. Uncertainty analyses involving the calibration procedure are discussed. (author)

  8. Determination of the analytical performance of a headspace capillary gas chromatographic technique and karl Fischer coulometric titration by system calibration using oil samples containing known amounts of moisture.

    Science.gov (United States)

    Jalbert, J; Gilbert, R; Tétreault, P

    1999-08-01

    Over the past few years, concerns have been raised in the literature about the accuracy of the Karl Fischer (KF) method for assessing moisture in transformer mineral oils. To better understand this issue, the performance of a static headspace capillary gas chromatographic (HS-CGC) technique was compared to that of KF coulometric titration by analyzing moisture in samples containing known amounts of water and various samples obtained from the National Institute of Standards and Technology (NIST). Two modes of adding samples into the KF vessel were used: direct injection and indirect injection via an azeotropic distillation of the moisture with toluene. Under the conditions used for direct injection, the oil matrix was totally dissolved in the anolyte, which allowed the moisture to be titrated in a single-phase solution rather than in a suspension. The results have shown that when HS-CGC and combined azeotropic distillation/KF titration are calibrated with moisture-in-oil standards, a linear relation is observed over 0-60 ppm H2O with a correlation coefficient better than 0.9994 (95% confidence), with the regression line crossing through zero. A similar relation can also be observed when calibration is achieved by direct KF addition of standards prepared with octanol-1, but in this case an intercept of 4-5 ppm is noted. The amount of moisture determined by curve interpolation in NIST reference materials by the three calibrated systems ranges from 13.0 to 14.8 ppm for RM 8506 and 42.5 to 46.4 ppm for RM 8507, and in any case, the results were as high as those reported in the literature with volumetric KF titration. However, titration of various dehydrated oil and solvent samples showed that direct KF titration is affected by a small bias when samples contain very little moisture. The source of error after correction for the large sample volume used for the determination (8 mL) is about 6 ppm for Voltesso naphthenic oil and 4 ppm for toluene, revealing a matrix
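    The two calibration regressions discussed, one forced through zero and one with a free intercept, can be sketched as follows; the synthetic standards are illustrative only:

```python
def slope_through_zero(ppm, response):
    """Least-squares slope for a calibration line forced through the origin,
    as observed for the standards-calibrated HS-CGC and distillation/KF systems."""
    return sum(x * y for x, y in zip(ppm, response)) / sum(x * x for x in ppm)

def moisture_from_response(response, slope, intercept=0.0):
    """Interpolate an unknown sample's moisture (ppm) on the calibration line."""
    return (response - intercept) / slope

# Synthetic moisture-in-oil standards over 0-60 ppm H2O; response = 2.0 * ppm
standards_ppm = [0.0, 15.0, 30.0, 45.0, 60.0]
signals = [2.0 * p for p in standards_ppm]
b = slope_through_zero(standards_ppm, signals)
print(moisture_from_response(28.0, b))  # → 14.0
```

    A direct-addition system showing the reported 4-5 ppm offset would instead pass that offset as the intercept before interpolating.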

  9. Methodological progress in the development of scenarios for ENRESA-2000 Performance assessment exercise

    International Nuclear Information System (INIS)

    Cortes Martin, A.

    2000-01-01

    ENRESA is carrying out a new safety assessment exercise for a deep geological spent fuel disposal facility located in granite, known as ENRESA-2000. One of the main objectives of this safety analysis is the integration and implementation of all R and D studies performed to date by ENRESA, as well as the identification of those aspects of the assessment which require further investigation. One of the main activities of this exercise is the selection and development of the scenarios to be quantitatively analysed during the assessment, where a scenario is defined as a sufficient number of FEPs (i.e. relevant features, events and processes), together with their influence relationships, to explain the behaviour of the disposal system. As a result of these three methods, a definitive list of FEPs will be obtained for the ENRESA-2000 exercise. Once grouped into scenarios, these FEPs will be used to model and calculate consequences. This process of generation and development of scenarios for the ENRESA-2000 performance assessment exercise is presented in this paper. (Author)

  10. High Gain Antenna Calibration on Three Spacecraft

    Science.gov (United States)

    Hashmall, Joseph A.

    2011-01-01

    This paper describes the alignment calibration of spacecraft High Gain Antennas (HGAs) for three missions. For two of the missions (the Lunar Reconnaissance Orbiter and the Solar Dynamics Observatory) the calibration was performed on orbit. For the third mission (the Global Precipitation Measurement core satellite) ground simulation of the calibration was performed in a calibration feasibility study. These three satellites provide a range of calibration situations: lunar orbit transmitting to a ground antenna for LRO, geosynchronous orbit transmitting to a ground antenna for SDO, and low Earth orbit transmitting to TDRS satellites for GPM. The calibration results depend strongly on the quality and quantity of calibration data. With insufficient data the calibration function may give erroneous solutions. Manual intervention in the calibration allowed reliable parameters to be generated for all three missions.

  11. Analysis of quaternary ammonium and phosphonium ionic liquids by reversed-phase high-performance liquid chromatography with charged aerosol detection and unified calibration.

    Science.gov (United States)

    Stojanovic, Anja; Lämmerhofer, Michael; Kogelnig, Daniel; Schiesel, Simone; Sturm, Martin; Galanski, Markus; Krachler, Regina; Keppler, Bernhard K; Lindner, Wolfgang

    2008-10-31

    Several hydrophobic ionic liquids (ILs) based on long-chain aliphatic ammonium and phosphonium cations and selected aromatic anions were analyzed by reversed-phase high-performance liquid chromatography (RP-HPLC), employing trifluoroacetic acid as ion-pairing additive to the acetonitrile-containing mobile phase and adopting a step-gradient elution mode. The coupling of charged aerosol detection (CAD) for the non-chromophoric aliphatic cations with diode array detection (DAD) for the aromatic anions allowed their simultaneous analysis in a set of new ILs derived from either tricaprylmethylammonium chloride (Aliquat 336) or trihexyltetradecylphosphonium chloride as precursor. Aliquat 336 is a mix of ammonium cations with distinct aliphatic chain lengths. In the course of the studies it turned out that CAD generates an identical detection response for all the distinct aliphatic cations. Due to the lack of single-component standards of the individual Aliquat 336 cation species, a unified calibration function was established for the quantitative analysis of the quaternary ammonium cations of the ILs. The developed method was validated according to ICH guidelines, which confirmed the validity of the unified calibration. The application of the method revealed molar ratios of cation to anion close to 1, indicating a quantitative exchange of the chloride ions of the precursors by the various aromatic anions in the course of the synthesis of the new ILs. Anomalies of CAD observed for the detection of some aromatic anions (thiosalicylate and benzoate) are discussed.

  12. Using integrated control methodology to optimize energy performance for the guest rooms in UAE hospitality sector

    International Nuclear Information System (INIS)

    AlFaris, Fadi; Abu-Hijleh, Bassam; Abdul-Ameer, Alaa

    2016-01-01

    Highlights: • Energy efficiency in 4 and 5 star luxury hotels in the United Arab Emirates. • The normalized energy use index (EUI) ranges between 241.5 and 348.4 kWh/m²/year for post-2003 hotels. • The normalized energy use index (EUI) ranges between 348.4 and 511.1 kWh/m²/year for pre-2003 hotels. • Integrated HVAC and lighting control strategies can reduce total energy consumption by up to 31.5%. - Abstract: The hospitality sector is growing rapidly in the UAE, and especially in Dubai. As a result, it contributes substantially to the UAE's carbon footprint. This research was conducted to measure, evaluate and increase the energy efficiency of 4 and 5 star luxury hotels in the UAE. Energy benchmarking analysis was used to analyze the energy data of 19 hotel buildings to differentiate between usual and best practice of energy performance. Moreover, the normalized energy use index (EUI), in kWh/m²/year, was identified for the best, usual and poor practice hotels. It was found that, for hotels constructed after 2003, the normalized EUI ranges from 241.5 kWh/m²/year or less (best practice) to more than 361.3 kWh/m²/year (poor practice). The hotels constructed before 2003 showed higher values, with the normalized EUI varying from 348.4 kWh/m²/year (best practice) to more than 511.1 kWh/m²/year. An integrated control strategy was employed to improve the energy performance and assess the energy savings for the guestroom. This technique showed that the overall energy performance improvement reached 31.5% of the entire energy consumption of the hotel, including electricity and gas. This reduction comprised 43.2% savings from the cooling system and 13.2% from the lighting system, due to the installation of the integrated control system in the guestrooms.
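The benchmarking arithmetic in this abstract (normalizing annual consumption by floor area and comparing against the reported post-2003 bands) can be sketched as follows. The hotel names, consumptions and floor areas below are invented for illustration; only the band thresholds come from the abstract.

```python
# Hedged sketch: normalized energy use index (EUI) and classification
# against the post-2003 benchmark bands quoted in the abstract.
# Hotel names, consumptions and floor areas are invented.

def energy_use_index(annual_kwh, floor_area_m2):
    """Normalized EUI in kWh/m2/year."""
    return annual_kwh / floor_area_m2

def classify_post_2003(eui):
    """Bands reported for hotels built after 2003."""
    if eui <= 241.5:
        return "best practice"
    if eui > 361.3:
        return "poor practice"
    return "usual practice"

hotels = {
    "Hotel A": (6_000_000, 30_000),  # annual kWh, floor area m2 (invented)
    "Hotel B": (9_500_000, 25_000),
}

ratings = {name: classify_post_2003(energy_use_index(kwh, area))
           for name, (kwh, area) in hotels.items()}
```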

  13. High performance shape annealing matrix (HPSAM) methodology for core protection calculators

    International Nuclear Information System (INIS)

    Cha, K. H.; Kim, Y. H.; Lee, K. H.

    1999-01-01

    In the CPC (Core Protection Calculator) of CE-type nuclear power plants, the core axial power distribution is calculated to evaluate the safety-related parameters. The accuracy of the CPC axial power distribution depends strongly on the quality of the so-called shape annealing matrix (SAM). Currently, SAM is determined using data measured during the startup test and is used throughout the entire cycle. One concern with SAM is that it is fairly sensitive to the measurements, and thus the fidelity of SAM is not guaranteed for all cycles. In this paper, a novel method to determine a high-performance SAM (HPSAM) is proposed, in which both measured and simulated data are used in determining SAM.
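The abstract does not give the HPSAM algorithm itself, but the underlying relationship (an axial power shape reconstructed from a few excore detector signals through a matrix) can be illustrated with a least-squares sketch on synthetic data. The dimensions, noise level and all numbers below are assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch: estimate a shape-annealing-style matrix S (axial nodes x
# 3 detector channels) from many paired detector/power cases by least
# squares, in the spirit of combining measured and simulated data.
# All data here are synthetic.

rng = np.random.default_rng(0)

n_nodes, n_det, n_cases = 20, 3, 50
S_true = rng.uniform(0.0, 1.0, size=(n_nodes, n_det))  # unknown "true" matrix
D = rng.uniform(0.5, 1.5, size=(n_det, n_cases))       # detector signals
# Noisy axial power shapes generated by the true matrix.
P = S_true @ D + 1e-3 * rng.standard_normal((n_nodes, n_cases))

# Least-squares fit: solve D.T @ S.T ~= P.T for S.
S_fit, *_ = np.linalg.lstsq(D.T, P.T, rcond=None)
S_fit = S_fit.T

max_err = np.abs(S_fit - S_true).max()
```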

  14. Assessing Confidence in Performance Assessments Using an Evidence Support Logic Methodology: An Application of Tesla

    International Nuclear Information System (INIS)

    Egan, M.; Paulley, A.; Lehman, L.; Lowe, J.; Rochette, E.; Baker, St.

    2009-01-01

    The assessment of uncertainties and their implications is a key requirement when undertaking performance assessment (PA) of radioactive waste facilities. Decisions based on the outcome of such assessments become translated into judgments about confidence in the information they provide. This confidence, in turn, depends on uncertainties in the underlying evidence. Even if there is a large amount of information supporting an assessment, it may be only partially relevant, incomplete or less than completely reliable. In order to develop a measure of confidence in the outcome, sources of uncertainty need to be identified and adequately addressed in the development of the PA, or in any overarching strategic decision-making processes. This paper describes a trial application of the technique of Evidence Support Logic (ESL), which has been designed for application in support of 'high stakes' decisions, where important aspects of system performance are subject to uncertainty. The aims of ESL are to identify the amount of uncertainty or conflict associated with evidence relating to a particular decision, and to guide understanding of how evidence combines to support confidence in judgments. Elicitation techniques are used to enable participants in the process to develop a logical hypothesis model that best represents the relationships between the different sources of evidence and the proposition under examination. The aim is to identify key areas of subjectivity and other sources of potential bias in the use of evidence (whether for or against the proposition) to support judgments of confidence. Propagation algorithms are used to investigate the overall implications of the logic according to the strength of the underlying evidence and associated uncertainties. (authors)
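As a toy illustration only: one very simple three-value propagation rule in the spirit of ESL, where each piece of evidence carries support for and against a proposition and whatever remains is uncommitted belief. Real ESL tools use richer propagation algorithms; the sufficiency weights and evidence values here are invented.

```python
# Illustrative sketch, not a real ESL implementation.  Each leaf carries
# (evidence_for, evidence_against) on [0, 1]; the remainder is uncommitted.
# A parent node combines children with "sufficiency" weights.

def propagate(children, weights):
    """children: list of (for, against) pairs; weights: sufficiency in [0, 1]."""
    ev_for = min(1.0, sum(w * f for (f, _), w in zip(children, weights)))
    ev_against = min(1.0, sum(w * a for (_, a), w in zip(children, weights)))
    uncommitted = max(0.0, 1.0 - ev_for - ev_against)
    return ev_for, ev_against, uncommitted

leaves = [(0.6, 0.1), (0.8, 0.0), (0.3, 0.4)]  # invented evidence pairs
weights = [0.5, 0.3, 0.2]                      # invented sufficiency weights
root = propagate(leaves, weights)              # (for, against, uncommitted)
```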

  15. MCDA-C Methodology Based Performance Evaluation of Small and Medium-Sized Businesses at the City of Lages

    Directory of Open Access Journals (Sweden)

    Marcelo Nascimento

    2013-12-01

    Full Text Available When employed in a focused manner, corporate performance evaluation has proven to be instrumental for entrepreneurs as an important tool that contributes to performance improvements at their organizations. This descriptive study, based on a questionnaire comprising 46 questions, analyses the performance of micro and small enterprises (MSEs) by employing the multicriteria methodology for constructive decision aiding (MCDA-C). From the respondents' replies, MCDA-C descriptors were formed, shaping six prime groups so as to identify relevant factors that drive or hinder MSE success. The questionnaire was applied to the managers in charge of administering 25 small and medium-sized companies of Lages, a city in the Brazilian state of Santa Catarina. Study findings provide evidence that (i) 24% of surveyed companies tend to go bankrupt; (ii) managerial functions at the MSEs are the prime source of influence on negative outcomes; (iii) from a financial control standpoint, surveyed companies fall far short of the minimum level deemed necessary to qualify as satisfactory; (iv) those that present the best results operate both within the domestic and international markets; (v) the study placed under the spotlight the group "Evolution Stage", evidencing a trend of ever increasing MSE expansion. This study revealed that the factors contributing to corporate failure are intensely interconnected and largely depend on the entrepreneur's own performance; the prime contribution of the findings resides in demonstrating that MCDA-C can be employed to analyse the performance of micro and small businesses.

  16. Development and validation of methodology for technetium-99m radiopharmaceuticals using high performance liquid chromatography (HPLC)

    International Nuclear Information System (INIS)

    Almeida, Erika Vieira de

    2009-01-01

    Radiopharmaceuticals are compounds, with no pharmacological action, which have a radioisotope in their composition and are used in Nuclear Medicine for diagnosis and therapy of several diseases. In this work, the development and validation of an analytical method for the 99mTc-HSA, 99mTc-EC, 99mTc-ECD and 99mTc-Sestamibi radiopharmaceuticals and for some raw materials were carried out by high performance liquid chromatography (HPLC). The analyses were performed in a Shimadzu HPLC system, LC-20AT Prominence model. Some impurities were identified by the addition of a reference standard substance. Validation of the method was carried out according to the criteria defined in RE n. 899/2003 of the National Sanitary Agency (ANVISA). The robustness results showed that it is necessary to control the flow rate, sample volume, pH of the mobile phase and oven temperature. The analytical curves were linear over the concentration ranges, with linear correlation coefficients (r²) above 0.9995. The results for precision, accuracy and recovery were in the ranges of 0.07-4.78%, 95.38-106.50% and 94.40-100.95%, respectively. The detection limits and quantification limits varied from 0.27 to 5.77 μg mL⁻¹ and from 0.90 to 19.23 μg mL⁻¹, respectively. The values for HSA, EC, ECD and MIBI in the lyophilized reagents were 8.95, 0.485, 0.986 and 0.974 mg L⁻¹, respectively. The mean radiochemical purity for 99mTc-HSA, 99mTc-EC, 99mTc-ECD and 99mTc-Sestamibi was (97.28 ± 0.09)%, (98.96 ± 0.03)%, (98.96 ± 0.03)% and (98.07 ± 0.01)%, respectively. All the parameters recommended by ANVISA were evaluated and the results are below the established limits. (author)
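Two of the validation quantities mentioned (linearity and the detection/quantification limits) follow the standard ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S, where S is the calibration slope and σ a residual standard deviation of the response. A hedged sketch on invented calibration data:

```python
import numpy as np

# Hedged sketch of two ICH-style validation computations: linearity of a
# calibration curve and the common LOD/LOQ formulas.  The concentrations,
# responses and sigma below are invented, not the thesis data.

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])      # ug/mL (invented)
area = np.array([51.0, 99.0, 202.0, 398.0, 801.0])  # detector response

slope, intercept = np.polyfit(conc, area, 1)        # linear calibration fit
r = np.corrcoef(conc, area)[0, 1]                   # correlation coefficient

sigma = 1.5                 # residual SD of the response (invented)
lod = 3.3 * sigma / slope   # detection limit, ug/mL
loq = 10.0 * sigma / slope  # quantification limit, ug/mL
```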

  17. Methods for the definition of calibration intervals and to perform cost-effectiveness analysis in management systems

    OpenAIRE

    Ribeiro, A.; Lages Martins, L.; Sousa, J. A.; Baptista, R.

    2009-01-01

    Management systems developed according to ISO/IEC 17025 must fulfil several requirements, among which the assurance of the quality of measurement results is particularly relevant. The enhancement of that quality depends upon several conditions, one of which is the instrumentation's metrological performance and its service conditions over its lifetime. In this context, the evaluation of long-term drifts in the instrumentation's metrological performance is a major concern requiring the developm...

  18. Performance of uncertainty quantification methodologies and linear solvers in cardiovascular simulations

    Science.gov (United States)

    Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison

    2017-11-01

    Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.
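The abstract's point about linear-solver efficiency in many-query UQ settings can be illustrated with a minimal sketch (not the authors' flow solver): a conjugate gradient with and without a Jacobi (diagonal) preconditioner on a synthetic, poorly scaled symmetric positive-definite system. The matrix, sizes and tolerances are assumptions for illustration.

```python
import numpy as np

# Minimal sketch: conjugate gradient with an optional Jacobi preconditioner,
# showing how preconditioning reduces iteration counts on a badly scaled SPD
# system -- the kind of saving that matters when UQ needs many repeated
# solves.  The test matrix below is synthetic.

def pcg(A, b, M_inv_diag=None, tol=1e-8, max_iter=500):
    """Conjugate gradient; returns (solution, iterations used)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = r * M_inv_diag if M_inv_diag is not None else r
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k
        z = r * M_inv_diag if M_inv_diag is not None else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

rng = np.random.default_rng(1)
n = 200
d = rng.uniform(1.0, 1e4, size=n)          # widely varying diagonal scales
A = np.diag(d) + 1e-2 * np.ones((n, n))    # SPD: positive diagonal + PSD rank-1
b = rng.standard_normal(n)

x_plain, it_plain = pcg(A, b)                            # no preconditioner
x_pre, it_pre = pcg(A, b, M_inv_diag=1.0 / np.diag(A))   # Jacobi
```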

  19. Clinical performance of an objective methodology to categorize tear film lipid layer patterns

    Science.gov (United States)

    Garcia-Resua, Carlos; Pena-Verdeal, Hugo; Giraldez, Maria J.; Yebra-Pimentel, Eva

    2017-08-01

    Purpose: To validate the performance of a new objective application designated iDEAS (Dry Eye Assessment System) to categorize different zones of lipid layer patterns (LLPs) in one image. Material and methods: Using the Tearscopeplus and a digital camera attached to a slit-lamp, 50 images were captured and analyzed by 4 experienced optometrists. In each image the observers outlined tear film zones that they clearly identified as a specific LLP. Further, the categorization made by the 4 optometrists (called observers 1, 2, 3 and 4) was compared with the automatic system included in iDEAS (the 5th observer). Results: In general, observer 3 classified worse than all the other observers (observers 1, 2, 4 and the automatic application; Wilcoxon test, p < 0.05). Furthermore, we obtained a set of photographs per LLP category for which all optometrists showed agreement by using the new tool. After examining them, we detected the most characteristic features for each LLP to enhance the description of the patterns proposed by Guillon. Conclusions: The automatic application included in the iDEAS framework is able to provide zones similar to the annotations made by experienced optometrists. Thus, the manual process done by experts can be automated, with the benefit of being unaffected by subjective factors.

  20. Evaluation of the performance of diagnosis-related groups and similar casemix systems: methodological issues.

    Science.gov (United States)

    Palmer, G; Reid, B

    2001-05-01

    With the increasing recognition and application of casemix for managing and financing healthcare resources, the evaluation of alternative versions of systems such as diagnosis-related groups (DRGs) has been afforded high priority by governments and researchers in many countries. Outside the United States, an important issue has been the perceived need to produce local versions, and to establish whether or not these perform more effectively than the US-based classifications. A discussion of casemix evaluation criteria highlights the large number of measures that may be used, the rationale and assumptions underlying each measure, and the problems in interpreting the results. A review of recent evaluation studies from a number of countries indicates that considerable emphasis has been placed on the predictive validity criterion, as measured by the R2 statistic. However, the interpretation of the findings has been affected greatly by the methods used, especially the treatment and definition of outlier cases. Furthermore, the extent to which other evaluation criteria have been addressed has varied widely. In the absence of minimum evaluation standards, it is not possible to draw clear-cut conclusions about the superiority of one version of a casemix system over another, the need for a local adaptation, or the further development of an existing version. Without the evidence provided by properly designed studies, policy-makers and managers may place undue reliance on subjective judgments and the views of the most influential, but not necessarily best informed, healthcare interest groups.

  1. Visual methodologies and participatory action research: Performing women's community-based health promotion in post-Katrina New Orleans.

    Science.gov (United States)

    Lykes, M Brinton; Scheib, Holly

    2016-01-01

    Recovery from disaster and displacement involves multiple challenges including accompanying survivors, documenting effects, and rethreading community. This paper demonstrates how African-American and Latina community health promoters and white university-based researchers engaged visual methodologies and participatory action research (photoPAR) as resources in cross-community praxis in the wake of Hurricane Katrina and the flooding of New Orleans. Visual techniques, including but not limited to photonarratives, facilitated the health promoters': (1) care for themselves and each other as survivors of and responders to the post-disaster context; (2) critical interrogation of New Orleans' entrenched pre- and post-Katrina structural racism as contributing to the racialised effects of and responses to Katrina; and (3) meaning-making and performances of women's community-based, cross-community health promotion within this post-disaster context. This feminist antiracist participatory action research project demonstrates how visual methodologies contributed to the co-researchers' cross-community self- and other caring, critical bifocality, and collaborative construction of a contextually and culturally responsive model for women's community-based health promotion post 'unnatural disaster'. Selected limitations as well as the potential for future cross-community antiracist feminist photoPAR in post-disaster contexts are discussed.

  2. A methodology of uncertainty/sensitivity analysis for PA of HLW repository learned from 1996 WIPP performance assessment

    International Nuclear Information System (INIS)

    Lee, Y. M.; Kim, S. K.; Hwang, Y. S.; Kang, C. H.

    2002-01-01

    The WIPP (Waste Isolation Pilot Plant) is a mined repository constructed by the US DOE for the permanent disposal of transuranic (TRU) wastes generated by activities related to the defence of the US since 1970. Its historic disposal operation began in March 1999, following receipt of a final permit from the State of NM after a positive certification decision for the WIPP was issued by the EPA in 1998, making it the first licensed facility in the US for the deep geologic disposal of radioactive wastes. The CCA (Compliance Certification Application) for the WIPP that the DOE submitted to the EPA in 1996 was supported by an extensive Performance Assessment (PA) carried out by Sandia National Laboratories (SNL), the so-called 1996 PA. Even though such PA methodologies could be greatly different from the approach considered for HLW disposal in Korea, largely due to the quite different geologic formations in which the repository is likely to be located, a review of the extensive work done through the WIPP PA studies could provide important lessons in view of the current situation in Korea, where an initial phase of conceptual studies on HLW disposal has just started. The objective of this work is an overview of the methodology used in the recent WIPP PA to support the US DOE WIPP CCA and a proposal for the Korean case

  3. Novel experimental methodology for the characterization of thermodynamic performance of advanced working pairs for adsorptive heat transformers

    International Nuclear Information System (INIS)

    Frazzica, Andrea; Sapienza, Alessio; Freni, Angelo

    2014-01-01

    This paper presents a novel experimental protocol for evaluating the thermodynamic performance of working pairs for application in adsorption heat pumps and chillers. The proposed approach is based on experimental measurement of the main thermo-physical parameters of adsorbent pairs by means of a DSC/TG apparatus modified to work under saturated vapour conditions, able to measure the ads-/desorption isobars and heat flux as well as the adsorbent specific heat under realistic boundary conditions. This characterization allows the thermodynamic performance of an adsorbent pair to be determined and the thermal Coefficient Of Performance (COP) for both heating and cooling applications to be estimated, relying only on experimental values. The experimental uncertainty of the method has been estimated at around 2% for the COP evaluation. In order to validate the proposed procedure, a first test campaign was carried out on the commercial adsorbent material AQSOA-Z02, produced by MPI (Mitsubishi Plastics Inc.), with water used as the refrigerant. The proposed experimental methodology will be applied to several other adsorbent materials, either already on the market or still under investigation, in order to provide an easy and reliable method for comparing the thermodynamic performance of adsorptive working pairs

  4. Summary of KOMPSAT-5 Calibration and Validation

    Science.gov (United States)

    Yang, D.; Jeong, H.; Lee, S.; Kim, B.

    2013-12-01

    including pointing, relative and absolute calibration as well as geolocation accuracy determination. The absolute calibration will be accomplished by determining absolute radiometric accuracy using trihedral corner reflectors already deployed on calibration and validation sites located southeast of Ulaanbaatar, Mongolia. To establish a measure for assessing the final image products, the geolocation accuracies of image products with different imaging modes will be determined by using deployed point targets and an available Digital Terrain Model (DTM), at different image processing levels. In summary, this paper will present the calibration and validation activities performed during the LEOP and IOT of KOMPSAT-5. The methodology and procedure of calibration and validation will be explained, as well as the results. Based on the results, the applications of SAR image products to geophysical processes will also be discussed.

  5. Methodological considerations in a pilot study on the effects of a berry enriched smoothie on children's performance in school.

    Science.gov (United States)

    Rosander, Ulla; Rumpunen, Kimmo; Olsson, Viktoria; Åström, Mikael; Rosander, Pia; Wendin, Karin

    2017-01-01

    Berries contain bioactive compounds that may affect children's cognitive function positively, while hunger and thirst during lessons before lunch affect academic performance negatively. This pilot study addresses methodological challenges in studying whether a berry smoothie, offered to schoolchildren as a mid-morning beverage, affects academic performance. The objective was to investigate whether a cross-over design can be used to study these effects in a school setting. Therefore, in order to investigate assay sensitivity, 236 Swedish children aged 10-12 years were administered either a berry smoothie (active) or a fruit-based control beverage after their mid-morning break. Both beverages provided 5% of a child's daily energy intake. In total, 91% of participants completed the study. Academic performance was assessed using the d2 test of attention. Statistical analyses were performed using the Wilcoxon signed rank test in StatXact v 10.3. The results showed that the children consumed less of the active berry smoothie than of the control (154 g vs. 246 g). Both beverages increased attention span and concentration significantly (p = 0.000). However, as there was no significant difference (p = 0.938) in the magnitude of this effect between the active and control beverages, the assay sensitivity of the study design was not proven. The effect of the beverages on academic performance was attributed to the supplementation of water and energy. Despite careful design, the active smoothie was less accepted than the control. This could be explained by unfamiliar sensory characteristics and peer influence, stressing the importance of sensory similarity and the challenges of performing a study in school settings. The employed cross-over design did not reveal any effects of bioactive compound consumption on academic performance. In future studies, the experimental set-up should be modified or replaced by e.g. a parallel study design, in order to provide conclusive results.
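The paired (cross-over) logic behind the analysis can be sketched as follows. The study used the Wilcoxon signed-rank test in StatXact; the sketch below uses a simpler exact sign test on the same kind of paired scores, and all score values are invented.

```python
from math import comb

# Illustrative sketch only.  The study analysed paired cross-over scores
# with the Wilcoxon signed-rank test; here the same paired logic is shown
# with an exact two-sided sign test.  The d2-style scores are invented.

def sign_test_p(before, after):
    """Two-sided exact sign test on paired observations (ties dropped)."""
    diffs = [a - b for b, a in zip(before, after) if a != b]
    n = len(diffs)
    pos = sum(d > 0 for d in diffs)
    k = min(pos, n - pos)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

baseline = [410, 395, 430, 388, 402, 420, 415, 399, 407, 412]       # invented
after_beverage = [428, 401, 441, 399, 415, 433, 420, 404, 418, 425]  # invented

p = sign_test_p(baseline, after_beverage)  # small p: consistent improvement
```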

  6. Do Work Placements Improve Final Year Academic Performance or Do High-Calibre Students Choose to Do Work Placements?

    Science.gov (United States)

    Jones, C. M.; Green, J. P.; Higson, H. E.

    2017-01-01

    This study investigates whether the completion of an optional sandwich work placement enhances student performance in final year examinations. Using Propensity Score Matching, our analysis departs from the literature by controlling for self-selection. Previous studies may have overestimated the impact of sandwich work placements on performance…

  7. Implementation of a methodology to perform the uncertainty and sensitivity analysis of the control rod drop in a BWR

    Energy Technology Data Exchange (ETDEWEB)

    Reyes F, M. del C.

    2015-07-01

    A methodology to perform uncertainty and sensitivity analysis of the cross sections used in a Trace/PARCS coupled model for a control rod drop transient of a BWR-5 reactor was implemented with the neutronics code PARCS. A model of the nuclear reactor detailing all assemblies located in the core was developed. The thermohydraulic model designed in Trace, however, was a simple one: a single channel representing all the assembly types in the core, placed inside a simple vessel model with boundary conditions established. The thermohydraulic model was coupled with the neutronics model, first for the steady state and then for a Control Rod Drop (CRD) transient, in order to carry out the uncertainty and sensitivity analysis. To analyze the cross sections used in the Trace/PARCS coupled model during the transient, Probability Density Functions (PDFs) were generated for the 22 cross-section parameters selected from the neutronics parameters that PARCS requires, thus obtaining 100 different cases for the Trace/PARCS coupled model, each with a different cross-section database. All these cases were executed with the coupled model, thereby obtaining 100 different outputs for the CRD transient, with special emphasis on 4 responses per output: 1) the reactivity, 2) the percentage of rated power, 3) the average fuel temperature and 4) the average coolant density. For each response during the transient an uncertainty analysis was performed in which the corresponding uncertainty bands were generated. With this analysis it is possible to observe the ranges of the chosen responses as the selected uncertainty parameters vary. This is very useful and important for maintaining safety in nuclear power plants, and also for verifying whether the uncertainty band lies within the safety margins. The sensitivity analysis complements the uncertainty analysis, identifying the parameter or parameters with the most influence on the

  8. Implementation of a methodology to perform the uncertainty and sensitivity analysis of the control rod drop in a BWR

    International Nuclear Information System (INIS)

    Reyes F, M. del C.

    2015-01-01

    A methodology to perform uncertainty and sensitivity analysis of the cross sections used in a Trace/PARCS coupled model for a control rod drop transient of a BWR-5 reactor was implemented with the neutronics code PARCS. A model of the nuclear reactor detailing all assemblies located in the core was developed. The thermohydraulic model designed in Trace, however, was a simple one: a single channel representing all the assembly types in the core, placed inside a simple vessel model with boundary conditions established. The thermohydraulic model was coupled with the neutronics model, first for the steady state and then for a Control Rod Drop (CRD) transient, in order to carry out the uncertainty and sensitivity analysis. To analyze the cross sections used in the Trace/PARCS coupled model during the transient, Probability Density Functions (PDFs) were generated for the 22 cross-section parameters selected from the neutronics parameters that PARCS requires, thus obtaining 100 different cases for the Trace/PARCS coupled model, each with a different cross-section database. All these cases were executed with the coupled model, thereby obtaining 100 different outputs for the CRD transient, with special emphasis on 4 responses per output: 1) the reactivity, 2) the percentage of rated power, 3) the average fuel temperature and 4) the average coolant density. For each response during the transient an uncertainty analysis was performed in which the corresponding uncertainty bands were generated. With this analysis it is possible to observe the ranges of the chosen responses as the selected uncertainty parameters vary. This is very useful and important for maintaining safety in nuclear power plants, and also for verifying whether the uncertainty band lies within the safety margins. The sensitivity analysis complements the uncertainty analysis, identifying the parameter or parameters with the most influence on the
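The sampling-and-banding procedure described in this record (draw perturbed parameters from PDFs, run many cases, form uncertainty bands over the transient) can be sketched generically. The "model" below is a stand-in decay curve, not Trace/PARCS, and all distributions and sensitivities are invented.

```python
import numpy as np

# Hedged sketch of the sampling step: perturb 22 input parameters with
# normal PDFs, run a surrogate "transient" per case, and form percentile
# uncertainty bands.  Everything here is a synthetic stand-in for the
# coupled-code runs described in the abstract.

rng = np.random.default_rng(42)

n_cases, n_params, n_times = 100, 22, 50
t = np.linspace(0.0, 5.0, n_times)

# Perturbation factors for the 22 parameters (1.0 +/- 3%, invented spread).
factors = rng.normal(1.0, 0.03, size=(n_cases, n_params))

def surrogate_response(f, t):
    """Stand-in for one coupled-code run: a decaying power transient."""
    rate = 1.0 * f[:n_params // 2].mean()    # hypothetical sensitivity
    amp = 100.0 * f[n_params // 2:].mean()   # hypothetical initial power (%)
    return amp * np.exp(-rate * t)

runs = np.array([surrogate_response(f, t) for f in factors])

# 5th/95th percentile uncertainty bands at each time point.
band_lo = np.percentile(runs, 5, axis=0)
band_hi = np.percentile(runs, 95, axis=0)
```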

  9. Aluminum nitride coatings using response surface methodology to optimize the thermal dissipated performance of light-emitting diode modules

    Science.gov (United States)

    Jean, Ming-Der; Lei, Peng-Da; Kong, Ling-Hua; Liu, Cheng-Wu

    2018-05-01

    This study optimizes the thermal dissipation ability of aluminum nitride (AlN) ceramics to increase the thermal performance of light-emitting diode (LED) modules. AlN powders are deposited on a heat sink as a thermal interface material using an electrostatic spraying process. The junction temperature of the heat sink is modeled by response surface methodology (RSM) based on Taguchi methods. In addition, the structure and properties of the AlN coating are examined using X-ray photoelectron spectroscopy (XPS). In the XPS analysis, the AlN sub-peaks are observed at 72.79 eV for Al2p and 398.88 eV for N1s, and an N1s sub-peak is assigned to N-O bonding at 398.60 eV and Al-N bonding at 395.95 eV, consistent with good thermal properties. The results show that the use of AlN ceramic material on a heat sink can enhance the thermal performance of LED modules. In addition, the percentage error between the predicted and experimental results for the quadratic model, compared with the linear and interaction models, was found to be within 7.89%, indicating that it is a good predictor. Accordingly, RSM can effectively enhance the thermal performance of an LED, and the beneficial heat dissipation effects of AlN are improved by electrostatic spraying.
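The quadratic response-surface fit and the predicted-versus-experimental percentage error can be illustrated with synthetic data. The two factors, their ranges, and the response values below are assumptions for the sketch, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-factor experiment (e.g. coded spray voltage and feed rate);
# the response stands in for junction temperature in degrees C.
X = rng.uniform(-1, 1, size=(15, 2))
y = (60 + 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1]
     + 2 * X[:, 0] ** 2 + rng.normal(0, 0.2, 15))

def design_matrix(X):
    """Full quadratic RSM model: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares fit of the quadratic model, then percent error per run.
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
pred = design_matrix(X) @ beta
pct_err = 100 * np.abs(pred - y) / y
```

A linear or interaction-only model is obtained by dropping columns from the design matrix, which is how the three model classes in the abstract would be compared.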

  10. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.
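The role of a vapor pressure equation in such a calibrator can be sketched as follows: a temperature-controlled equilibrium chamber saturates a gas stream with elemental mercury, and the vapor pressure equation plus the ideal gas law give the chamber concentration, which is then diluted down to the 2-40 ug/m{sup 3} working range. The two-point Clausius-Clapeyron constants below are illustrative only and are not the NIST-recommended equation this project produced.

```python
import math

M_HG = 200.59e-3   # molar mass of mercury, kg/mol
R = 8.314          # gas constant, J/(mol K)

def hg_vapor_pressure(T):
    """Saturated elemental mercury vapor pressure in Pa.
    Illustrative fit ln p = A - B/T anchored at ~25 C and the normal
    boiling point; NOT the NIST-recommended correlation."""
    A, B = 23.13, 7309.0
    return math.exp(A - B / T)

def hg_saturated_concentration(T):
    """Equilibrium-chamber Hg concentration in ug/m3 via the ideal gas law."""
    p = hg_vapor_pressure(T)
    return p * M_HG / (R * T) * 1e9  # kg/m3 -> ug/m3

c25 = hg_saturated_concentration(298.15)  # on the order of 2e4 ug/m3 at 25 C
```

The strong temperature dependence (roughly a factor of three per 10 K near room temperature) is why an accurate, agreed-upon vapor pressure equation matters for vendor calibrators.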

  11. Mercury Continuous Emission Monitor Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Ryan Boysen; William Schuster; Joseph Rovani

    2009-03-12

    Mercury continuous emissions monitoring systems (CEMs) are being implemented in over 800 coal-fired power plant stacks throughout the U.S. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor calibrators/generators. These devices are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005 and vacated by a Federal appeals court in early 2008, required that calibration be performed with NIST-traceable standards. Despite the vacatur, mercury emissions regulations in the future will require NIST-traceable calibration standards, and EPA does not want to interrupt the effort towards developing NIST traceability protocols. The traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued a conceptual interim traceability protocol for elemental mercury calibrators. The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 {micro}g/m{sup 3} elemental mercury and in the future down to 0.2 {micro}g/m{sup 3}, and this analysis will be directly traceable to analyses by NIST. The EPA traceability protocol document is divided into two separate sections. The first deals with the qualification of calibrator models by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the calibrators that meet the qualification specifications. The NIST traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma

  12. Physiotherapy ultrasound calibrations

    International Nuclear Information System (INIS)

    Gledhill, M.

    1996-01-01

    Calibration of physiotherapy ultrasound equipment has long been a problem. Numerous surveys around the world over the past 20 years have all found that only a low percentage of the units tested had an output within 30% of that indicated. In New Zealand, a survey carried out by the NRL in 1985 found that only 24% had an output, at the maximum setting, within ±20% of that indicated. The present performance Standard for new equipment (NZS 3200.2.5:1992) requires that the measured output should not deviate from that indicated by more than ±30%. This may be tightened to ±20% in the next few years. Any calibration is only as good as the calibration equipment. Some force balances can be tested with small weights to simulate the force exerted by an ultrasound beam, but with others this is not possible. For such balances, testing may only be feasible with a calibrated source which could be used like a transfer standard. (author). 4 refs., 3 figs

  13. Using performance assessment for radioactive waste disposal decision making -- implementation of the methodology into the third performance assessment iteration of the Greater Confinement Disposal site

    International Nuclear Information System (INIS)

    Gallegos, D.P.; Conrad, S.H.; Baer, T.A.

    1993-01-01

    The US Department of Energy is responsible for the disposal of a variety of radioactive wastes. Some of these wastes are prohibited from shallow land burial and also do not meet the waste acceptance criteria for proposed waste repositories at the Waste Isolation Pilot Plant (WIPP) and Yucca Mountain. These have been termed ''special-case'' waste and require an alternative disposal method. From 1984 to 1989, the Department of Energy disposed of a small quantity of special-case transuranic wastes at the Greater Confinement Disposal (GCD) site at the Nevada Test Site. In this paper, an iterative performance assessment is demonstrated as a useful decision-making tool in the overall compliance assessment process for waste disposal. The GCD site has been used as the real-site implementation and test of the performance assessment approach. Through the first two performance assessment iterations for the GCD site, and the transition into the third, we demonstrate how the performance assessment methodology uses probabilistic risk concepts to guide effective decisions about site characterization activities and how it can be used as a powerful tool in bringing compliance decisions to closure

  14. Isotopic exchange on solid-phase micro extraction fiber in sediment under stagnant conditions: Implications for field application of performance reference compound calibration.

    Science.gov (United States)

    Bao, Lian-Jun; Wu, Xiaoqin; Jia, Fang; Zeng, Eddy Y; Gan, Jay

    2016-08-01

    An overlooked issue for field application of in situ performance reference compound (PRC) calibration methods is the validity of the assumption that both the sorption of a target compound and desorption of its corresponding PRC follow first-order kinetics with the same rate constants under stagnant conditions. In the present study, disposable polydimethylsiloxane fibers of 2 sizes (7 and 35 µm) impregnated with 8 (13)C-labeled or deuterated PRCs were statically deployed into different marine sediments, from which the kinetics for sorption of the target compounds and desorption of the PRCs were characterized. Nonsymmetrical profiles were observed for exchange of the target analytes and their corresponding PRCs in sediment under stagnant conditions. The hysteretic desorption of PRCs in the kinetic regime may be ascribed to the low chemical potential between the fiber and sediment porewater, which reflects the inability of water molecules to rapidly diffuse through sediment to solvate the PRCs in the aqueous layer around the fiber surface. A moderate correlation (r = 0.77 and r = 0.57, p < 0.05 for both regressions) between the PRC-calibrated equilibrium concentrations of 1,1-dichloro-2,2-bis-(chlorophenyl) ethylene (p,p'-DDE) and polychlorinated biphenyl (PCB)-153 and the lipid-normalized levels in worms (Neanthes arenaceodentata) was obtained in co-exposure tests under simulated field conditions, probably resulting from slightly overestimated bioavailability because of the hysteretic desorption of PRCs and toxic effects. Environ Toxicol Chem 2016;35:1978-1985. © 2015 SETAC.
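Under the first-order, equal-rate-constant assumption that the paper tests, PRC calibration reduces to a simple correction: the analyte uptake fraction equals one minus the PRC fraction retained. A minimal sketch with illustrative numbers:

```python
import math

def prc_calibrated_concentration(c_fiber, prc_retained):
    """Estimate the equilibrium fiber concentration from a kinetic-regime
    measurement, assuming first-order isotropic exchange: analyte uptake
    fraction = 1 - PRC fraction retained."""
    f_eq = 1.0 - prc_retained
    return c_fiber / f_eq

# First-order kinetics with a common rate constant k (illustrative values):
k, t, c_eq_true = 0.05, 10.0, 100.0           # 1/day, days, ng/g
c_fiber = c_eq_true * (1 - math.exp(-k * t))  # analyte sorbed so far
prc_retained = math.exp(-k * t)               # PRC fraction still on fiber
estimate = prc_calibrated_concentration(c_fiber, prc_retained)
```

When desorption is hysteretic, as observed here, prc_retained is higher than the symmetric model predicts, so f_eq is underestimated and the equilibrium concentration is overestimated, which is consistent with the bias the abstract reports.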

  15. The development and calibration of a physical model to assist in optimising the hydraulic performance and design of maturation ponds.

    Science.gov (United States)

    Aldana, G J; Lloyd, B J; Guganesharajah, K; Bracho, N

    2005-01-01

    A physical and a computational fluid dynamic (CFD) model (HYDRO-3D) were developed to simulate the effects of novel maturation pond configurations, and critical environmental factors (wind speed and direction), on the hydraulic efficiency (HE) of full-scale maturation ponds. The aims of the study were to assess the reliability of the physical model and convergence with HYDRO-3D, as tools for assessing and predicting best hydraulic performance of ponds. The physical model of the open ponds was scaled to provide a similar nominal retention time (NRT) of 52 hours. Under natural conditions, with a variable prevailing westerly wind opposite to the inlet, a rhodamine tracer study on the full-scale prototype pond produced a mean hydraulic retention time (MHRT) of 18.5 hours (HE = 35.5%). Simulations of these wind conditions, but with constant wind speed and direction in both the physical model and HYDRO-3D, produced a higher MHRT of 21 hours in both models and an HE of 40.4%. In the absence of wind, tracer studies in the open-pond physical model revealed incomplete mixing, with peak concentrations leaving the model within several hours, but an increase in MHRT to 24.5-28 hours (HE = 50.2-57.1%). Although wind blowing opposite to the inlet flow increases dispersion (mixing), it reduced hydraulic performance by 18-25%. Much higher HE values were achieved by baffles (67-74%) and three channel configurations (69-92%), compared with the original open pond configuration. Good agreement was achieved between the two models where key environmental and flow parameters can be controlled and set, but it is difficult to accurately simulate full-scale works conditions due to the unpredictability of natural hourly and daily fluctuations in these parameters.
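Hydraulic efficiency as used above appears to be the mean hydraulic retention time expressed as a fraction of the nominal retention time; on that assumption the paper's figures are reproduced to within rounding:

```python
def hydraulic_efficiency(mhrt, nrt):
    """Hydraulic efficiency (%) as MHRT over nominal retention time."""
    return 100.0 * mhrt / nrt

NRT = 52.0  # hours, the scaled nominal retention time of the model

he_windy = hydraulic_efficiency(18.5, NRT)  # ~35.6%, prevailing-wind tracer study
he_sim = hydraulic_efficiency(21.0, NRT)    # ~40.4%, constant-wind simulations
```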

  16. Calibration methods for the Hargreaves-Samani equation

    Directory of Open Access Journals (Sweden)

    Lucas Borges Ferreira

    The estimation of reference evapotranspiration is an important factor for hydrological studies and the design and management of irrigation systems, among others. The Penman-Monteith equation presents high precision and accuracy in the estimation of this variable. However, its use is limited by the large number of required meteorological data. In this context, the Hargreaves-Samani equation can be used as an alternative, although for better performance a local calibration is required. Thus, the aim was to compare the calibration of the Hargreaves-Samani equation by linear regression, by adjustment of the coefficients (A and B) and exponent (C) of the equation, and by combinations of the two previous alternatives. Daily data from 6 weather stations located in the state of Minas Gerais, from the period 1997 to 2016, were used. The calibration of the Hargreaves-Samani equation was performed in five ways: calibration by linear regression, adjustment of parameter “A”, adjustment of parameters “A” and “C”, adjustment of parameters “A”, “B” and “C”, and adjustment of parameters “A”, “B” and “C” followed by calibration by linear regression. The performances of the models were evaluated based on the statistical indicators mean absolute error, mean bias error, Willmott’s index of agreement, correlation coefficient and performance index. All the studied methodologies promoted better estimations of reference evapotranspiration. The simultaneous adjustment of the empirical parameters “A”, “B” and “C” was the best alternative for calibration of the Hargreaves-Samani equation.
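A minimal sketch of the regression-based calibration route: the standard Hargreaves-Samani form (A = 0.0023, B = 17.8, C = 0.5) is evaluated, then a linear regression maps its estimates onto Penman-Monteith values. The synthetic data below stand in for the station records; the pretend "PM truth" is an assumption for illustration.

```python
import numpy as np

def hargreaves_samani(ra, tmean, tmax, tmin, a=0.0023, b=17.8, c=0.5):
    """Hargreaves-Samani reference ET (mm/day); ra is extraterrestrial
    radiation expressed as equivalent evaporation (mm/day)."""
    return a * ra * (tmean + b) * (tmax - tmin) ** c

# Synthetic daily records standing in for a weather station's data.
rng = np.random.default_rng(1)
ra = rng.uniform(10, 16, 200)
tmin = rng.uniform(12, 20, 200)
tmax = tmin + rng.uniform(5, 15, 200)
tmean = (tmax + tmin) / 2

et_hs = hargreaves_samani(ra, tmean, tmax, tmin)
et_pm = 0.85 * et_hs + 0.3 + rng.normal(0, 0.1, 200)  # pretend PM "truth"

# Calibration by linear regression: ETo_PM ~ alpha * ETo_HS + beta.
alpha, beta = np.polyfit(et_hs, et_pm, 1)
et_calibrated = alpha * et_hs + beta
```

The other calibration routes in the abstract instead optimize a, b and c inside the function itself, alone or followed by this regression step.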

  17. LANL MTI calibration team experience

    Science.gov (United States)

    Bender, Steven C.; Atkins, William H.; Clodius, William B.; Little, Cynthia K.; Christensen, R. Wynn

    2004-01-01

    The Multispectral Thermal Imager (MTI) was designed as an imaging radiometer with absolute calibration requirements established by Department of Energy (DOE) mission goals. Particular emphasis was given to water surface temperature retrieval using two mid wave and three long wave infrared spectral bands; the fundamental requirement was a surface temperature determination of 1 K at the 68% confidence level. For the ten solar reflective bands a one-sigma radiometric performance goal of 3% was established. In order to address these technical challenges a calibration facility was constructed containing newly designed sources that were calibrated at NIST. Additionally, the design of the payload and its onboard calibration system supported post-launch maintenance and update of the ground calibration. The on-orbit calibration philosophy also included vicarious techniques using ocean buoys, playas and other instrumented sites; these became increasingly important subsequent to an electrical failure which disabled the onboard calibration system. This paper offers various relevant lessons learned in the eight-year process of reducing to practice the calibration capability required by the scientific mission. The discussion presented includes observations pertinent to operational and procedural issues as well as hardware experiences; the validity of some of the initial assumptions is also explored.

  18. Technical guidelines for personnel dosimetry calibrations

    International Nuclear Information System (INIS)

    Roberson, P.L.; Fox, R.A.; Hadley, R.T.; Holbrook, K.L.; Hooker, C.D.; McDonald, J.C.

    1983-01-01

    A base of technical information has been acquired and used to evaluate the calibration, design, and performance of selected personnel dosimetry systems in use at Department of Energy (DOE) facilities. A technical document was prepared to guide DOE and DOE contractors in selecting and evaluating personnel dosimetry systems and calibrations. A parallel effort was initiated to intercompare the radiological calibration standards used to calibrate DOE personnel dosimeters

  19. Methodology for predicting the characteristics and performance of different PET camera designs: The choice of figures of merit

    International Nuclear Information System (INIS)

    Deconinck, F.; Defrise, M.; Kuyk, S.; Bossuyt, A.

    1985-01-01

    In order to compare different PET camera designs (ring and planar geometry), this paper proposes ''figures of merit'' which allow questions such as ''Is it better, given a particular design, to achieve a coincidence rate of 10 kHz with 5% randoms, or a 20 kHz rate with 10% randoms?'' to be answered. The authors propose a methodology based on information theory. The image is the three-dimensional distribution which conveys the information to the human or electronic observer. The image is supposed to consist of discrete image elements (voxels) of uniform size, each characterized by a coincidence density. The performance of a non-ideal imager can be studied by evaluating the effect on the visibility surface: the number of events per image element will be affected by the limited efficiency, geometrical acceptance angle, temporal resolution, and energy resolution. The authors show that the effect of these degradations can be introduced in the formula by replacing the number of true coincidences by an effective number of coincidences. The non-ideal imager will distort the visibility surface, so the authors compare the performance of different PET cameras by comparing the different distortions which they induce and hence their ability to detect the information present in study objects
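The opening question can be made concrete with a related, standard count-statistics figure of merit, the noise-equivalent count rate NEC = T^2 / (T + S + R). The paper develops its own information-theoretic measure; NEC is used here only as an illustrative stand-in for an "effective number of coincidences".

```python
def nec(total_rate, randoms_fraction, scatter_rate=0.0):
    """Noise-equivalent count rate, NEC = T^2 / (T + S + R), where the
    trues rate T is the total coincidence rate minus randoms and scatter."""
    randoms = total_rate * randoms_fraction
    trues = total_rate - randoms - scatter_rate
    return trues**2 / (trues + scatter_rate + randoms)

case_a = nec(10.0, 0.05)  # 10 kHz coincidences with 5% randoms -> 9.025 kHz
case_b = nec(20.0, 0.10)  # 20 kHz coincidences with 10% randoms -> 16.2 kHz
```

On this metric, and ignoring scatter, the 20 kHz / 10% randoms operating point carries more effective information despite its higher randoms fraction.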

  20. Measuring performance in off-patent drug markets: a methodological framework and empirical evidence from twelve EU Member States.

    Science.gov (United States)

    Kanavos, Panos

    2014-11-01

    This paper develops a methodological framework to help evaluate the performance of generic pharmaceutical policies post-patent expiry or after loss of exclusivity in non-tendering settings, comprising five indicators (generic availability, time delay to and speed of generic entry, number of generic competitors, price developments, and generic volume share evolution) and proposes a series of metrics to evaluate performance. The paper subsequently tests this framework across twelve EU Member States (MS) by using IMS data on 101 patent expired molecules over the 1998-2010 period. Results indicate that significant variation exists in generic market entry, price competition and generic penetration across the study countries. Size of a geographical market is not a predictor of generic market entry intensity or price decline. Regardless of geographic or product market size, many off patent molecules lack generic competitors two years after loss of exclusivity. The ranges in each of the five proposed indicators suggest, first, that there are numerous factors--including institutional ones--contributing to the success of generic entry, price decline and market penetration and, second, MS should seek a combination of supply and demand-side policies in order to maximise cost-savings from generics. Overall, there seems to be considerable potential for faster generic entry, uptake and greater generic competition, particularly for molecules at the lower end of the market. Copyright © 2014. Published by Elsevier Ireland Ltd.
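Several of the five indicators reduce to direct computations on molecule-level market data; a sketch with hypothetical records (not values from the IMS dataset used in the study):

```python
from datetime import date

# Hypothetical record for one off-patent molecule in one Member State.
loe = date(2008, 3, 1)            # loss of exclusivity
first_generic = date(2008, 9, 1)  # first observed generic sale
originator_price_pre_loe = 10.0   # price per standard unit at LOE
generic_price_24m = 4.0           # mean generic price 24 months after LOE
generic_volume_24m = 700.0        # generic standard units, month 24
total_volume_24m = 1000.0         # all standard units, month 24

# Time delay to generic entry, in months after loss of exclusivity.
delay_months = ((first_generic.year - loe.year) * 12
                + (first_generic.month - loe.month))

# Price development: generic price relative to the pre-LOE originator price.
price_index = generic_price_24m / originator_price_pre_loe

# Generic volume share at the 24-month mark.
volume_share = generic_volume_24m / total_volume_24m
```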

  1. Optimization of SPECT calibration for quantification of images applied to dosimetry with iodine-131

    International Nuclear Information System (INIS)

    Carvalho, Samira Marques de

    2018-01-01

    SPECT systems calibration plays an essential role in the accuracy of the quantification of images. In this work, in its first stage, an optimized SPECT calibration method was proposed for 131I studies, considering the partial volume effect (PVE) and the position of the calibration source. In the second stage, the study aimed to investigate the impact of count density and reconstruction parameters on the determination of the calibration factor and the quantification of the image in dosimetry studies, considering the reality of clinical practice in Brazil. In the final step, the study aimed at evaluating the influence of several factors in the calibration for absorbed dose calculation using Monte Carlo (MC) simulations with the GATE code. Calibration was performed by determining a calibration curve (sensitivity versus volume) obtained by applying different thresholds. Then, the calibration factors were determined with an exponential function adjustment. Images were acquired with high and low count densities for several source positions within the simulator. To validate the calibration method, the calibration factors were used for absolute quantification of the total reference activities. The images were reconstructed adopting two approaches with different parameters, usually used in patient images. The methodology developed for the calibration of the tomographic system was easier and faster to implement than other procedures suggested to improve the accuracy of the results. The study also revealed the influence of the location of the calibration source, demonstrating better precision in the absolute quantification when the location of the target region is considered during the calibration of the system. The study, applied to the Brazilian thyroid protocol, suggests revising the calibration of the SPECT system, including different positions for the reference source, besides acquisitions considering the Signal to Noise Ratio (SNR) of the images.
Finally, the doses obtained with the
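A sketch of the calibration curve described above, sensitivity versus source volume with an exponential adjustment. The functional form, the synthetic data, and the dependency-free grid-search fit are all assumptions made for illustration, not the thesis' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic calibration data: system sensitivity (cps/MBq) versus source
# volume (mL); small volumes read low because of the partial volume effect.
volumes = np.array([1, 2, 5, 10, 20, 50, 100.0])
S_INF_TRUE, V0_TRUE = 8.0, 6.0
sensitivity = (S_INF_TRUE * (1 - np.exp(-volumes / V0_TRUE))
               + rng.normal(0, 0.02, volumes.size))

# Exponential adjustment S(v) = s_inf * (1 - exp(-v/v0)), fitted by a
# coarse least-squares grid search to keep the sketch self-contained.
s_grid = np.linspace(6, 10, 101)
v_grid = np.linspace(2, 12, 101)
best = min(((np.sum((s * (1 - np.exp(-volumes / v)) - sensitivity) ** 2), s, v)
            for s in s_grid for v in v_grid))
_, s_inf, v0 = best

def calibration_factor(volume):
    """Volume-dependent calibration factor (MBq per cps) from the fit."""
    return 1.0 / (s_inf * (1 - np.exp(-volume / v0)))
```

Once fitted, the volume-dependent factor converts measured count rates in a target region to absolute activity, which is the quantity the dosimetry step consumes.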

  2. Electrical performance verification methodology for large reflector antennas: based on the P-band SAR payload of the ESA BIOMASS candidate mission

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Kim, Oleksiy S.; Nielsen, Jeppe Majlund

    2013-01-01

    In this paper, an electrical performance verification methodology for large reflector antennas is proposed. The verification methodology was developed for the BIOMASS P-band (435 MHz) synthetic aperture radar (SAR), but can be applied to other large deployable or fixed reflector antennas for which the verification of the entire antenna or payload is impossible. The two-step methodology is based on accurate measurement of the feed structure characteristics, such as complex radiation pattern and radiation efficiency, with an appropriate measurement technique, and then accurate calculation of the radiation pattern and gain of the entire antenna including support and satellite structure with an appropriate computational software. A preliminary investigation of the proposed methodology was carried out by performing extensive simulations of different verification approaches. The experimental validation...

  3. Repository environmental parameters and models/methodologies relevant to assessing the performance of high-level waste packages in basalt, tuff, and salt

    Energy Technology Data Exchange (ETDEWEB)

    Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.

    1987-09-01

    This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington, the tuff at Yucca Mountain by the Nevada Test Site, and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters; the models/methodologies employed to determine the parameters; and the input data base for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs.

  4. Repository environmental parameters and models/methodologies relevant to assessing the performance of high-level waste packages in basalt, tuff, and salt

    International Nuclear Information System (INIS)

    Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.

    1987-09-01

    This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington, the tuff at Yucca Mountain by the Nevada Test Site, and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters; the models/methodologies employed to determine the parameters; and the input data base for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs

  5. Design, manufacture, and calibration of infrared radiometric blackbody sources

    International Nuclear Information System (INIS)

    Byrd, D.A.; Michaud, F.D.; Bender, S.C.

    1996-04-01

    A Radiometric Calibration Station (RCS) is being assembled at Los Alamos National Laboratory (LANL) which will allow for calibration of sensors with detector arrays having spectral capability from about 0.4-15 μm. Within this configuration, two blackbody sources have been designed to cover the spectral range from about 3-15 μm, operating at temperatures ranging from about 180-350 K within a vacuum environment. The sources are designed to present a uniform spectral radiance over a large area to the sensor unit under test. The thermal uniformity requirement of the blackbody cavities has been one of the key factors of the design, requiring less than 50 mK variation over the entire blackbody surface to attain effective emissivity values of about 0.999. Once the two units are built and verified to the level of about 100 mK at LANL, they will be sent to the National Institute of Standards and Technology (NIST), where at least a factor of two improvement will be calibrated into the blackbody control system. The physical size of these assemblies will require modifications of the existing NIST Low Background Infrared (LBIR) Facility. LANL has constructed a bolt-on addition to the LBIR facility that will allow calibration of our large aperture sources. The methodology for attaining the two blackbody sources at calibrated levels of performance equivalent to the present state of the art is explained in the following
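The millikelvin-level uniformity requirement can be related to radiance through Planck's law: near 300 K and 10 μm, a 50 mK temperature error maps to roughly 0.08% in spectral radiance. The wavelength and temperature below are chosen for illustration, not taken from the source specifications.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength, T):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    x = H * C / (wavelength * KB * T)
    return 2 * H * C**2 / wavelength**5 / math.expm1(x)

def relative_radiance_change(wavelength, T, dT):
    """Fractional radiance change for a small temperature offset dT."""
    return (planck_radiance(wavelength, T + dT)
            - planck_radiance(wavelength, T)) / planck_radiance(wavelength, T)

# A 50 mK non-uniformity at 300 K, evaluated at 10 um.
drel = relative_radiance_change(10e-6, 300.0, 0.05)  # ~8e-4, i.e. ~0.08%
```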

  6. A multimedia exposure assessment methodology for evaluating the performance of the design of structures containing chemical and radioactive wastes

    International Nuclear Information System (INIS)

    Stephanatos, B.N.; Molholt, B.; Walter, K.P.; MacGregor, A.

    1991-01-01

    The objectives of this work are to develop a multimedia exposure assessment methodology for the evaluation of existing and future design of structures containing chemical and radioactive wastes and to identify critical parameters for design optimization. The designs are evaluated in terms of their compliance with various federal and state regulatory requirements. Evaluation of the performance of a particular design is presented within the scope of a given exposure pathway. An exposure pathway has four key components: (1) a source and mechanism of chemical release, (2) a transport medium; (3) a point of exposure; and (4) a route of exposure. The first step in the analysis is the characterization of the waste source behavior. The rate and concentration of releases from the source are evaluated using appropriate mathematical models. The migration of radionuclides and chemicals is simulated through each environmental medium to the exposure point. The total exposure to the potential receptor is calculated, and an estimate of the health effects of the exposure is made. Simulation of the movement of radionuclides and chemical wastes from the source to the receptor point includes several processes. If the predicted human exposure to contaminants meets the performance criteria, the design has been validated. Otherwise the structure design is improved to meet the performance criteria. A phased modeling approach is recommended at a particular mixed waste site. A relatively simple model is initially used to pinpoint critical fate and transport processes and design parameters. The second phase of the modeling effort involves the use of more complex and resource intensive fate and transport models. This final step in the modeling process provides more accurate estimates of contaminant concentrations at the point of exposure. Thus the human dose is more accurately predicted, providing better design validation

  7. Methodology developed at the CEA/IPSN for long term performance assessment of nuclear waste repositories in geological formations

    International Nuclear Information System (INIS)

    Raimbault, P.; Lewi, J.

    1985-05-01

    The CEA/IPSN is currently developing a methodology for safety evaluation of disposal site projects in granite, clay and bedded salt host rock formations. In the Institute of Protection and Nuclear Safety, the Department of Safety Analysis (DAS) is responsible for the coordination of the modeling effort, which is performed in several specialized groups. The models are commissioned and utilized at the IPSN for specific safety evaluations. They are improved as needed and validated through international exercises (INTRACOIN-HYDROCOIN-ATKINS) and experimental programs. The DAS also develops a global performance assessment code named MELODIE, whose structure allows the individual models to be coupled. This code participates in international joint studies such as PAGIS, in order to test its ability to model specific sites. This should help to control the adequacy of the individual models for risk assessment evaluation, to ensure the availability of specific data, and to identify the most sensitive parameters. This approach should allow coordination between experimentation, code development and safety rule determination, in order to be ready to perform safety assessments on chosen sites. The current status of the different aspects of this work is presented. The model development concerns mainly: transport, hydrogeology, source term, dose calculation and sensitivity studies. Its connection with data collection and model validation is stressed in the fields of source modeling, hydrogeology, geochemistry and geoprospective studies. The description of the first version of MELODIE is presented. Some results of the interactive evaluation of the source term, the groundwater flow and the transport of radionuclides at a granite site are presented as well

  8. Using a model of the performance measures in Soft Systems Methodology (SSM) to take action: a case study in health care

    NARCIS (Netherlands)

    Kotiadis, K.; Tako, A.; Rouwette, E.A.J.A.; Vasilakis, C.; Brennan, J.; Gandhi, P.; Wegstapel, H.; Sagias, F.; Webb, P.

    2013-01-01

    This paper uses a case study of a multidisciplinary colorectal cancer team in health care to explain how a model of performance measures can lead to debate and action in Soft System Methodology (SSM). This study gives a greater emphasis and role to the performance measures than currently given in

  9. SURF Model Calibration Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    SURF and SURFplus are high-explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition-and-growth concept of hot spots and, for SURFplus, by a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
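    The Pop plot fit described above is, in its simplest form, a linear least-squares regression of log run distance against log input pressure. The sketch below illustrates only that fitting step; the function names, units, and data layout are assumptions for illustration, not part of the SURF model itself.

```python
import numpy as np

def fit_pop_plot(pressures_gpa, run_distances_mm):
    """Least-squares Pop plot fit: log10(run distance to detonation)
    is modeled as linear in log10(input shock pressure).
    Returns (intercept, slope) of log10(x_run) = a + b * log10(P)."""
    logp = np.log10(np.asarray(pressures_gpa, dtype=float))
    logx = np.log10(np.asarray(run_distances_mm, dtype=float))
    slope, intercept = np.polyfit(logp, logx, 1)  # highest degree first
    return intercept, slope

def predicted_run_distance(intercept, slope, pressure_gpa):
    """Run distance to detonation predicted by the fitted Pop plot."""
    return 10.0 ** (intercept + slope * np.log10(pressure_gpa))
```

    In a calibration loop, the 1-D simulated run distances would replace the experimental ones here, and the burn-model parameters would be adjusted until the numerical Pop plot matches the measured one.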

  10. Automated Attitude Sensor Calibration: Progress and Plans

    Science.gov (United States)

    Sedlak, Joseph; Hashmall, Joseph

    2004-01-01

    This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.
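    The sequential Davenport algorithm mentioned above builds on Davenport's q-method, in which the optimal attitude quaternion is the eigenvector, belonging to the largest eigenvalue, of a 4x4 matrix K assembled from weighted vector observations. A minimal batch (non-sequential) NumPy sketch, with the quaternion stored vector-part-first; function names and conventions are illustrative assumptions, not Goddard's implementation:

```python
import numpy as np

def quat_to_dcm(q):
    """Attitude matrix (reference -> body) from quaternion [qx, qy, qz, qw]."""
    qv, q4 = np.asarray(q[:3], float), float(q[3])
    qx = np.array([[0.0, -qv[2], qv[1]],
                   [qv[2], 0.0, -qv[0]],
                   [-qv[1], qv[0], 0.0]])
    return (q4**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) - 2.0 * q4 * qx

def davenport_q_method(body_vecs, ref_vecs, weights=None):
    """Davenport's q-method: optimal quaternion rotating reference-frame
    unit vectors into the body frame, from weighted vector pairs."""
    b = np.asarray(body_vecs, float)
    r = np.asarray(ref_vecs, float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, float)
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
    S = B + B.T
    sigma = np.trace(B)
    Z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = Z
    K[3, :3] = Z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)   # eigenvalues in ascending order
    q = vecs[:, -1]                  # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)
```

    A sequential variant would update K recursively as new sensor measurements arrive rather than rebuilding it from the whole batch.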

  11. Proposal for evaluation methodology on impact resistant performance and construction method of tornado missile protection net structure

    International Nuclear Information System (INIS)

    Namba, Kosuke; Shirai, Koji

    2014-01-01

    In nuclear power plants, the need for tornado missile protection structures is becoming a key technical issue. Utilization of a net structure appears to be one of the realistic countermeasures from the point of view of mitigating wind and seismic loads. However, the methodology for selecting suitable net materials, the energy absorption design method, and the construction method are not sufficiently established. In this report, three candidate materials (high-strength metal mesh, super-strong polyethylene fiber net, and steel grating) were selected and subjected to material screening tests, energy absorption tests (free drops of a heavy weight), and impact tests with a small-diameter missile. As a result, high-strength metal mesh was selected as a suitable material for a tornado missile protection net structure. Moreover, a construction method that obtains good energy absorption performance from the material and a practical design method to estimate the energy absorption of the high-strength metal mesh under tornado missile impact load were proposed. (author)
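    For the free-drop energy absorption tests mentioned above, the energy delivered to the net is bounded by the potential energy of the dropped weight, E = mgh. A trivial sketch (units and the function name are illustrative assumptions):

```python
def drop_test_energy(mass_kg, drop_height_m, g=9.81):
    """Impact energy of a freely dropped weight, E = m * g * h (joules).
    This bounds the energy the net specimen must absorb in a drop test."""
    return mass_kg * g * drop_height_m
```

    For example, a 100 kg weight dropped from 2 m delivers roughly 2 kJ, which would be compared against the absorption capacity measured for each candidate material.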

  12. Attention-deficit/hyperactivity disorder and phonological working memory: Methodological variability affects clinical and experimental performance metrics.

    Science.gov (United States)

    Tarle, Stephanie J; Alderson, R Matt; Patros, Connor H G; Lea, Sarah E; Hudec, Kristen L; Arrington, Elaine F

    2017-05-01

    Despite promising findings in extant research that suggest impaired working memory (WM) serves as a central neurocognitive deficit or candidate endophenotype of attention-deficit/hyperactivity disorder (ADHD), findings from translational research have been relatively underwhelming. This study aimed to explicate previous equivocal findings by systematically examining the effect of methodological variability on WM performance estimates across experimental and clinical WM measures. Age-matched boys (ages 8-12 years) with (n = 20) and without (n = 20) ADHD completed 1 experimental (phonological) and 2 clinical (digit span, letter-number sequencing) WM measures. The use of partial scoring procedures, administration of greater trial numbers, and high central executive demands yielded moderate-to-large between-groups effect sizes. Moreover, the combination of these best-case procedures, compared to worst-case procedures (i.e., absolute scoring, administration of few trials, use of discontinue rules, and low central executive demands), resulted in a 12.5% increase in correct group classification. Collectively, these findings explain inconsistent ADHD-related WM deficits in previous reports, and highlight the need for revised clinical measures that utilize best-case procedures.
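    The contrast between partial and absolute scoring procedures discussed above can be made concrete with a small sketch. This is an illustrative implementation of serial-position scoring, not the study's actual scoring code; the function name and data layout are assumptions:

```python
def score_trials(trials, method="partial"):
    """Score serial-recall working memory trials.

    trials: list of (presented, recalled) sequences.
    'partial': credit each item recalled in its correct serial position.
    'absolute': all-or-none credit per trial.
    Returns the proportion correct in [0, 1]."""
    total, max_total = 0.0, 0.0
    for presented, recalled in trials:
        # Count items recalled in the correct serial position
        hits = sum(1 for i, item in enumerate(presented)
                   if i < len(recalled) and recalled[i] == item)
        if method == "partial":
            total += hits
            max_total += len(presented)
        elif method == "absolute":
            perfect = hits == len(presented) and len(recalled) == len(presented)
            total += 1.0 if perfect else 0.0
            max_total += 1.0
        else:
            raise ValueError(f"unknown scoring method: {method}")
    return total / max_total
```

    A child who recalls most of each sequence but rarely a whole sequence scores far higher under partial than absolute scoring, which is one way methodological variability shifts group effect sizes.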

  13. State-of-the art comparability of corrected emission spectra. 2. Field laboratory assessment of calibration performance using spectral fluorescence standards.

    Science.gov (United States)

    Resch-Genger, Ute; Bremser, Wolfram; Pfeifer, Dietmar; Spieles, Monika; Hoffmann, Angelika; DeRose, Paul C; Zwinkels, Joanne C; Gauthier, François; Ebert, Bernd; Taubert, R Dieter; Voigt, Jan; Hollandt, Jörg; Macdonald, Rainer

    2012-05-01

    In the second part of this two-part series on the state-of-the-art comparability of corrected emission spectra, we have extended this assessment to the broader community of fluorescence spectroscopists by involving 12 field laboratories that were randomly selected on the basis of their fluorescence measuring equipment. These laboratories performed a reference material (RM)-based fluorometer calibration with commercially available spectral fluorescence standards following a standard operating procedure that involved routine measurement conditions and the data evaluation software LINKCORR developed and provided by the Federal Institute for Materials Research and Testing (BAM). This instrument-specific emission correction curve was subsequently used for the determination of the corrected emission spectra of three test dyes, X, QS, and Y, revealing an average accuracy of 6.8% for the corrected emission spectra. This compares well with the relative standard uncertainties of 4.2% for physical standard-based spectral corrections demonstrated in the first part of this study (previous paper in this issue) involving an international group of four expert laboratories. The excellent comparability of the measurements of the field laboratories also demonstrates the effectiveness of RM-based correction procedures.
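    An RM-based emission correction of the kind performed by the field laboratories amounts to interpolating the instrument-specific correction curve onto the measurement wavelengths and multiplying. A minimal sketch (the function name and data layout are assumptions; the actual LINKCORR software is considerably more involved):

```python
import numpy as np

def correct_emission(wavelengths, measured, corr_wl, corr_factors):
    """Apply an instrument-specific spectral correction curve to a
    measured (blank-corrected) emission spectrum: interpolate the
    correction factors onto the measurement wavelengths and multiply."""
    factors = np.interp(wavelengths, corr_wl, corr_factors)
    return np.asarray(measured, dtype=float) * factors
```

    The instrument-specific correction curve itself is obtained beforehand by dividing the certified spectra of the fluorescence standards by the spectra measured on the instrument under test.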

  14. Modification, calibration, and performance of the Ultra-High Sensitivity Aerosol Spectrometer for particle size distribution and volatility measurements during the Atmospheric Tomography Mission (ATom) airborne campaign

    Science.gov (United States)

    Kupc, Agnieszka; Williamson, Christina; Wagner, Nicholas L.; Richardson, Mathews; Brock, Charles A.

    2018-01-01

    Atmospheric aerosol is a key component of the chemistry and climate of the Earth's atmosphere. Accurate measurement of the concentration of atmospheric particles as a function of their size is fundamental to investigations of particle microphysics, optical characteristics, and chemical processes. We describe the modification, calibration, and performance of two commercially available Ultra-High Sensitivity Aerosol Spectrometers (UHSASs) as used on the NASA DC-8 aircraft during the Atmospheric Tomography Mission (ATom). To avoid sample flow issues related to pressure variations during aircraft altitude changes, we installed a laminar flow meter on each instrument to measure sample flow directly at the inlet, as well as flow controllers to maintain constant volumetric sheath flows. In addition, we added a compact thermodenuder operating at 300 °C to the inlet line of one of the instruments. With these modifications, the instruments are capable of making accurate (ranging from 7 % for Dp 0.13 µm), precise ( 1000 to 225 hPa, while simultaneously providing information on particle volatility. We assessed the effect of uncertainty in the refractive index (n) of ambient particles that are sized by the UHSAS assuming the refractive index of ammonium sulfate (n = 1.52). For calibration particles with n between 1.44 and 1.58, the UHSAS diameter varies by +4/-10 % relative to ammonium sulfate. This diameter uncertainty associated with the range of refractive indices (i.e., particle composition) translates to aerosol surface area and volume uncertainties of +8.4/-17.8 and +12.4/-27.5 %, respectively. In addition to sizing uncertainty, low counting statistics can lead to uncertainties of 1000 cm-3. Examples of thermodenuded and non-thermodenuded aerosol number and volume size distributions as well as propagated uncertainties are shown for several cases encountered during the ATom project. Uncertainties in particle number concentration were limited by counting statistics
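    The propagation of the diameter uncertainty to surface area and volume follows from area scaling as d² and volume as d³ for spherical particles. The sketch below shows only that power-law step; it yields values close to, but not identical with, the paper's quoted +8.4/-17.8 % and +12.4/-27.5 %, which come from the full instrument response rather than this idealization:

```python
def size_to_area_volume_uncertainty(d_rel_plus, d_rel_minus):
    """Propagate an asymmetric relative diameter uncertainty to surface
    area (proportional to d**2) and volume (proportional to d**3) for
    spherical particles. Inputs and outputs are fractional, e.g. 0.04
    for +4 %. Returns ((area_plus, area_minus), (vol_plus, vol_minus))."""
    area = ((1.0 + d_rel_plus) ** 2 - 1.0, 1.0 - (1.0 - d_rel_minus) ** 2)
    volume = ((1.0 + d_rel_plus) ** 3 - 1.0, 1.0 - (1.0 - d_rel_minus) ** 3)
    return area, volume
```

    With the paper's +4/-10 % diameter range this gives roughly +8/-19 % in area and +12/-27 % in volume, consistent in magnitude with the published figures.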

  15. A Novel Performance Framework and Methodology to Analyze the Impact of 4D Trajectory Based Operations in the Future Air Traffic Management System

    OpenAIRE

    Ruiz, Sergio; Lopez Leones, Javier; Ranieri, Andrea

    2018-01-01

    The introduction of new Air Traffic Management (ATM) concepts such as Trajectory Based Operations (TBO) may produce a significant impact in all performance areas, that is, safety, capacity, flight efficiency, and others. The performance framework in use today has been tailored to the operational needs of the current ATM system and must evolve to fulfill the new needs and challenges brought by the TBO concept. This paper presents a novel performance assessment framework and methodology adapted...

  16. Use of Balance Calibration Certificate to Calculate the Errors of Indication and Measurement Uncertainty in Mass Determinations Performed in Medical Laboratories

    Directory of Open Access Journals (Sweden)

    Adriana VÂLCU

    2011-09-01

    Based on the reference document, the article shows how to calculate the errors of indication and the associated measurement uncertainties using the general information provided by the calibration certificate of a balance (non-automatic weighing instrument, NAWI) used in the medical field. The paper may also be considered a useful guideline for: operators working in laboratories accredited in medical (or other) fields where weighing operations are part of the testing activities; test houses, laboratories, or manufacturers using calibrated non-automatic weighing instruments for measurements relevant to the quality of production subject to QM requirements (e.g., the ISO 9000 series, ISO 10012, ISO/IEC 17025); bodies accrediting laboratories; and laboratories accredited for the calibration of NAWI. The article covers only electronic weighing instruments with a maximum capacity of up to 30 kg. An example calculation starting from the results provided by a calibration certificate is presented.
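    The core calculation the article describes, an error of indication plus a combined standard uncertainty built from the certificate uncertainty and digital-resolution (rounding) terms, can be sketched as follows. This is a simplified illustration in the spirit of guidelines such as EURAMET cg-18; the term structure is an assumption, not the article's exact formulas:

```python
import math

def error_of_indication(indication, reference_mass):
    """Error of indication E = I - m_ref (same mass unit for both)."""
    return indication - reference_mass

def combined_standard_uncertainty(u_cert, resolution_d, u_repeat=0.0):
    """Combine, in quadrature: the standard uncertainty taken from the
    calibration certificate, two rounding contributions of a digital
    resolution d (one at zero, one at load, each d / (2 * sqrt(3))),
    and an optional repeatability term."""
    u_round = resolution_d / (2.0 * math.sqrt(3.0))
    return math.sqrt(u_cert**2 + 2.0 * u_round**2 + u_repeat**2)
```

    In routine use the laboratory would expand the combined standard uncertainty with a coverage factor (typically k = 2) before comparing it against the tolerance of the weighing task.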

  17. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ghezzehej, T.

    2004-01-01

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.
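    Inverse modeling of the kind used to produce the calibrated property sets adjusts parameter values until the misfit between simulated and observed data is minimized. A deliberately tiny brute-force sketch of that idea (the real calibration uses full UZ process models and gradient-based search, not a one-parameter grid):

```python
import numpy as np

def calibrate(forward_model, param_grid, observations):
    """Brute-force inverse modeling: return the parameter value whose
    forward-model prediction minimizes the sum of squared residuals
    against the observations, together with that minimum cost."""
    best_p, best_cost = None, np.inf
    for p in param_grid:
        residuals = forward_model(p) - observations
        cost = float(np.sum(residuals ** 2))
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p, best_cost
```

    In practice the "forward model" is a numerical simulation of UZ flow on the model grids, and the "observations" are field measurements such as matrix saturations and water potentials.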

  18. A calibration rig for multi-component internal strain gauge balance using the new design-of-experiment (DOE) approach

    Science.gov (United States)

    Nouri, N. M.; Mostafapour, K.; Kamran, M.

    2018-02-01

    In a closed water-tunnel circuit, multi-component strain gauge force and moment sensors (also known as balances) are generally used to measure the hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading, and their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately or simultaneously. The system was designed around the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by the Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. This rig provides the means by which various formal experimental design techniques can be implemented. The simplicity of the rig saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
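    Response surface methodology, as used in the balance calibration above, fits a second-order polynomial model to the observed responses by linear least squares. A minimal two-factor sketch (variable names and the single-response layout are illustrative assumptions; a six-component balance would fit one such surface per output channel):

```python
import numpy as np

def fit_response_surface(x1, x2, y):
    """Fit a full second-order response-surface model
    y = b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2
    by linear least squares; returns the coefficient vector
    [b0, b1, b2, b11, b22, b12]."""
    x1, x2, y = (np.asarray(v, dtype=float) for v in (x1, x2, y))
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs
```

    A three-level factorial over the two load factors (nine runs) is the smallest conventional design that supports this full quadratic model.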

  19. Using MCNP for in-core instrument calibration in CANDU

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, D.C. [Point Lepreau Generating Station, NB Power, Lepreau, New Brunswick (Canada); Anghel, V.N.P.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)

    2002-07-01

    The calibration of in-core instruments is important for safe and economical CANDU operation. However, in-core detectors are not normally suited to bench calibration procedures. This paper describes the use and validation of detailed neutron transport calculations for the purpose of calibrating the response of in-core neutron flux detectors. The Monte Carlo transport code MCNP was used to model the thermal neutron flux distribution in the region around self-powered in-core flux detectors (ICFDs), and in the vicinity of the calandria edge. The ICFD model was used to evaluate the reduction in signal of a given detector (the 'detector shading factor') due to neutron absorption in surrounding materials, detectors, and lead cables. The calandria edge model was used to infer the accuracy of the calandria edge position from flux scans performed by AECL's traveling flux detector (TFD) system. The MCNP results were checked against experimental results on ICFDs, and also against shading factors computed by other means. The use of improved in-core detector calibration factors obtained by this new methodology will improve the accuracy of spatial flux control performance in CANDU-6 reactors. The accurate determination of TFD-based calandria edge position is useful in the quantitative measurement of changes in in-core component dimensions and position due to aging, such as pressure tube sag. (author)
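    Applying a detector shading factor computed with a transport code is a one-line correction: divide the measured signal by the fraction of the response that survives absorption in the surrounding materials. A sketch with assumed naming (not the plant's actual calibration code):

```python
def correct_detector_signal(measured_signal, shading_factor):
    """Correct a self-powered in-core detector reading for neutron
    absorption in surrounding materials. shading_factor in (0, 1] is
    the ratio shaded_response / unshaded_response, e.g. as computed
    with a Monte Carlo transport model of the detector assembly."""
    if not 0.0 < shading_factor <= 1.0:
        raise ValueError("shading factor must lie in (0, 1]")
    return measured_signal / shading_factor
```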

  20. The Observability Calibration Test Development Framework

    Energy Technology Data Exchange (ETDEWEB)

    Endicott-Popovsky, Barbara E.; Frincke, Deborah A.

    2007-06-20

    Formal standards, precedents, and best practices for verifying and validating the behavior of low-layer network devices used for digital evidence collection on networks are badly needed: initially so that device owners and data users can employ them directly to document the behavior of these devices for courtroom presentation, and ultimately so that calibration testing and calibration regimes are established and standardized as common practice for both vendors and their customers [1]. The ultimate intent is to achieve a state of confidence in device calibration that allows the network data gathered by these devices to be relied upon by all parties in a court of law. This paper describes a methodology for calibrating forensic-ready low-layer network devices based on the Flaw Hypothesis Methodology [2,3].