WorldWideScience

Sample records for volume calibration methodology

  1. Generic methodology for calibrating profiling nacelle lidars

    DEFF Research Database (Denmark)

    Borraccino, Antoine; Courtney, Michael; Wagner, Rozenn

is calibrated rather than a reconstructed parameter. This contribution presents a generic methodology to calibrate profiling nacelle-mounted lidars. The application of profiling lidars to wind turbine power performance and the corresponding need for calibration procedures is introduced in relation to metrological… standards. Further, two different calibration procedure concepts are described along with their strengths and weaknesses. The main steps of the generic methodology are then explained and illustrated by calibration results from two types of profiling lidars. Finally, measurement uncertainty assessment…

  2. Teaching Camera Calibration by a Constructivist Methodology

    Science.gov (United States)

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  3. Calibration methodology and performance characterization of a polarimetric hyperspectral imager

    Science.gov (United States)

    Holder, Joel G.; Martin, Jacob A.; Pitz, Jeremey; Pezzaniti, Joseph L.; Gross, Kevin C.

    2014-05-01

Polarimetric hyperspectral imaging (P-HSI) has the potential to improve target detection, material identification, and background characterization over conventional hyperspectral imaging and polarimetric imaging. To fully exploit the spectro-polarimetric signatures captured by such an instrument, a careful calibration process is required to remove the spectrally and polarimetrically dependent system response (gain). Calibration of instruments operating in the long-wave infrared (LWIR, 8 μm to 12 μm) is further complicated by the polarized spectral radiation generated within the instrument (offset). This paper presents a calibration methodology developed for a LWIR Telops Hyper-Cam modified for polarimetry by replacing the entrance window with a rotatable holographic wire-grid polarizer (4000 lines/mm, ZnSe substrate, 350:1 extinction ratio). A standard Fourier-transform spectrometer (FTS) spectro-radiometric calibration is modified to include a Mueller-matrix approach to account for polarized transmission through, and polarized self-emission from, each optical interface. It is demonstrated that under the ideal polarizer assumption, two distinct blackbody measurements at polarizer angles of 0°, 45°, 90°, and 135° are sufficient to calibrate the system for apparent degree-of-linear-polarization (DoLP) measurements. Noise-equivalent s1, s2, and DoLP are quantified using a wide-area blackbody. A polarization-state generator is used to determine the Mueller deviation matrix. Finally, a realistic scene involving buildings, cars, sky radiance, and natural vegetation is presented.
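
Under the ideal-polarizer assumption stated in the abstract, the apparent degree of linear polarization follows directly from intensity measurements at the four polarizer angles. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def stokes_dolp(i0, i45, i90, i135):
    """Apparent Stokes parameters and degree of linear polarization (DoLP)
    from intensities measured behind an ideal linear polarizer at
    0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # 0-deg vs 90-deg excess
    s2 = i45 - i135                     # +45-deg vs -45-deg excess
    return s1, s2, np.hypot(s1, s2) / s0
```

For fully horizontally polarized light (i0 = 1, i90 = 0), this returns a DoLP of 1; for unpolarized light (equal intensities), it returns 0.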

  4. CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.

    2003-02-27

The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. Project costs and schedule therefore need to be updated frequently by tracking the actual quantities of excavated soil and of contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to give project managers and engineers better control of project costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated from discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of the remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of the actual in situ soil volumes excavated. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume…
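
The tracking mechanism described above amounts to periodically replacing the initial geostatistical estimate with updated estimates as excavation and survey data arrive. A toy sketch of one such update step (the function, its names, and the unit cost are placeholders for illustration, not FUSRAP values):

```python
def forecast_remaining(initial_estimate_m3, excavated_m3,
                       revised_estimate_m3=None, unit_cost_per_m3=100.0):
    """One periodic tracking update: prefer a revised geostatistical estimate
    of the remaining in situ contaminated soil when one is available,
    otherwise fall back to the initial estimate minus the surveyed excavated
    volume. Returns (remaining volume, forecast disposal cost)."""
    if revised_estimate_m3 is not None:
        remaining = revised_estimate_m3
    else:
        remaining = max(initial_estimate_m3 - excavated_m3, 0.0)
    return remaining, remaining * unit_cost_per_m3
```

Running this after each survey cycle keeps the cost and schedule forecast tied to the most recent field data rather than the original estimate.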

  5. Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras

    Science.gov (United States)

    Cornic, Philippe; Illoul, Cédric; Cheminet, Adam; Le Besnerais, Guy; Champagnat, Frédéric; Le Sant, Yves; Leclaire, Benjamin

    2016-09-01

We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data.

  6. The KamLAND Full-Volume Calibration System

    Energy Technology Data Exchange (ETDEWEB)

    KamLAND Collaboration; Berger, B. E.; Busenitz, J.; Classen, T.; Decowski, M. P.; Dwyer, D. A.; Elor, G.; Frank, A.; Freedman, S. J.; Fujikawa, B. K.; Galloway, M.; Gray, F.; Heeger, K. M.; Hsu, L.; Ichimura, K.; Kadel, R.; Keefer, G.; Lendvai, C.; McKee, D.; O' Donnell, T.; Piepke, A.; Steiner, H. M.; Syversrud, D.; Wallig, J.; Winslow, L. A.; Ebihara, T.; Enomoto, S.; Furuno, K.; Gando, Y.; Ikeda, H.; Inoue, K.; Kibe, Y.; Kishimoto, Y.; Koga, M.; Minekawa, Y.; Mitsui, T.; Nakajima, K.; Nakajima, K.; Nakamura, K.; Owada, K.; Shimizu, I.; Shimizu, Y.; Shirai, J.; Suekane, F.; Suzuki, A.; Tamae, K.; Yoshida, S.; Kozlov, A.; Murayama, H.; Grant, C.; Leonard, D. S.; Luk, K.-B.; Jillings, C.; Mauger, C.; McKeown, R. D.; Zhang, C.; Lane, C. E.; Maricic, J.; Miletic, T.; Batygov, M.; Learned, J. G.; Matsuno, S.; Pakvasa, S.; Foster, J.; Horton-Smith, G. A.; Tang, A.; Dazeley, S.; Downum, K. E.; Gratta, G.; Tolich, K.; Bugg, W.; Efremenko, Y.; Kamyshkov, Y.; Perevozchikov, O.; Karwowski, H. J.; Markoff, D. M.; Tornow, W.; Piquemal, F.; Ricol, J.-S.

    2009-03-05

    We have successfully built and operated a source deployment system for the KamLAND detector. This system was used to position radioactive sources throughout the delicate 1-kton liquid scintillator volume, while meeting stringent material cleanliness, material compatibility, and safety requirements. The calibration data obtained with this device were used to fully characterize detector position and energy reconstruction biases. As a result, the uncertainty in the size of the detector fiducial volume was reduced by a factor of two. Prior to calibration with this system, the fiducial volume was the largest source of systematic uncertainty in measuring the number of antineutrinos detected by KamLAND. This paper describes the design, operation and performance of this unique calibration system.

  7. Syringe calibration factors and volume correction factors for the NPL secondary standard radionuclide calibrator

    CERN Document Server

    Tyler, D K

    2002-01-01

The assay of the activity of a radiopharmaceutical administered to a patient is normally achieved via the use of a radionuclide calibrator. Because of the different geometries and elemental compositions of plastic syringes and glass vials, the calibration factors for syringes may well be significantly different from those for the glass containers. The magnitude of these differences depends on the energies of the emitted photons. For some radionuclides, variations of up to 70% have been observed; it is therefore important to recalibrate for syringes or to use syringe calibration factors. Calibration factors and volume correction factors have been derived for the NPL secondary standard radionuclide calibrator, for a variety of commonly used syringes and needles, for the most commonly used medical radionuclides.
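
In practice, applying such factors is a rescaling of the calibrator reading. A hypothetical sketch (the simple multiplicative model and all names are assumptions for illustration, not NPL's published procedure):

```python
def assayed_activity(reading_mbq, syringe_factor, volume_correction=1.0):
    """Correct a radionuclide calibrator reading (MBq) with a
    container-specific (syringe) calibration factor and a fill-volume
    correction factor. The multiplicative form is an illustrative
    assumption."""
    return reading_mbq * syringe_factor * volume_correction
```

For example, a 100 MBq reading with a syringe factor of 1.05 and a fill-volume correction of 0.98 yields an assayed activity of about 102.9 MBq.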

  8. Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development

    Science.gov (United States)

    1986-10-01

J.E., "F-18 Composites Development Tests," N00019-79-C-0044 (January 1981). 3. Stenberg, K.V., et al., "YAV-8B Composite Wing Development," Volumes I… Louis, MO 63166 (Attn: K. Stenberg, R. Garrett, R. Riley, J. Doerr). … 4. MCDONNELL-DOUGLAS CORP., Long Beach, CA 90846 (Attn: J. Palmer

  9. Development of a Traceable Calibration Methodology for Solid ⁶⁸Ge/⁶⁸Ga Sources Used as a Calibration Surrogate for ¹⁸F in Radionuclide Activity Calibrators

    National Research Council Canada - National Science Library

    Brian E Zimmerman; Jeffrey T Cessna

    2010-01-01

We have developed a methodology for calibrating the ⁶⁸Ge radioactivity content in a commercially available calibration source for activity calibrators in a way that is traceable to the national standard...

  10. Gafchromic film dosimetry: calibration methodology and error analysis

    CERN Document Server

    Crijns, Wouter; Heuvel, Frank Van den

    2011-01-01

Purpose: To relate the physical transmittance parameters of the water-equivalent Gafchromic EBT2 film to the delivered dose in a transparent absolute calibration protocol. The protocol should be easy to understand, easy to perform, and able to predict the residual dose error. Conclusions: The Gafchromic EBT2 films are properly calibrated with an accessible, robust calibration protocol. The protocol largely deals with the uniformity problems of the film. The proposed method relates the dose to the red-channel transmittance using only T0, T_inf, and a dose scaling factor. Based on the local and global uniformity, the red-channel dose errors could be predicted to be smaller than 5%.
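
A saturating transmittance model is one common form consistent with the three quantities mentioned (T0, T_inf, and a dose scaling factor); the exponential shape below is an assumption for illustration, not necessarily the authors' exact protocol:

```python
import math

def dose_from_transmittance(t, t0, t_inf, d_scale):
    """Invert an assumed saturating model
        T(D) = t_inf + (t0 - t_inf) * exp(-D / d_scale)
    to recover the dose from a red-channel transmittance reading.
    t0 is the unexposed-film transmittance, t_inf the saturation value."""
    return d_scale * math.log((t0 - t_inf) / (t - t_inf))
```

With only these three fitted parameters, each film reading maps directly to a dose, which is the appeal of such a reduced calibration.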

  11. Volume calibration for nuclear materials control: ANSI N15.19-1989 and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Liebetrau, A.M.

    1994-03-01

Since the last IAEA International Safeguards Symposium, a revised standard for volume calibration methodology was issued in the United States. Because the new standard reflects the advent of high-precision volume measurement technology, it is significantly different from the earlier standard which it supersedes. The new standard outlines a unified data standardization model that applies to process tanks equipped with differential pressure measurement systems for determining liquid content. At the heart of the model is an algorithm to determine liquid height from pressure measurements that accounts for the major factors affecting the accuracy of those measurements. The standardization model also contains algorithms that adjust data from volumetric and gravimetric provers to a standard set of reference conditions. A key component of the standardization model is an algorithm to take account of temperature-induced dimensional changes in the tank. Improved methods for the statistical treatment of calibration data have also appeared since the last Safeguards Symposium. A statistical method of alignment has been introduced that employs a least-squares criterion to determine "optimal" alignment factors. More importantly, a statistical model has been proposed that yields plausible estimates of the variance of height and volume measurements when significant run-to-run differences are present in the calibration data. The new standardization model and statistical methods described here are being implemented in a portable, user-friendly software program for use by IAEA inspectors and statisticians. Perhaps these methods will eventually find their way into appropriate international standards.
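
The two central corrections described, height from differential pressure and temperature-induced dimensional change, can be sketched in a few lines. This is a simplification under stated assumptions (constant liquid density, a uniform linear expansion coefficient); the standard's algorithms account for many more factors:

```python
G = 9.80665  # standard gravity, m/s^2

def liquid_height(delta_p_pa, density_kg_m3):
    """Hydrostatic liquid height from a differential pressure reading:
    h = dP / (rho * g). Assumes constant density along the column."""
    return delta_p_pa / (density_kg_m3 * G)

def volume_at_temperature(v_ref, alpha_linear, t_c, t_ref_c=20.0):
    """First-order correction of a tank volume for thermal expansion
    (volumetric coefficient approximately 3x the linear coefficient)."""
    return v_ref * (1.0 + 3.0 * alpha_linear * (t_c - t_ref_c))
```

For water (1000 kg/m³), a differential pressure of about 9.8 kPa corresponds to a 1 m column; a steel tank (linear coefficient near 16×10⁻⁶ /°C) of 1000 L grows by roughly 0.5 L over a 10 °C rise.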

  12. A methodology to calibrate pedestrian walker models using multiple objectives

    NARCIS (Netherlands)

    Campanella, M.C.; Daamen, W.; Hoogendoorn, S.P.

    2012-01-01

The application of walker models to simulate real situations requires accuracy in several traffic situations. One strategy to obtain a generic model is to calibrate the parameters in several situations using multiple-objective functions in the optimization process. In this paper, we propose a general…

  13. Sandia software guidelines: Volume 5, Tools, techniques, and methodologies

    Energy Technology Data Exchange (ETDEWEB)

    1989-07-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. This volume describes software tools and methodologies available to Sandia personnel for the development of software, and outlines techniques that have proven useful within the Laboratories and elsewhere. References and evaluations by Sandia personnel are included. 6 figs.

  14. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    Energy Technology Data Exchange (ETDEWEB)

    Tarifeño-Saldivia, Ariel, E-mail: atarifeno@cchen.cl, E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo [Comisión Chilena de Energía Nuclear, Casilla 188-D, Santiago (Chile); Center for Research and Applications in Plasma Physics and Pulsed Power, P4, Santiago (Chile); Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Republica 220, Santiago (Chile); Mayer, Roberto E. [Instituto Balseiro and Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Universidad Nacional de Cuyo, San Carlos de Bariloche R8402AGP (Argentina)

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
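
The charge-to-counts conversion at the heart of this methodology reduces to dividing the accumulated charge by the mean single-event charge obtained in pulse mode, then scaling by the detection efficiency. A schematic sketch (names are illustrative; the paper develops a full statistical model rather than this simple division):

```python
def neutron_yield(total_charge_c, mean_charge_per_event_c, detection_efficiency):
    """Estimate the total neutron yield of a burst: accumulated charge (C)
    divided by the mean single-event charge (C, from a pulse-mode
    calibration) gives the number of detected events; dividing by the
    detection efficiency scales this to the source yield."""
    n_detected = total_charge_c / mean_charge_per_event_c
    return n_detected / detection_efficiency
```

For instance, 1 nC of accumulated charge at 1 pC per detected neutron and a 10⁻⁴ detection efficiency corresponds to a yield on the order of 10⁷ neutrons.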

  15. Calibration methodology for proportional counters applied to yield measurements of a neutron burst.

    Science.gov (United States)

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.

  16. Sensor Calibration Inter-Comparison Methodologies and Applications to AVHRR, MODIS, and VIIRS Observations

    Science.gov (United States)

    Xiong, Xiaoxiong; Wu, Aisheng; Cao, Changyong; Doelling, David

    2012-01-01

As more and more satellite observations become available to the science and user community, their on-orbit calibration accuracy and consistency over time continue to be an important and challenging issue, especially in the reflective solar spectral regions. In recent years, many sensor calibration inter-comparison methodologies have been developed by different groups and applied to a range of satellite observations, aiming at improving satellite instrument calibration accuracy and data quality. This paper provides an overview of different methodologies developed for inter-comparisons of AVHRR and MODIS observations, and extends their applications to the Visible-Infrared Imaging Radiometer Suite (VIIRS) instrument. The first VIIRS was launched on board the NPP spacecraft on October 28, 2011. The VIIRS, designed with MODIS heritage, collects data in 22 spectral bands from the visible (VIS) to the long-wave infrared (LWIR). Like both Terra and Aqua MODIS, the VIIRS on-orbit calibration is performed using a set of on-board calibrators (OBC). Methodologies discussed in this paper include the use of well-characterized ground reference targets, near-simultaneous nadir overpasses (SNO), lunar observations, and deep convective clouds (DCC). Results from long-term AVHRR and MODIS observations and an initial assessment of VIIRS on-orbit calibration are presented. Current uncertainties of the different methodologies and potential improvements are also discussed.

  17. Computer technology -- 1996: Applications and methodology. PVP-Volume 326

    Energy Technology Data Exchange (ETDEWEB)

    Hulbert, G.M. [ed.] [Univ. of Michigan, Ann Arbor, MI (United States); Hsu, K.H. [ed.] [Babcock and Wilcox, Barberton, OH (United States); Lee, T.W. [ed.] [FMC Corp., Santa Clara, CA (United States); Nicholas, T. [ed.] [USAF Wright Laboratory, Wright-Patterson AFB, OH (United States)

    1996-12-01

    The primary objective of the Computer Technology Committee of the ASME Pressure Vessels and Piping Division is to promote interest and technical exchange in the field of computer technology, related to the design and analysis of pressure vessels and piping. The topics included in this volume are: analysis of bolted joints; nonlinear analysis, applications and methodology; finite element analysis and applications; and behavior of materials. Separate abstracts were prepared for 23 of the papers in this volume.

  18. COMPARISON METHODOLOGIES FOR CALIBRATION OF Hp(10) PERSONAL DOSEMETERS USING ISO 4037 AND ISO 29661 STANDARDS.

    Science.gov (United States)

    Cardoso, J; Santos, L; Carvalhal, G; Oliveira, C

    2016-09-01

The calibration of electronic personal dosemeters at the Portuguese ionizing radiation metrology laboratory follows the calibration methodology of the standard IEC 61526. This standard describes the irradiation geometry for testing and indicates that the standards ISO 4037-1, -2, -3 and -4 should be used. ISO 4037 establishes that the reference point of test is a known or established point in the radiation monitor, and that the calibration phantom should be placed at its back in order to simulate the trunk of the body. Recently, ISO published another standard, ISO 29661, which moves the reference point from the radiation monitor to the front face of the calibration phantom. The aim of this work is to present the results of a comparison of these two methodologies on personal dosemeters from five different manufacturers. The work shows differences in the Hp(10) response of up to 4% resulting from the two different reference point concepts.

  19. Calibration Experiments for a Computer Vision Oyster Volume Estimation System

    Science.gov (United States)

    Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.

    2009-01-01

    Calibration is a technique that is commonly used in science and engineering research that requires calibrating measurement tools for obtaining more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
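
As the abstract notes, such calibration is often an application of linear regression: fit a line mapping the instrument's estimates to reference values, then apply it to new readings. A small sketch with made-up paired data (the numbers are illustrative, not from the oyster experiment):

```python
import numpy as np

# Hypothetical paired data: vision-system volume estimates vs. reference
# volumes (cm^3), both made up for illustration.
vision = np.array([10.2, 15.1, 19.8, 25.3, 30.1])
reference = np.array([10.0, 15.0, 20.0, 25.0, 30.0])

slope, intercept = np.polyfit(vision, reference, 1)  # least-squares line
calibrated = slope * vision + intercept              # corrected estimates
```

The fitted slope and intercept then correct every subsequent vision-system reading toward the reference scale.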

  1. Generic Methodology for Field Calibration of Nacelle-Based Wind Lidars

    OpenAIRE

    Antoine Borraccino; Michael Courtney; Rozenn Wagner

    2016-01-01

    Nacelle-based Doppler wind lidars have shown promising capabilities to assess power performance, detect yaw misalignment or perform feed-forward control. The power curve application requires uncertainty assessment. Traceable measurements and uncertainties of nacelle-based wind lidars can be obtained through a methodology applicable to any type of existing and upcoming nacelle lidar technology. The generic methodology consists in calibrating all the inputs of the wind field reconstruction algo...

  2. Simple and accurate empirical absolute volume calibration of a multi-sensor fringe projection system

    Science.gov (United States)

Gdeisat, Munther; Qudeisat, Mohammad; AlSa'd, Mohammed; Burton, David; Lilley, Francis; Ammous, Marwan M. M.

    2016-05-01

This paper suggests a novel absolute empirical calibration method for a multi-sensor fringe projection system. The optical setup of the projector-camera sensor can be arbitrary. The term absolute calibration here means that the centre of the three-dimensional coordinates in the resultant calibrated volume coincides with a preset centre of the three-dimensional real-world coordinate system. The use of a zero-phase fringe marking spot is proposed to increase depth calibration accuracy, where the spot centre is determined with sub-pixel accuracy. Also, a new method is proposed for transversal calibration. The depth and transversal calibration methods have been tested using both single-sensor and three-sensor fringe projection systems. The standard deviation of the error produced by this system is 0.25 mm. The calibrated volume produced by this method is 400 mm × 400 mm × 140 mm.

  3. Generic Methodology for Field Calibration of Nacelle-Based Wind Lidars

    Directory of Open Access Journals (Sweden)

    Antoine Borraccino

    2016-11-01

Nacelle-based Doppler wind lidars have shown promising capabilities to assess power performance, detect yaw misalignment or perform feed-forward control. The power curve application requires uncertainty assessment. Traceable measurements and uncertainties of nacelle-based wind lidars can be obtained through a methodology applicable to any type of existing and upcoming nacelle lidar technology. The generic methodology consists of calibrating all the inputs of the wind field reconstruction algorithms of a lidar. These inputs are the line-of-sight velocity and the beam position, provided by the geometry of the scanning trajectory and the lidar inclination. The line-of-sight velocity is calibrated in atmospheric conditions by comparing it to a reference quantity based on classic instrumentation such as cup anemometers and wind vanes. The generic methodology was tested on two commercially developed lidars, one continuous-wave and one pulsed system, and provides consistent calibration results: linear regressions show a difference of ∼0.5% between the lidar-measured and reference line-of-sight velocities. A comprehensive uncertainty procedure propagates the reference uncertainty to the lidar measurements. At a coverage factor of two, the estimated line-of-sight velocity uncertainty ranges from 3.2% at 3 m·s⁻¹ to 1.9% at 16 m·s⁻¹. Most of the line-of-sight velocity uncertainty originates from the reference: the cup anemometer uncertainty accounts for ∼90% of the total uncertainty. The propagation of uncertainties to lidar-reconstructed wind characteristics can use analytical methods in simple cases, which we demonstrate through the example of a two-beam system. The newly developed calibration methodology allows robust evaluation of a nacelle lidar's performance and uncertainties to be established. Calibrated nacelle lidars may consequently be used with confidence for various wind turbine applications.
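
The line-of-sight calibration compares lidar readings against a reference velocity built by projecting cup anemometer and wind vane measurements onto the beam direction. A simplified projection sketch (the single-azimuth, single-tilt geometry and all names are assumptions for illustration, not the paper's full reconstruction):

```python
import math

def reference_los_velocity(wind_speed, wind_dir_deg, beam_azimuth_deg,
                           tilt_deg=0.0):
    """Project a cup-anemometer wind speed onto a lidar beam direction to
    build the reference line-of-sight velocity used in the calibration
    regression. Assumes horizontal flow and a simplified beam geometry."""
    rel = math.radians(wind_dir_deg - beam_azimuth_deg)
    return wind_speed * math.cos(rel) * math.cos(math.radians(tilt_deg))
```

Pairs of (lidar-measured, reference) line-of-sight velocities from many 10-minute periods then feed the linear regressions reported in the abstract.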

  4. Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Oztunali, O.I.; Roles, G.W.

    1986-01-01

    Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology.

  5. Methodology for the digital calibration of analog circuits and systems with case studies

    CERN Document Server

    Pastre, Marc

    2006-01-01

    Methodology for the Digital Calibration of Analog Circuits and Systems shows how to relax the extreme design constraints in analog circuits, allowing the realization of high-precision systems even with low-performance components. A complete methodology is proposed, and three applications are detailed. To start with, an in-depth analysis of existing compensation techniques for analog circuit imperfections is carried out. The M/2+M sub-binary digital-to-analog converter is thoroughly studied, and the use of this very low-area circuit in conjunction with a successive approximations algorithm for digital compensation is described. A complete methodology based on this compensation circuit and algorithm is then proposed. The detection and correction of analog circuit imperfections is studied, and a simulation tool allowing the transparent simulation of analog circuits with automatic compensation blocks is introduced. The first application shows how the sub-binary M/2+M structure can be employed as a conventional di...

  6. Acceptance test of an activity meter to be used as reference in a calibration methodology establishment

    Energy Technology Data Exchange (ETDEWEB)

    Correa, Eduardo L.; Kuahara, Lilian T.; Potiens, Maria da Penha A., E-mail: educorrea1905@gmail.com [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2014-07-01

Nuclear medicine is a medical physics area in which radiopharmaceuticals are used in diagnostic procedures. These radioactive elements are administered to the patient and the emitted radiation is detected by equipment that performs the body scan, connected to computer software with which the image is constructed. In order to operate, a nuclear medicine service must have calibrated radiation detectors. However, there is no established activity meter calibration methodology in Brazil, which causes large measurement uncertainties. The goal of this study is to present the acceptance test results of an activity meter to be used as a reference in establishing a new calibration methodology. An activity meter Capintec, CRC-25R model, was checked using three control sources (137Cs, 57Co, 133Ba). The tests were based on the CNEN-NN 3.05 standard, the manufacturer's manual, the TRS-454 and the TECDOC-602, and include: physical inspection, chamber voltage, zero adjustment, background response, data check and repeatability. The linearity and geometry tests could not be performed because the laboratory where the activity meter is located is not authorized to receive unsealed radioactive sources. The equipment presented good behavior. All the results are within the ranges given by national and international standards, and the equipment is now being used in the laboratory and periodically passes the quality control tests. (author)

  7. Left ventricular volume measurement in mice by conductance catheter: evaluation and optimization of calibration

    DEFF Research Database (Denmark)

    Nielsen, Jan Møller; Kristiansen, Steen B; Ringgaard, Steffen;

    2007-01-01

… The dual-frequency method for estimation of parallel conductance failed to produce V(CC) that correlated with V(MRI). We conclude that selection of the calibration procedure for the CC has significant implications for the accuracy and precision of volume estimation and pressure-volume loop...

  8. Generic Methodology for Field Calibration of Nacelle-Based Wind Lidars

    DEFF Research Database (Denmark)

    Borraccino, Antoine; Courtney, Michael; Wagner, Rozenn

    2016-01-01

by the geometry of the scanning trajectory and the lidar inclination. The line-of-sight velocity is calibrated in atmospheric conditions by comparing it to a reference quantity based on classic instrumentation such as cup anemometers and wind vanes. The generic methodology was tested on two commercially developed… lidars, one continuous-wave and one pulsed system, and provides consistent calibration results: linear regressions show a difference of ∼0.5% between the lidar-measured and reference line-of-sight velocities. A comprehensive uncertainty procedure propagates the reference uncertainty to the lidar… measurements. At a coverage factor of two, the estimated line-of-sight velocity uncertainty ranges from 3.2% at 3 m·s-1 to 1.9% at 16 m·s-1. Most of the line-of-sight velocity uncertainty originates from the reference: the cup anemometer uncertainty accounts for 90% of the total uncertainty. The propagation…

  9. 40 CFR 86.519-90 - Constant volume sampler calibration.

    Science.gov (United States)

    2010-07-01

    ... by EPA for both PDP (Positive Displacement Pump) and CFV (Critical Flow Venturi) are outlined below... establish the flow rate of the constant volume sampler pump. All the parameters related to the pump are simultaneously measured with the parameters related to a flowmeter which is connected in series with the...

  10. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    Science.gov (United States)

    Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  11. Direct determination of reserpine in urine using excitation-emission fluorescence combined with three-way chemometric calibration methodologies

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

The concentration of reserpine in urine was directly and quantitatively measured using excitation-emission fluorescence (EEM) combined with three-way calibration methodologies. The two calibration methods are based on the alternating trilinear decomposition (ATLD) and the self-weighted alternating trilinear decomposition (SWATLD) algorithms, respectively. These chemometric methodologies have the second-order advantage, i.e., the ability to obtain accurate concentration estimates of the analyte(s) of interest even in the presence of uncalibrated interferences. Satisfactory results were obtained on spiked urine samples when the number of components was set to 3 (N=3) for both methods. The experiment is easily carried out without time-consuming or complicated pretreatment. This demonstrates that the three-way calibration methodologies based on ATLD and SWATLD are feasible for directly quantifying reserpine in urine.

  12. Methodology for calibration of ionization chambers for X-ray of low energy in absorbed dose to water

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, C.T.; Vivolo, V.; Potiens, M.P.A., E-mail: camila_fmedica@hotmail.com [Instituto de Pesquisas Energeticas e Nucleres (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

Low-energy X-ray beams (10 to 150 kV) are used in many places around the world to treat a wide variety of superficial disorders, among them malignancies. Since there is currently no calibration laboratory in Brazil providing quality control or calibration services for parallel-plate ionization chambers, the aim of this project was to establish a methodology for calibrating this kind of ionization chamber in low-energy X-ray beams in terms of absorbed dose to water, using phantoms at the LCI. (author)

  13. Innovative methodology for intercomparison of radionuclide calibrators using short half-life in situ prepared radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, P. A. [Centro de Investigação do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto, Portugal and Departamento de Física e Astronomia, Faculdade de Ciências da Universidade do Porto (Portugal); Santos, J. A. M., E-mail: joao.santos@ipoporto.min-saude.pt [Centro de Investigação do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto (Portugal); Serviço de Física Médica do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto (Portugal); Serviço de Medicina Nuclear do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto (Portugal); Instituto de Ciências Biomédicas Abel Salazar, Universidade do Porto, Porto (Portugal)

    2014-07-15

Purpose: An original radionuclide calibrator method for activity determination is presented. The method could be used in intercomparison surveys for short half-life radioactive sources used in Nuclear Medicine, such as {sup 99m}Tc or most positron emission tomography radiopharmaceuticals. Methods: By evaluating the resulting net optical density (netOD) of irradiated Gafchromic XRQA2 film using a standardized scanning method, the netOD measurement can be compared with a previously determined calibration curve, and the difference between the tested radionuclide calibrator and a reference radionuclide calibrator can be calculated. To estimate the total expected measurement uncertainties, a careful analysis of the methodology was performed for the case of {sup 99m}Tc: reproducibility determination, scanning conditions, and possible fadeout effects. Since every factor of the activity measurement procedure can influence the final result, the method also evaluates correct syringe positioning inside the radionuclide calibrator. Results: As an alternative to sending a calibrated source to the surveyed site, which requires a relatively long half-life of the nuclide, or sending a portable calibrated radionuclide calibrator, the proposed method uses a source prepared in situ. An indirect activity determination is achieved by irradiating a radiochromic film with {sup 99m}Tc under strictly controlled conditions and calculating the cumulated activity from the initial activity and total irradiation time. The irradiated Gafchromic film and the irradiator, without the source, can then be sent to a National Metrology Institute for evaluation of the results. Conclusions: The methodology described in this paper was shown to have good potential for accurate (3%) radionuclide calibrator intercomparison studies for {sup 99m}Tc between Nuclear Medicine centers without source transfer, and can easily be adapted to other short half-life radionuclides.
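The netOD comparison at the heart of this film-based method can be sketched numerically. A minimal illustration, assuming netOD is computed as log10 of the ratio of unexposed to exposed scanner pixel values; the calibration curve below is purely hypothetical and would in practice be measured against the reference radionuclide calibrator:

```python
import numpy as np

def net_od(pv_unexposed, pv_exposed):
    """Net optical density from film-scan pixel values:
    netOD = log10(PV_unexposed / PV_exposed)."""
    return np.log10(pv_unexposed / pv_exposed)

# Hypothetical calibration curve: netOD vs. cumulated activity (MBq*h),
# previously determined with the reference radionuclide calibrator.
cal_netod    = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
cal_activity = np.array([5.0, 10.0, 22.0, 41.0, 60.0])

measured_netod = net_od(42000.0, 30000.0)      # illustrative pixel values
estimated_activity = np.interp(measured_netod, cal_netod, cal_activity)
```

Comparing `estimated_activity` against the cumulated activity computed from the initial activity and irradiation time would then quantify the deviation of the tested calibrator from the reference.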

  14. Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera.

    Science.gov (United States)

    Reyes, Bersain A; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H

    2016-03-18

    A smartphone-based tidal volume (V(T)) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference V(T) measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and V(T) to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of -0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis.
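The IS-based calibration described above amounts to a linear least-squares fit between the smartphone chest-signal amplitude and the known volumes marked on the incentive spirometer, with RMSE and NRMSE as figures of merit. A minimal sketch with made-up amplitude/volume pairs (the paper's actual data and signal processing are not reproduced here):

```python
import numpy as np

# Hypothetical data: smartphone chest-signal peak-to-peak amplitudes (a.u.)
# recorded while the subject inhales known volumes on the incentive spirometer.
amplitude = np.array([0.82, 1.55, 2.31, 3.08, 3.90])   # arbitrary units
volume    = np.array([0.50, 1.00, 1.50, 2.00, 2.50])   # litres (IS markings)

# Linear calibration model: V_T = slope * amplitude + intercept
slope, intercept = np.polyfit(amplitude, volume, 1)

predicted = slope * amplitude + intercept
rmse  = np.sqrt(np.mean((predicted - volume) ** 2))
nrmse = 100.0 * rmse / (volume.max() - volume.min())   # percent of range
```

Once `slope` and `intercept` are fixed, any new chest-movement amplitude maps directly to an estimated tidal volume without further access to a spirometer.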

  15. In situ calibrated defocusing PTV for wall-bounded measurement volumes

    Science.gov (United States)

    Fuchs, T.; Hain, R.; Kähler, C. J.

    2016-08-01

In many situations, 3D velocity measurements in thin (∼1 mm) but wide (∼100 × 100 mm2) flow channels are an important task. To resolve the in-plane and out-of-plane velocity gradients properly, a precise calibration is required, since 3D measurement approaches rely strongly on the accuracy of the calibration procedure. Calibration targets are often too large to fit domains with small depths. Furthermore, in fields where such measurements are of interest, the accessibility of the measurement volume is often limited or even impossible. To overcome these drawbacks, this paper introduces an in situ calibrated defocusing particle tracking velocimetry approach for wall-bounded measurement domains with depths in the low millimeter range. The calibration function for the particle depth location is derived directly from the particle image geometries and their displacements between two frames. Employing only a single camera, this defocusing approach is capable of measuring the air flow between two parallel glass plates at a distance of 1 mm with an average uncertainty of 2.43% for each track, relative to the maximum velocity. A tomographic particle tracking velocimetry measurement, serving as a benchmark for the single-camera technique, reaches an average uncertainty of 1.59%. Altogether, with its straightforward set-up and no need for a calibration target, this in situ calibrated defocusing approach opens new areas of application for optical flow velocimetry, in particular for measurement domains with small optical windows and a lack of accessibility.

  16. Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera

    Directory of Open Access Journals (Sweden)

    Bersain A. Reyes

    2016-03-01

Full Text Available A smartphone-based tidal volume (VT) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference VT measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and VT to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of −0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis.

  17. Modelling of thermal hydraulics in a KAROLINA calorimeter for its calibration methodology validation

    Directory of Open Access Journals (Sweden)

    Luks Aleksandra

    2016-12-01

Full Text Available Results of numerical calculations of heat exchange in a nuclear heating detector for nuclear reactors are presented in this paper. Gamma radiation is generated in a nuclear reactor during fission and radiative capture reactions, as well as by radioactive decay of their products. A single-cell calorimeter has been designed for application in the MARIA research reactor at the National Centre for Nuclear Research (NCBJ) in Świerk near Warsaw, Poland, and can also be used in the Jules Horowitz Reactor (JHR), which is under construction at the research centre in Cadarache, France. It consists of a cylindrical sample, surrounded by a gas layer, contained in a cylindrical housing. Additional calculations had to be performed before its insertion into the reactor. Within this analysis, modern computational fluid dynamics (CFD) methods have been used for assessing important parameters, for example, mean surface temperature, mean volume temperature, and maximum sample (calorimeter core) temperature. Results of an experiment performed at a dedicated out-of-pile calibration bench and results of numerical modelling validation are also included in this paper.

  18. A general analysis of calibrated BOLD methodology for measuring CMRO2 responses: comparison of a new approach with existing methods.

    Science.gov (United States)

    Blockley, Nicholas P; Griffeth, Valerie E M; Buxton, Richard B

    2012-03-01

    The amplitude of the BOLD response to a stimulus is not only determined by changes in cerebral blood flow (CBF) and oxygen metabolism (CMRO(2)), but also by baseline physiological parameters such as haematocrit, oxygen extraction fraction (OEF) and blood volume. The calibrated BOLD approach aims to account for this physiological variation by performing an additional calibration scan. This calibration typically consists of a hypercapnia or hyperoxia respiratory challenge, although we propose that a measurement of the reversible transverse relaxation rate, R(2)', might also be used. A detailed model of the BOLD effect was used to simulate each of the calibration experiments, as well as the activation experiment, whilst varying a number of physiological parameters associated with the baseline state and response to activation. The effectiveness of the different calibration methods was considered by testing whether the BOLD response to activation scaled by the calibration parameter combined with the measured CBF provides sufficient information to reliably distinguish different levels of CMRO(2) response despite underlying physiological variability. In addition the effect of inaccuracies in the underlying assumptions of each technique were tested, e.g. isometabolism during hypercapnia. The three primary findings of the study were: 1) The new calibration method based on R(2)' worked reasonably well, although not as well as the ideal hypercapnia method; 2) The hyperoxia calibration method was significantly worse because baseline haematocrit and OEF must be assumed, and these physiological parameters have a significant effect on the measurements; and 3) the venous blood volume change with activation is an important confounding variable for all of the methods, with the hypercapnia method being the most robust when this is uncertain.
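The hypercapnia calibration treated here as the reference method is commonly formalized with the Davis model, in which the fractional BOLD change is dBOLD = M(1 − f^(α−β) r^β), with f the CBF ratio and r the CMRO2 ratio. A sketch of the two-step procedure (estimate M from a hypercapnia scan assuming isometabolism, then invert for r during activation), using commonly assumed exponent values rather than the detailed model of this paper:

```python
ALPHA, BETA = 0.38, 1.5   # commonly assumed Grubb exponent and beta

def calibrate_m(dbold_hc, f_hc):
    """Scaling factor M from a hypercapnia scan, assuming isometabolism (r = 1):
    dBOLD = M * (1 - f^(ALPHA - BETA))."""
    return dbold_hc / (1.0 - f_hc ** (ALPHA - BETA))

def cmro2_ratio(dbold, f, m):
    """Invert the Davis model dBOLD = M*(1 - f^(ALPHA-BETA) * r^BETA) for r."""
    return ((1.0 - dbold / m) * f ** (BETA - ALPHA)) ** (1.0 / BETA)

# Round trip with illustrative numbers: synthesize responses, then recover r
m_true = 0.08
dbold_hc = m_true * (1.0 - 1.3 ** (ALPHA - BETA))      # hypercapnia (r = 1)
m_est = calibrate_m(dbold_hc, 1.3)
dbold_act = m_true * (1.0 - 1.5 ** (ALPHA - BETA) * 1.2 ** BETA)
r_est = cmro2_ratio(dbold_act, 1.5, m_est)             # recovers r = 1.2
```

The paper's point is that errors in this chain, e.g. non-isometabolic hypercapnia or unmodeled venous volume changes, propagate directly into the recovered CMRO2 ratio.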

  19. Assessment of target volume doses in radiotherapy based on the standard and measured calibration curves

    Directory of Open Access Journals (Sweden)

    Gholamreza Fallah Mohammadi

    2015-01-01

Full Text Available Context: In radiation treatments, estimation of the dose distribution in the target volume is one of the main components of the treatment planning procedure. To estimate the dose distribution, information on electron densities is necessary. The standard calibration curve is determined by a computed tomography (CT) scanner and may differ from those of other oncology centers. In this study, the changes in dose calculation due to different calibration curves (HU-ρel) were investigated. Materials and Methods: Dose values were calculated based on the standard calibration curve predefined for the treatment planning system (TPS). The calibration curve was also extracted from CT images of the phantom, and dose values were calculated based on this curve. The percentage errors of the calculated values were determined. Statistical Analysis Used: The statistical analyses of the mean differences were performed using the Wilcoxon rank-sum test for both calibration curves. Results and Discussion: The results show no significant difference between the measured and standard calibration curves (HU-ρel) at 6, 15, and 18 MeV energies. The Wilcoxon rank-sum nonparametric test for independent samples (P < 0.05) showed equality of the monitor units required for both curves to deliver 200 cGy to reference points. The percentage errors of the calculated values were lower than 2% and 1.5% at 6 and 15 MeV, respectively. Conclusion: From the results, it can be concluded that the standard calibration curve can be used accurately in TPS dose calculation.
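Converting CT numbers to relative electron densities through a measured HU-ρel calibration curve is typically a piecewise-linear lookup. A minimal sketch with hypothetical (HU, ρel) calibration points; a real TPS curve must be measured with a scanned tissue-equivalent density phantom:

```python
import numpy as np

# Hypothetical (HU, relative electron density) calibration points,
# e.g. from a scanned density phantom; a real TPS uses its own measured table.
hu_points  = np.array([-1000.0, -700.0, -90.0, 0.0, 60.0, 1000.0, 3000.0])
rho_points = np.array([0.00, 0.29, 0.95, 1.00, 1.07, 1.52, 2.50])

def hu_to_rel_electron_density(hu):
    """Piecewise-linear lookup of rho_el from a CT number (HU)."""
    return np.interp(hu, hu_points, rho_points)
```

Comparing dose calculations driven by two such tables (standard vs. locally measured) is exactly the experiment the abstract describes.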

  20. [Full-field and automatic methodology of spectral calibration for PGP imaging spectrometer].

    Science.gov (United States)

    Sun, Ci; Bayanheshig; Cui, Ji-cheng; Pan, Ming-zhong; Li, Xiao-tian; Tang, Yu-guo

    2014-08-01

In order to quantitatively analyze spectral data obtained by a prism-grating-prism (PGP) imaging spectrometer, spectral calibration is required to determine the spectral characteristics of the PGP imaging spectrometer, such as the center wavelength of every spectral channel, the spectral resolution, and the spectral bending. A full-field spectral calibration system based on the collimated monochromatic light method is designed. A spherical mirror is used to provide collimated light, and a freely sliding and rotating folding mirror changes the angle of the incident light in order to realize full-field and automatic calibration of the imaging spectrometer. Spectral calibration experiments were performed on a PGP imaging spectrometer to obtain its spectral performance parameters, and an accuracy analysis combined with the structural features of the entire spectral calibration system was carried out. The analysis indicates that the spectral calibration accuracy of the system reaches 0.1 nm and the bandwidth accuracy reaches 1.3%. The calibration system has the merits of small size, good generality, and high precision, and because the process is automated, additional human-induced errors are avoided. The calibration system can be used for spectral calibration of other imaging spectrometers whose structures are similar to the PGP.

  1. Seismic hazard methodology for the Central and Eastern United States. Volume 1: methodology. Final report

    Energy Technology Data Exchange (ETDEWEB)

    McGuire, R.K.; Veneziano, D.; Toro, G.; O' Hara, T.; Drake, L.; Patwardhan, A.; Kulkarni, R.; Kenney, R.; Winkler, R.; Coppersmith, K.

    1986-07-01

A methodology to estimate the hazard of earthquake ground motion at a site has been developed. The methodology consists of systematic procedures to characterize earthquake sources, the seismicity parameters of those sources, and functions for the attenuation of seismic energy, incorporating multiple input interpretations by earth scientists. Uncertainties reflecting permissible alternative interpretations are quantified by use of probability logic trees and are propagated through the hazard results. The methodology is flexible and permits, for example, interpretations of seismic sources that are consistent with earth-science practice in the need to depict complexity and to accommodate alternative hypotheses. This flexibility is achieved by means of a tectonic framework interpretation from which alternative seismic sources are derived. To estimate rates of earthquake recurrence, maximum use is made of the historical earthquake database in establishing a uniform measure of earthquake size, in identifying independent events, and in determining the completeness of the earthquake record in time, space, and magnitude. Procedures developed as part of the methodology permit relaxation of the usual assumption of homogeneous seismicity within a source and provide unbiased estimates of recurrence parameters. The methodology incorporates the Poisson-exponential earthquake recurrence model and an extensive assessment of its applicability is provided. Finally, the methodology includes procedures to aggregate hazard results from a number of separate input interpretations to obtain a best-estimate value of hazard, together with its uncertainty, at a site.
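The Poisson-exponential recurrence model mentioned above combines a truncated exponential (Gutenberg-Richter) magnitude distribution with Poisson occurrence in time. A minimal sketch with illustrative parameter values (all numbers here are hypothetical, not from the report):

```python
import math

def annual_rate(m, rate_m0=0.5, beta=2.0, m0=4.0, m_max=7.5):
    """Mean annual rate of events with magnitude >= m under a truncated
    exponential (Gutenberg-Richter) magnitude distribution on [m0, m_max]."""
    if m >= m_max:
        return 0.0
    num = math.exp(-beta * (m - m0)) - math.exp(-beta * (m_max - m0))
    den = 1.0 - math.exp(-beta * (m_max - m0))
    return rate_m0 * num / den

def prob_exceedance(m, t_years):
    """Poisson probability of at least one event of magnitude >= m in t_years."""
    return 1.0 - math.exp(-annual_rate(m) * t_years)
```

In the full methodology, such rate curves are computed per seismic-source interpretation and then aggregated over the logic tree to yield a best-estimate hazard with uncertainty.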

  2. Combining EEG and MEG for the reconstruction of epileptic activity using a calibrated realistic volume conductor model.

    Directory of Open Access Journals (Sweden)

    Ümit Aydin

Full Text Available To increase the reliability of the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference in, especially, depth localization between the two modalities, emphasizing its importance for combined EEG and MEG source analysis. On the other hand, localization differences due to the distinct sensitivity profiles of EEG and MEG persist. In the case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine the location, orientation, and strength of the underlying sources. Conversely, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data.

  3. Calibration requirements and methodology for remote sensors viewing the ocean in the visible

    Science.gov (United States)

    Gordon, Howard R.

    1987-01-01

    The calibration requirements for ocean-viewing sensors are outlined, and the present methods of effecting such calibration are described in detail. For future instruments it is suggested that provision be made for the sensor to view solar irradiance in diffuse reflection and that the moon be used as a source of diffuse light for monitoring the sensor stability.

  4. Comparison of Two Methodologies for Calibrating Satellite Instruments in the Visible and Near-Infrared

    Science.gov (United States)

    Barnes, Robert A.; Brown, Steven W.; Lykke, Keith R.; Guenther, Bruce; Butler, James J.; Schwarting, Thomas; Turpie, Kevin; Moyer, David; DeLuccia, Frank; Moeller, Christopher

    2015-01-01

Traditionally, satellite instruments that measure Earth-reflected solar radiation in the visible and near-infrared wavelength regions have been calibrated for radiance responsivity in a two-step method. In the first step, the relative spectral response (RSR) of the instrument is determined using a nearly monochromatic light source such as a lamp-illuminated monochromator. These sources do not typically fill the field-of-view of the instrument nor act as calibrated sources of light. Consequently, they only provide a relative (not absolute) spectral response for the instrument. In the second step, the instrument views a calibrated source of broadband light, such as a lamp-illuminated integrating sphere. The RSR and the sphere absolute spectral radiance are combined to determine the absolute spectral radiance responsivity (ASR) of the instrument. More recently, a full-aperture absolute calibration approach using widely tunable monochromatic lasers has been developed. Using these sources, the ASR of an instrument can be determined in a single step on a wavelength-by-wavelength basis. From these monochromatic ASRs, the responses of the instrument bands to broadband radiance sources can be calculated directly, eliminating the need for calibrated broadband light sources such as lamp-illuminated integrating spheres. In this work, the traditional broadband source-based calibration of the Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) sensor is compared with the laser-based calibration of the sensor. Finally, the impact of the new full-aperture laser-based calibration approach on the on-orbit performance of the sensor is considered.
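Once the monochromatic ASRs are in hand, the band response to any broadband scene follows by direct spectral integration of ASR times scene radiance. A toy sketch, with a Gaussian ASR and a flat radiance spectrum standing in for real measurements:

```python
import numpy as np

wavelength = np.linspace(400.0, 900.0, 501)    # nm, uniform grid
# Hypothetical monochromatic ASR for one band: Gaussian centred at 550 nm
asr = 2.0e3 * np.exp(-0.5 * ((wavelength - 550.0) / 20.0) ** 2)
# Flat broadband scene spectral radiance (illustrative value)
radiance = np.full_like(wavelength, 0.2)

# Band signal = integral of ASR(lambda) * L(lambda) d(lambda),
# here by simple rectangle-rule quadrature on the uniform grid
dlam = wavelength[1] - wavelength[0]
band_signal = np.sum(asr * radiance) * dlam
```

This is what makes the laser-based calibration one-step: no separate calibrated broadband source is needed, because the broadband response is computed rather than measured.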

  5. 2D-3D Registration of CT Vertebra Volume to Fluoroscopy Projection: A Calibration Model Assessment

    Directory of Open Access Journals (Sweden)

    Allen R

    2010-01-01

Full Text Available This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of the 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; then, the vertebra's 3D pose was estimated and the results compared. Error analysis revealed an accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.

  6. Enhanced recovery of unconventional gas. The methodology--Volume III (of 3 volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Kuuskraa, V. A.; Brashear, J. P.; Doscher, T. M.; Elkins, L. E.

    1979-02-01

    The methodology is described in chapters on the analytic approach, estimated natural gas production, recovery from tight gas sands, recovery from Devonian shales, recovery from coal seams, and recovery from geopressured aquifers. (JRD)

  7. Design and Verification Methodology of Boundary Conditions for Finite Volume Schemes

    Science.gov (United States)

    2012-07-01

Folkner, D., Katz, A., and Sankaran, V., "Design and Verification Methodology of Boundary Conditions for Finite Volume Schemes," ICCFD7-2012-1001, July 9-13, 2012. This in-house work was funded by the Army Research Office (ARO) under the supervision of Dr. Frederick Ferguson, whom the authors thank for his continuing support of this research.

  8. A comparison of statistical emulation methodologies for multi-wave calibration of environmental models.

    Science.gov (United States)

    Salter, James M; Williamson, Daniel

    2016-12-01

Expensive computer codes, particularly those used for simulating environmental or geological processes, such as climate models, require calibration (sometimes called tuning). When calibrating expensive simulators using uncertainty quantification methods, it is usually necessary to use a statistical model called an emulator in place of the computer code when running the calibration algorithm. Though emulators based on Gaussian processes are typically many orders of magnitude faster to evaluate than the simulator they mimic, many applications have sought to speed up the computations by using regression-only emulators within the calculations instead, arguing that the extra sophistication brought by using the Gaussian process is not worth the extra computational power. This was the case for the analysis that produced the UK climate projections in 2009. In this paper, we compare the effectiveness of both emulation approaches within a multi-wave calibration framework that is becoming popular in the climate modeling community called "history matching." We find that Gaussian processes offer significant benefits to the reduction of parametric uncertainty over regression-only approaches. We find that in a multi-wave experiment, a combination of regression-only emulators initially, followed by Gaussian process emulators for refocussing experiments, can be nearly as effective as using Gaussian processes throughout, for a fraction of the computational cost. We also discover a number of design- and emulator-dependent features of the multi-wave history matching approach that can cause apparent, yet premature, convergence of our estimates of parametric uncertainty. We compare these approaches to calibration in idealized examples and apply them to a well-known geological reservoir model.
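History matching proceeds by ruling out parameter settings whose implausibility, the standardized distance between the observation and the emulator prediction, exceeds a cutoff (conventionally 3). A toy sketch of one wave, using a regression-only emulator of a cheap stand-in simulator; all variance values are assumed for illustration:

```python
import numpy as np

def simulator(x):
    """Cheap stand-in for an expensive environmental simulator (hypothetical)."""
    return 3.0 * x + 2.0

# Wave 1: fit a regression-only emulator to a small design of simulator runs
design = np.linspace(0.0, 1.0, 6)
coef = np.polyfit(design, simulator(design), 1)

def emulator_mean(x):
    return np.polyval(coef, x)

v_obs, v_disc, v_emul = 0.05, 0.02, 0.01   # assumed variance budget
z = simulator(0.4)                          # pretend field observation

def implausibility(x):
    """Standardized distance between observation and emulator prediction."""
    return np.abs(z - emulator_mean(x)) / np.sqrt(v_obs + v_disc + v_emul)

candidates = np.linspace(0.0, 1.0, 101)
nroy = candidates[implausibility(candidates) < 3.0]   # "not ruled out yet" space
```

Subsequent waves rerun the simulator only inside `nroy` and refit (possibly Gaussian-process) emulators there, which is the refocussing step the paper compares across emulator types.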

  9. NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.

    Science.gov (United States)

    Hinrichs, R N; McLean, S P

    1995-10-01

This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object large enough to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
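The DLT reconstruction evaluated here solves, for each 3-D point, a small linear least-squares system assembled from the 11 calibration parameters of each camera and the observed image coordinates. A minimal sketch of the standard 11-parameter DLT (not the authors' specific implementation), with a synthetic two-camera check:

```python
import numpy as np

def dlt_project(L, xyz):
    """Project a 3-D point through an 11-parameter DLT camera model."""
    x, y, z = xyz
    d = L[8] * x + L[9] * y + L[10] * z + 1.0
    return ((L[0] * x + L[1] * y + L[2] * z + L[3]) / d,
            (L[4] * x + L[5] * y + L[6] * z + L[7]) / d)

def dlt_reconstruct(cameras, uv):
    """Least-squares 3-D reconstruction from >= 2 calibrated cameras.
    Each camera contributes two linear equations in (x, y, z)."""
    A, b = [], []
    for L, (u, v) in zip(cameras, uv):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.extend([u - L[3], v - L[7]])
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xyz

# Synthetic check: two made-up cameras, one known control point
cam1 = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0.01]
cam2 = [0, 1, 0, 0, 0, 0, 1, 0, 0.01, 0, 0]
point = (2.0, 3.0, 4.0)
recovered = dlt_reconstruct([cam1, cam2],
                            [dlt_project(cam1, point), dlt_project(cam2, point)])
```

The study's extrapolation problem arises because the 11 parameters per camera are fitted only over the control-object volume, so the projection model is unconstrained, and increasingly inaccurate, outside it.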

  10. Impacts of Outer Continental Shelf (OCS) development on recreation and tourism. Volume 3. Detailed methodology

    Energy Technology Data Exchange (ETDEWEB)

    1987-04-01

    The final report for the project is presented in five volumes. This volume, Detailed Methodology Review, presents a discussion of the methods considered and used to estimate the impacts of Outer Continental Shelf (OCS) oil and gas development on coastal recreation in California. The purpose is to provide the Minerals Management Service with data and methods to improve their ability to analyze the socio-economic impacts of OCS development. Chapter II provides a review of previous attempts to evaluate the effects of OCS development and of oil spills on coastal recreation. The review also discusses the strengths and weaknesses of different approaches and presents the rationale for the methodology selection made. Chapter III presents a detailed discussion of the methods actually used in the study. The volume contains the bibliography for the entire study.

  11. RESOLVE Survey Photometry and Volume-limited Calibration of the Photometric Gas Fractions Technique

    CERN Document Server

    Eckert, Kathleen D; Stark, David V; Moffett, Amanda J; Norris, Mark A; Snyder, Elaine M; Hoversten, Erik A

    2015-01-01

    We present custom-processed UV, optical, and near-IR photometry for the RESOLVE survey, a volume-limited census of stellar, gas, and dynamical mass within two subvolumes of the nearby universe (RESOLVE-A and -B), complete down to baryonic mass ~10^9.1-9.3 Msun. In contrast to standard pipeline photometry (e.g., SDSS), our photometry uses optimal background subtraction, avoids suppressing color gradients, and includes systematic errors. With these improvements, we measure brighter magnitudes, larger radii, bluer colors, and a real increase in scatter around the red sequence. Combining stellar masses from our photometry with the RESOLVE-A HI mass census, we create volume-limited calibrations of the photometric gas fractions (PGF) technique, which predicts gas-to-stellar mass ratios (G/S) from galaxy colors and optional additional parameters. We analyze G/S-color residuals vs. potential third parameters, finding that axial ratio is the best independent and physically meaningful third parameter. We define a "modi...
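    The PGF calibration amounts to regressing the log gas-to-stellar mass ratio on galaxy colour and then inverting the fit for galaxies without HI measurements. A schematic numpy sketch with invented coefficients (not the RESOLVE values):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic galaxy sample (illustrative): bluer colour means higher
    # gas-to-stellar mass ratio, with intrinsic scatter.
    color = rng.uniform(0.5, 2.5, 300)                  # e.g. a u-r colour
    log_gs_true = 1.0 - 1.2 * color                     # assumed relation
    log_gs = log_gs_true + rng.normal(0.0, 0.2, 300)    # observed, scattered

    # PGF-style calibration: linear fit of log(G/S) on colour.
    slope, intercept = np.polyfit(color, log_gs, 1)

    # Predict G/S for a new galaxy from its colour alone.
    def predict_gs(c):
        return 10.0 ** (intercept + slope * c)

    resid = log_gs - (intercept + slope * color)
    scatter = resid.std()       # residuals vs. a third parameter would be
                                # examined next, as in the abstract
    ```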

  12. Optimizing the accuracy of a helical diode array dosimeter: A comprehensive calibration methodology coupled with a novel virtual inclinometer

    Energy Technology Data Exchange (ETDEWEB)

    Kozelka, Jakub; Robinson, Joshua; Nelms, Benjamin; Zhang, Geoffrey; Savitskij, Dennis; Feygelman, Vladimir [Sun Nuclear Corp., Melbourne, Florida 32940 (United States); Department of Physics, University of South Florida, Tampa, Florida 33612 (United States); Canis Lupus LLC, Sauk County, Wisconsin 53561 (United States); Division of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida 33612 (United States); Sun Nuclear Corp., Melbourne, Florida 32940 (United States); Division of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2011-09-15

    Purpose: The goal of any dosimeter is to be as accurate as possible when measuring absolute dose to compare with calculated dose. This limits the uncertainties associated with the dosimeter itself and allows the task of dose QA to focus on detecting errors in the treatment planning (TPS) and/or delivery systems. This work introduces enhancements to the measurement accuracy of a 3D dosimeter comprised of a helical plane of diodes in a volumetric phantom. Methods: We describe the methods and derivations of new corrections that account for repetition rate dependence, intrinsic relative sensitivity per diode, field size dependence based on the dynamic field size determination, and positional correction. Required and described is an accurate "virtual inclinometer" algorithm. The system allows for calibrating the array directly against an ion chamber signal collected with high angular resolution. These enhancements are quantitatively validated using several strategies, including ion chamber measurements taken using a "blank" plastic shell mimicking the actual phantom, and comparison to high resolution dose calculations for a variety of fields: static, simple arcs, and VMAT. A number of sophisticated treatment planning algorithms were benchmarked against ion chamber measurements for their ability to handle a large air cavity in the phantom. Results: Each calibration correction is quantified and presented vs. its independent variable(s). The virtual inclinometer is validated by direct comparison to the gantry angle vs. time data from machine log files. The effects of the calibration are quantified and improvements are seen in the dose agreement with the ion chamber reference measurements and with the TPS calculations. These improved agreements are a result of removing prior limitations and assumptions in the calibration methodology. Average gamma analysis passing rates for VMAT plans based on the AAPM TG-119 report are 98.4 and 93
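    The gamma analysis behind the quoted VMAT passing rates combines a dose-difference and a distance-to-agreement criterion. A simplified 2-D global-gamma sketch (exhaustive search over the evaluated grid, no sub-pixel interpolation; the 3%/3 mm criteria and test fields are illustrative):

    ```python
    import numpy as np

    def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
        """Global 2-D gamma analysis, simplified. dd is the dose-difference
        criterion as a fraction of the reference maximum; dta_mm is the
        distance-to-agreement criterion in mm."""
        ny, nx = dose_ref.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        coords = np.c_[yy.ravel(), xx.ravel()] * spacing_mm
        d_eval = dose_eval.ravel()
        dmax = dose_ref.max()
        passed = 0
        for p, d_ref in zip(coords, dose_ref.ravel()):
            dist2 = np.sum((coords - p) ** 2, axis=1) / dta_mm ** 2
            diff2 = ((d_eval - d_ref) / (dd * dmax)) ** 2
            if np.min(dist2 + diff2) <= 1.0:
                passed += 1
        return 100.0 * passed / dose_ref.size

    # Synthetic fields on a 2-mm grid: a Gaussian "dose" and shifted copies.
    y, x = np.mgrid[0:40, 0:40].astype(float)
    ref = 100 * np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 60)
    ev_small = 100 * np.exp(-((x - 20.25) ** 2 + (y - 20) ** 2) / 60)  # 0.5 mm
    ev_large = 100 * np.exp(-((x - 22.0) ** 2 + (y - 20) ** 2) / 60)   # 4 mm

    rate_small = gamma_pass_rate(ref, ev_small, spacing_mm=2.0)
    rate_large = gamma_pass_rate(ref, ev_large, spacing_mm=2.0)
    ```

    A sub-millimetre misalignment passes essentially everywhere, while a shift beyond the DTA criterion fails along the steep dose gradients, which is why reducing calibration offsets raises the reported passing rates.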

  13. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    Science.gov (United States)

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and the wireless connectivity meet the requirement of minimum obtrusivity and give scientists the possibility to analyze children's motion in daily life contexts. Typical use of magneto and inertial measurement units (M-IMU) motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
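    The calibration step above retrieves the rotation between each sensor frame (SF) and functional frame (FF) with a custom Levenberg-Marquardt cost function. As a simplified stand-in, the same alignment can be obtained in closed form from paired direction vectors with the SVD-based Kabsch solution (synthetic data; not the paper's method):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def kabsch(v_src, v_dst):
        """Closed-form least-squares rotation R with v_dst ~= R @ v_src
        (Kabsch/Wahba solution; a simplified stand-in for the paper's
        Levenberg-Marquardt fit)."""
        H = v_src.T @ v_dst
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard vs. reflection

    # Ground-truth rotation from sensor frame (SF) to functional frame (FF).
    ang = np.deg2rad(30.0)
    R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0, 0.0, 1.0]])

    # Calibration movements: unit direction vectors (e.g. gravity, functional
    # axes) expressed in the FF, and the same directions seen by the sensor.
    v_ff = rng.normal(size=(50, 3))
    v_ff /= np.linalg.norm(v_ff, axis=1, keepdims=True)
    v_sf = v_ff @ R_true + rng.normal(0.0, 0.01, (50, 3))  # row form of R.T @ v

    R_hat = kabsch(v_sf, v_ff)
    cosang = np.clip((np.trace(R_hat.T @ R_true) - 1.0) / 2.0, -1.0, 1.0)
    err_deg = np.rad2deg(np.arccos(cosang))
    ```

    The paper's cost-function approach generalizes this idea to noisy functional movements where no clean vector pairs exist.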

  14. Quality control tests of an activity meter to be used as reference for an in situ calibration methodology

    Energy Technology Data Exchange (ETDEWEB)

    Correa, Eduardo de L.; Kuahara, Lilian T.; Potiens, Maria da Penha A., E-mail: educorrea1905@gmail.com [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    Nuclear medicine is a medical specialty involving the application of radioactive isotopes in the diagnosis and/or treatment of disease. In order to ensure that the radiation dose applied to the patient is adequate, the radiopharmaceutical activity must be adequately measured. This work was performed to analyze the behavior of a Capintec NPL-CRC activity meter to be used as a reference for the implementation of a methodology for in situ calibration of nuclear medicine equipment. The daily quality control tests were performed, such as auto zero, background, system test, accuracy test and constancy test, along with the determination of repeatability and intermediate measurement precision using Cs-137, Co-57 and Ba-133 sources. Furthermore, this equipment was used to confirm the activities of the check sources produced at IPEN and used by the laboratory that produces the radiopharmaceuticals sent to the nuclear medicine services. The results showed good behavior of this equipment. The maximum variation obtained in the accuracy test was 1.81% for the Co-57 source. For Cs-137 this variation was 4.59%, and for Ba-133, 11.83%. The high value obtained in the last case indicates the need for a correction, which can be obtained by calibration methods. The results obtained using different reference sources showed good repeatability, with a maximum variation of 1.38%. (author)

  15. A New Calibration Methodology for Thorax and Upper Limbs Motion Capture in Children Using Magneto and Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Luca Ricci

    2014-01-01

    Full Text Available Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and the wireless connectivity meet the requirement of minimum obtrusivity and give scientists the possibility to analyze children’s motion in daily life contexts. Typical use of magneto and inertial measurement units (M-IMU) motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors’ frames of reference into useful kinematic information in the human limbs’ frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg–Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.

  16. Methodology of calibration for nucleonic multiphase meter technology for SAGD extra heavy oil

    Energy Technology Data Exchange (ETDEWEB)

    Pinguet, B.; Pechard, P.; Guerra, E. [Schlumberger Canada Ltd., Calgary, AB (Canada); Arendo, V.; Shaffer, M.; Contreras, J. [Total, Paris (France)

    2008-10-15

    The challenges facing bitumen metering in steam assisted gravity drainage operations were discussed with reference to high operating temperatures, steam pressure in the gas phase, foaming, emulsion and small density differences between bitumen and produced water. A metering tool that can deal with these operating constraints was presented. The multiphase meter (MFM) uses a multi-energy gamma ray (nuclear fraction) meter together with a venturi tube to provide accurate monitoring and optimization of oil, water, gas and steam production. This paper presented the specific strengths of the MFM with emphasis on its ability to correctly meter the liquid/gas phases depending on the calibration method and operating measurement range. The paper presented a study of the main parameters that could influence the measurement associated with this technology. The study was based on practical and simulated data and evaluated the impact of changes in each parameter. The purpose of the paper was to improve the understanding of this technology, show how to apply it to bitumen metering, and provide a guideline for future users of the technology in the oil industry. It described the combined venturi-nucleonic measurement parameters, such as mass flow rate; fraction meter; solution triangle of the fraction meter; primary and secondary output; fluid properties information; and the uncertainty associated with any technology. A sensitivity analysis study to identify the dependency on some key fluid parameters was also described. It was concluded that the MFM can be used in a stand-alone configuration. 7 refs., 2 tabs., 22 figs.

  17. EGS Exploration Methodology Development Using the Dixie Valley Geothermal Wellfield as a Calibration Site, a Progress Report

    Science.gov (United States)

    Iovenitti, J. L.; Blackwell, D. D.; Sainsbury, J.; Tibuleac, I. M.; Waibel, A.; Cladouhos, T. T.; Karlin, R. E.; Kennedy, B. M.; Isaaks, E.; Wannamaker, P. E.; Clyne, M.; Callahan, O.

    2011-12-01

    An Engineered Geothermal System (EGS) exploration methodology is being developed using the Dixie Valley geothermal system in Nevada as a field laboratory. This area was chosen as the test site because it has an extensive public domain database and deep geothermal wells allowing for calibration of the developed methodology. The calibration effort is focused on the Dixie Valley Geothermal Wellfield (DVGW), an area with 30 geothermal wells. Calibration will be based on cross-correlation of qualitative and quantitative results with known well conditions. This project is structured in the following manner: (Task 1) review and assess existing public domain and other available data (baseline data); (Task 2) develop and populate a GIS-database; (Task 3) develop a baseline (existing public domain data) geothermal conceptual model, evaluate the geostatistical relationships between the various data sets, and generate a Baseline EGS favorability map from the surface to a 5-km depth focused on identifying EGS drilling targets; (Task 4) collect new gravity, seismic, magneto-telluric (MT), geologic, and geochemical data to fill in data gaps and improve model resolution; and (Task 5) update the GIS-database with the newly acquired data and repeat the elements of Task 3, incorporating the baseline and new data to generate an Enhanced EGS Favorability Map. Innovative aspects of this project include: (1) developing interdisciplinary methods for synthesizing, integrating, and evaluating geoscience data both qualitatively and quantitatively; (2) demonstrating new seismic techniques based on ambient noise, which is a passive survey not requiring local earthquakes and is a relatively inexpensive method to image seismic velocity, attenuation, and density; (3) determining whether seismic data can infer temperature and lithology at depth; (4) extending 2D MT modeling/mapping to 3D MT; (5) generating an MT-derived temperature map; and (6) jointly analyzing gravity, magnetic, seismic, and MT

  18. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    Science.gov (United States)

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

    Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these functions or models are fitted and validated using data from a small number of selected states, they are required to be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines to select a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. The results indicated that as the value of the true calibration factor deviates further away from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities that are used for the calibration process.
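    The bias and sample-size questions studied above can be reproduced in miniature: draw predicted severe-crash counts per site, simulate observed counts around C_true times the prediction, and examine the spread of the estimated factor. A simplified Monte-Carlo sketch (the gamma/Poisson set-up and all rates are illustrative assumptions, not the paper's simulation design):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_chat(c_true, n_sites, n_rep=2000):
        """Monte-Carlo distribution of the estimated calibration factor
        C_hat = sum(observed) / sum(predicted), with observed severe-crash
        counts drawn as Poisson around c_true * predicted."""
        pred = rng.gamma(2.0, 1.0, size=(n_rep, n_sites))   # predicted counts
        obs = rng.poisson(c_true * pred)
        return obs.sum(axis=1) / pred.sum(axis=1)

    c_true = 1.4
    small = simulate_chat(c_true, 30)     # few calibration sites
    large = simulate_chat(c_true, 300)    # many calibration sites

    bias_small = abs(small.mean() - c_true)
    cv_small = small.std() / small.mean()
    cv_large = large.std() / large.mean()   # tighter with a larger sample
    ```

    Repeating this over a grid of `c_true` and severity-count CVs is the kind of exercise from which sample-size guidelines can be tabulated.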

  19. Meteorological Sensor Array (MSA)-Phase I. Volume 3 (Pre-Field Campaign Sensor Calibration)

    Science.gov (United States)

    2015-07-01

    Two calibration exercises were conducted. The first exercise examined the MSA-Phase I dynamic sensors (ultrasonic anemometers); the second assessed the MSA-Phase I thermodynamic sensors (barometers, thermometers, hygrometers, and pyranometers). This report documents the results of a detailed calibration

  20. Indication of BOLD-specific venous flow-volume changes from precisely controlled hyperoxic vs. hypercapnic calibration.

    Science.gov (United States)

    Mark, Clarisse I; Pike, G Bruce

    2012-04-01

    Deriving cerebral metabolic rate of oxygen consumption (CMRO(2)) from blood oxygenation level-dependent (BOLD) signals involves a flow-volume parameter (α), reflecting total cerebral blood volume changes, and a calibration constant (M). Traditionally, the former is assumed a fixed value and the latter is measured under alterations in fixed inspired fractional concentrations of carbon dioxide. We recently reported on reductions in M-variability via precise control of end-tidal pressures of both hypercapnic (HC) and hyperoxic (HO) gases. In light of these findings, our aim was to apply the improved calibration alternatives to neuronal activation, making use of their distinct vasoactive natures to evaluate the α-value. Nine healthy volunteers were imaged at 3 T while simultaneously measuring BOLD and arterial spin-labeling signals during controlled, graded, HC, and HO, followed by visual (VC) and sensorimotor cortices (SMC) activation. On the basis of low M- and CMRO(2)-variability, the comparison of these calibration alternatives accurately highlighted a reduced venous flow-volume relationship (α=0.16±0.02, with α(VC)=0.12±0.04, and α(SMC)=0.20±0.02), as appropriate for BOLD modeling.
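    Calibrated-BOLD studies of this kind typically rest on the Davis model, which ties the BOLD change to CBF and CMRO2 through M, the flow-volume exponent alpha, and beta. A worked sketch solving the model for the relative CMRO2 change (M, beta and the example inputs are illustrative; alpha = 0.16 echoes the abstract's estimate):

    ```python
    def cmro2_ratio(dbold, rcbf, M, alpha=0.16, beta=1.5):
        """Davis calibrated-BOLD model,
            dBOLD/BOLD0 = M * (1 - rcbf**(alpha - beta) * rcmro2**beta),
        solved for the relative CMRO2 change rcmro2. alpha is the venous
        flow-volume exponent (0.16 per the abstract); M, beta and the
        example inputs are illustrative values."""
        return ((1.0 - dbold / M) * rcbf ** (beta - alpha)) ** (1.0 / beta)

    # Example: a 1.5% BOLD increase with a 40% CBF increase and M = 6%.
    r_cmro2 = cmro2_ratio(dbold=0.015, rcbf=1.40, M=0.06)
    ```

    A smaller alpha (0.16 instead of the traditional 0.38) reduces the flow-volume term, which is why the calibration alternative chosen changes the inferred CMRO2.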

  1. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    Directory of Open Access Journals (Sweden)

    L. Renbaum-Wolff

    2012-10-01

    Full Text Available Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions. The amount of sample needed is < 1 μl, and the technique is capable of determining viscosities (η) ranging between 10^−3 and 10^3 Pascal seconds (Pa s) in samples that cover a range of chemical properties and with real-time relative humidity and temperature control; hence, the technique should be well-suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.

  2. Development of a calibration methodology and tests of kerma area product meters

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Nathalia Almeida

    2013-07-01

    The quantity kerma area product (PKA) is important to establish reference levels in diagnostic radiology exams. This quantity can be obtained using a PKA meter. The use of such meters is essential to evaluate the radiation dose in radiological procedures and is a good indicator to make sure that the dose limit to the patient's skin is not exceeded. Sometimes, these meters come fixed to the X-ray equipment, which makes their calibration difficult. In this work, a methodology for the calibration of PKA meters was developed. The instrument used for this purpose was the Patient Dose Calibrator (PDC). It was developed to be used as a reference to check the calibration of PKA and air kerma meters that are used for dosimetry in patients and to verify the consistency and behavior of automatic exposure control systems. Because it is a new piece of equipment, which in Brazil is not yet used as a reference for calibration, quality control of this equipment was also performed, with characterization tests, calibration, and an evaluation of the energy dependence. After the tests, it was shown that the PDC can be used as a reference instrument and that the calibration must be performed in situ, so that the characteristics of each X-ray unit where the PKA meters are used are taken into account. The calibration was then performed with portable PKA meters and with a fixed PKA meter in an interventional radiology unit. The results were good, and they demonstrated the need for calibration of these meters and the importance of in situ calibration with a reference meter. (author)

  3. User's manual for MODCAL: Bounding surface soil plasticity model calibration and prediction code, volume 2

    Science.gov (United States)

    DeNatale, J. S.; Herrmann, L. R.; Dafalias, Y. F.

    1983-02-01

    In order to reduce the complexity of the model calibration process, a computer-aided automated procedure has been developed and tested. The computer code employs a Quasi-Newton optimization strategy to locate that set of parameter values which minimizes the discrepancy between the model predictions and the experimental observations included in the calibration data base. Through application to a number of real soils, the automated procedure has been found to be an efficient, reliable and economical means of accomplishing model calibration. Although the code was developed specifically for use with the Bounding Surface plasticity model, it can readily be adapted to other constitutive formulations. Since the code greatly reduces the dependence of calibration success on user expertise, it significantly increases the accessibility and usefulness of sophisticated material models to the general engineering community.
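    The calibration loop described above (minimise the discrepancy between model predictions and the calibration data base) can be sketched on a toy two-parameter model. The code below uses Gauss-Newton with a finite-difference Jacobian and step halving, a simplified stand-in for MODCAL's Quasi-Newton strategy; the "constitutive" model and all numbers are invented for the illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy stand-in for a constitutive model: predicted "stress" at strain eps
    # for parameters theta = (a, b). The bounding surface model is far more
    # complex; this only illustrates the automated calibration loop.
    def model(theta, eps):
        a, b = theta
        return a * (1.0 - np.exp(-b * eps))

    eps = np.linspace(0.0, 1.0, 25)                 # calibration data base
    obs = model([120.0, 3.0], eps) + rng.normal(0.0, 1.0, eps.size)

    def calibrate(theta0, n_iter=25, h=1e-6):
        """Minimise the summed squared discrepancy between model predictions
        and observations: Gauss-Newton with a finite-difference Jacobian and
        backtracking (a simplified stand-in for a Quasi-Newton strategy)."""
        theta = np.array(theta0, dtype=float)
        for _ in range(n_iter):
            base = model(theta, eps)
            r = obs - base
            J = np.empty((eps.size, theta.size))
            for j in range(theta.size):
                tp = theta.copy()
                tp[j] += h
                J[:, j] = (model(tp, eps) - base) / h
            step, *_ = np.linalg.lstsq(J, r, rcond=None)
            scale = 1.0
            while (np.sum((obs - model(theta + scale * step, eps)) ** 2)
                   > np.sum(r ** 2) and scale > 1e-6):
                scale *= 0.5                    # backtrack if no improvement
            theta = theta + scale * step
        return theta

    theta_hat = calibrate([80.0, 1.0])          # converges near (120, 3)
    ```

    The point of automating this loop is exactly the one the abstract makes: the user supplies data and a starting guess, not optimization expertise.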

  4. Volumetric method volume tube automatic calibration device

    Institute of Scientific and Technical Information of China (English)

    周兵

    2014-01-01

    Using a standard metal measure, a four-way diverter valve, a PC, and a PLC as the main hardware, with KingView software as the development platform, an automatic calibration device for volume tubes based on the volumetric method was built. In accordance with the requirements of the verification regulation, the device controls the flow valve, automatically reads the volume value of the standard metal measure, automatically acquires the temperature and pressure values, and automatically calculates the volume tube's basic volume, standard error, repeatability, accuracy, and reproducibility. The device provides a visual operator interface and functions for saving verification data and printing verification certificates and verification result notices.
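    The quantities the device computes (basic volume, standard error, repeatability) follow directly from repeated verification runs. A schematic numpy sketch with invented run data and a simplified steel-expansion temperature correction (a real device also corrects for pressure and for the standard measure's own scale):

    ```python
    import numpy as np

    # Simulated verification runs: collected volume in the standard metal
    # measure (litres) and run temperature (deg C). All values illustrative.
    raw_vol = np.array([500.12, 500.08, 500.15, 500.10, 500.11, 500.09])
    temp_c = np.array([21.3, 21.4, 21.2, 21.5, 21.3, 21.4])

    # Reduce each run to 20 deg C with the cubic expansion coefficient of
    # the tube steel (simplified correction; coefficient is an assumption).
    BETA_STEEL = 3.6e-5                                   # per deg C
    vol20 = raw_vol * (1.0 - BETA_STEEL * (temp_c - 20.0))

    base_volume = vol20.mean()                            # basic volume
    std_error = vol20.std(ddof=1) / np.sqrt(len(vol20))   # standard error
    repeatability = (vol20.max() - vol20.min()) / base_volume  # rel. spread
    ```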

  5. A self-calibrating telemetry system for measurement of ventricular pressure-volume relations in conscious, freely moving rats.

    Science.gov (United States)

    Uemura, Kazunori; Kawada, Toru; Sugimachi, Masaru; Zheng, Can; Kashihara, Koji; Sato, Takayuki; Sunagawa, Kenji

    2004-12-01

    Using Bluetooth wireless technology, we developed an implantable telemetry system for measurement of the left ventricular pressure-volume relation in conscious, freely moving rats. The telemetry system consisted of a pressure-conductance catheter (1.8-Fr) connected to a small (14-g) fully implantable signal transmitter. To make the system fully telemetric, calibrations such as blood resistivity and parallel conductance were also conducted telemetrically. To estimate blood resistivity, we used four electrodes arranged 0.2 mm apart on the pressure-conductance catheter. To estimate parallel conductance, we used a dual-frequency method. We examined the accuracy of calibrations, stroke volume (SV) measurements, and the reproducibility of the telemetry. The blood resistivity estimated telemetrically agreed with that measured using an ex vivo cuvette method (y=1.09x - 11.9, r2= 0.88, n=10). Parallel conductance estimated by the dual-frequency (2 and 20 kHz) method correlated well with that measured by a conventional saline injection method (y=1.59x - 1.77, r2= 0.87, n=13). The telemetric SV closely correlated with the flowmetric SV during inferior vena cava occlusions (y=0.96x + 7.5, r2=0.96, n=4). In six conscious rats, differences between the repeated telemetries on different days (3 days apart on average) were reasonably small: 13% for end-diastolic volume, 20% for end-systolic volume, 28% for end-diastolic pressure, and 6% for end-systolic pressure. We conclude that the developed telemetry system enables us to estimate the pressure-volume relation with reasonable accuracy and reproducibility in conscious, untethered rats.
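    Conductance-catheter systems of this kind convert measured conductance to volume with Baan's equation, using the telemetrically estimated blood resistivity and parallel conductance. A worked sketch with illustrative rat-scale numbers (not values from the paper):

    ```python
    def baan_volume(G, G_parallel, rho_blood, L_cm, alpha=1.0):
        """Baan's conductance-catheter equation:
            V = (1/alpha) * rho * L^2 * (G - Gp)
        with G the measured conductance (S), Gp the parallel conductance of
        surrounding tissue, rho the blood resistivity (ohm*cm) and L the
        inter-electrode spacing (cm); V comes out in cm^3 (ml)."""
        return (1.0 / alpha) * rho_blood * L_cm ** 2 * (G - G_parallel)

    rho = 150.0      # ohm*cm, e.g. the telemetrically estimated resistivity
    L = 0.9          # cm inter-electrode spacing (illustrative)
    Gp = 1.2e-3      # S, parallel conductance via the dual-frequency method

    G_ed, G_es = 4.5e-3, 3.0e-3            # end-diastole / end-systole (S)
    edv = baan_volume(G_ed, Gp, rho, L)    # end-diastolic volume, ml
    esv = baan_volume(G_es, Gp, rho, L)    # end-systolic volume, ml
    stroke_volume = edv - esv
    ```

    The two telemetric calibrations in the abstract supply exactly the inputs this equation needs: `rho_blood` from the four-electrode measurement and `G_parallel` from the dual-frequency method.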

  6. Seismic hazard methodology for the Central and Eastern United States: Volume 1: Part 2, Methodology (Revision 1): Final report

    Energy Technology Data Exchange (ETDEWEB)

    McGuire, R.K.; Veneziano, D.; Van Dyck, J.; Toro, G.; O' Hara, T.; Drake, L.; Patwardhan, A.; Kulkarni, R.; Keeney, R.; Winkler, R.

    1988-11-01

    Aided by its consultant, the US Geological Survey (USGS), the Nuclear Regulatory Commission (NRC) reviewed "Seismic Hazard Methodology for the Central and Eastern United States." This topical report was submitted jointly by the Seismicity Owners Group (SOG) and the Electric Power Research Institute (EPRI) in July 1986 and was revised in February 1987. The NRC staff concludes that the SOG/EPRI Seismic Hazard Methodology, as documented in the topical report and associated submittals, is an acceptable methodology for use in calculating seismic hazard in the Central and Eastern United States (CEUS). These calculations will be based upon the data and information documented in the material that was submitted as the SOG/EPRI topical report and ancillary submittals. However, as part of the review process the staff conditions its approval by noting areas in which problems may arise unless precautions detailed in the report are observed. 23 refs.

  7. Financial constraints in capacity planning: a national utility regulatory model (NUREG). Volume I of III: methodology. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1981-10-29

    This report develops and demonstrates the methodology for the National Utility Regulatory (NUREG) Model developed under contract number DEAC-01-79EI-10579. It is accompanied by two supporting volumes. Volume II is a user's guide for operation of the NUREG software. This includes description of the flow of software and data, as well as the formats of all user data files. Finally, Volume III is a software description guide. It briefly describes, and gives a listing of, each program used in NUREG.

  8. On the Vicarious Calibration Methodologies in DIMITRI: Application on Sentinel-2 and Landsat-8 Products and Comparison with In-Situ Measurements

    Science.gov (United States)

    Alhammoud, Bahjat; Bouvet, Marc; Jackson, Jan; Arias, Manuel; Thepaut, Olivier; Lafrance, Bruno; Gascon, Ferran; Cadau, Enrico; Berthelot, Beatrice; Francesconi, Benjamin

    2016-08-01

    In the frame of the Sentinel-2 Mission Performance Centre (MPC) activities, DIMITRI is used to perform vicarious validation of the Level-1C products in order to assess the S2A/MSI data quality and to monitor its evolution. DIMITRI comprises several vicarious calibration methodologies for EO optical sensors: Rayleigh scattering, Sun-Glint, PICS, and sensor-to-sensor inter-calibration. The first S2A/MSI results from both the Rayleigh and PICS methodologies are consistent and show an excellent quality of the L1C products. The cross-mission intercomparison with LANDSAT-8/OLI over PICS shows good agreement within the ±5% mission requirements. The intercomparison with concomitant ground-based TOA reflectance over the Railroad Valley site shows good agreement, with a relative difference of 5%-10%. The uncertainties on the estimated calibration coefficients are found to be less than 5% for most of the S2A/MSI spectral bands.

  9. Bymixer provides on-line calibration of measurement of CO2 volume exhaled per breath.

    Science.gov (United States)

    Breen, P H; Serina, E R

    1997-01-01

    The measurement of CO2 volume exhaled per breath (VCO2.br) can be determined during anesthesia by the multiplication and integration of tidal flow (V) and PCO2. During side-stream capnometry, PCO2 must be advanced in time by the transport delay (TD), the time needed to suction gas through the sampling tube. During ventilation, TD can vary due to changes in sample line connection internal volume or flow rate. To determine the correct TD and measure accurate VCO2.br during actual ventilation, TD can be iteratively adjusted (TDADJ) until VCO2.br/tidal volume equals the PCO2 measured in a mixed expired gas collection (PECO2) (J. Appl. Physiol. 72:2029-2035, 1992). However, PECO2 is difficult to measure during anesthesia because CO2 is absorbed in the circle circuit. Accordingly, we implemented a bypass flow-mixing chamber device (bymixer) that was interposed in the expiration limb of the circle circuit and accurately measured PECO2 over a wide range of conditions of ventilation of a test lung-metabolic chamber (regression slope = 1.01; R2 = 0.99). The bymixer response (time constant) varied from 18.1 +/- 0.03 sec (12.5 l/min ventilation) to 66.7 +/- 0.9 sec (2.5 l/min). Bymixer PECO2 was used to correctly determine TDADJ (without interrupting respiration) to enable accurate measurement of VCO2.br over widely changing expiratory flow patterns.
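    The core computation, advancing PCO2 by the transport delay and integrating flow times fractional CO2 over the breath, can be sketched on a synthetic breath (waveforms, sampling rate, and pressures are illustrative assumptions):

    ```python
    import numpy as np

    fs = 100.0                                   # sampling rate, Hz
    t = np.arange(0.0, 3.0, 1 / fs)              # one 3-s breath

    # Synthetic expiratory flow (l/s) and airway PCO2 (mmHg) waveforms.
    flow = np.where(t < 1.5, 0.5 * np.sin(np.pi * t / 1.5), 0.0)
    pco2_true = np.where(t < 1.5, 40.0 * (1 - np.exp(-t / 0.2)), 0.0)

    # Side-stream capnometry reports PCO2 late by the transport delay TD.
    td_s = 0.24
    delay = int(td_s * fs)
    pco2_meas = np.r_[np.zeros(delay), pco2_true[:-delay]]

    PB = 713.0   # barometric minus water-vapour pressure, mmHg (assumed)

    def trapz(y, dx):
        return dx * 0.5 * (y[:-1] + y[1:]).sum()

    def vco2_br(flow, pco2, td_adj):
        """CO2 volume per breath: advance PCO2 by td_adj, convert to
        fractional CO2 (PCO2/PB), multiply by flow and integrate."""
        n = int(td_adj * fs)
        advanced = np.r_[pco2[n:], np.zeros(n)] if n else pco2
        return trapz(flow * advanced / PB, 1 / fs)   # litres CO2

    ref = trapz(flow * pco2_true / PB, 1 / fs)       # undelayed reference
    est_uncorrected = vco2_br(flow, pco2_meas, 0.0)
    est_corrected = vco2_br(flow, pco2_meas, td_s)   # recovers the reference
    ```

    With no TD correction the high-CO2 plateau is pushed into the low-flow tail of expiration, so VCO2.br is underestimated; the iterative TDADJ adjustment against the bymixer PECO2 removes exactly this error.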

  10. Methodology to include a correction for offset in the calibration of a Diode-based 2D verification device

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez Ros, J. C.; Pamos Urena, M.; Jerez Sainz, M.; Lobato Munoz, M.; Jodar Lopez, C. A.; Ruiz Lopez, M. a.; Carrasco Rodriguez, J. L.

    2013-07-01

    We propose a methodology to correct the dose planes measured by the MapChek2 2D verification device for offset. The methodology provides an offset-correction array, applied at the dose calibration, that accounts for the offset of the central diode as well as the offset of each individual diode in each acquisition. (Author)

  11. Biological dosimetry of ionizing radiation: Evaluation of the dose with cytogenetic methodologies by the construction of calibration curves

    Science.gov (United States)

    Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia

    2016-09-01

    In case of a radiation accident, it is well known that in the absence of physical dosimetry biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. Also reported are dose response curves showing scored dicentrics and rings per cell, combining
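    Dose estimation from a calibration curve of this kind usually fits the linear-quadratic yield model Y = c + alpha*D + beta*D^2 and inverts it for the positive root. A sketch with invented calibration data (not a published curve):

    ```python
    import numpy as np

    # Illustrative calibration data: dicentrics + rings per cell at graded
    # doses (invented numbers for the sketch, not a published curve).
    dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])            # Gy
    aberr = np.array([0.001, 0.012, 0.030, 0.085, 0.27, 0.55, 0.92])  # /cell

    # Fit the standard linear-quadratic model Y = c + alpha*D + beta*D^2.
    A = np.c_[np.ones_like(dose), dose, dose ** 2]
    c, alpha, beta = np.linalg.lstsq(A, aberr, rcond=None)[0]

    def estimate_dose(y_obs):
        """Invert the curve: positive root of
        beta*D^2 + alpha*D + (c - y_obs) = 0."""
        disc = alpha ** 2 - 4.0 * beta * (c - y_obs)
        return (-alpha + np.sqrt(disc)) / (2.0 * beta)

    # A sample showing 0.20 dicentrics + rings per cell:
    d_hat = estimate_dose(0.20)
    ```

    In practice a weighted (e.g. maximum-likelihood) fit with 95% confidence limits is used, as the abstract notes; the unweighted fit above only shows the shape of the calculation.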

  12. A bronchoscopic navigation system using bronchoscope center calibration for accurate registration of electromagnetic tracker and CT volume without markers

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xiongbiao, E-mail: xiongbiao.luo@gmail.com [Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada)

    2014-06-15

Purpose: Various bronchoscopic navigation systems have been developed for diagnosis, staging, and treatment of lung and bronchus cancers. To construct electromagnetically navigated bronchoscopy systems, registration of preoperative images and an electromagnetic tracker must be performed. This paper proposes a new marker-free registration method, which uses the centerlines of the bronchial tree and the center of a bronchoscope tip where an electromagnetic sensor is attached, to align preoperative images and electromagnetic tracker systems. Methods: The chest computed tomography (CT) volume (preoperative images) was segmented to extract the bronchial centerlines. An electromagnetic sensor was fixed at the bronchoscope tip surface. A model was designed and printed using a 3D printer to calibrate the relationship between the fixed sensor and the bronchoscope tip center. For each sensor measurement, which includes sensor position and orientation information, the corresponding bronchoscope tip center position was calculated. By minimizing the distance between each bronchoscope tip center position and the bronchial centerlines, the spatial alignment of the electromagnetic tracker system and the CT volume was determined. After obtaining the spatial alignment, an electromagnetic navigation bronchoscopy system was established to track and locate a bronchoscope inside the bronchial tree in real time during bronchoscopic examinations. Results: The electromagnetic navigation bronchoscopy system was validated on a dynamic bronchial phantom that can simulate respiratory motion with a breath rate range of 0–10 min{sup −1}. The fiducial and target registration errors of this navigation system were evaluated. The average fiducial registration error was reduced from 8.7 to 6.6 mm. The average target registration error, which indicates the overall accuracy of the navigated bronchoscope position, was reduced from 6.8 to 4.5 mm compared with previous registration methods. Conclusions: An
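The registration step described in this record, minimizing the distance between measured tip-center positions and the bronchial centerlines, is essentially a rigid point-to-curve alignment. A minimal ICP-style sketch, assuming NumPy and hypothetical N×3 arrays `tip_centers` (electromagnetic tracker frame) and `centerline` (CT frame), might look like:

```python
# Illustrative sketch only: rigid alignment of bronchoscope tip-center points
# to bronchial centerline points by iteratively re-matching nearest neighbours
# and re-solving the least-squares transform (an ICP-style loop).
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t such that R @ P_i + t ~ Q_i."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def register_to_centerline(tip_centers, centerline, n_iter=50):
    """Align tracker-frame tip centers to CT-frame centerline points."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        moved = tip_centers @ R.T + t
        # closest centerline point for each measured tip-center position
        idx = np.argmin(((moved[:, None, :] - centerline[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = kabsch(tip_centers, centerline[idx])
    return R, t
```

The paper's method additionally uses the sensor orientation and the 3D-printed calibration model to derive the tip-center positions; the sketch covers only the final alignment step.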

  13. Estimation of block conductivities from hydrologically calibrated fracture networks. Description of methodology and application to Romuvaara investigation area

    Energy Technology Data Exchange (ETDEWEB)

    Niemi, A. [Royal Institute of Technology, Stockholm (Sweden); Kontio, K.; Kuusela-Lahtinen, A.; Vaittinen, T. [VTT Communities and Infrastructure, Espoo (Finland)

    1999-03-01

This study looks at heterogeneity in hydraulic conductivity at the Romuvaara site. It concentrates on the average rock outside the deterministic fracture zones, especially in the deeper parts of the bedrock. A large number of stochastic fracture networks is generated based on geometrical fracture data from the site. The hydraulic properties of the fractures are determined by calibrating the networks against well test data. The calibration is done by starting from an initial estimate of the fracture transmissivity distribution based on 2 m interval flow meter data, simulating the 10 m constant-head injection test behaviour in a number of fracture network realisations, and comparing the simulated well test statistics with the measured ones. A large number of possible combinations of mean and standard deviation of fracture transmissivities are tested, and the goodness of fit between the measured and simulated results is determined by means of the bootstrapping method. As a result, a range of acceptable fracture transmissivity distribution parameters is obtained. In the accepted range, the mean of log transmissivity varies between -13.9 and -15.3 and the standard deviation between 4.0 and 3.2, with an increase in standard deviation compensating for a decrease in mean. The effect of spatial autocorrelation was not simulated. The variogram analysis did, however, give indications that an autocorrelation range of the order of 10 m might be realistic for the present data. Based on the calibrated fracture networks, equivalent continuum conductivities of the calibrated 30 m x 30 m x 30 m conductivity blocks were determined. For each realisation, three sets of simulations were carried out with the main gradient in the x, y and z directions, respectively. Based on these results, the components of the conductivity tensor were determined. Such data can be used, e.g., for stochastic continuum type Monte Carlo simulations with larger scale models. The hydraulic conductivities in the direction of the
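The calibration loop in this record, propose a (mean, standard deviation) pair for the fracture transmissivity distribution, simulate the well tests, and accept the pair if the simulated statistics match the measured ones, can be caricatured as a grid search. This is a deliberately minimal stand-in: the real study simulated constant-head injection tests on 3-D fracture network realisations and judged goodness of fit by bootstrapping, whereas here a normal sample plays the role of the simulator and the target statistics are invented numbers.

```python
# Toy accept/reject calibration over (mean, std) of log10 fracture
# transmissivity. Purely illustrative; all values are made up.
import numpy as np

rng = np.random.default_rng(3)
target_mean, target_sd = -14.5, 3.6  # stand-in "measured" well-test statistics

def simulated_statistics(mu, sigma, n_fractures=400):
    """Stand-in for simulating injection tests on one network realisation."""
    log_t = rng.normal(mu, sigma, n_fractures)
    return log_t.mean(), log_t.std()

accepted = []
for mu in np.arange(-15.5, -13.4, 0.2):
    for sigma in np.arange(3.0, 4.3, 0.2):
        m, sd = simulated_statistics(mu, sigma)
        # accept candidates whose simulated statistics fall near the target
        if abs(m - target_mean) < 0.3 and abs(sd - target_sd) < 0.3:
            accepted.append((round(mu, 1), round(sigma, 1)))
```

The output of the real procedure is exactly such an accepted region of distribution parameters, from which the equivalent block conductivities are then computed.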

  14. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian

    2015-11-05

Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters, as well as the external stimulus characteristics, corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  15. Method for Determining Language Objectives and Criteria. Volume II. Methodological Tools: Computer Analysis, Data Collection Instruments.

    Science.gov (United States)

    1979-05-25

This volume presents (1) methods for computer and hand analysis of numerical language performance data (with examples) and (2) samples of interview, observation, and survey instruments used in collecting language data. (Author)

  16. Body composition in Nepalese children using isotope dilution: the production of ethnic-specific calibration equations and an exploration of methodological issues

    Directory of Open Access Journals (Sweden)

    Delan Devakumar

    2015-03-01

Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance analysis (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7–9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated the potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight and sex as predictors (R² = 93%). The Tanita system tended to under-estimate LM, with a mean error of 2.2%, but extending up to 25.8%. Flexing the arms to 90° increased the lower weight range, but produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weights covered. Smaller samples reduce resource requirements, but lead to larger errors at the tails of the weight distribution.
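The calibration-equation step, regressing isotope-dilution lean mass on height, impedance index (height²/impedance), weight and sex, can be sketched with ordinary least squares. All data below are synthetic and the fitted coefficients are illustrative, not the paper's equation:

```python
# Sketch of deriving a population-specific BIA prediction equation by OLS.
# Synthetic data; distributions and coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 102
height = rng.normal(120.0, 8.0, n)      # cm
impedance = rng.normal(700.0, 60.0, n)  # ohm
weight = rng.normal(22.0, 4.0, n)       # kg
sex = rng.integers(0, 2, n)             # 0 = girl, 1 = boy
idx = height ** 2 / impedance           # impedance index

# "True" lean mass from an assumed model plus measurement noise
lean = 1.5 + 0.45 * idx + 0.01 * height + 0.2 * weight + 0.8 * sex \
       + rng.normal(0.0, 0.4, n)

# Design matrix: intercept, impedance index, height, weight, sex
X = np.column_stack([np.ones(n), idx, height, weight, sex])
coef, *_ = np.linalg.lstsq(X, lean, rcond=None)
pred = X @ coef
r2 = 1.0 - ((lean - pred) ** 2).sum() / ((lean - lean.mean()) ** 2).sum()
```

The paper reports an R² of about 93% for an equation of this form; with clean synthetic data the toy fit is naturally tighter.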

  17. Methodology and calibration for continuous measurements of biogeochemical trace gas and O2 concentrations from a 300-m tall tower in central Siberia

    Directory of Open Access Journals (Sweden)

    E. A. Kozlova

    2009-05-01

We present an integrated system for measuring atmospheric concentrations of CO2, O2, CH4, CO, and N2O in central Siberia. Our project aims to demonstrate the feasibility of establishing long-term, continuous, high precision atmospheric measurements to elucidate greenhouse gas processes from a very remote, mid-continental boreal environment. Air is sampled from five heights on a custom-built 300-m tower. Common features to all species' measurements include air intakes, an air drying system, flushing procedures, and data processing methods. Calibration standards are shared among all five measured species by extending and optimising a proven methodology for long-term O2 calibration. Our system achieves the precision and accuracy requirements specified by the European Union's "CarboEurope" and "ICOS" (Integrated Carbon Observing System) programmes in the case of CO2, O2, and CH4, while CO and N2O require some further improvements. It was found that it is not possible to achieve these high precision measurements without skilled technical assistance on-site, primarily because of 2–3 month delays in access to data and diagnostic information. We present results on the stability of reference standards in high pressure cylinders. It was also found that some previous methods do not mitigate fractionation of O2 in a sample airstream to a satisfactory level.

  18. The space station assembly phase: Flight telerobotic servicer feasibility. Volume 2: Methodology and case study

    Science.gov (United States)

    Smith, Jeffrey H.; Gyamfi, Max A.; Volkmer, Kent; Zimmerman, Wayne F.

    1987-01-01

    A methodology is described for examining the feasibility of a Flight Telerobotic Servicer (FTS) using two assembly scenarios, defined at the EVA task level, for the 30 shuttle flights (beginning with MB-1) over a four-year period. Performing all EVA tasks by crew only is compared to a scenario in which crew EVA is augmented by FTS. A reference FTS concept is used as a technology baseline and life-cycle cost analysis is performed to highlight cost tradeoffs. The methodology, procedure, and data used to complete the analysis are documented in detail.

  19. Mechanistic Methodology for Airport Pavement Design with Engineering Fabrics. Volume 1. Theoretical and Experimental Bases.

    Science.gov (United States)

    1984-08-01

DOT/FAA/PM-8419, Mechanistic Methodology for Airport Pavement Design with Engineering Fabrics. Program Engineering & Maintenance Service, Washington, D.C. 20591. ...Reflective cracks require labor-intensive operations for crack sealing and patching, thus becoming a significant maintenance expense item. The problem of... models for predicting allowable critical strains are not available. The problems are complicated further by the fact that asphaltic concrete is a

  20. Life prediction methodology for ceramic components of advanced vehicular heat engines: Volume 1. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Khandelwal, P.K.; Provenzano, N.J.; Schneider, W.E. [Allison Engine Co., Indianapolis, IN (United States)

    1996-02-01

One of the major challenges involved in the use of ceramic materials is ensuring adequate strength and durability. This activity has developed methodology which can be used during the design phase to predict the structural behavior of ceramic components. The effort involved the characterization of injection molded and hot isostatic pressed (HIPed) PY-6 silicon nitride, the development of nondestructive evaluation (NDE) technology, and the development of analytical life prediction methodology. Four failure modes are addressed: fast fracture, slow crack growth, creep, and oxidation. The techniques deal with failures initiating at the surface as well as internal to the component. The life prediction methodology for fast fracture and slow crack growth has been verified using a variety of confirmatory tests. The verification tests were conducted at room and elevated temperatures up to a maximum of 1371 {degrees}C. The tests involved (1) flat circular disks subjected to bending stresses and (2) high speed rotating spin disks. Reasonable correlation was achieved for a variety of test conditions and failure mechanisms. The predictions associated with surface failures proved to be optimistic, requiring re-evaluation of the components' initial fast fracture strengths. Correlation was achieved for the spin disks which failed in fast fracture from internal flaws. Time dependent elevated temperature slow crack growth spin disk failures were also successfully predicted.

  1. Metabolic tumour volumes measured at staging in lymphoma: methodological evaluation on phantom experiments and patients

    Energy Technology Data Exchange (ETDEWEB)

    Meignan, Michel [Hopital Henri Mondor and Paris-Est University, Department of Nuclear Medicine, Creteil (France); Paris-Est University, Service de Medecine Nucleaire, EAC CNRS 7054, Hopital Henri Mondor AP-HP, Creteil (France); Sasanelli, Myriam; Itti, Emmanuel [Hopital Henri Mondor and Paris-Est University, Department of Nuclear Medicine, Creteil (France); Casasnovas, Rene Olivier [CHU Le Bocage, Department of Hematology, Dijon (France); Luminari, Stefano [University of Modena and Reggio Emilia, Department of Diagnostic, Clinic and Public Health Medicine, Modena (Italy); Fioroni, Federica [Santa Maria Nuova Hospital-IRCCS, Department of Medical Physics, Reggio Emilia (Italy); Coriani, Chiara [Santa Maria Nuova Hospital-IRCCS, Department of Radiology, Reggio Emilia (Italy); Masset, Helene [Henri Mondor Hospital, Department of Radiophysics, Creteil (France); Gobbi, Paolo G. [University of Pavia, Department of Internal Medicine and Gastroenterology, Fondazione IRCCS Policlinico San Matteo, Pavia (Italy); Merli, Francesco [Santa Maria Nuova Hospital-IRCCS, Department of Hematology, Reggio Emilia (Italy); Versari, Annibale [Santa Maria Nuova Hospital-IRCCS, Department of Nuclear Medicine, Reggio Emilia (Italy)

    2014-06-15

The presence of a bulky tumour at staging on CT is an independent prognostic factor in malignant lymphomas. However, its prognostic value is limited in diffuse disease. Total metabolic tumour volume (TMTV) determined on {sup 18}F-FDG PET/CT could give a better evaluation of the total tumour burden and may help patient stratification. Different methods of TMTV measurement established in phantoms simulating lymphoma tumours were investigated and validated in 40 patients with Hodgkin lymphoma and diffuse large B-cell lymphoma. Data were processed by two nuclear medicine physicians in Reggio Emilia and Creteil. Nineteen phantoms filled with {sup 18}F-saline were scanned; these comprised spherical or irregular volumes from 0.5 to 650 cm{sup 3} with tumour-to-background ratios from 1.65 to 40. Volumes were measured with different SUVmax thresholds. In patients, TMTV was measured on PET at staging by two methods: volumes of individual lesions were measured using a fixed 41% SUVmax threshold (TMTV{sub 41}) and a variable, visually adjusted SUVmax threshold (TMTV{sub var}). In phantoms, the 41% threshold gave the best concordance between measured and actual volumes. Interobserver agreement was almost perfect. In patients, the agreement between the reviewers for TMTV{sub 41} measurement was substantial (ρ{sub c} = 0.986, CI 0.97 - 0.99) and the difference between the means was not significant (212 ± 218 cm{sup 3} for Creteil vs. 206 ± 219 cm{sup 3} for Reggio Emilia, P = 0.65). By contrast, the agreement was poor for TMTV{sub var}. There was a significant direct correlation between TMTV{sub 41} and normalized LDH (r = 0.652, CI 0.42 - 0.8, P < 0.001). Higher disease stages and bulky tumour were associated with higher TMTV{sub 41}, but high TMTV{sub 41} could be found in patients with stage 1/2 or nonbulky tumour. Measurement of baseline TMTV in lymphoma using a fixed 41% SUVmax threshold is reproducible and correlates with the other parameters for tumour mass evaluation.
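The fixed-threshold measurement (TMTV{sub 41}) reduces, per lesion, to counting voxels at or above 41% of that lesion's SUVmax and multiplying by the voxel volume, then summing over lesions. A sketch, assuming each lesion has already been isolated in its own region of interest:

```python
# Illustrative fixed-threshold metabolic tumour volume computation.
# Each ROI is a 3-D array of SUV values covering one lesion.
import numpy as np

def lesion_mtv(suv_roi, voxel_volume_cm3, threshold=0.41):
    """Metabolic volume of one lesion: voxels >= threshold * lesion SUVmax."""
    cutoff = threshold * suv_roi.max()
    return np.count_nonzero(suv_roi >= cutoff) * voxel_volume_cm3

def total_mtv(lesion_rois, voxel_volume_cm3):
    """TMTV: per-lesion 41% SUVmax volumes summed over all lesions."""
    return sum(lesion_mtv(roi, voxel_volume_cm3) for roi in lesion_rois)
```

The segmentation of lesions into ROIs (and any partial-volume handling) is the clinically delicate part and is not shown here.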

  2. Transport of solid commodities via freight pipeline: demand analysis methodology. Volume IV. First year final report

    Energy Technology Data Exchange (ETDEWEB)

    Allen, W.B.; Plaut, T.

    1976-07-01

    In order to determine the feasibility of intercity freight pipelines, it was necessary to determine whether sufficient traffic flows currently exist between various origins and destinations to justify consideration of a mode whose operating characteristics became competitive under conditions of high-traffic volume. An intercity origin/destination freight-flow matrix was developed for a large range of commodities from published sources. A high-freight traffic-density corridor between Chicago and New York and another between St. Louis and New York were studied. These corridors, which represented 18 cities, had single-direction flows of 16 million tons/year. If trans-shipment were allowed at each of the 18 cities, flows of up to 38 million tons/year were found in each direction. These figures did not include mineral or agricultural products. After determining that such pipeline-eligible freight-traffic volumes existed, the next step was to determine the ability of freight pipeline to penetrate such markets. Modal-split models were run on aggregate data from the 1967 Census of Transportation. Modal-split models were also run on disaggregate data specially collected for this study. The freight pipeline service characteristics were then substituted into both the aggregate and disaggregate models (truck vs. pipeline and then rail vs. pipeline) and estimates of pipeline penetration into particular STCC commodity groups were made. Based on these very preliminary results, it appears that freight pipeline has market penetration potential that is consistent with high-volume participation in the intercity freight market.

  3. Seismic hazard analysis application of methodology, results, and sensitivity studies. Volume 4

    Energy Technology Data Exchange (ETDEWEB)

    Bernreuter, D. L

    1981-08-08

As part of the Site Specific Spectra Project, this report seeks to identify the sources of and minimize uncertainty in estimates of seismic hazards in the Eastern United States. Findings are being used by the Nuclear Regulatory Commission to develop a synthesis among various methods that can be used in evaluating seismic hazard at the various plants in the Eastern United States. In this volume, one of a five-volume series, we discuss the application of the probabilistic approach using expert opinion. The seismic hazard is developed at nine sites in the Central and Northeastern United States, and both individual experts' and synthesis results are obtained. We also discuss and evaluate the ground motion models used to develop the seismic hazard at the various sites, analyzing extensive sensitivity studies to determine the important parameters and the significance of uncertainty in them. Comparisons are made between probabilistic and real spectra for a number of Eastern earthquakes. The uncertainty in the real spectra is examined as a function of the key earthquake source parameters. In our opinion, the single most important conclusion of this study is that the use of expert opinion to supplement the sparse data available on Eastern United States earthquakes is a viable approach for determining estimated seismic hazard in this region of the country. 29 refs., 15 tabs.

  4. Estimates of emergency operating capacity in US manufacturing and nonmanufacturing industries - Volume 1: Concepts and Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Belzer, D.B. (Pacific Northwest Lab., Richland, WA (USA)); Serot, D.E. (D/E/S Research, Richland, WA (USA)); Kellogg, M.A. (ERCE, Inc., Portland, OR (USA))

    1991-03-01

    Development of integrated mobilization preparedness policies requires planning estimates of available productive capacity during national emergency conditions. Such estimates must be developed in a manner to allow evaluation of current trends in capacity and the consideration of uncertainties in various data inputs and in engineering assumptions. This study developed estimates of emergency operating capacity (EOC) for 446 manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level of aggregation and for 24 key nonmanufacturing sectors. This volume lays out the general concepts and methods used to develop the emergency operating estimates. The historical analysis of capacity extends from 1974 through 1986. Some nonmanufacturing industries are included. In addition to mining and utilities, key industries in transportation, communication, and services were analyzed. Physical capacity and efficiency of production were measured. 3 refs., 2 figs., 12 tabs. (JF)

  5. 'Dip-sticks' calibration handles self-attenuation and coincidence effects in large-volume gamma-ray spectrometry

    CERN Document Server

    Wolterbeek, H T

    2000-01-01

Routine gamma-spectrometric analyses of samples with low-level activities (e.g. food, water, environmental and industrial samples) are often performed on large samples placed close to the detector. In these geometries, detection sensitivity is improved, but large errors are introduced due to self-attenuation and coincidence summing. Current approaches to these problems comprise computational methods and spiked standard materials. However, the former are often regarded as too complex for practical routine use, while the latter never fully match real samples. In the present study, we introduce a dip-sticks calibration as a fast and easy practical solution to this quantification problem in a routine analytical setting. In the proposed set-up, calibrations are performed within the sample itself, thus making it a broadly accessible matching-reference approach which is principally usable for all sample matrices.

  6. Methodology for Using 3-Dimensional Sonography to Measure Fetal Adrenal Gland Volumes in Pregnant Women With and Without Early Life Stress.

    Science.gov (United States)

    Kim, Deborah; Epperson, C Neill; Ewing, Grace; Appleby, Dina; Sammel, Mary D; Wang, Eileen

    2016-09-01

    Fetal adrenal gland volumes on 3-dimensional sonography have been studied as potential predictors of preterm birth. However, no consistent methodology has been published. This article describes the methodology used in a study that is evaluating the effects of maternal early life stress on fetal adrenal growth to allow other researchers to compare methodologies across studies. Fetal volumetric data were obtained in 36 women at 20 to 22 and 28 to 30 weeks' gestation. Two independent examiners measured multiple images of a single fetal adrenal gland from each sonogram. Intra- and inter-rater consistency was examined. In addition, fetal adrenal volumes between male and female fetuses were reported. The intra- and inter-rater reliability was satisfactory when the mean of 3 measurements from each rater was used. At 20 weeks' gestation, male fetuses had larger average adjusted adrenal volumes than female fetuses (mean, 0.897 versus 0.638; P = .004). At 28 weeks' gestation, the fetal weight was more influential in determining values for adjusted fetal adrenal volume (0.672 for male fetuses versus 0.526 for female fetuses; P = .034). This article presents a methodology for assessing fetal adrenal volume using 3-dimensional sonography that can be used by other researchers to provide more consistency across studies.

  7. User’s Manual for MODCAL - Bounding Surface Soil Plasticity Model Calibration and Prediction Code. Volume II.

    Science.gov (United States)

    1983-02-01

[No abstract available: the scanned record reproduces OCR fragments of the MODCAL Fortran source, including a subroutine that performs a single-element incremental test simulation and a test loop that computes residuals when model calibration is required.]

  8. Syringe shape and positioning relative to efficiency volume inside dose calibrators and its role in nuclear medicine quality assurance programs

    Energy Technology Data Exchange (ETDEWEB)

    Santos, J.A.M. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Centro de Investigacao, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal)], E-mail: a.miranda@portugalmail.pt; Carrasco, M.F. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Centro de Investigacao, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Lencart, J. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Bastos, A.L. [Servico de Medicina Nuclear, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal)

    2009-06-15

A careful analysis of the influence of geometry and source positioning on the activity measurements of a nuclear medicine dose calibrator is presented for {sup 99m}Tc. The implementation of a quasi-point-source apparent-activity curve measurement is proposed for accurate correction of the activity inside several syringes, and is compared with a theoretical geometric efficiency model. Additionally, new geometrical parameters are proposed to test and verify the correct positioning of the syringes as part of acceptance testing and quality control procedures.

  9. Cuadernos de Autoformacion en Participacion Social: Metodologia. Volumen 2. Primera Edicion (Self-Instructional Notebooks on Social Participation: Methodology. Volume 2. First Edition).

    Science.gov (United States)

    Instituto Nacional para la Educacion de los Adultos, Mexico City (Mexico).

The series "Self-Instructional Notebooks on Social Participation" is a six-volume series intended as teaching aids for adult educators. The theoretical, methodological, informative and practical elements of this series will assist professionals in their work and help them achieve greater success. The specific purpose of each notebook is…

10. Methodologies for Assessing the Cumulative Environmental Effects of Hydroelectric Development on Fish and Wildlife in the Columbia River Basin, Volume 1, Recommendations, 1987 Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Stull, Elizabeth Ann

    1987-07-01

    This volume is the first of a two-part set addressing methods for assessing the cumulative effects of hydropower development on fish and wildlife in the Columbia River Basin. Species and habitats potentially affected by cumulative impacts are identified for the basin, and the most significant effects of hydropower development are presented. Then, current methods for measuring and assessing single-project effects are reviewed, followed by a review of methodologies with potential for use in assessing the cumulative effects associated with multiple projects. Finally, two new approaches for cumulative effects assessment are discussed in detail. Overall, this report identifies and reviews the concepts, factors, and methods necessary for understanding and conducting a cumulative effects assessment in the Columbia River Basin. Volume 2 will present a detailed procedural handbook for performing a cumulative assessment using the integrated tabular methodology introduced in this volume. 308 refs., 18 figs., 10 tabs.

  11. Final report on BIPM/CIPM key comparison CCM.FF-K4.2.2011: Volume comparison at 100 µL—Calibration of micropipettes (piston pipettes)

    Science.gov (United States)

    Batista, Elsa; Arias, Roberto; Jintao, Wang

    2013-01-01

Five fixed micropipettes of 100 µl were tested by eight different National Metrology Institutes from different Regional Metrology Organizations between July 2011 and June 2012. The micropipettes had a stable volume during the whole comparison, with a maximum standard deviation of 0.06 µl. After a careful analysis of the original results it was decided to make corrections to the standard atmospheric pressure in order to compare results under the same calibration conditions. These corrections led to a decrease of variability within the laboratories. It was also decided to include the stability of the standard and the method variability, determined by the pilot laboratory, in the uncertainty budget of each laboratory. The corrected results (volume and uncertainty) are consistent and overlap with the key comparison reference values for the micropipettes 354828Z, 354853Z and 354864Z. For the other two micropipettes only one result is not consistent with the reference value. Most results also overlap with those of the other laboratories (pairwise degrees of equivalence d_i,j). The calibrated volume should refer to a stated pressure condition and temperature (for example 101.325 kPa and 20 °C), and this information should be stated in the calibration certificate of the micropipette. Also, the standard uncertainty of the method variability and reproducibility values should always be included in the uncertainty budget (around 0.1%) to lead to more realistic uncertainty values. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
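For context, micropipette calibration of this kind is gravimetric: a balance reading for delivered water is converted to a volume referred to 20 °C, with corrections for air buoyancy and the device's thermal expansion (the ISO 8655-6 style of conversion). The sketch below uses typical constants, not values from the comparison; the comparison's correction to standard atmospheric pressure enters through the air and water densities.

```python
# Hedged sketch of the gravimetric balance-reading -> volume conversion used
# in piston-pipette calibration. Densities in g/cm^3 are typical assumptions.
def volume_20C_uL(mass_mg, t_water_C, rho_water=0.9982, rho_air=0.0012,
                  rho_weights=8.0, alpha_device=2.4e-4):
    """Convert a mass of delivered water (mg) to volume (uL) at 20 C.

    rho_water, rho_air, rho_weights: densities of water, air, and balance
    reference weights; alpha_device: cubic thermal expansion coefficient
    of the pipette (1/C). With rho in g/cm^3, mg converts directly to uL.
    """
    buoyancy = 1.0 - rho_air / rho_weights          # balance buoyancy factor
    v = mass_mg * buoyancy / (rho_water - rho_air)  # volume at water temperature
    return v * (1.0 - alpha_device * (t_water_C - 20.0))
```

A 100 mg draw of water thus corresponds to slightly more than 100 µL once buoyancy is accounted for, which is why the reference pressure and temperature must be stated alongside the result.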

  12. Final report on the EURAMET.M.FF-K4.2.2014 volume comparison at 100 μL—calibration of micropipettes

    Science.gov (United States)

    Batista, Elsa; Matus, Michael; Metaxiotou, Zoe; Tudor, Maria; Lenard, Elzbieta; Buker, Oliver; Wennergren, Per; Piluri, Erinda; Miteva, Mariana; Vicarova, Martina; Vospĕlová, Alena; Turnsek, Urska; Micic, Ljiljana; Grue, Lise-Lote; Mihailovic, Mirjana; Sarevska, Anastazija

    2017-01-01

During the EURAMET TC-F meeting of 2014, and following the finalization of the CCM.FF-K4.2.2011 comparison, it was agreed to start a Regional Key Comparison (KC) on volume measurements using two 100 μL micropipettes (piston pipettes), allowing the participating laboratories to assess the agreement of their results and uncertainties. Two 100 μL micropipettes were tested by 15 participants. One participant was not a member or associate member of the BIPM and was removed from this report. The comparison started in July 2015 and ended in March 2016. The Volume and Flow Laboratory of the Portuguese Institute for Quality (IPQ) was the pilot laboratory and performed the initial and final measurements of the micropipettes. The micropipettes showed a stable volume during the whole comparison, which was confirmed by the results from the pilot laboratory. The original results of all participant NMIs were corrected to the standard atmospheric pressure in order to compare results under the same calibration conditions, and the contribution of the 'process-related handling contribution' was added to the uncertainty budget of each participant. In general the declared CMCs are in accordance with the KCDB. For the micropipette 354828Z, two laboratories had inconsistent results. For micropipette 354853Z, three laboratories had inconsistent results. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  13. Field calibration of cup anemometers

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten

    2007-01-01

    A field calibration method and results are described along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10-m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve...... the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited...

  14. Distributed Radio Interferometric Calibration

    CERN Document Server

    Yatawatta, Sarod

    2015-01-01

    Increasing data volumes delivered by a new generation of radio interferometers require computationally efficient and robust calibration algorithms. In this paper, we propose distributed calibration as a way of improving both computational cost and robustness in calibration. We exploit the data parallelism across frequency that is inherent in radio astronomical observations, which are recorded as multiple channels at different frequencies. Moreover, we also exploit the smoothness of the variation of calibration parameters across frequency. Data parallelism enables us to distribute the computing load across a network of compute agents. Smoothness in frequency enables us to reformulate calibration as a consensus optimization problem. With this formulation, we enable the flow of information between compute agents calibrating data at different frequencies, without actually passing the data, thereby improving robustness. We present simulation results to show the feasibility as well as the advantages of distribute...
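    The consensus idea in this abstract can be illustrated with a toy sketch (not the paper's algorithm): each frequency channel estimates a complex gain locally from its own data, and an ADMM consensus step couples the channels without exchanging the data themselves. Here "smoothness across frequency" is reduced to a single shared gain, and all values are illustrative:

```python
import numpy as np

# Toy consensus calibration: each channel f observes y_f = g*s + noise and
# estimates a gain locally; ADMM couples channels via a consensus variable z.
rng = np.random.default_rng(0)
g_true = 1.5 * np.exp(1j * 0.3)
n_chan, n_samp, rho = 8, 200, 1.0
s = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
data = [g_true * s + 0.01 * (rng.standard_normal(n_samp)
                             + 1j * rng.standard_normal(n_samp))
        for _ in range(n_chan)]

g = np.zeros(n_chan, dtype=complex)
u = np.zeros(n_chan, dtype=complex)   # scaled dual variables
z = 0.0 + 0.0j                        # consensus gain
sHs = np.vdot(s, s).real
for _ in range(20):
    for f in range(n_chan):           # local closed-form LS + penalty step
        g[f] = (np.vdot(s, data[f]) + 0.5 * rho * (z - u[f])) / (sHs + 0.5 * rho)
    z = np.mean(g + u)                # consensus (averaging) step
    u += g - z                        # dual update
print(abs(z - g_true))
```

Only the scalars g[f] and u[f] cross channel boundaries, which is the point of the formulation: information flows between agents without the visibilities being passed around.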

  15. Criteria for the development and use of the methodology for environmentally-acceptable fossil energy site evaluation and selection. Volume 2. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Eckstein, L.; Northrop, G.; Scott, R.

    1980-02-01

    This report serves as a companion document to the report, Volume 1: Environmentally-Acceptable Fossil Energy Site Evaluation and Selection: Methodology and Users Guide, in which a methodology was developed which allows the siting of fossil fuel conversion facilities in areas with the least environmental impact. The methodology, known as SELECS (Site Evaluation for Energy Conversion Systems), does not replace a site-specific environmental assessment or an environmental impact statement (EIS), but does enhance the value of an EIS by thinning down the number of options to a manageable level, by doing this in an objective, open and selective manner, and by providing preliminary assessments and procedures which can be utilized during the research and writing of the actual impact statement.

  16. Tourism Methodologies

    DEFF Research Database (Denmark)

    This volume offers methodological discussions within the multidisciplinary field of tourism and shows how tourism researchers develop and apply new tourism methodologies. The book is presented as an anthology, giving voice to many diverse researchers who reflect on tourism methodology in different...... in interview and field work situations, and how do we engage with the performative aspects of tourism as a field of study? The book acknowledges that research is also performance and that it constitutes an aspect of intervention in the situations and contexts it is trying to explore. This is an issue dealt...

  17. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 1. Theory and Methodology Based Upon Bootstrap Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Frey, H. Christopher [North Carolina State University, Raleigh, NC (United States); Rhodes, David S. [North Carolina State University, Raleigh, NC (United States)

    1999-04-30

    This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.
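    The bootstrap simulation methodology named in the title can be sketched in its simplest form, a percentile confidence interval on a mean, which separates what the report calls variability from the uncertainty in a summary statistic. The data below are synthetic, not from the project:

```python
import numpy as np

# Percentile-bootstrap confidence interval for a mean: resample the data
# with replacement, recompute the statistic, take empirical quantiles.
rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=0.5, size=60)   # skewed synthetic sample

boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(5000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean={data.mean():.3f}  95% CI=({lo:.3f}, {hi:.3f})")
```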

  18. Atlas based brain volumetry: How to distinguish regional volume changes due to biological or physiological effects from inherent noise of the methodology.

    Science.gov (United States)

    Opfer, Roland; Suppa, Per; Kepp, Timo; Spies, Lothar; Schippling, Sven; Huppertz, Hans-Jürgen

    2016-05-01

    Fully-automated regional brain volumetry based on structural magnetic resonance imaging (MRI) plays an important role in quantitative neuroimaging. In clinical trials as well as in clinical routine multiple MRIs of individual patients at different time points need to be assessed longitudinally. Measures of inter- and intrascanner variability are crucial to understand the intrinsic variability of the method and to distinguish volume changes due to biological or physiological effects from inherent noise of the methodology. To measure regional brain volumes an atlas based volumetry (ABV) approach was deployed using a highly elastic registration framework and an anatomical atlas in a well-defined template space. We assessed inter- and intrascanner variability of the method in 51 cognitively normal subjects and 27 Alzheimer dementia (AD) patients from the Alzheimer's Disease Neuroimaging Initiative by studying volumetric results of repeated scans for 17 compartments and brain regions. Median percentage volume differences of scan-rescans from the same scanner ranged from 0.24% (whole brain parenchyma in healthy subjects) to 1.73% (occipital lobe white matter in AD), with generally higher differences in AD patients as compared to normal subjects (e.g., 1.01% vs. 0.78% for the hippocampus). Minimum percentage volume differences detectable with an error probability of 5% were in the one-digit percentage range for almost all structures investigated, with most of them being below 5%. Intrascanner variability was independent of magnetic field strength. The median interscanner variability was up to ten times higher than the intrascanner variability.
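    The scan-rescan statistic reported above reduces to a simple calculation; one common definition of the percentage volume difference between repeated measurements, summarized by the median across subjects, is sketched below with synthetic volumes (not ADNI data):

```python
# Symmetric percentage volume difference, 200*|V1 - V2|/(V1 + V2),
# summarized by the median across subjects.
def pct_volume_diff(v1, v2):
    return 200.0 * abs(v1 - v2) / (v1 + v2)

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    return xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])

# hypothetical hippocampus volumes (mL) from repeated scans of 3 subjects
scan   = [3.10, 3.42, 2.98]
rescan = [3.13, 3.40, 2.97]
diffs = [pct_volume_diff(a, b) for a, b in zip(scan, rescan)]
print(round(median(diffs), 3))
```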

  19. Variation in Brain Morphology of Intertidal Gobies: A Comparison of Methodologies Used to Quantitatively Assess Brain Volumes in Fish.

    Science.gov (United States)

    White, Gemma E; Brown, Culum

    2015-01-01

    When correlating brain size and structure with behavioural and environmental characteristics, a range of techniques can be utilised. This study used gobiid fishes to quantitatively compare brain volumes obtained via three different methods; these included the commonly used techniques of histology and approximating brain volume to an idealised ellipsoid, and the recently established technique of X-ray micro-computed tomography (micro-CT). It was found that all three methods differed significantly from one another in their volume estimates for most brain lobes. The ellipsoid method was prone to over- or under-estimation of lobe size, histology caused shrinkage in the telencephalon, and although micro-CT methods generated the most reliable results, they were also the most expensive. Despite these differences, all methods depicted quantitatively similar relationships among the four different species for each brain lobe. Thus, all methods support the same conclusions that fishes inhabiting rock pool and sandy habitats have different patterns of brain organisation. In particular, fishes from spatially complex rock pool habitats were found to have larger telencephalons, while those from simple homogenous sandy shores had a larger optic tectum. Where possible we recommend that micro-CT be used in brain volume analyses, as it allows for measurements without destruction of the brain and fast identification and quantification of individual brain lobes, and minimises many of the biases resulting from the histology and ellipsoid methods. © 2015 S. Karger AG, Basel.

  20. Level 2 processing for the imaging Fourier transform spectrometer GLORIA: derivation and validation of temperature and trace gas volume mixing ratios from calibrated dynamics mode spectra

    Directory of Open Access Journals (Sweden)

    J. Ungermann

    2015-06-01

    Full Text Available The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA is an airborne infrared limb imager combining a two-dimensional infrared detector with a Fourier transform spectrometer. It was operated aboard the new German Gulfstream G550 High Altitude LOng Range (HALO research aircraft during the Transport And Composition in the upper Troposphere/lowermost Stratosphere (TACTS and Earth System Model Validation (ESMVAL campaigns in summer 2012. This paper describes the retrieval of temperature and trace gas (H2O, O3, HNO3 volume mixing ratios from GLORIA dynamics mode spectra that are spectrally sampled every 0.625 cm−1. A total of 26 integrated spectral windows are employed in a joint fit to retrieve seven targets using consecutively a fast and an accurate tabulated radiative transfer model. Typical diagnostic quantities are provided including effects of uncertainties in the calibration and horizontal resolution along the line of sight. Simultaneous in situ observations by the Basic Halo Measurement and Sensor System (BAHAMAS, the Fast In-situ Stratospheric Hygrometer (FISH, an ozone detector named Fairo, and the Atmospheric chemical Ionization Mass Spectrometer (AIMS allow a validation of retrieved values for three flights in the upper troposphere/lowermost stratosphere region spanning polar and sub-tropical latitudes. A high correlation is achieved between the remote sensing and the in situ trace gas data, and discrepancies can to a large extent be attributed to differences in the probed air masses caused by different sampling characteristics of the instruments. This 1-D processing of GLORIA dynamics mode spectra provides the basis for future tomographic inversions from circular and linear flight paths to better understand selected dynamical processes of the upper troposphere and lowermost stratosphere.

  1. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potentialities of the proposed methodology.
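    The matching-and-estimation pipeline this abstract describes rests on epipolar geometry. A minimal normalized 8-point estimate of the fundamental matrix from synthetic correspondences (a textbook step, not the authors' implementation) illustrates how point matches alone constrain the cameras:

```python
import numpy as np

def normalize(pts):
    # translate to centroid, scale mean distance to sqrt(2)
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(p1, p2):
    # x2^T F x1 = 0 for every correspondence; solve by SVD, enforce rank 2
    n1, T1 = normalize(p1)
    n2, T2 = normalize(p2)
    A = np.column_stack([n2[:, 0:1] * n1, n2[:, 1:2] * n1, n1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt2 = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2   # rank-2 constraint
    F = T2.T @ F @ T1                          # undo normalization
    return F / np.linalg.norm(F)

# synthetic two-view geometry (hypothetical intrinsics and pose)
rng = np.random.default_rng(1)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
R = np.array([[np.cos(0.1), 0, np.sin(0.1)], [0, 1, 0],
              [-np.sin(0.1), 0, np.cos(0.1)]])
t = np.array([0.5, 0.05, 0.02])
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(12, 3))
p1 = (K @ X.T).T;  p1 = p1[:, :2] / p1[:, 2:]
Xc2 = (R @ X.T).T + t
p2 = (K @ Xc2.T).T;  p2 = p2[:, :2] / p2[:, 2:]

F = eight_point(p1, p2)
h1 = np.column_stack([p1, np.ones(12)])
h2 = np.column_stack([p2, np.ones(12)])
resid = np.abs(np.sum(h2 @ F * h1, axis=1))   # epipolar residuals
print(resid.max())
```

In a targetless pipeline this estimation is wrapped in RANSAC over automatically matched features before the bundle adjustment refines the full camera model.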

  2. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration unce...

  3. Environmentally-acceptable fossil energy site evaluation and selection: methodology and user's guide. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Northrop, G.M.

    1980-02-01

    This report is designed to facilitate assessments of environmental and socioeconomic impacts of fossil energy conversion facilities which might be implemented at potential sites. The discussion of methodology and the User's Guide contained herein are presented in a format that assumes the reader is not an energy technologist. Indeed, this methodology is meant for application by almost anyone with an interest in a potential fossil energy development - planners, citizen groups, government officials, and members of industry. It may also be of instructional value. The methodology is called: Site Evaluation for Energy Conversion Systems (SELECS) and is organized in three levels of increasing sophistication. Only the least complicated version - the Level 1 SELECS - is presented in this document. As stated above, it has been expressly designed to enable just about anyone to participate in evaluating the potential impacts of a proposed energy conversion facility. To accomplish this objective, the Level 1 calculations have been restricted to ones which can be performed by hand in about one working day. Data collection and report preparation may bring the total effort required for a first or one-time application to two to three weeks. If repeated applications are made in the same general region, the assembling of data for a different site or energy conversion technology will probably take much less time.

  4. Use of calibration methodology of gamma cameras for the workers surveillance using a thyroid simulator; Uso de una metodologia de calibracion de camaras gamma para la vigilancia de trabajadores usando un simulador de tiroides

    Energy Technology Data Exchange (ETDEWEB)

    Alfaro, M.; Molina, G.; Vazquez, R.; Garcia, O., E-mail: mercedes.alfaro@inin.gob.m [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2010-09-15

    In Mexico a significant number of nuclear medicine centers are in operation, so there is a risk of accidents related to the transport and handling of the open sources used in nuclear medicine. The National Institute of Nuclear Research (ININ) aims to establish a simple and feasible methodology for the surveillance of workers in the field of nuclear medicine. This radiological surveillance can also be applied to the public in the event of a radiological accident. To achieve this, it is proposed to use the equipment available in nuclear medicine centers, together with the neck-thyroid phantoms produced by ININ, to calibrate the gamma cameras. Gamma cameras contain components that form spectrometric systems like those employed in the evaluation of internal incorporation by direct measurement; therefore, besides their use for diagnostic imaging, they can be calibrated with anthropomorphic phantoms, and also with point sources, for the quantification of the activity of radionuclides distributed homogeneously in the human body or located in specific organs. Within the project IAEA-ARCAL-RLA/9/049-LXXVIII, 'Harmonization of internal dosimetry procedures', in which nine countries participated (Argentina, Brazil, Colombia, Cuba, Chile, Mexico, Peru, Uruguay and Spain), a gamma-camera calibration protocol for the in vivo determination of radionuclides was developed. The protocol is the basis for establishing an integrated network in Latin America for emergency response, using the nuclear medicine centers of public hospitals in the region. The objective is to achieve adequate radiological protection of workers, essential for the safe and acceptable use of radiation, radioactive materials and nuclear energy. (Author)

  5. Nuclear Dynamics Consequence Analysis (NDCA) for the Disposal of Spent Nuclear Fuel in an Underground Geologic Repository--Volume 2: Methodology and Results

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, L.L.; Wilson, J.R.; Sanchez, L.C.; Aguilar, R.; Trellue, H.R.; Cochrane, K.; Rath, J.S.

    1998-10-01

    The US Department of Energy Office of Environmental Management's (DOE/EM's) National Spent Nuclear Fuel Program (NSNFP), through a collaboration between Sandia National Laboratories (SNL) and Idaho National Engineering and Environmental Laboratory (INEEL), is conducting a systematic Nuclear Dynamics Consequence Analysis (NDCA) of the disposal of SNFs in an underground geologic repository sited in unsaturated tuff. This analysis is intended to provide interim guidance to the DOE for the management of the SNF while they prepare for final compliance evaluation. This report presents results from a Nuclear Dynamics Consequence Analysis (NDCA) that examined the potential consequences and risks of criticality during the long-term disposal of spent nuclear fuel owned by DOE-EM. This analysis investigated the potential of post-closure criticality, the consequences of a criticality excursion, and the probability frequency for post-closure criticality. The results of the NDCA are intended to provide the DOE-EM with a technical basis for measuring risk which can be used for screening arguments to eliminate post-closure criticality FEPs (features, events and processes) from consideration in the compliance assessment because of either low probability or low consequences. This report is composed of an executive summary (Volume 1), the methodology and results of the NDCA (Volume 2), and the applicable appendices (Volume 3).

  6. Methodological Principles of Assessing the Volume of Investment Influx from Non-State Pension Funds into the Economy of Ukraine

    Directory of Open Access Journals (Sweden)

    Dmitro Leonov

    2004-11-01

    Full Text Available This article addresses the processes of forming investment resources from non-state pension funds under current conditions in Ukraine and the laws and regulations that define the principles of the formation of investment institutions. Based on factors that in the nearest future will affect the decision-making process by which different kinds of investors make payments to non-state pension funds, we develop a procedure for assessing the volume of investment influx from non-state pension funds into the economy and propose a procedure for long- and short-term prognosis of the volume of investment influx from non-state pension funds into the Ukrainian economy.

  7. Transport of solid commodities via freight pipeline: cost estimating methodology. Volume III, parts A and B. First year final report

    Energy Technology Data Exchange (ETDEWEB)

    Warner, J.A.; Morlok, E.K.; Gimm, K.K.; Zandi, I.

    1976-07-01

    In order to examine the feasibility of an intercity freight pipeline, it was necessary to develop cost equations for various competing transportation modes. This volume presents cost-estimating equations for rail carload, trailer-on-flatcar, truck, and freight pipeline. Section A presents mathematical equations that approximate the fully allocated and variable costs contained in the ICC cost tables for rail carload, trailer-on-flatcar (TOFC) and truck common-carrier intercity freight movements. These equations were developed to enable the user to approximate the ICC costs quickly and easily. They should find use in initial studies of costs where exact values are not needed, such as in consideration of rate changes, studies of profitability, and in general inter-modal comparisons. Section B discusses the development of a set of engineering cost equations for pneumo-capsule pipelines. The development was based on an analysis of system components and can readily be extended to other types of pipeline. The model was developed for the purpose of a feasibility study. It employs a limited number of generalized parameters and its use is recommended when sufficiently detailed and specific engineering information is lacking. These models were used in the comparison of modes presented in Volume I and hence no conclusions regarding relative costs or service of the modes are presented here. The primary conclusion is that the cost estimates resulting from these models are subject to considerable uncertainty.

  8. Built-in-Self-Test and Digital Self-Calibration for RF SoCs

    CERN Document Server

    Bou-Sleiman, Sleiman

    2012-01-01

    This book will introduce design methodologies, known as Built-in-Self-Test (BiST) and Built-in-Self-Calibration (BiSC), which enhance the robustness of radio frequency (RF) and millimeter wave (mmWave) integrated circuits (ICs). These circuits are used in current and emerging communication, computing, multimedia and biomedical products and microchips. The design methodologies presented will result in enhancing the yield (percentage of working chips in a high volume run) of RF and mmWave ICs, which will enable successful manufacturing of such microchips in high volume.

  9. Evaluation of severe accident risks: Methodology for the containment, source term, consequence, and risk integration analyses; Volume 1, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Gorham, E.D.; Breeding, R.J.; Brown, T.D.; Harper, F.T. [Sandia National Labs., Albuquerque, NM (United States); Helton, J.C. [Arizona State Univ., Tempe, AZ (United States); Murfin, W.B. [Technadyne Engineering Consultants, Inc., Albuquerque, NM (United States); Hora, S.C. [Hawaii Univ., Hilo, HI (United States)

    1993-12-01

    NUREG-1150 examines the risk to the public from five nuclear power plants. The NUREG-1150 plant studies are Level III probabilistic risk assessments (PRAs) and, as such, they consist of four analysis components: accident frequency analysis, accident progression analysis, source term analysis, and consequence analysis. This volume summarizes the methods utilized in performing the last three components and the assembly of these analyses into an overall risk assessment. The NUREG-1150 analysis approach is based on the following ideas: (1) general and relatively fast-running models for the individual analysis components, (2) well-defined interfaces between the individual analysis components, (3) use of Monte Carlo techniques together with an efficient sampling procedure to propagate uncertainties, (4) use of expert panels to develop distributions for important phenomenological issues, and (5) automation of the overall analysis. Many features of the new analysis procedures were adopted to facilitate a comprehensive treatment of uncertainty in the complete risk analysis. Uncertainties in the accident frequency, accident progression and source term analyses were included in the overall uncertainty assessment. The uncertainties in the consequence analysis were not included in this assessment. A large effort was devoted to the development of procedures for obtaining expert opinion and the execution of these procedures to quantify parameters and phenomena for which there is large uncertainty and divergent opinions in the reactor safety community.
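    Monte Carlo uncertainty propagation with an "efficient sampling procedure," as described above, is commonly implemented with Latin hypercube sampling, which stratifies each input dimension so that fewer samples cover the distributions. A minimal sketch follows; the consequence model and input distributions are purely illustrative, not the NUREG-1150 codes:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # one point per equal-probability stratum in each dimension,
    # strata randomly paired across dimensions
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

rng = np.random.default_rng(7)
n = 200
u = latin_hypercube(n, 2, rng)
# map uniforms to uncertain inputs (hypothetical distributions)
source_term = 10.0 ** (u[:, 0] * 2 - 1)   # log-uniform scale factor
dose_factor = 0.5 + u[:, 1]               # uniform on [0.5, 1.5]
consequence = source_term * dose_factor   # toy consequence model
print(np.percentile(consequence, [5, 50, 95]).round(2))
```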

  10. Methodological approaches to planar and volumetric scintigraphic imaging of small volume targets with high spatial resolution and sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Mejia, J.; Galvis-Alonso, O.Y. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Faculdade de Medicina. Dept. de Biologia Molecular], e-mail: mejia_famerp@yahoo.com.br; Braga, J. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Div. de Astrofisica; Correa, R. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Div. de Ciencia Espacial e Atmosferica; Leite, J.P. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Neurologia, Psiquiatria e Psicologia Medica; Simoes, M.V. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Clinica Medica

    2009-08-15

    Single-photon emission computed tomography (SPECT) is a non-invasive imaging technique, which provides information reporting the functional states of tissues. SPECT imaging has been used as a diagnostic tool in several human disorders and can be used in animal models of diseases for physiopathological, genomic and drug discovery studies. However, most of the experimental models used in research involve rodents, which are at least one order of magnitude smaller in linear dimensions than man. Consequently, images of targets obtained with conventional gamma-cameras and collimators have poor spatial resolution and statistical quality. We review the methodological approaches developed in recent years in order to obtain images of small targets with good spatial resolution and sensitivity. Multi-pinhole, coded-mask and slit-based collimators are presented as alternative approaches to improve image quality. In combination with appropriate decoding algorithms, these collimators permit a significant reduction of the time needed to register the projections used to make 3-D representations of the volumetric distribution of the target's radiotracers. Simultaneously, they can be used to minimize the artifacts and blurring that arise when single-pinhole collimators are used. Representative images are presented to illustrate the use of these collimators. We also comment on the use of coded masks to attain tomographic resolution with a single projection, as discussed by some investigators since their introduction to obtain near-field images. We conclude this review by showing that the use of appropriate hardware and software tools adapted to conventional gamma-cameras can be of great help in obtaining relevant functional information in experiments using small animals. (author)

  11. Radio Interferometric Calibration Using a Riemannian Manifold

    CERN Document Server

    Yatawatta, Sarod

    2013-01-01

    In order to cope with the increased data volumes generated by modern radio interferometers such as LOFAR (Low Frequency Array) or SKA (Square Kilometre Array), fast and efficient calibration algorithms are essential. Traditional radio interferometric calibration is performed using nonlinear optimization techniques such as the Levenberg-Marquardt algorithm in Euclidean space. In this paper, we reformulate radio interferometric calibration as a nonlinear optimization problem on a Riemannian manifold. The reformulated calibration problem is solved using the Riemannian trust-region method. We show that calibration on a Riemannian manifold has faster convergence with reduced computational cost compared to conventional calibration in Euclidean space.
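    The Euclidean-space Levenberg-Marquardt baseline that the paper reformulates can be sketched on a toy gain-fitting problem (a single complex gain with amplitude and phase; the real problem involves many station gains, and all values here are illustrative):

```python
import numpy as np

# Levenberg-Marquardt in Euclidean parameter space for a toy calibration:
# fit gain g = a*exp(i*phi) so that the model g*s matches observations y.
rng = np.random.default_rng(3)
s = rng.standard_normal(50) + 1j * rng.standard_normal(50)
y = 2.0 * np.exp(1j * 0.5) * s                       # noiseless observations

def residual(p):                                     # stack real/imag parts
    r = y - p[0] * np.exp(1j * p[1]) * s
    return np.concatenate([r.real, r.imag])

def jacobian(p, eps=1e-7):                           # forward differences
    r0 = residual(p)
    J = np.empty((r0.size, p.size))
    for k in range(p.size):
        dp = p.copy(); dp[k] += eps
        J[:, k] = (residual(dp) - r0) / eps
    return J

p, lam = np.array([1.0, 0.0]), 1e-3
for _ in range(50):
    r, J = residual(p), jacobian(p)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    if np.sum(residual(p - step) ** 2) < np.sum(r ** 2):
        p, lam = p - step, lam * 0.5                 # accept, relax damping
    else:
        lam *= 10.0                                  # reject, damp harder
print(p.round(4))
```

The manifold formulation replaces the flat parameter step above with a trust-region step that respects the geometry of the gain solutions.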

  12. Semi-empirical approach for calibration of CR-39 detectors in diffusion chambers for radon measurements

    Energy Technology Data Exchange (ETDEWEB)

    Pereyra A, P.; Lopez H, M. E. [Pontificia Universidad Catolica del Peru, Av. Universitaria 1801, San Miguel Lima 32 (Peru); Palacios F, D.; Sajo B, L. [Universidad Simon Bolivar, Laboratorio de Fisica Nuclear, Apartado 89000 Caracas (Venezuela, Bolivarian Republic of); Valdivia, P., E-mail: ppereyr@pucp.edu.pe [Universidad Nacional de Ingenieria, Av. Tupac Amaru s/n, Rimac, Lima 25 (Peru)

    2016-10-15

    Simulated and measured calibrations of PADC detectors are given for the cylindrical diffusion chambers employed in environmental radon measurements. The method is based on determining the minimum alpha energy (E{sub min}), the average critical angle (<Θ{sub c}>), and the fraction (f{sub 1}) of {sup 218}Po atoms in the volume of the chamber; results are compared to those of commercially available devices. The radon concentration for exposed detectors is obtained from the induced track densities and the well-established calibration coefficient of the NRPB monitor. The calibration coefficient of a PADC detector in a cylindrical diffusion chamber of any size can then be determined under the same chemical etching conditions and track analysis methodology. Numerical examples are presented, and experimental calibration coefficients are compared with those from a purpose-made simulation code. The results show that the developed method is applicable when uncertainties of 10% are acceptable. (Author)
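    The concentration calculation this abstract relies on, track density converted through a calibration coefficient, can be sketched as follows. The coefficient, background, and exposure values are hypothetical illustrations, not the paper's calibration results:

```python
# Radon concentration from a CR-39/PADC track density, using a calibration
# coefficient k in tracks cm^-2 per (Bq m^-3 h). All numbers are
# hypothetical illustrations.
def radon_concentration(track_density, background, k, exposure_h):
    return (track_density - background) / (k * exposure_h)

c = radon_concentration(track_density=500.0,  # tracks/cm^2 after etching
                        background=20.0,      # tracks/cm^2, unexposed control
                        k=0.002,              # tracks cm^-2 / (Bq m^-3 h)
                        exposure_h=2160.0)    # 90-day exposure
print(round(c, 1), "Bq/m^3")   # -> 111.1 Bq/m^3
```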

  13. Generic methodology for calibrating profiling nacelle lidars

    OpenAIRE

    Borraccino, Antoine; Courtney, Michael; Wagner, Rozenn

    2015-01-01

    Improving power performance assessment by measuring at different heights has been demonstrated using ground-based profiling lidars. More recently, studies of nacelle-mounted lidars have shown promising capabilities to assess power performance. Using nacelle lidars avoids the erection of expensive meteorology masts, especially offshore. A new generation of commercially developed profiling nacelle lidars has sophisticated measurement capabilities. As for any other measuring system, lidars measureme...

  14. SeaWiFS Postlaunch Technical Report Series. Volume 5; The SeaWiFS Solar Radiation-Based Calibration and the Transfer-to-Orbit Experiment

    Science.gov (United States)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Barnes, Robert A.; Eplee, Robert E., Jr.; Biggar, Stuart F.; Thome, Kurtis J.; Zalewski, Edward F.; Slater, Philip N.; Holmes, Alan W.

    1999-01-01

    The solar radiation-based calibration (SRBC) of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) was performed on 1 November 1993. Measurements were made outdoors in the courtyard of the instrument manufacturer. SeaWiFS viewed the solar irradiance reflected from the sensor's diffuser in the same manner as viewed on orbit. The calibration included measurements using a solar radiometer designed to determine the transmittances of principal atmospheric constituents. The primary uncertainties in the outdoor measurements are the transmission of the atmosphere and the reflectance of the diffuser. Their combined uncertainty is about 5 or 6%. The SRBC also requires knowledge of the extraterrestrial solar spectrum. Four solar models are used. When averaged over the responses of the SeaWiFS bands, the irradiance models agree at the 3.6% level, with the greatest difference for SeaWiFS band 8. The calibration coefficients from the SRBC are lower than those from the laboratory calibration of the instrument in 1997. For a representative solar model, the ratios of the SRBC coefficients to laboratory values average 0.962 with a standard deviation of 0.012. The greatest relative difference is 0.946 for band 8. These values are within the estimated uncertainties of the calibration measurements. For the transfer-to-orbit experiment, the measurements in the manufacturer's courtyard are used to predict the digital counts from the instrument on its first day on orbit (August 1, 1997). This experiment requires an estimate of the relative change in the diffuser response for the period between the launch of the instrument and its first solar measurements on orbit (September 9, 1997). In relative terms, the counts from the instrument on its first day on orbit averaged 1.3% higher than predicted, with a standard deviation of 1.2% and a greatest difference of 2.4% for band 7. The estimated uncertainty for the transfer-to-orbit experiment is about 3 or 4%.

  15. Methodology for calibration of detector of Nal (TI) 3 'X 3' for measurements in vivo of patients with hyperthyroidism undergoing radioiodine therapy; Metodologia para calibracao de detector de Nal(TI) 3'X3' para medicoes in vivo em pacientes portadores de hipertireoidismo submetidos a radioiodoterapia

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Carlaine B.; Lacerda, Isabelle V.B.; Oliveira, Mercia L.; Hazin, Clovis A.; Lima, Fabiana F., E-mail: carlaine.carvalho@gmail.com, E-mail: bellelacerda@hotmail.com, E-mail: mercial@cnen.gov.br, E-mail: chazin@cnen.gov.br, E-mail: fflima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-Ne/CNEN-PE), Recife, PE (Brazil)

    2013-11-01

    The aim of this study is to establish a methodology for calibration of the detection system to be used in determining the therapeutic activity of {sup 131}I required to deliver the desired absorbed dose to the thyroid. This step is critical to the development of a protocol for individualized doses. The system consists of a NaI(Tl) 3"x3" detector coupled to the Genie 2000 software. Calibration sources of {sup 60}Co, {sup 137}Cs and {sup 133}Ba were used, and the energy calibration line of the system was obtained with the {sup 60}Co and {sup 137}Cs sources. Subsequently, the detector was calibrated using a thyroid-neck phantom designed and produced by the IRD/CNEN, filled with an evenly distributed standard solution of {sup 133}Ba with a known activity of 18.7 kBq (on 09/24/12). The detector was also calibrated with another thyroid-neck phantom, Model 3108, manufactured by Searle Radigraphics Ind., containing a liquid source of {sup 131}I (7.7 MBq). Five 5-minute measurements were performed at each of three detector-phantom distances, and the corresponding calibration factors were calculated. The calibration factors found for the IRD phantom at distances of 20, 25 and 30 cm were 0.35, 0.24 and 0.18, and for the Searle Radigraphics Ind. phantom 0.15, 0.11 and 0.09, respectively. With the detection system properly calibrated and the calibration factors established, the technique is suitable for the evaluation of the activity of {sup 131}I incorporated by hyperthyroid patients.
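
Once a calibration factor has been established for a given counting geometry, converting a net photopeak count into incorporated activity is a single division. A hedged sketch (the function name and the unit assumption of counts per second per kBq are ours; the 0.35 factor is the 20 cm IRD-phantom value quoted above):

```python
def activity_kbq(net_counts, live_time_s, calib_factor):
    """Incorporated activity = net count rate / calibration factor.
    calib_factor is assumed to be in counts per second per kBq."""
    return (net_counts / live_time_s) / calib_factor

# Hypothetical 5-minute measurement at 20 cm with the IRD-phantom factor.
a = activity_kbq(net_counts=10500, live_time_s=300, calib_factor=0.35)
print(round(a, 1))  # activity in kBq
```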

  16. Methodology for calibration of a NaI(Tl) 3"x3" detector for in vivo measurements of patients with hyperthyroidism undergoing radioiodine therapy; Metodologia para calibracao de detector de NaI(Tl) 3"x3" para medicoes in vivo em pacientes portadores de hipertireoidismo submetidos a radioiodoterapia

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Carlaine B.; Lacerda, Isabelle V.B.; Oliveira, Mercia L.; Hazin, Clovis A., E-mail: carlaine.carvalho@gmail.com, E-mail: bellelacerda@hotmail.com, E-mail: mercial@cnen.gov.br, E-mail: chazin@cnen.gov.br [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Fabiana F., E-mail: fflima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-10-01

    The aim of this study is to establish a methodology for calibration of the detection system to be used in determining the therapeutic activity of {sup 131}I required to deliver the desired absorbed dose to the thyroid gland. This step is critical to the development of a protocol for individualized doses. The system consists of a NaI(Tl) 3"x3" detector coupled to the Genie 2000 software. Calibration sources of {sup 60}Co, {sup 137}Cs and {sup 133}Ba were used, and the energy calibration line of the system was obtained with the {sup 60}Co and {sup 137}Cs sources. Subsequently, the detector was calibrated using a thyroid-neck phantom designed and produced by the IRD/CNEN, filled with an evenly distributed standard solution of {sup 133}Ba with a known activity of 18.7 kBq (on 09/24/12). The detector was also calibrated with another thyroid-neck phantom, Model 3108, manufactured by Searle Radigraphics Ind., containing a liquid source of {sup 131}I (7.7 MBq). Five measurements of 5 minutes each were performed at three detector-phantom distances, and the corresponding calibration factors were calculated. The calibration factors found for the IRD phantom and the Searle Radigraphics Ind. phantom at distances of 20, 25 and 30 cm were 0.35, 0.24 and 0.18, and 0.15, 0.11 and 0.09, respectively. With the detection system properly calibrated and the calibration factors established, the technique is suitable for the evaluation of the activity of {sup 131}I incorporated by hyperthyroid patients. (author)

  17. Influence of Software Tool and Methodological Aspects of Total Metabolic Tumor Volume Calculation on Baseline [18F]FDG PET to Predict Survival in Hodgkin Lymphoma.

    Directory of Open Access Journals (Sweden)

    Salim Kanoun

    Full Text Available To investigate the respective influence of software tool and total metabolic tumor volume (TMTV0) calculation method on prognostic stratification of baseline 2-deoxy-2-[18F]fluoro-D-glucose positron emission tomography ([18F]FDG-PET) in newly diagnosed Hodgkin lymphoma (HL). 59 patients with newly diagnosed HL were retrospectively included. [18F]FDG-PET was performed before any treatment. Four sets of TMTV0 were calculated with the Beth Israel (BI) software: based on an absolute threshold selecting voxels with standardized uptake value (SUV) >2.5 (TMTV02.5), applying a per-lesion threshold of 41% of the SUVmax (TMTV041), and using a per-patient adapted threshold based on the SUVmax of the liver (>125% and >140% of the liver SUVmax; TMTV0125 and TMTV0140). TMTV041 was also determined with a commercial software package for comparison of software tools. ROC curves were used to determine the optimal threshold for each TMTV0 to predict treatment failure. Median follow-up was 39 months. There was an excellent correlation between TMTV041 determined with BI and with the commercial software (r = 0.96, p<0.0001). The median TMTV0 values for TMTV041, TMTV02.5, TMTV0125 and TMTV0140 were respectively 160 (used as reference), 210 ([28;154], p = 0.005), 183 ([-4;114], p = 0.06) and 143 ml ([-58;64], p = 0.9). The respective optimal TMTV0 thresholds and areas under the curve (AUC) for prediction of progression-free survival (PFS) were: 313 ml and 0.70, 432 ml and 0.68, 450 ml and 0.68, 330 ml and 0.68. There was no significant difference between ROC curves. A high TMTV0 value was predictive of poor PFS in all methodologies: 4-year PFS was 83% vs 42% (p = 0.006) for TMTV02.5, 83% vs 41% (p = 0.003) for TMTV041, 85% vs 40% (p<0.001) for TMTV0125 and 83% vs 42% (p = 0.004) for TMTV0140. In newly diagnosed HL, baseline metabolic tumor volume values were significantly influenced by the choice of the method used for determination of volume. However, no significant differences were found in their ability to predict survival.
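
The three thresholding schemes compared in the study can be sketched on a toy voxel list (illustrative SUV values, not patient data; 1 ml voxels assumed):

```python
def tmtv_ml(suvs, voxel_ml, threshold):
    """Total metabolic tumor volume: summed volume of voxels whose
    SUV exceeds the chosen threshold."""
    return sum(voxel_ml for s in suvs if s > threshold)

# Toy lesion of 1 ml voxels (illustrative values only).
suvs = [1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0]
suv_max = max(suvs)
liver_suv_max = 3.0  # hypothetical liver background SUVmax

tmtv_2_5 = tmtv_ml(suvs, 1.0, 2.5)                   # absolute SUV > 2.5
tmtv_41 = tmtv_ml(suvs, 1.0, 0.41 * suv_max)         # 41% of lesion SUVmax
tmtv_125 = tmtv_ml(suvs, 1.0, 1.25 * liver_suv_max)  # >125% of liver SUVmax
print(tmtv_2_5, tmtv_41, tmtv_125)
```

Note how the same voxel set yields three different volumes, which is exactly the method dependence the study quantifies.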

  18. LOFAR facet calibration

    CERN Document Server

    van Weeren, R J; Hardcastle, M J; Shimwell, T W; Rafferty, D A; Sabater, J; Heald, G; Sridhar, S S; Dijkema, T J; Brunetti, G; Brüggen, M; Andrade-Santos, F; Ogrean, G A; Röttgering, H J A; Dawson, W A; Forman, W R; de Gasperin, F; Jones, C; Miley, G K; Rudnick, L; Sarazin, C L; Bonafede, A; Best, P N; Bîrzan, L; Cassano, R; Chyży, K T; Croston, J H; Ensslin, T; Ferrari, C; Hoeft, M; Horellou, C; Jarvis, M J; Kraft, R P; Mevius, M; Intema, H T; Murray, S S; Orrú, E; Pizzo, R; Simionescu, A; Stroe, A; van der Tol, S; White, G J

    2016-01-01

    LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides excellent short-baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and the presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal-noise-limited images for a typical 8 hr observing run at ~5 arcsec resolution.

  19. Geometric calibration of high-resolution remote sensing sensors

    Institute of Scientific and Technical Information of China (English)

    LIANG Hong-you; GU Xing-fa; TAO Yu; QIAO Chao-fei

    2007-01-01

    This paper introduces the applications of high-resolution remote sensing imagery and the necessity of geometric calibration of remote sensing sensors for assuring the geometric accuracy of remote sensing imagery. The paper then analyzes the general methodology of geometric calibration. Taking the DMC sensor geometric calibration as an example, the paper discusses the whole calibration procedure. Finally, it gives some concluding remarks on the geometric calibration of high-resolution remote sensing sensors.

  20. DebrisInterMixing-2.3: a finite volume solver for three-dimensional debris-flow simulations with two calibration parameters - Part 1: Model description

    Science.gov (United States)

    von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian W.; Rickenmann, Dieter; Kirchner, James W.

    2016-08-01

    Here, we present a three-dimensional fluid dynamic solver that simulates debris flows as a mixture of two fluids (a Coulomb viscoplastic model of the gravel mixed with a Herschel-Bulkley representation of the fine-material suspension) in combination with an additional unmixed phase representing the air and the free surface. We link all rheological parameters to the material composition, i.e., to the water content, the clay content and mineral composition, the sand and gravel content, and the gravel's friction angle; the user must specify only two free model parameters. The volume-of-fluid (VoF) approach is used to combine the mixed phase and the air phase into a single cell-averaged Navier-Stokes equation for incompressible flow, based on code adapted from standard solvers of the open-source CFD software OpenFOAM. This effectively single-phase mixture VoF method saves computational costs compared to the more sophisticated drag-force-based multiphase models. Thus, complex three-dimensional flow structures can be simulated while accounting for the pressure- and shear-rate-dependent rheology.
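
The Herschel-Bulkley rheology used for the fine-material suspension is easy to sketch. The parameter values below are invented for illustration, and the viscosity cap is a common regularization of the yield-stress singularity, not necessarily the solver's actual treatment:

```python
def herschel_bulkley_stress(gamma_dot, tau0, k, n):
    """Shear stress tau = tau0 + k*gamma_dot**n (valid for gamma_dot > 0)."""
    return tau0 + k * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau0, k, n, mu_max=1.0e4):
    """Effective viscosity tau/gamma_dot, capped at mu_max to regularize
    the yield-stress singularity at vanishing shear rate."""
    if gamma_dot <= 0.0:
        return mu_max
    return min(herschel_bulkley_stress(gamma_dot, tau0, k, n) / gamma_dot,
               mu_max)

# Invented parameters for a fine-sediment suspension.
stress = herschel_bulkley_stress(10.0, tau0=50.0, k=2.0, n=0.5)
visc = apparent_viscosity(10.0, tau0=50.0, k=2.0, n=0.5)
print(round(stress, 3), round(visc, 3))
```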

  1. Revision, calibration, and application of the volume method to evaluate the geothermal potential of some recent volcanic areas of Latium, Italy

    Energy Technology Data Exchange (ETDEWEB)

    Doveri, Marco; Lelli, Matteo; Raco, Brunella [Institute of Geosciences and Georesources, CNR, Area della Ricerca, Via G. Moruzzi 1, I-56124 Pisa (Italy); Marini, Luigi [Laboratory of Geochemistry, Dip.Te.Ris., University of Genova, Corso Europa 26, I-16132 Genova (Italy); Institute of Geosciences and Georesources, CNR, Area della Ricerca, Via G. Moruzzi 1, I-56124 Pisa (Italy)

    2010-09-15

    The volume method is used to evaluate the productive potential of unexploited and minimally exploited geothermal fields. The distribution of P{sub CO2} in shallow groundwaters delimits the geothermal fields. This approach is substantiated by the good correspondence between zones of high CO{sub 2} flux, and the areal extension of explored geothermal systems of high enthalpy (Monte Amiata and Latera), medium enthalpy (Torre Alfina) and low enthalpy (Viterbo). Based on the data available for geothermal fields either under exploitation or investigated by long-term production tests, a specific productivity of 40 t h{sup -1} km{sup -3} is assumed. The total potential productivity for the recent volcanic areas of Latium is about 28 x 10{sup 3} t h{sup -1}, with 75% from low-enthalpy geothermal fields, 17% from medium-enthalpy systems, and 8% from high-enthalpy reservoirs. The total extractable thermal power is estimated to be 2220-2920 MW, 49-53% from low-enthalpy geothermal fields, 28-32% from medium-enthalpy systems, and 19-20% from high-enthalpy reservoirs. (author)
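
The volume-method arithmetic quoted above (specific productivity times reservoir volume) reduces to a one-line calculation. The 700 km{sup 3} volume below is a hypothetical figure chosen only to reproduce the order of magnitude of the quoted total:

```python
SPECIFIC_PRODUCTIVITY_T_H_KM3 = 40.0  # t/h per km^3, as assumed in the study

def potential_productivity(volume_km3):
    """Potential productivity (t/h) of a reservoir of given volume."""
    return SPECIFIC_PRODUCTIVITY_T_H_KM3 * volume_km3

total = potential_productivity(700.0)  # hypothetical aggregate volume, km^3
print(total)  # t/h
```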

  2. N+3 Aircraft Concept Designs and Trade Studies. Volume 2; Appendices-Design Methodologies for Aerodynamics, Structures, Weight, and Thermodynamic Cycles

    Science.gov (United States)

    Greitzer, E. M.; Bonnefoy, P. A.; delaRosaBlanco, E.; Dorbian, C. S.; Drela, M.; Hall, D. K.; Hansman, R. J.; Hileman, J. I.; Liebeck, R. H.; Lovegren, J.; Mody, P.; Pertuze, J. A.; Sato, S.; Spakovszky, Z. S.; Tan, C. S.; Hollman, J. S.; Duda, J. E.; Fitzgerald, N.; Houghton, J.; Kerrebrock, J. L.; Kiwada, G. F.; Kordonowy, D.; Parrish, J. C.; Tylko, J.; Wen, E. A.

    2010-01-01

    Appendices A to F present the theory behind the TASOPT methodology and code. Appendix A describes the bulk of the formulation, while Appendices B to F develop the major sub-models for the engine, fuselage drag, BLI accounting, etc.

  3. Development of an Operational Calibration Methodology for the Landsat Thermal Data Archive and Initial Testing of the Atmospheric Compensation Component of a Land Surface Temperature (LST) Product from the Archive

    Directory of Open Access Journals (Sweden)

    Monica Cook

    2014-11-01

    Full Text Available The Landsat program has been producing an archive of thermal imagery that spans the globe and covers 30 years of the thermal history of the planet at human scales (60–120 m). Most of that archive’s absolute radiometric calibration has been fixed through vicarious calibration techniques. These calibration ties to trusted values have often taken a year or more to gather sufficient data and, in some cases, it has been over a decade before calibration certainty has been established. With temperature being such a critical factor for all living systems and the ongoing concern over the impacts of climate change, NASA and the United States Geological Survey (USGS) are leading efforts to provide timely and accurate temperature data from the Landsat thermal data archive. This paper discusses two closely related advances that are critical steps toward providing timely and reliable temperature image maps from Landsat. The first advance involves the development and testing of an autonomous procedure for gathering and performing initial screening of large amounts of vicarious calibration data. The second advance discussed in this paper is the per-pixel atmospheric compensation of the data to permit calculation of the emitted surface radiance (using ancillary sources of emissivity data) and the corresponding land surface temperature (LST).
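
Single-channel atmospheric compensation of a thermal band commonly inverts the radiative transfer equation L_sensor = tau*(eps*B(T) + (1-eps)*L_down) + L_up for the surface temperature. A self-contained sketch of that generic scheme (the numerical values are invented, and this is not necessarily the exact product algorithm):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, t):
    """Spectral radiance B(lam, T) in W m^-2 sr^-1 m^-1."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * t))

def inverse_planck(lam, radiance):
    """Brightness temperature (K) for a given spectral radiance."""
    return (H * C / (lam * KB)) / math.log1p(2.0 * H * C**2 / (lam**5 * radiance))

def land_surface_temperature(l_sensor, tau, l_up, l_down, emissivity, lam):
    """Solve L_sensor = tau*(eps*B(T) + (1-eps)*L_down) + L_up for T."""
    surface_b = ((l_sensor - l_up) / tau
                 - (1.0 - emissivity) * l_down) / emissivity
    return inverse_planck(lam, surface_b)

# Round trip with invented atmospheric terms at 11 um.
lam = 11e-6
t_true = 300.0
tau, l_up, l_down, eps = 0.8, 1.5e6, 3.0e6, 0.97
l_sensor = tau * (eps * planck(lam, t_true) + (1 - eps) * l_down) + l_up
print(round(land_surface_temperature(l_sensor, tau, l_up, l_down, eps, lam), 2))
```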

  4. Calibration of the SNO+ experiment

    Science.gov (United States)

    Maneira, J.; Falk, E.; Leming, E.; Peeters, S.; SNO+ collaboration.

    2017-09-01

    The main goal of the SNO+ experiment is to perform a low-background and high-isotope-mass search for neutrinoless double-beta decay, employing 780 tonnes of liquid scintillator loaded with tellurium, in its initial phase at 0.5% by mass for a total mass of 1330 kg of 130Te. The SNO+ physics program also includes measurements of geo- and reactor neutrinos, and supernova and solar neutrinos. Calibrations are an essential component of the SNO+ data-taking and analysis plan. The achievement of the physics goals requires both extensive and regular calibration. This serves several goals: the measurement of several detector parameters, the validation of the simulation model and the constraint of systematic uncertainties on the reconstruction and particle identification algorithms. SNO+ faces stringent radiopurity requirements which, in turn, largely determine the materials selection, sealing and overall design of both the sources and deployment systems. In fact, to avoid frequent access to the inner volume of the detector, several permanent optical calibration systems have been developed and installed outside that volume. At the same time, the calibration source internal deployment system was re-designed as a fully sealed system, with more stringent material selection, but following the same working principle as the system used in SNO. This poster describes the overall SNO+ calibration strategy, discusses the several new and innovative sources, both optical and radioactive, and covers the developments on source deployment systems.

  5. Methodologies for localizing loco-regional hypopharyngeal carcinoma recurrences in relation to FDG-PET positive and clinical radiation therapy target volumes

    DEFF Research Database (Denmark)

    Due, Anne Kirkebjerg; Korreman, Stine; Bentzen, Søren M;

    2010-01-01

    Focal methods to determine the source of recurrence are presented, tested for reproducibility and compared to volumetric approaches with respect to the number of recurrences ascribed to the FDG-PET positive and high dose volumes....

  6. Methodologies for localizing loco-regional hypopharyngeal carcinoma recurrences in relation to FDG-PET positive and clinical radiation therapy target volumes

    DEFF Research Database (Denmark)

    Due, Anne Kirkebjerg; Korreman, Stine Sofia; Tomé, Wolfgang;

    2010-01-01

    Focal methods to determine the source of recurrence are presented, tested for reproducibility and compared to volumetric approaches with respect to the number of recurrences ascribed to the FDG-PET positive and high dose volumes.

  7. Architectural and Behavioral Systems Design Methodology and Analysis for Optimal Habitation in a Volume-Limited Spacecraft for Long Duration Flights

    Science.gov (United States)

    Kennedy, Kriss J.; Lewis, Ruthan; Toups, Larry; Howard, Robert; Whitmire, Alexandra; Smitherman, David; Howe, Scott

    2016-01-01

    As our human spaceflight missions change as we reach towards Mars, the risk of an adverse behavioral outcome increases, and the requirements for crew health, safety, and performance, and for the internal architecture, will need to change to accommodate unprecedented mission demands. Evidence shows that architectural arrangement and habitability elements impact behavior. Net habitable volume is the volume available to the crew after accounting for elements that decrease the functional volume of the spacecraft. Determination of the minimum acceptable net habitable volume and associated architectural design elements, as mission duration and environment vary, is key to enabling, maintaining, and/or enhancing human performance and psychological and behavioral health. Current NASA efforts to derive minimum acceptable net habitable volumes and study the interaction of covariates and stressors, such as sensory stimulation, communication, autonomy, and privacy, and their application to internal architecture design layouts, attributes, and use of advanced accommodations will be presented. Furthermore, implications of crew adaptation to available volume as they transfer from Earth accommodations, to deep space travel, to planetary surface habitats, and return, will be discussed.

  8. NASA AURA HIRDLS instrument calibration facility

    Science.gov (United States)

    Hepplewhite, Christopher L.; Barnett, John J.; Watkins, Robert E. J.; Row, Frederick; Wolfenden, Roger; Djotni, Karim; Oduleye, Olusoji O.; Whitney, John G.; Walton, Trevor W.; Arter, Philip I.

    2003-11-01

    A state-of-the-art calibration facility was designed and built for the calibration of the HIRDLS instrument at the University of Oxford, England. This paper describes the main features of the facility, the driving requirements and a summary of the performance achieved during the calibration. Specific technical requirements and other constraints determined the design solutions that were adopted and the implementation methodology. The main features of the facility include a high-performance clean room and a vacuum chamber with thermal environmental control, as well as the calibration sources. Particular attention was paid to maintenance of cleanliness (molecular and particulate), ESD control, mechanical isolation and high reliability. Schedule constraints required that all the calibration sources be integrated into the facility so that the number of re-pressurization and warm-up cycles was minimized and all the equipment could be operated at the same time.

  9. Normative price for a manufactured product: the SAMICS methodology. Volume II. Analysis. JPL publication 78-98. [Solar Array Manufacturing Industry Costing Standards

    Energy Technology Data Exchange (ETDEWEB)

    Chamberlain, R.G.

    1979-01-15

    The Solar Array Manufacturing Industry Costing Standards (SAMICS) provide standard formats, data, assumptions, and procedures for determining the price a hypothetical solar array manufacturer would have to be able to obtain in the market to realize a specified after-tax rate of return on equity for a specified level of production. This document presents the methodology and its theoretical background. It is contended that the model is sufficiently general to be used in any production-line manufacturing environment. Implementation of this methodology by the Solar Array Manufacturing Industry Simulation computer program (SAMIS III, Release 1) is discussed.

  10. A normative price for energy from an electricity generation system: An Owner-dependent Methodology for Energy Generation (system) Assessment (OMEGA). Volume 1: Summary

    Science.gov (United States)

    Chamberlain, R. G.; Mcmaster, K. M.

    1981-01-01

    The utility-owned solar electric system methodology is generalized and updated. The net present value of the system is determined by consideration of all financial benefits and costs (including a specified return on investment). Life cycle costs, life cycle revenues, and residual system values are obtained. Break-even values of system parameters are estimated by setting the net present value to zero. While the model was designed for photovoltaic generators with a possible thermal energy byproduct, its applicability is not limited to such systems. The resulting owner-dependent methodology for energy generation system assessment consists of a few equations that can be evaluated without the aid of a high-speed computer.
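
The break-even idea (set net present value to zero and solve for a parameter) reduces to a few lines. The sketch below is a deliberately simplified level-revenue case with invented numbers, ignoring escalation, taxes, and residual value that the full methodology accounts for:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] occurs at end of year t (t=0 is now)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def breakeven_price(rate, capital_cost, annual_energy_kwh, annual_om, years):
    """Energy price (per kWh) making NPV = 0, assuming level revenues."""
    pvf = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
    # 0 = -capital + (price * energy - O&M) * pvf, solved for price
    return (capital_cost / pvf + annual_om) / annual_energy_kwh

# Invented system: $1M capital, 500 MWh/yr, $20k/yr O&M, 10% return, 20 years.
price = breakeven_price(0.10, 1_000_000.0, 500_000.0, 20_000.0, 20)
print(round(price, 4))  # $/kWh
```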

  11. SURF Model Calibration Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is that there is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data, based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
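
A numerical Pop plot is conventionally summarized by a straight-line fit in log-log coordinates (run distance to detonation versus shock pressure). A minimal fitting sketch on synthetic data (the coefficients below are invented, not SURF calibration values):

```python
import math

def fit_pop_plot(pressures, run_distances):
    """Least-squares fit of log10(run distance) vs log10(pressure),
    the usual Pop-plot parameterization: log10(x*) = a + b*log10(P)."""
    xs = [math.log10(p) for p in pressures]
    ys = [math.log10(d) for d in run_distances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    b = num / den
    return my - b * mx, b

# Synthetic data generated from log10(x*) = 2.0 - 1.5*log10(P).
pressures = [2.0, 4.0, 8.0, 16.0]
run_dist = [10 ** (2.0 - 1.5 * math.log10(p)) for p in pressures]
a, b = fit_pop_plot(pressures, run_dist)
print(round(a, 3), round(b, 3))
```

In a calibration loop, the 1-D simulation results would replace the synthetic `run_dist` values and the fitted (a, b) would be compared against the experimental Pop plot.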

  12. Traceable Pyrgeometer Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina

    2016-05-02

    This poster presents the development, implementation, and operation of the Broadband Outdoor Radiometer Calibrations (BORCAL) Longwave (LW) system at the Southern Great Plains Radiometric Calibration Facility for the calibration of pyrgeometers that provide traceability to the World Infrared Standard Group.

  14. Calibration of sound calibrators: an overview

    Science.gov (United States)

    Milhomem, T. A. B.; Soares, Z. M. D.

    2016-07-01

    This paper presents an overview of the calibration of sound calibrators. Initially, traditional calibration methods are presented. Following that, the international standard IEC 60942 is discussed, emphasizing parameters, target measurement uncertainty and criteria for conformance to the requirements of the standard. Finally, comparisons among Regional Metrology Organizations are summarized.

  15. Intercomparison and calibration of dose calibrators used in nuclear medicine facilities

    CERN Document Server

    Costa, A M D

    2003-01-01

    The aim of this work was to establish a working standard for the intercomparison and calibration of dose calibrators used in most nuclear medicine facilities for the determination of the activity of radionuclides administered to patients in specific examinations or therapeutic procedures. A commercial dose calibrator, a set of standard radioactive sources, and syringes, vials and ampoules with radionuclide solutions used in nuclear medicine were utilized in this work. The commercial dose calibrator was calibrated for radionuclide solutions used in nuclear medicine. Simple instrument tests, such as linearity of response and the variation of response with source volume at a constant activity concentration, were performed. This instrument may be used as a reference system for intercomparison and calibration of other activity meters, as a method of quality control of dose calibrators utilized in nuclear medicine facilities.
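
One common way to run such a linearity test is to follow a decaying source and compare each reading with the decay-predicted activity; the sketch below uses that approach with invented readings (not necessarily the authors' exact procedure, and the 5% tolerance is an assumption):

```python
import math

TC99M_HALF_LIFE_H = 6.0067  # hours

def expected_activity(a0, hours, half_life=TC99M_HALF_LIFE_H):
    """Decay-predicted activity at elapsed time `hours`."""
    return a0 * math.exp(-math.log(2.0) * hours / half_life)

def linearity_deviations(a0, times_h, readings):
    """Fractional deviation of each reading from the decay prediction."""
    return [r / expected_activity(a0, t) - 1.0
            for t, r in zip(times_h, readings)]

# Hypothetical readings (MBq) of a decaying 99mTc source.
times = [0.0, 6.0067, 12.0134]
readings = [1000.0, 501.0, 249.0]
devs = linearity_deviations(1000.0, times, readings)
ok = all(abs(d) <= 0.05 for d in devs)  # +/-5% tolerance (assumed)
print(ok)
```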

  16. Evaluation of automated decision making methodologies and development of an integrated robotic system simulation. Volume 2, Part 2. Appendixes b, c, d and e

    Energy Technology Data Exchange (ETDEWEB)

    Lowrie, J.W.; Fermelia, A.J.; Haley, D.C.; Gremban, K.D.; Vanbaalen, J.

    1982-09-01

    The derivation of the equations is presented, the rate control algorithm described, and the simulation methodologies summarized. A set of dynamics equations that can be used recursively to calculate the forces and torques acting at the joints of an n-link manipulator, given the manipulator joint rates, is derived. The equations are valid for any n-link manipulator system with any kind of joints connected in any sequence. The equations of motion for the class of manipulators consisting of n rigid links interconnected by rotary joints are derived, and a technique is outlined for reducing the system of equations to eliminate constraint torques. The linearized dynamics equations for an n-link manipulator system are derived, and the general n-link linearized equations are then applied to a two-link configuration. The coordinated rate control algorithm used to compute individual joint rates when given end-effector rates is described. A short discussion of simulation methodologies is presented.
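
As a toy illustration of the recursive style of such dynamics computations (not the report's full Newton-Euler derivation), the static gravity torques of a planar n-link arm can be accumulated in a single backward pass from the last link inward:

```python
def gravity_torques(joint_x, link_com_x, link_mass, g=9.81):
    """Static torque magnitude about each joint of a horizontal planar
    arm under gravity: torque_i = sum over links j >= i of
    m_j * g * (x_com_j - x_joint_i), computed recursively outboard-in,
    mirroring the backward pass of a recursive Newton-Euler scheme."""
    n = len(joint_x)
    torques = [0.0] * n
    total_mass = 0.0
    moment = 0.0  # running sum of m_j * x_com_j for links j >= i
    for i in range(n - 1, -1, -1):
        total_mass += link_mass[i]
        moment += link_mass[i] * link_com_x[i]
        torques[i] = g * (moment - total_mass * joint_x[i])
    return torques

# Two-link horizontal arm: joints at x = 0 and 1 m, COMs at 0.5 and 1.5 m.
tau = gravity_torques(joint_x=[0.0, 1.0], link_com_x=[0.5, 1.5],
                      link_mass=[2.0, 1.0])
print(tau)  # N m about each joint
```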

  17. Study for the optimization of a transport aircraft wing for maximum fuel efficiency. Volume 1: Methodology, criteria, aeroelastic model definition and results

    Science.gov (United States)

    Radovcich, N. A.; Dreim, D.; Okeefe, D. A.; Linner, L.; Pathak, S. K.; Reaser, J. S.; Richardson, D.; Sweers, J.; Conner, F.

    1985-01-01

    Work performed in the design of a transport aircraft wing for maximum fuel efficiency is documented with emphasis on design criteria, design methodology, and three design configurations. The design database includes complete finite element model description, sizing data, geometry data, loads data, and inertial data. A design process which satisfies the economics and practical aspects of a real design is illustrated. The cooperative study relationship between the contractor and NASA during the course of the contract is also discussed.

  18. Estimation of the Joint Patient Condition Occurrence Frequencies from Operation Iraqi Freedom and Operation Enduring Freedom. Volume I: Development of Methodology

    Science.gov (United States)

    2011-03-28

    [Excerpt of a flattened table mapping ICD-9 codes to patient condition categories; only partially recoverable, e.g.: Closed Fracture of Other Facial Bones (FRAC CL FACE); 805, Closed Fracture of Cervical Vertebra without Spinal Cord Injury (FRAC CL CERVICAL VCI); 806.3, Open Fracture of Dorsal Vertebra with Spinal Cord Injury (FRAC OP THORACIC SCI); 840.4, Rotator Cuff Sprain (SPRAINS & STRAINS, SHOULDER & UPPER ARM).]

  19. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler.

    Science.gov (United States)

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-10-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid-class pipetting parameters for each solution splits the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The running of the appropriate pipetting scripts, data acquisition, and reporting, up to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system were simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications.
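
The gravimetric principle behind such a calibration is to weigh each dispense and convert mass to volume through the liquid density. A simplified sketch (water density at ~20 °C assumed; no evaporation or air-buoyancy corrections, which a production procedure would include):

```python
import statistics

WATER_DENSITY_MG_PER_UL = 0.998  # ~20 degC, assumed

def accuracy_and_cv(target_ul, masses_mg, density=WATER_DENSITY_MG_PER_UL):
    """Relative inaccuracy (%) and coefficient of variation (%) of
    dispensed volumes inferred from balance readings."""
    vols = [m / density for m in masses_mg]
    mean_v = statistics.mean(vols)
    accuracy = 100.0 * (mean_v - target_ul) / target_ul
    cv = 100.0 * statistics.stdev(vols) / mean_v
    return accuracy, cv

# Three hypothetical weighings (mg) of a nominal 100 uL water dispense.
acc, cv = accuracy_and_cv(100.0, [99.8, 100.0, 99.6])
print(round(acc, 2), round(cv, 2))
```

The accuracy term would drive the adjustment of the liquid-class parameters, while the CV characterizes precision.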

  20. A rational methodology for the study of foundations for marine structures; Una metdologia racional para el estudio de cimentaciones de estructuras marinas

    Energy Technology Data Exchange (ETDEWEB)

    Mira Mc Willams, P.; Fernandez-Merodo, J. A.; Pastor Perez, M.; Monte Saez, J. L.; Martinez Santamaria, J. M.; Cuellar Mirasol, V.; Martin Baanante, M. E.; Rodriguez Sanchez-Arevalo, I; Lopez Maldonando, J. D.; Tomas Sampedro, A.

    2011-07-01

    A methodology for the study of marine foundations is presented. The response in displacements, stresses and pore water pressures is obtained from a coupled finite element formulation. Loads exerted on the foundation by wave action are obtained from a volume-of-fluid type fluid-structure interaction numerical model. Additionally, the methodology includes a Generalized Plasticity based constitutive model for granular materials capable of representing liquefaction phenomena in sands subjected to cyclic loading, such as those frequently appearing in the problems studied. Calibration of this model requires a series of laboratory tests detailed herein. The methodology is applied to the study of the response of a caisson breakwater foundation. (Author) 10 refs.

  1. Construction of a Calibrated Probabilistic Classification Catalog: Application to 50k Variable Sources in the All-Sky Automated Survey

    CERN Document Server

    Richards, Joseph W; Miller, Adam A; Bloom, Joshua S; Butler, Nathaniel R; Brink, Henrik; Crellin-Quick, Arien

    2012-01-01

    With growing data volumes from synoptic surveys, astronomers must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities, and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the pre...
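As a minimal illustration of what "calibrated class probabilities" means, the sketch below recalibrates raw classifier scores by histogram binning on a held-out set: a score is replaced by the empirical class frequency observed in its score bin. This is one generic calibration technique, not the paper's actual procedure, and all names and numbers are invented.

```python
# Histogram-binning probability calibration: map raw scores to empirical
# positive rates per bin, estimated on held-out (score, label) pairs.

def fit_binning_calibrator(scores, labels, n_bins=10):
    """Return per-bin empirical positive rates from held-out data."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)
        sums[b] += y
        counts[b] += 1
    # Fall back to the bin midpoint where no calibration data landed.
    return [sums[b] / counts[b] if counts[b] else (b + 0.5) / n_bins
            for b in range(n_bins)]

def calibrate(score, bin_rates):
    """Replace a raw score with its bin's empirical positive rate."""
    n_bins = len(bin_rates)
    return bin_rates[min(int(score * n_bins), n_bins - 1)]

# Overconfident toy scores: items scored ~0.9 are actually positive 60% of the time.
scores = [0.92, 0.95, 0.91, 0.93, 0.94]
labels = [1, 0, 1, 0, 1]
rates = fit_binning_calibrator(scores, labels, n_bins=10)
p = calibrate(0.93, rates)  # 0.6, the empirical rate in the top bin
```

The calibrated output 0.6 is a better long-run frequency estimate than the raw 0.93, which is the point of probability calibration for follow-up planning and population studies.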

  2. Study of calibration equations of {sup 137}Cs methodology for soil erosion determination; Estudo de equacoes de calibracao para metodologia do {sup 137}Cs de determinacao da erosao de solos

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Elias Antunes dos

    2001-02-01

    Using the {sup 137}Cs method and gamma-ray spectrometry, soil samples from two erosion plots were studied at Londrina city. The soil class studied was a dystrophic dark red soil (LRd), with erosion indexes measured by the Agronomic Institute of Parana State (IAPAR) using a conventional method since 1976. Through the percentage reduction of {sup 137}Cs relative to the reference site, the soil losses were calculated using the proportional, mass balance and profile distribution models. By correlating the {sup 137}Cs concentrations with the erosion measured by IAPAR, two calibration equations were obtained, applied to the data set measured in the basin of the Unda river, and compared to the models in the literature. A natural forest located close to the plots was chosen as the reference region. The average inventory of {sup 137}Cs was 555{+-}16 Bq.m{sup -2}. The inventories of the erosion plots varied from 112 to 136 Bq.m{sup -2} for samples collected down to 30 cm depth. The erosion rates estimated by the models varied from 64 to 85 ton.ha{sup -1}.yr{sup -1} for the proportional and profile distribution models, respectively, and 137 to 165 ton.ha{sup -1} for the mass balance model, while the measured erosion obtained by IAPAR was 86 ton.ha{sup -1}.yr{sup -1}. Of the two calibration equations obtained, the one that takes into account the {sup 137}Cs distribution within the soil profile showed the best consistency with the erosion rates for the basin of the Unda river (same soil class), in the range from 4 to 48 ton.ha{sup -1}.yr{sup -1}, while the proportional and profile distribution models yielded rates from 7 to 45 ton.ha{sup -1}.yr{sup -1} and 6 to 69 ton.ha{sup -1}.yr{sup -1}, respectively. (author)
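For concreteness, the proportional model mentioned above is often written (e.g. in the form given by Walling and He) as Y = 10·d·B·X/(100·T). The sketch below uses that form with illustrative numbers, not the thesis data.

```python
# Proportional model for converting {sup 137}Cs depletion into a soil-loss rate,
# in the commonly cited form Y = 10 * d * B * X / (100 * T).

def proportional_model(d_m, bulk_density_kg_m3, x_percent_loss, t_years):
    """Soil loss Y in t ha^-1 yr^-1.

    d_m: depth of the plough (tillage) layer in metres
    bulk_density_kg_m3: soil bulk density
    x_percent_loss: percentage reduction of 137Cs versus the reference site
    t_years: years elapsed since the onset of 137Cs fallout accumulation
    """
    return 10.0 * d_m * bulk_density_kg_m3 * x_percent_loss / (100.0 * t_years)

# Illustrative case: 20 cm plough layer, 1300 kg/m3, 75% 137Cs loss over 38 years.
y = proportional_model(0.20, 1300.0, 75.0, 38.0)  # about 51 t/ha/yr
```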

  3. Problem definition study on techniques and methodologies for evaluating the chemical and toxicological properties of combustion products of gun systems, Volume 1: Final report

    Energy Technology Data Exchange (ETDEWEB)

    Ross, R.H.; Pal, B.C.; Lock, S.; Ramsey, R.S.; Jenkins, R.A.; Griest, W.H.; Guerin, M.R.

    1988-03-01

    Gun exhaust is a complex mixture of both organic and inorganic chemical species. Similar to other mixtures, it has both vapor and particulate phases. This report contains information concerning the chemical characterization of gun exhaust and offers recommendations for its chemical and toxicological investigation. Propellant compositions used in munitions are all nitrocellulose based but are categorized by the inclusion of the other major ingredients (i.e., single-base propellants contain nitrocellulose only, double-base propellants contain nitrocellulose and nitroglycerin, and triple-base propellants contain nitrocellulose, nitroguanidine, and nitroglycerin). The principal decomposition products present in gun exhaust are carbon monoxide, hydrogen, carbon dioxide, water, and nitrogen (approximately 99% by volume). A number of minor products have been reported to be present in gun exhaust, including nitrogen oxides, ammonia, inorganic particulates (e.g., lead and copper), and polycyclic aromatic hydrocarbons (of unknown origin). 233 refs., 19 figs., 19 tabs.

  4. Laser-induced incandescence calibration via gravimetric sampling

    Science.gov (United States)

    Choi, M. Y.; Vander Wal, R. L.; Zhou, Z.

    1996-01-01

    Absolute calibration of laser-induced incandescence (LII) is demonstrated via comparison of LII signal intensities with gravimetrically determined soot volume fractions. This calibration technique does not rely upon calculated or measured optical characteristics of soot. The variation of the LII signal with gravimetrically measured soot volume fractions ranging from 0.078 to 1.1 ppm established the linearity of the calibration. With the high spatial and temporal resolution capabilities of LII, the spatial and temporal fluctuations of the soot field within a gravimetric chimney were characterized. Radial uniformity of the soot volume fraction, f(sub v), was demonstrated with sufficient averaging of the single-laser-shot LII images of the soot field, thus confirming the validity of the calibration method for imaging applications. As an illustration, instantaneous soot volume fractions within a Re = 5000 ethylene/air diffusion flame measured via planar LII were established quantitatively with this calibration.
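Since the calibration is linear, it amounts to fitting a single proportionality constant between the LII signal and the gravimetric soot volume fraction. A sketch with invented signal values (only the f_v range matches the abstract):

```python
# Fit the proportionality constant k in fv = k * signal through the origin,
# then use it to convert a new LII reading into a soot volume fraction.

def fit_through_origin(signals, fv_ppm):
    """Least-squares slope k for fv = k * signal (no intercept)."""
    num = sum(s * f for s, f in zip(signals, fv_ppm))
    den = sum(s * s for s in signals)
    return num / den

signals = [120.0, 410.0, 800.0, 1650.0]  # illustrative LII counts
fv = [0.078, 0.27, 0.53, 1.1]            # gravimetric fv in ppm
k = fit_through_origin(signals, fv)
fv_estimate = k * 1000.0                 # fv for a new LII reading of 1000 counts
```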

  5. Infrared stereo calibration for unmanned ground vehicle navigation

    Science.gov (United States)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
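The reprojection error used to assess calibration quality is typically the RMS distance between detected pattern corners and the corners reprojected through the estimated camera model. A minimal sketch with made-up points:

```python
# RMS reprojection error between detected and reprojected 2-D corner points.

import math

def rms_reprojection_error(detected, reprojected):
    """RMS Euclidean distance between paired 2-D points, in pixels."""
    sq = [(dx - rx) ** 2 + (dy - ry) ** 2
          for (dx, dy), (rx, ry) in zip(detected, reprojected)]
    return math.sqrt(sum(sq) / len(sq))

detected = [(100.0, 100.0), (200.0, 100.0), (100.0, 200.0)]
reprojected = [(100.3, 100.0), (199.6, 100.0), (100.0, 200.5)]
err = rms_reprojection_error(detected, reprojected)  # sub-pixel, about 0.41 px
```

In practice the reprojected points would come from the stereo calibration's estimated intrinsics and extrinsics (e.g. OpenCV's outputs); the detection accuracy challenge noted above shows up directly as inflated values of this metric.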

  6. Radiometer calibration methods and resulting irradiance differences

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron [National Renewable Energy Laboratory, Golden CO 80401 USA]; Sengupta, Manajit [National Renewable Energy Laboratory, Golden CO 80401 USA]; Andreas, Afshin [National Renewable Energy Laboratory, Golden CO 80401 USA]; Reda, Ibrahim [National Renewable Energy Laboratory, Golden CO 80401 USA]; Robinson, Justin [GroundWork Renewables Inc., Logan UT 84321 USA]

    2016-10-07

    Accurate solar radiation measurement by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of the different calibration methodologies used by radiometric calibration service providers, such as the National Renewable Energy Laboratory (NREL) and radiometer manufacturers, and the resulting differences. Some of these methods calibrate radiometers indoors and some outdoors. To establish and understand the differences among calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. The different methods of calibration resulted in differences of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of field radiometer data and will help develop a consensus on a calibration standard. Further advancing procedures for precisely calibrating radiometers to world reference standards, thereby reducing measurement uncertainties, will help accurately predict the output of planned solar conversion projects and improve the bankability of solar project financing.

  7. Trinocular Calibration Method Based on Binocular Calibration

    Directory of Open Access Journals (Sweden)

    CAO Dan-Dan

    2012-10-01

    Full Text Available In order to solve the self-occlusion problem in plane-based multi-camera calibration systems and expand the measurement range, a tri-camera vision system based on binocular calibration is proposed. The three cameras are grouped into two pairs, with the common camera taken as the reference to build the global coordinate system. Global calibration is realized by comparing the measured absolute distance against the true absolute distance. The MRE (mean relative error) of the global calibration of the two camera pairs in the experiments can be as low as 0.277% and 0.328%, respectively. Experiment results show that this method is feasible, simple and effective, and has high precision.

  8. Technical support document: Energy efficiency standards for consumer products: Room air conditioners, water heaters, direct heating equipment, mobile home furnaces, kitchen ranges and ovens, pool heaters, fluorescent lamp ballasts and television sets. Volume 1, Methodology

    Energy Technology Data Exchange (ETDEWEB)

    1993-11-01

    The Energy Policy and Conservation Act (P.L. 94-163), as amended, establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. DOE is currently considering amending standards for seven types of products: water heaters, direct heating equipment, mobile home furnaces, pool heaters, room air conditioners, kitchen ranges and ovens (including microwave ovens), and fluorescent light ballasts and is considering establishing standards for television sets. This Technical Support Document presents the methodology, data, and results from the analysis of the energy and economic impacts of the proposed standards. This volume presents a general description of the analytic approach, including the structure of the major models.

  9. Guidelines for the verification and validation of expert system software and conventional software: Survey and documentation of expert system verification and validation methodologies. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    Groundwater, E.H.; Miller, L.A.; Mirsky, S.M. [Science Applications International Corp., McLean, VA (United States)

    1995-03-01

    This report is the third volume in the final report for the Expert System Verification and Validation (V&V) project which was jointly sponsored by the Nuclear Regulatory Commission and the Electric Power Research Institute. The ultimate objective is the formulation of guidelines for the V&V of expert systems for use in nuclear power applications. The purpose of this activity was to survey and document techniques presently in use for expert system V&V. The survey effort included an extensive telephone interviewing program, site visits, and a thorough bibliographic search and compilation. The major finding was that V&V of expert systems is not nearly as established or prevalent as V&V of conventional software systems. When V&V was used for expert systems, it was almost always at the system validation stage after full implementation and integration, usually employing the non-systematic dynamic method of "ad hoc testing." There were few examples of employing V&V in the early phases of development and only weak sporadic mention of the possibilities in the literature. There is, however, a very active research area concerning the development of methods and tools to detect problems with, particularly, rule-based expert systems. Four such static-testing methods were identified which were not discovered in a comprehensive review of conventional V&V methods in an earlier task.

  10. Durability of recycled aggregate concrete designed with the Equivalent Mortar Volume (EMV) method: Validation under the Spanish context and its adaptation to Bolomey methodology

    Directory of Open Access Journals (Sweden)

    Jiménez, C.

    2014-03-01

    Full Text Available Some durability properties are analyzed in concretes made with a novel method for recycled aggregate concrete (RAC) proportioning, in order to validate it under the Spanish context. Two types of concrete mixes were elaborated: one following the guidelines of the named method, and the other based on an adaptation of the method to the Bolomey methodology. Two types of recycled concrete aggregates (RCA) were used. RCA replacement of natural aggregates (NA) ranged from 20% to 100%; the 20% level was chosen in order to comply with Spanish recommendations. Water penetration under pressure, water absorption and chloride attack were the studied properties. It is verified that the new method and the developed adaptation result in concrete mixes with properties better than or similar to those of natural aggregate concrete (NAC) and conventional RAC, while saving substantial amounts of cement.

  11. Energy Performance Assessment of Radiant Cooling System through Modeling and Calibration at Component Level

    Energy Technology Data Exchange (ETDEWEB)

    Khan, Yasin [Malaviya National Institute of Technology (MNIT), Jaipur, India]; Mathur, Jyotirmay [Malaviya National Institute of Technology (MNIT), Jaipur, India]; Bhandari, Mahabir S [ORNL]

    2016-01-01

    The paper describes a case study of an information technology office building with a radiant cooling system and a conventional variable air volume (VAV) system installed side by side so that performance can be compared. First, a 3D model of the building involving architecture, occupancy, and HVAC operation was developed in EnergyPlus, a simulation tool. Second, a calibration methodology was applied to develop the base case for assessing the energy saving potential. This paper details the calibration of the whole-building energy model down to the component level, including lighting, equipment, and HVAC components such as chillers, pumps, cooling towers, fans, etc. A new methodology for the systematic selection of influence parameters has also been developed for the calibration of simulation models that require long execution times. The error at the whole-building level [measured as mean bias error (MBE)] is 0.2%, and the coefficient of variation of root mean square error (CvRMSE) is 3.2%. The total errors in HVAC at the hourly level are MBE = 8.7% and CvRMSE = 23.9%, which meet the criteria of ASHRAE Guideline 14 (2002) for hourly calibration. Suggestions are made for generalizing the energy savings of radiant cooling systems to existing buildings, and a base case model was developed from the calibrated model for quantifying the energy saving potential of the radiant cooling system. It was found that a radiant cooling system integrated with a dedicated outdoor air system (DOAS) can save 28% energy compared with the conventional VAV system.
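The two calibration statistics quoted above can be computed as in the sketch below, following the usual ASHRAE Guideline 14 definitions; the data are illustrative, not the study's measurements.

```python
# MBE and CV(RMSE) of simulated versus measured energy use, as percentages of
# the mean measured value (the form used in ASHRAE Guideline 14 calibration).

import math

def mbe_percent(measured, simulated):
    """Mean bias error as a percentage of total measured energy."""
    diffs = [m - s for m, s in zip(measured, simulated)]
    return 100.0 * sum(diffs) / sum(measured)

def cvrmse_percent(measured, simulated):
    """Coefficient of variation of the RMSE, as a percentage."""
    n = len(measured)
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / (sum(measured) / n)

measured = [100.0, 110.0, 95.0, 105.0]   # e.g. hourly kWh, illustrative
simulated = [98.0, 112.0, 96.0, 103.0]
mbe_val = mbe_percent(measured, simulated)
cv_val = cvrmse_percent(measured, simulated)
```

With hourly data, Guideline 14 accepts a model when MBE stays within +/-10% and CV(RMSE) within 30%, which is why the HVAC-level values of 8.7% and 23.9% quoted above pass.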

  12. Principal component and volume of interest analyses in depressed patients imaged by {sup 99m}Tc-HMPAO SPET: a methodological comparison

    Energy Technology Data Exchange (ETDEWEB)

    Pagani, Marco [Institute of Cognitive Sciences and Technologies, CNR, Rome (Italy); Section of Nuclear Medicine, Department of Hospital Physics, Karolinska Hospital, Stockholm (Sweden); Gardner, Ann; Haellstroem, Tore [NEUROTEC, Division of Psychiatry, Karolinska Institutet, Huddinge University Hospital, Stockholm (Sweden); Salmaso, Dario [Institute of Cognitive Sciences and Technologies, CNR, Rome (Italy); Sanchez Crespo, Alejandro; Jonsson, Cathrine; Larsson, Stig A. [Section of Nuclear Medicine, Department of Hospital Physics, Karolinska Hospital, Stockholm (Sweden); Jacobsson, Hans [Department of Radiology, Karolinska Hospital, Stockholm (Sweden); Lindberg, Greger [Department of Medicine, Division of Gastroenterology and Hepatology, Karolinska Institutet, Huddinge University Hospital, Stockholm (Sweden); Waegner, Anna [Department of Clinical Neuroscience, Division of Neurology, Karolinska Hospital, Stockholm (Sweden)

    2004-07-01

    Previous regional cerebral blood flow (rCBF) studies on patients with unipolar major depressive disorder (MDD) have analysed clusters of voxels or single regions and yielded conflicting results, showing either higher or lower rCBF in MDD as compared to normal controls (CTR). The aim of this study was to assess rCBF distribution changes in 68 MDD patients, investigating the data set with both volume of interest (VOI) analysis and principal component analysis (PCA). The rCBF distribution in 68 MDD and 66 CTR, at rest, was compared. Technetium-99m d,l-hexamethylpropylene amine oxime single-photon emission tomography was performed and the uptake in 27 VOIs, bilaterally, was assessed using a standardising brain atlas. Data were then grouped into factors by means of PCA performed on rCBF of all 134 subjects and based on all 54 VOIs. VOI analysis showed a significant group x VOI x hemisphere interaction (P<0.001). rCBF in eight VOIs (in the prefrontal, temporal, occipital and central structures) differed significantly between groups at the P<0.05 level. PCA identified 11 anatomo-functional regions that interacted with groups (P<0.001). As compared to CTR, MDD rCBF was relatively higher in right associative temporo-parietal-occipital cortex (P<0.01) and bilaterally in prefrontal (P<0.005) and frontal cortex (P<0.025), anterior temporal cortex and central structures (P<0.05 and P<0.001 respectively). Higher rCBF in a selected group of MDD as compared to CTR at rest was found using PCA in five clusters of regions sharing close anatomical and functional relationships. At the single VOI level, all eight regions showing group differences were included in such clusters. PCA is a data-driven method for recasting VOIs to be used for group evaluation and comparison. The appearance of significant differences absent at the VOI level emphasises the value of analysing the relationships among brain regions for the investigation of psychiatric disease. (orig.)
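The PCA step can be sketched as an SVD of the mean-centred subjects x VOIs matrix; the data below are random placeholders with the study's dimensions (134 subjects, 54 VOIs), not its actual rCBF values.

```python
# PCA via SVD on a subjects x VOIs rCBF matrix: centre each VOI, decompose,
# and read off explained-variance fractions and per-subject component scores.

import numpy as np

rng = np.random.default_rng(0)
rcbf = rng.normal(50.0, 5.0, size=(134, 54))   # placeholder uptake values

centred = rcbf - rcbf.mean(axis=0)             # centre each VOI across subjects
u, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()          # variance fraction per component
scores = centred @ vt.T                        # component scores per subject
```

In the study's workflow, VOIs loading strongly on the same component form an anatomo-functional factor, and the per-subject scores are what get compared between the MDD and CTR groups.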

  13. The calibration of PIXIE

    Science.gov (United States)

    Fixsen, D. J.; Chuss, D. T.; Kogut, Alan; Mirel, Paul; Wollack, E. J.

    2016-07-01

    The FIRAS instrument demonstrated the use of an external calibrator to compare the sky to an instrumented blackbody. The PIXIE calibrator is improved from -35 dB to -65 dB. Another significant improvement is the ability to insert the calibrator into either input of the FTS. This allows detection and correction of additional errors, reduces the effective calibration noise by a factor of 2, eliminates an entire class of systematics and allows continuous observations. This paper presents the design and use of the PIXIE calibrator.

  14. Calibration of Geodetic Instruments

    Directory of Open Access Journals (Sweden)

    Marek Bajtala

    2005-06-01

    Full Text Available The problems of metrology and of securing the unification, correctness and reproducibility of standards belong to the principal requirements of theory and technical practice in geodesy. Requirements for the control and verification of measuring instruments and equipment are increasing, bringing the importance and timeliness of calibration to the foreground. The paper addresses calibration possibilities for the length scales (electronic rangefinders) and angle scales (horizontal circles) of geodetic instruments: calibration of electronic rangefinders on a linear comparative baseline in the terrain, and the primary standard of planar angle, an optical traverse, and its exploitation for calibrating the horizontal circles of theodolites. The calibration equipment of the Institute of Slovak Metrology in Bratislava is described, together with the calibration process and results from the calibration of the horizontal circles of selected geodetic instruments.

  15. Preliminary evaluation of a Neutron Calibration Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga, Talysson S.; Neves, Lucio P.; Perini, Ana P.; Sanches, Matias P.; Mitake, Malvina B.; Caldas, Linda V.E., E-mail: talvarenga@ipen.br, E-mail: lpneves@ipen.br, E-mail: aperini@ipen.br, E-mail: msanches@ipen.br, E-mail: mbmitake@ipen.br, E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Federico, Claudio A., E-mail: claudiofederico@ieav.cta.br [Instituto de Estudos Avancados (IEAv/DCTA), Sao Jose dos Campos, SP (Brazil). Dept. de Ciencia e Tecnologia Aeroespacial

    2013-07-01

    In the past few years, Brazil and several other countries in Latin America have experienced a great demand for the calibration of neutron detectors, mainly due to the increase in oil prospection and extraction. The only laboratory for the calibration of neutron detectors in Brazil is located at the Institute for Radioprotection and Dosimetry (IRD/CNEN), Rio de Janeiro, which is part of the IAEA SSDL network. This laboratory is the national standard laboratory in Brazil. With the increase in the demand for the calibration of neutron detectors, there is a need for additional calibration services. In this context, the Calibration Laboratory of IPEN/CNEN, Sao Paulo, which already offers calibration services for radiation detectors with standard X, gamma, beta and alpha beams, has recently designed a new calibration laboratory for neutron detectors. In this work, the ambient dose equivalent rate (H*(10)) was evaluated at several positions inside and around this laboratory using Monte Carlo simulation (MCNP5 code), in order to verify the adequacy of the shielding. The obtained results showed that the shielding is effective, and that this is a low-cost methodology to improve the safety of the workers and evaluate the total staff workload. (author)

  16. On methodology

    DEFF Research Database (Denmark)

    Cheesman, Robin; Faraone, Roque

    2002-01-01

    This is an English version of the methodology chapter in the authors' book "El caso Berríos: Estudio sobre información errónea, desinformación y manipulación de la opinión pública".

  17. Structured light system calibration method with optimal fringe angle.

    Science.gov (United States)

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are insensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, within a calibration volume of 300(H) mm×250(W) mm×500(D) mm.

  18. The Science of Calibration

    Science.gov (United States)

    Kent, S. M.

    2016-05-01

    This paper presents a broad overview of the many issues involved in calibrating astronomical data, covering the full electromagnetic spectrum from radio waves to gamma rays, and considering both ground-based and space-based missions. These issues include the science drivers for absolute and relative calibration, the physics behind calibration and the mechanisms used to transfer it from the laboratory to an astronomical source, the need for networks of calibrated astronomical standards, and some of the challenges faced by large surveys and missions.

  19. Sensor Calibration in Support for NOAA's Satellite Mission

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Sensor calibration, including its definition, purpose, traceability options, methodology, complexity, and importance, is examined in this paper in the context of supporting NOAA's satellite mission. A common understanding of sensor calibration is essential for effective communication among sensor vendors, calibration scientists, satellite operators, program managers, and remote sensing data users, who must cooperate to ensure that a nation's strategic investment in a sophisticated operational environmental satellite system serves the nation's interest and enhances human lives around the world. Examples of calibration activities at NOAA/NESDIS/ORA are selected to further illustrate these concepts and to demonstrate the lessons learned from past experience.

  20. An improved outdoor calibration procedure for broadband ultraviolet radiometers.

    Science.gov (United States)

    Cancillo, M L; Serrano, A; Antón, M; García, J A; Vilaplana, J M; de la Morena, B

    2005-01-01

    This article aims to improve the calibration methodology for broadband ultraviolet radiometers. To this end, three broadband radiometers are calibrated against a reference spectrophotometer. Three different one-step calibration models are tested: ratio, first order and second order. The latter is proposed to adequately account for the strong solar zenith angle dependence exhibited by the other two models and thus to improve calibration performance at high solar elevations. The proposed second-order model requires no additional information and thus keeps the operational character of the one-step methodology. The models are compared in terms of their root mean square error, and the best-performing model is subsequently validated by comparing its predictions with the spectrophotometer measurements within an independent validation data subset. Results show that the best calibration is achieved by the second-order model, with a mean bias error and mean absolute bias error lower than 2.2 and 6.7%, respectively.
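A sketch of the second-order one-step idea: the calibration factor is fitted as a quadratic in solar zenith angle (SZA). The functional form and all numbers are illustrative assumptions; the paper's exact parameterisation may differ.

```python
# Fit the reference-to-radiometer calibration factor as a quadratic in SZA,
# so the correction grows smoothly at large zenith angles.

import numpy as np

sza_deg = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
# Ratio of reference (spectrophotometer) to radiometer signal at each SZA:
cal_factor = np.array([1.02, 1.03, 1.06, 1.11, 1.19, 1.31])

coeffs = np.polyfit(sza_deg, cal_factor, 2)     # second-order model
calibrated = np.polyval(coeffs, 35.0)           # factor at SZA = 35 deg
residual = cal_factor - np.polyval(coeffs, sza_deg)
```

A first-order model fitted to the same data would leave a systematic, SZA-shaped residual, which is the deficiency the second-order model is meant to remove.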

  1. Design, calibration and error analysis of instrumentation for heat transfer measurements in internal combustion engines

    Science.gov (United States)

    Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.

    1987-01-01

    The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame temperature and soot volume fraction-path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage, suitably constructed and calibrated, will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and soot volume fraction-path length, it is important to choose at least one of the two wavelengths to lie in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction-path length remain small (within about 1 percent and 10 percent, respectively).
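The two-color pyrometry relation can be sketched under Wien's approximation with a grey-body assumption (equal emissivity at both wavelengths); real soot pyrometry additionally corrects for the wavelength dependence of soot emissivity. All numbers are illustrative.

```python
# Ratio (two-color) pyrometry under Wien's approximation: the temperature is
# recovered from the intensity ratio at two wavelengths.

import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(wl_m, temp_k):
    """Relative Wien-approximation spectral intensity (constant factors dropped)."""
    return wl_m ** -5 * math.exp(-C2 / (wl_m * temp_k))

def two_color_temperature(i1, i2, wl1_m, wl2_m):
    """Temperature from the intensity ratio, assuming equal emissivity."""
    ratio = math.log((i1 / i2) * (wl1_m / wl2_m) ** 5)
    return C2 * (1.0 / wl2_m - 1.0 / wl1_m) / ratio

# Round trip at 1800 K, with both wavelengths in the recommended 1.3-2.3 um band:
wl1, wl2 = 1.6e-6, 2.2e-6
t = two_color_temperature(wien_intensity(wl1, 1800.0),
                          wien_intensity(wl2, 1800.0), wl1, wl2)  # ~1800 K
```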

  3. Calibration of modified parallel-plate rheometer through calibrated oil and lattice Boltzmann simulation

    DEFF Research Database (Denmark)

    Ferraris, Chiara F; Geiker, Mette Rica; Martys, Nicos S

    2007-01-01

    inapplicable here. This paper presents the analysis of a modified parallel-plate rheometer for measuring cement mortar and proposes a methodology for calibration using standard oils and numerical simulation of the flow. A lattice Boltzmann method was used to simulate the flow in the modified rheometer, thus...

  4. Forecasting Attrition Volume: A Methodological Development

    Science.gov (United States)

    2009-12-01

    ... human resources functions, such as recruitment, promotion, planning and budget preparation, this report will have positive repercussions ... is essential to ensure effective management of the workforce. For quite some time now, the Direction générale – Recherche et analyse (Personnel

  5. Methodology Investigation, Program Flow Analyzer. Volume 2.

    Science.gov (United States)

    1985-09-01

    Address mode for each mnemonic. ASSMSPECS (Int 4, 200): special information about the instruction. N USER (Int 4): number of user-defined instructions. USER SPECS (Int 4, 100): special information about each user instruction. USER INSTS (Char 8, 100): user ... (Section 2.6.2: Select Instrumented Source Code File to be Processed)

  6. Calibration of {sup 133}Ba by Sum-Peak Method

    Energy Technology Data Exchange (ETDEWEB)

    Silva, R.L. da; Delgado, J.U.; Poledna, R.; Trindade, O.L.; Veras, E.V. de; Santos, A.; Rangel, J., E-mail: ronaldo@ird.gov.br, E-mail: delgado@ird.gov.br, E-mail: poledna@ird.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Almeida, M.C.M, E-mail: marcandida@yahoo.com.br [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)

    2015-07-01

    A calibration laboratory should have several methods of measurement in order to ensure robustness of the values applied. The National Laboratory for Metrology of Ionizing Radiation (LNMRI/IRD) provides gamma sources of radionuclides in various geometries, standardized in activity with reduced uncertainties. Both absolute and relative methods of calibration can be used routinely. Relative methods require standards to determine the activity of the sample to be calibrated, while absolute methods do not: the counting is simply performed and the activity is calculated directly. The great advantage of calibrating radionuclides by an absolute method is its accuracy and low uncertainties. {sup 133}Ba is a radionuclide widely used in research laboratories and in the calibration of detectors for environmental analysis; according to its decay scheme, it decays 100% by electron capture and emits about 14 gamma and X-ray lines, forming several coincidences. However, classical absolute measurement methods, such as 4πβ-γ coincidence counting, have difficulty calibrating {sup 133}Ba due to its complex decay scheme. The sum-peak method, developed by Brinkman, allows this calibration; it is used for calibrating radionuclides that emit at least two photons in coincidence. A methodology was therefore developed that combines the gamma spectrometry technique with the sum-peak method to standardize {sup 133}Ba samples. The activity results obtained proved compatible, with uncertainties of less than 1%, and, when compared with other calibration methods, the sum-peak method demonstrated the feasibility of this methodology, particularly for its simplicity and effectiveness. (author)
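    For a simple two-photon cascade, the sum-peak relation attributed to Brinkman reduces to N0 = T + n1*n2/n12, where n1 and n2 are the full-energy peak counts of the two coincident photons, n12 the sum-peak counts, and T the total counts in the spectrum. The sketch below illustrates this idealization (no angular correlation, dead time or background); it is not the laboratory's actual analysis code:

```python
def sum_peak_activity(n1, n2, n12, n_total, live_time):
    """Brinkman sum-peak estimate of source activity (Bq) for an idealized
    two-photon cascade.
    n1, n2   : net counts in the two full-energy peaks
    n12      : net counts in the sum (coincidence) peak
    n_total  : total counts in the whole spectrum (background subtracted)
    """
    n0 = n_total + n1 * n2 / n12   # disintegrations during the measurement
    return n0 / live_time
```

    The appeal of the method, as noted above, is that no activity standard enters: only peak areas from the spectrum itself.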

  7. Handheld temperature calibrator

    National Research Council Canada - National Science Library

    Martella, Melanie

    2003-01-01

    ... you sign on. What are you waiting for? JOFRA ETC Series dry-block calibrators from AMETEK Test & Calibration Instruments, Largo, FL, are small enough to be handheld and feature easy-to-read displays, multiple bore blocks, programmable test setup, RS-232 communications, and software. Two versions are available: the ETC 125A that ranges from -10 °C to 125[d...

  8. OLI Radiometric Calibration

    Science.gov (United States)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI

  9. WFPC2 Polarization Calibration

    Science.gov (United States)

    Biretta, J.; McMaster, M.

    1997-12-01

    We derive a detailed calibration for WFPC2 polarization data which is accurate to about 1.5%. We begin by computing polarizer flats, and show how they are applied to data. A physical model for the polarization effects of the WFPC2 optics is then created using Mueller matrices. This model includes corrections for the instrumental polarization (diattenuation and phase retardance) of the pick-off mirror, as well as the high cross-polarization transmission of the polarizer filter. We compare this model against the on-orbit observations of polarization calibrators, and show it predicts relative counts in the different polarizer/aperture settings to 1.5% RMS accuracy. We then show how this model can be used to calibrate GO data, and present two WWW tools which allow observers to easily calibrate their data. Detailed examples are given illustrating the calibration and display of WFPC2 polarization data. In closing we describe future plans and possible improvements.
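    Under an ideal-polarizer assumption (a deliberate simplification of the Mueller-matrix model above, ignoring the pick-off mirror's diattenuation and retardance), the linear Stokes parameters follow directly from intensities at four polarizer orientations. A minimal numpy sketch:

```python
import numpy as np

def linear_polarizer_mueller(theta):
    """Mueller matrix of an IDEAL linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1.0,   c,   s, 0.0],
        [c,   c*c, c*s, 0.0],
        [s,   c*s, s*s, 0.0],
        [0.0, 0.0, 0.0, 0.0],
    ])

def measure_dolp(stokes_in, angles=(0, 45, 90, 135)):
    """Recover the degree of linear polarization from intensities behind an
    ideal polarizer at 0/45/90/135 degrees (first Stokes element = intensity)."""
    I = {a: (linear_polarizer_mueller(np.radians(a)) @ stokes_in)[0]
         for a in angles}
    s0 = I[0] + I[90]
    s1 = I[0] - I[90]
    s2 = I[45] - I[135]
    return np.hypot(s1, s2) / s0
```

    A real calibration such as the one described in the record replaces these ideal matrices with measured ones for each optical element.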

  10. Sandia WIPP calibration traceability

    Energy Technology Data Exchange (ETDEWEB)

    Schuhen, M.D. [Sandia National Labs., Albuquerque, NM (United States); Dean, T.A. [RE/SPEC, Inc., Albuquerque, NM (United States)

    1996-05-01

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.

  11. Methodological guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Halsnaes, K.; Callaway, J.M.; Meyer, H.J.

    1999-04-01

    The guideline document establishes a general overview of the main components of climate change mitigation assessment. This includes an outline of key economic concepts, scenario structure, common assumptions, modelling tools and country study assumptions. The guidelines are supported by Handbook Reports that contain more detailed specifications of calculation standards, input assumptions and available tools. The major objective of the project has been to provide a methodology, an implementing framework and a reporting system which countries can follow in meeting their future reporting obligations under the FCCC and for GEF enabling activities. The project builds upon the methodology development and application in the UNEP National Abatement Costing Studies (UNEP, 1994a). The various elements provide countries with a road map for conducting climate change mitigation studies and submitting national reports as required by the FCCC. (au) 121 refs.

  12. Muon Calibration at SoLid

    CERN Document Server

    Saunders, Daniel

    2016-01-01

    The SoLid experiment aims to make a measurement of very short distance neutrino oscillations using reactor antineutrinos. Key to its sensitivity are the experiment's high spatial and energy resolution, combined with a very suitable reactor source and efficient background rejection. The fine segmentation of the detector (cubes of side 5 cm) and the ability to resolve signals in space and time give SoLid the capability to track cosmic muons. In principle a source of background, these turn into a valuable calibration source if they can be cleanly identified. This work presents the first energy calibration results, using cosmic muons, of the 288 kg SoLid prototype SM1. This includes the methodology of tracking at SoLid, cosmic ray angular analyses at the reactor site, estimates of the time resolution, and calibrations at the cube level.

  13. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; et al.

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints is derived on camera parameters in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of camera location. Since line segments in an image can be located more easily and accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.

  14. Site Calibration report

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    This report describes the site calibration carried out at Østerild, during a given period. The site calibration was performed with two Windcube WLS7 (v1) lidars at ten measurements heights. The lidar is not a sensor approved by the current version of the IEC 61400-12-1 [1] and therefore the site...... calibration with lidars does not comply with the standard. However, the measurements are carried out following the guidelines of IEC 61400-12-1 where possible, but with some deviations presented in the following chapters....

  15. Lidar to lidar calibration

    DEFF Research Database (Denmark)

    Fernandez Garcia, Sergio; Villanueva, Héctor

    This report presents the result of the lidar to lidar calibration performed for ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements with measurement uncertainties provided by measurement standard and corresponding...... lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements. The comparison of the lidar measurements of the wind direction with that from the reference lidar measurements are given for information only....

  16. Lidar to lidar calibration

    DEFF Research Database (Denmark)

    Georgieva Yankova, Ginka; Courtney, Michael

    This report presents the result of the lidar to lidar calibration performed for ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements with measurement uncertainties provided by measurement standard and corresponding...... lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements. The comparison of the lidar measurements of the wind direction with that from the reference lidar measurements are given for information only....

  17. Multifractal methodology

    CERN Document Server

    Salat, Hadrien; Arcaute, Elsa

    2016-01-01

    Various methods have been developed independently to study the multifractality of measures in many different contexts. Although they all convey the same intuitive idea of giving a "dimension" to sets where a quantity scales similarly within a space, they are not necessarily equivalent on a more rigorous level. This review article aims at unifying the multifractal methodology by presenting the multifractal theoretical framework and principal practical methods, namely the moment method, the histogram method, multifractal detrended fluctuation analysis (MDFA) and modulus maxima wavelet transform (MMWT), with a comparative and interpretative eye.
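    The moment method mentioned above can be illustrated on a binomial multiplicative cascade, for which the generalized dimensions are known in closed form: D_q = -log2(p**q + (1-p)**q) / (q - 1). The sketch below (illustrative only, valid for q != 1) estimates D_q by regressing the log partition function chi_q against the log box count:

```python
import numpy as np

def binomial_measure(p=0.7, levels=12):
    """Binomial multiplicative cascade on 2**levels dyadic boxes."""
    mu = np.array([1.0])
    for _ in range(levels):
        child = np.empty(2 * mu.size)
        child[0::2] = p * mu           # left child of each box
        child[1::2] = (1.0 - p) * mu   # right child
        mu = child
    return mu

def generalized_dimension(mu, q):
    """Moment-method estimate of D_q (q != 1): regress log chi_q, where
    chi_q = sum_i p_i**q, against log(number of boxes) over coarse-grainings."""
    m = np.asarray(mu, dtype=float)
    log_n, log_chi = [], []
    while m.size >= 4:
        log_n.append(np.log(m.size))
        log_chi.append(np.log(np.sum(m[m > 0] ** q)))
        m = m.reshape(-1, 2).sum(axis=1)     # merge adjacent boxes
    tau = -np.polyfit(log_n, log_chi, 1)[0]  # chi_q ~ eps**tau, eps ~ 1/n
    return tau / (q - 1.0)
```

    For q = 0 this recovers the box-counting dimension (here 1, the support is the full interval); varying q traces out the multifractal spectrum.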

  18. Qmerit-calibrated overlay to improve overlay accuracy and device performance

    Science.gov (United States)

    Ullah, Md Zakir; Jazim, Mohamed Fazly Mohamed; Sim, Stella; Lim, Alan; Hiem, Biow; Chuen, Lieu Chia; Ang, Jesline; Lim, Ek Chow; Klein, Dana; Amit, Eran; Volkovitch, Roie; Tien, David; Choi, DongSub

    2015-03-01

    In advanced semiconductor industries, the overlay error budget is getting tighter due to shrinkage in technology. To fulfill the tighter overlay requirements, gaining every nanometer of improved overlay is very important in order to accelerate yield in high-volume manufacturing (HVM) fabs. To meet the stringent overlay requirements and to overcome other unforeseen situations, it is becoming critical to eliminate the smallest imperfections in the metrology targets used for overlay metrology. For standard cases, the overlay metrology recipe is selected based on total measurement uncertainty (TMU). However, under certain circumstances, inaccuracy due to target imperfections can become the dominant contributor to the metrology uncertainty and cannot be detected and quantified by the standard TMU. For optical-based overlay (OBO) metrology targets, mark asymmetry is a common issue which can cause measurement inaccuracy, and it is not captured by standard TMU. In this paper, a new calibration method, Archer Self-Calibration (ASC), has been established successfully in HVM fabs to improve overlay accuracy on image-based overlay (IBO) metrology targets. Additionally, a new color selection methodology has been developed for the overlay metrology recipe as part of this calibration method. In this study, Qmerit-calibrated data has been used for run-to-run control loop at multiple devices. This study shows that color filter can be chosen more precisely with the help of Qmerit data. Overlay stability improved by 10~20% with best color selection, without causing any negative impact to the products. Residual error, as well as overlay mean plus 3-sigma, showed an improvement of up to 20% when Qmerit-calibrated data was used. A 30% improvement was seen in certain electrical data associated with tested process layers.

  19. Calibration Fixture For Anemometer Probes

    Science.gov (United States)

    Lewis, Charles R.; Nagel, Robert T.

    1993-01-01

    Fixture facilitates calibration of three-dimensional sideflow thermal anemometer probes. With fixture, probe oriented at number of angles throughout its design range. Readings calibrated as function of orientation in airflow. Calibration repeatable and verifiable.

  20. Computation of methodology-independent single-ion solvation properties from molecular simulations. III. Correction terms for the solvation free energies, enthalpies, entropies, heat capacities, volumes, compressibilities, and expansivities of solvated ions.

    Science.gov (United States)

    Reif, Maria M; Hünenberger, Philippe H

    2011-04-14

    The raw single-ion solvation free energies computed from atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions (finite or periodic system, system or box size) and treatment of electrostatic interactions (Coulombic, lattice-sum, or cutoff-based) used during these simulations. However, as shown by Kastenholz and Hünenberger [J. Chem. Phys. 124, 224501 (2006)], correction terms can be derived for the effects of: (A) an incorrect solvent polarization around the ion and an incomplete or/and inexact interaction of the ion with the polarized solvent due to the use of an approximate (not strictly Coulombic) electrostatic scheme; (B) the finite-size or artificial periodicity of the simulated system; (C) an improper summation scheme to evaluate the potential at the ion site, and the possible presence of a polarized air-liquid interface or of a constraint of vanishing average electrostatic potential in the simulated system; and (D) an inaccurate dielectric permittivity of the employed solvent model. Comparison with standard experimental data also requires the inclusion of appropriate cavity-formation and standard-state correction terms. In the present study, this correction scheme is extended by: (i) providing simple approximate analytical expressions (empirically-fitted) for the correction terms that were evaluated numerically in the above scheme (continuum-electrostatics calculations); (ii) providing correction terms for derivative thermodynamic single-ion solvation properties (and corresponding partial molar variables in solution), namely, the enthalpy, entropy, isobaric heat capacity, volume, isothermal compressibility, and isobaric expansivity (including appropriate standard-state correction terms). 
The ability of the correction scheme to produce methodology-independent single-ion solvation free energies based on atomistic simulations is tested in the case of Na(+) hydration, and the nature and magnitude of the correction terms for
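    As a point of orientation only (this is the textbook Born continuum estimate, not the correction scheme of the paper), the leading continuum-electrostatics contribution to a single-ion solvation free energy can be computed as:

```python
import math

# physical constants (SI)
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0     = 8.8541878128e-12  # vacuum permittivity, F/m
N_A      = 6.02214076e23     # Avogadro constant, 1/mol

def born_solvation_free_energy(z, radius_m, eps_solvent=78.4):
    """Born estimate of the charging free energy (J/mol) of an ion of
    charge z*e in a spherical cavity of the given radius, immersed in a
    continuum dielectric of permittivity eps_solvent."""
    per_ion = -(z * E_CHARGE) ** 2 / (8.0 * math.pi * EPS0 * radius_m) \
              * (1.0 - 1.0 / eps_solvent)
    return N_A * per_ion
```

    For Na(+) with an assumed Born radius of about 1.68 Å this gives roughly -408 kJ/mol, comparable in magnitude to reported hydration free energies; the paper's correction terms address the methodology-dependent deviations of simulation results around such continuum values.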

  1. Calibrating nacelle lidars

    DEFF Research Database (Denmark)

    Courtney, Michael

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report...... presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated...... a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam...

  2. SRHA calibration curve

    Data.gov (United States)

    U.S. Environmental Protection Agency — A UV calibration curve for SRHA quantitation. This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...

  3. Air Data Calibration Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This facility is for low altitude subsonic altimeter system calibrations of air vehicles. Mission is a direct support of the AFFTC mission. Postflight data merge is...

  4. SPOTS Calibration Example

    Directory of Open Access Journals (Sweden)

    Patterson E.

    2010-06-01

    The results are presented using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.

  5. Calibrated Properties Model

    Energy Technology Data Exchange (ETDEWEB)

    C. Ahlers; H. Liu

    2000-03-12

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as for Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  6. Traceable Pyrgeometer Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina; Webb, Craig

    2016-05-02

    This presentation provides a high-level overview of the progress on the Broadband Outdoor Radiometer Calibrations for all shortwave and longwave radiometers that are deployed by the Atmospheric Radiation Measurement program.

  7. Calibrating nacelle lidars

    OpenAIRE

    Courtney, Michael

    2013-01-01

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail.The first of these is a line of sight...

  8. Scanner calibration revisited.

    Science.gov (United States)

    Pozhitkov, Alexander E

    2010-07-01

    Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2.) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. Weighted least-squares method was used to fit the data. We found that initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which is explicitly accounting for the slide autofluorescence, perfectly described a relationship between signal intensities and fluorophore quantities. Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
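    The paper's central point, that the scanner response follows a power law once the slide's autofluorescence offset is modeled, can be sketched as a fit of s = a * q**b + c. The code below is illustrative only (ordinary rather than weighted least squares, and a simple grid search over the offset):

```python
import numpy as np

def fit_power_law_with_offset(q, s, n_grid=200):
    """Fit s = a * q**b + c, with c modeling slide autofluorescence.
    Grid-search c; for each candidate fit (log a, b) by least squares in
    log-log space and keep the candidate with the smallest residual."""
    best = None
    for c in np.linspace(0.0, 0.99 * s.min(), n_grid):
        y = np.log(s - c)                       # linearize: log(s-c) = b log q + log a
        b, log_a = np.polyfit(np.log(q), y, 1)
        resid = np.sum((y - (b * np.log(q) + log_a)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, np.exp(log_a), b, c)
    return best[1], best[2], best[3]            # a, b, c
```

    Ignoring c (fitting a pure power law) is exactly the pitfall the record describes: the offset dominates at low fluorophore quantities and distorts the apparent scanner response.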

  9. Scanner calibration revisited

    Directory of Open Access Journals (Sweden)

    Pozhitkov Alexander E

    2010-07-01

    Background: Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods: Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results: We found that the initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, explicitly accounting for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions: Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.

  10. TWSTFT Link Calibration Report

    Science.gov (United States)

    2015-09-01

    box calibrator with unknown but constant total delay during a calibration tour. Total Delay: the total electrical delay from the antenna phase center...to the UTCp, including all the devices/cables that the satellite and clock signals pass through. It numerically equals the sum of all the sub-delays...PTB. To average out the diurnal effects and measurement noise, 5-7 days of continuous measurements are required. 3 Setups at the Lab(k) The setup

  11. Approximation Behooves Calibration

    DEFF Research Database (Denmark)

    da Silva Ribeiro, André Manuel; Poulsen, Rolf

    2013-01-01

    Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.

  12. Energy calibration via correlation

    CERN Document Server

    Maier, Daniel

    2015-01-01

    The main task of an energy calibration is to find a relation between pulse-height values and the corresponding energies. Doing this for each pulse-height channel individually requires an elaborate input spectrum with excellent counting statistics and a sophisticated data analysis. This work presents an easy-to-handle energy calibration process which can operate reliably on calibration measurements with low counting statistics. The method uses a parameter-based model for the energy calibration and concludes on the optimal parameters of the model by finding the best correlation between the measured pulse-height spectrum and multiple synthetic pulse-height spectra which are constructed with different sets of calibration parameters. A CdTe-based semiconductor detector and the line emissions of an {sup 241}Am source were used to test the performance of the correlation method in terms of systematic calibration errors for different counting statistics. Up to energies of 60 keV systematic errors were measured to be le...
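    A minimal sketch of the correlation idea (not the author's implementation): assume a linear calibration E = gain * channel + offset, generate synthetic Gaussian line spectra over a grid of candidate parameters, and keep the pair maximizing the Pearson correlation with the measured spectrum. The line list and grids below are illustrative.

```python
import numpy as np

def synthetic_spectrum(channels, gain, offset, lines, sigma=2.0):
    """Gaussian line spectrum in pulse-height space, assuming the linear
    model E = gain * channel + offset."""
    spec = np.zeros_like(channels, dtype=float)
    for energy, intensity in lines:
        center = (energy - offset) / gain     # channel where the line falls
        spec += intensity * np.exp(-0.5 * ((channels - center) / sigma) ** 2)
    return spec

def calibrate_by_correlation(measured, lines, gains, offsets):
    """Return the (gain, offset) pair maximizing the Pearson correlation
    between the measured spectrum and the synthetic spectra."""
    channels = np.arange(len(measured), dtype=float)
    best = (-2.0, None, None)
    for g in gains:
        for o in offsets:
            model = synthetic_spectrum(channels, g, o, lines)
            r = np.corrcoef(measured, model)[0, 1]
            if r > best[0]:
                best = (r, g, o)
    return best[1], best[2]
```

    Because correlation compares whole spectral shapes rather than individual peak centroids, the search degrades gracefully as counting statistics worsen, which is the property the record emphasizes.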

  13. Calibrating nacelle lidars

    Energy Technology Data Exchange (ETDEWEB)

    Courtney, M.

    2013-01-15

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)
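    The line-of-sight concept can be sketched as follows (illustrative only; the report's procedure additionally covers beam alignment, data filtering and a full uncertainty assessment): project the reference wind vector onto the beam direction and regress the lidar radial wind speed against it.

```python
import numpy as np

def radial_speed(wind_speed, wind_dir_deg, beam_dir_deg):
    """Project the reference wind vector onto the lidar line of sight."""
    return wind_speed * np.cos(np.radians(wind_dir_deg - beam_dir_deg))

def los_calibration(lidar_rws, ref_speed, ref_dir_deg, beam_dir_deg):
    """Forced-through-zero regression of lidar radial wind speed against the
    projected reference speed; returns the calibration gain."""
    x = radial_speed(ref_speed, ref_dir_deg, beam_dir_deg)
    return np.sum(x * lidar_rws) / np.sum(x * x)
```

    In practice each 10-minute mean radial speed from one beam is paired with the cup-anemometer speed and wind-vane direction at the same height, and the gain (with its uncertainty) constitutes the calibration.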

  14. Research Methodology

    CERN Document Server

    Rajasekar, S; Philomination, P

    2006-01-01

    In this manuscript various components of research are listed and briefly discussed. The topics considered in this write-up cover a part of the research methodology paper of Master of Philosophy (M.Phil.) course and Doctor of Philosophy (Ph.D.) course. The manuscript is intended for students and research scholars of science subjects such as mathematics, physics, chemistry, statistics, biology and computer science. Various stages of research are discussed in detail. Special care has been taken to motivate the young researchers to take up challenging problems. Ten assignment works are given. For the benefit of young researchers a short interview with three eminent scientists is included at the end of the manuscript.

  15. Calibration Adjustments to the MODIS Aqua Ocean Color Bands

    Science.gov (United States)

    Meister, Gerhard

    2012-01-01

    After the end of the SeaWiFS mission in 2010 and the MERIS mission in 2012, the ocean color products of the MODIS on Aqua are the only remaining source to continue the ocean color climate data record until the VIIRS ocean color products become operational (expected for summer 2013). The MODIS on Aqua is well beyond its expected lifetime, and the calibration accuracy of the short wavelengths (412 nm and 443 nm) has deteriorated in recent years. Initially, SeaWiFS data were used to improve the MODIS Aqua calibration, but this solution was not applicable after the end of the SeaWiFS mission. In 2012, a new calibration methodology was applied by the MODIS calibration and support team using desert sites to improve the degradation trending. This presentation presents further improvements to this new approach. The 2012 reprocessing of the MODIS Aqua ocean color products is based on the new methodology.

  16. Methodological advances

    Directory of Open Access Journals (Sweden)

    Lebreton, J.-D.

    2004-06-01

    The study of population dynamics has long depended on methodological progress. Among many striking examples, continuous time models for populations structured in age (Sharpe & Lotka, 1911) were made possible by progress in the mathematics of integral equations. Therefore the relationship between population ecology and mathematical and statistical modelling in the broad sense raises a challenge in interdisciplinary research. After the impetus given in particular by Seber (1982), the regular biennial EURING conferences became a major vehicle to achieve this goal. It is thus not surprising that EURING 2003 included a session entitled “Methodological advances”. Even if at risk of heterogeneity in the topics covered and of overlap with other sessions, such a session was a logical way of ensuring that recent and exciting new developments were made available for discussion, further development by biometricians and use by population biologists. The topics covered included several to which full sessions were devoted at EURING 2000 (Anderson, 2001), such as: individual covariates, Bayesian methods, and multi-state models. Some other topics (heterogeneity models, exploited populations and integrated modelling) had been addressed by contributed talks or posters. Their presence among “methodological advances”, as well as in other sessions of EURING 2003, was intended as a response to their rapid development and potential relevance to biological questions. We briefly review all talks here, including those not published in the proceedings. In the plenary talk, Pradel et al. (in prep.) developed GOF tests for multi-state models. Until recently, the only goodness-of-fit procedures for multistate models were ad hoc, and non optimal, involving use of standard tests for single state models (Lebreton & Pradel, 2002). Pradel et al. (2003) proposed a general approach based in particular on mixtures of multinomial distributions. Pradel et al. (in prep.) showed

  17. Corrections to the MODIS Aqua Calibration Derived From MODIS Aqua Ocean Color Products

    Science.gov (United States)

    Meister, Gerhard; Franz, Bryan Alden

    2013-01-01

    Ocean color products, such as chlorophyll-a concentration, can be derived from the top-of-atmosphere radiances measured by imaging sensors on earth-orbiting satellites. There are currently three National Aeronautics and Space Administration sensors in orbit capable of providing ocean color products. One of these sensors is the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, whose ocean color products are currently the most widely used of the three. A recent improvement to the MODIS calibration methodology has used land targets to improve the calibration accuracy. This study evaluates the new calibration methodology and describes further calibration improvements that are built upon the new methodology by including ocean measurements in the form of global temporally averaged water-leaving reflectance measurements. The calibration improvements presented here mainly modify the calibration at the scan edges, taking advantage of the good performance of the land target trending in the center of the scan.

  18. Calibration and application of PUF disk passive air samplers for tracking polycyclic aromatic compounds (PACs)

    Science.gov (United States)

    Harner, Tom; Su, Ky; Genualdi, Susie; Karpowicz, Jessica; Ahrens, Lutz; Mihele, Cristian; Schuster, Jasmin; Charland, Jean-Pierre; Narayan, Julie

    2013-08-01

    Results are reported from a field calibration of the polyurethane foam (PUF) disk passive air sampler for measuring polycyclic aromatic compounds (PACs) in the atmosphere of the Alberta oil sands region of Canada. Passive samplers were co-deployed alongside conventional high volume samplers at three sites. The results demonstrate the ability of the PUF disk sampler to capture PACs, including polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs and parent and alkylated dibenzothiophenes. Both gas- and particle-phase PACs were captured with an average sampling rate of approximately 5 m3 day-1, similar to what has been previously observed for other semivolatile compounds. This is the first application of the PUF disk sampler for alkylated PAHs and dibenzothiophenes in air. The derived sampling rates are combined with estimates of the equilibrium partitioning of the PACs in the PUF disk samplers to estimate effective sample air volumes for all targeted PACs. This information is then applied to the passive sampling results from two deployments across 17 sites in the region to generate spatial maps of PACs. The successful calibration of the sampler and development of the methodology for deriving air concentrations lends support to the application of this cost-effective and simple sampler in longer term studies of PACs in the oil sands region.
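The conversion from passive-sampler uptake to an air concentration described above can be sketched with simple arithmetic; all numbers below are illustrative, not from the study, and the sketch omits the equilibrium-partitioning correction the authors apply near saturation:

```python
# Hedged sketch: a PUF disk sampler with the reported average sampling rate
# of ~5 m3/day accumulates analyte mass over the deployment; dividing the
# extracted mass by the effective air volume gives the air concentration.

def effective_air_volume(rate_m3_per_day, days):
    """Effective sampled air volume for a linear-uptake passive sampler."""
    return rate_m3_per_day * days

def air_concentration(mass_ng, volume_m3):
    """Air concentration in ng/m3 from mass collected on the disk."""
    return mass_ng / volume_m3

v = effective_air_volume(5.0, 60)   # 60-day deployment -> 300 m3 of air
c = air_concentration(450.0, v)     # 450 ng on the disk -> 1.5 ng/m3
```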

  19. Multi-Instrument Inter-Calibration (MIIC) System

    Directory of Open Access Journals (Sweden)

    Chris Currey

    2016-11-01

    Full Text Available In order to have confidence in the long-term records of atmospheric and surface properties derived from satellite measurements it is important to know the stability and accuracy of the actual radiance or reflectance measurements. Climate quality measurements require accurate calibration of space-borne instruments. Inter-calibration is the process that ties the calibration of a target instrument to a more accurate, preferably SI-traceable, reference instrument by matching measurements in time, space, wavelength, and view angles. A major challenge for any inter-calibration study is to find and acquire matched samples from within the large data volumes distributed across Earth science data centers. Typically less than 0.1% of the instrument data are required for inter-calibration analysis. Software tools and networking middleware are necessary for intelligent selection and retrieval of matched samples from multiple instruments on separate spacecraft. This paper discusses the Multi-Instrument Inter-Calibration (MIIC) system, a web-based software framework used by the Climate Absolute Radiance and Refractivity Observatory (CLARREO) Pathfinder mission to simplify the data management mechanics of inter-calibration. MIIC provides three main services: (1) inter-calibration event prediction; (2) data acquisition; and (3) data analysis. The combination of event prediction and powerful server-side functions reduces the data volume required for inter-calibration studies by several orders of magnitude, dramatically reducing network bandwidth and disk storage needs. MIIC provides generic retrospective analysis services capable of sifting through large data volumes of existing instrument data. The MIIC tiered design deployed at large institutional data centers can help international organizations, such as the Global Space Based Inter-Calibration System (GSICS), more efficiently acquire matched data from multiple data centers. In this paper we describe the MIIC

  20. HAWC Timing Calibration

    CERN Document Server

    Huentemeyer, Petra; Dingus, Brenda

    2009-01-01

    The High-Altitude Water Cherenkov (HAWC) Experiment is a second-generation high-sensitivity gamma-ray and cosmic-ray detector that builds on the experience and technology of the Milagro observatory. Like Milagro, HAWC utilizes the water Cherenkov technique to measure extensive air showers. Instead of a pond filled with water (as in Milagro) an array of closely packed water tanks is used. The event direction will be reconstructed using the times when the PMTs in each tank are triggered. Therefore, the timing calibration will be crucial for reaching an angular resolution as low as 0.25 degrees. We propose to use a laser calibration system, patterned after the calibration system in Milagro. Like Milagro, the HAWC optical calibration system will use ~1 ns laser light pulses. Unlike Milagro, the PMTs are optically isolated and require their own optical fiber calibration. For HAWC the laser light pulses will be directed through a series of optical fan-outs and fibers to illuminate the PMTs in approximately one half o...

  1. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
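The least-squares formulation described above can be sketched in a few lines; the linear model y = a·x and all data values are invented for illustration, and a real CUU treatment would additionally model the error in both the simulation and the experiment:

```python
# Minimal sketch of deterministic model calibration: choose the parameter
# minimizing the squared difference between model output and observations.
# For the toy model y = a * x the minimizer has a closed form.

def calibrate_least_squares(xs, ys):
    """Closed-form least-squares estimate of a in y = a * x."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # noisy observations of y = 2x (invented)
a_hat = calibrate_least_squares(xs, ys)
```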

  2. Generic calibration procedures for nacelle-based profiling lidars

    DEFF Research Database (Denmark)

    Borraccino, Antoine; Courtney, Michael; Wagner, Rozenn

    In power performance testing, it has been demonstrated that the effects of wind speed and direction variations over the rotor disk can no longer be neglected for large wind turbines [1]. A new generation of commercial nacelle-based lidars is now available, offering wind profiling capabilities... Developing standard procedures for power curves using lidars requires assessing lidars measurement uncertainty that is provided by a calibration. Based on the calibration results from two lidars, the Avent 5-beam Demonstrator and the Zephir Dual Mode (ZDM), we present in this paper a generic methodology to calibrate profiling nacelle lidars.

  3. Calibration Systems Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Myers, Tanya L.; Broocks, Bryan T.; Phillips, Mark C.

    2006-02-01

    The Calibration Systems project at Pacific Northwest National Laboratory (PNNL) is aimed towards developing and demonstrating compact Quantum Cascade (QC) laser-based calibration systems for infrared imaging systems. These on-board systems will improve the calibration technology for passive sensors, which enable stand-off detection for the proliferation or use of weapons of mass destruction, by replacing on-board blackbodies with QC laser-based systems. This alternative technology can minimize the impact on instrument size and weight while improving the quality of instruments for a variety of missions. The potential of replacing flight blackbodies is made feasible by the high output, stability, and repeatability of the QC laser spectral radiance.

  4. Ibis ground calibration

    Energy Technology Data Exchange (ETDEWEB)

    Bird, A.J.; Barlow, E.J.; Tikkanen, T. [Southampton Univ., School of Physics and Astronomy (United Kingdom); Bazzano, A.; Del Santo, M.; Ubertini, P. [Istituto di Astrofisica Spaziale e Fisica Cosmica - IASF/CNR, Roma (Italy); Blondel, C.; Laurent, P.; Lebrun, F. [CEA Saclay - Sap, 91 - Gif sur Yvette (France); Di Cocco, G.; Malaguti, E. [Istituto di Astrofisica Spaziale e Fisica-Bologna - IASF/CNR (Italy); Gabriele, M.; La Rosa, G.; Segreto, A. [Istituto di Astrofisica Spaziale e Fisica- IASF/CNR, Palermo (Italy); Quadrini, E. [Istituto di Astrofisica Spaziale e Fisica-Cosmica, EASF/CNR, Milano (Italy); Volkmer, R. [Institut fur Astronomie und Astrophysik, Tubingen (Germany)

    2003-11-01

    We present an overview of results obtained from IBIS ground calibrations. The spectral and spatial characteristics of the detector planes and surrounding passive materials have been determined through a series of calibration campaigns. Measurements of pixel gain, energy resolution, detection uniformity, efficiency and imaging capability are presented. The key results obtained from the ground calibration have been: - optimization of the instrument tunable parameters, - determination of energy linearity for all detection modes, - determination of energy resolution as a function of energy through the range 20 keV - 3 MeV, - demonstration of imaging capability in each mode, - measurement of intrinsic detector non-uniformity and understanding of the effects of passive materials surrounding the detector plane, and - discovery (and closure) of various leakage paths through the passive shielding system.

  5. Calibrating Legal Judgments

    Directory of Open Access Journals (Sweden)

    Frederick Schauer

    2017-09-01

    Full Text Available Objective: to study the notion and essence of legal judgments calibration, the possibilities of using it in law-enforcement activity, and to explore the expenses and advantages of using it. Methods: dialectic approach to the cognition of social phenomena, which enables to analyze them in historical development and functioning in the context of the integrity of objective and subjective factors; it determined the choice of the following research methods: formal-legal, comparative-legal, sociological, and methods of cognitive psychology and philosophy. Results: In ordinary life, people who assess other people's judgments typically take into account the other judgments of those they are assessing in order to calibrate the judgment presently being assessed. The restaurant and hotel rating website TripAdvisor is exemplary because it facilitates calibration by providing access to a rater's previous ratings. Such information allows a user to see whether a particular rating comes from a rater who is enthusiastic about every place she patronizes or instead from someone who is incessantly hard to please. And even when less systematized, as in assessing a letter of recommendation or college transcript, calibration by recourse to the decisional history of those whose judgments are being assessed is ubiquitous. Yet despite the ubiquity and utility of such calibration, the legal system seems perversely to reject it. Appellate courts do not openly adjust their standard of review based on the previous judgments of the judge whose decision they are reviewing, nor do judges in reviewing legislative or administrative decisions, magistrates in evaluating search warrant representations, or jurors in assessing witness perception. In most legal domains, calibration by reference to the prior decisions of the reviewee is invisible, either because it does not exist or because reviewing bodies are unwilling to admit using what they in fact know and employ.
Scientific novelty: for the first

  6. Iterative Magnetometer Calibration

    Science.gov (United States)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
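A hedged sketch of the attitude-independent part of such a calibration: the reference field magnitude needs no attitude knowledge, so the bias can be fitted from |m_k − b|² = |B_ref|². Writing |m_k|² − |B_ref|² = 2 m_k·b − |b|² gives a linear system in b once the small |b|² term is lagged from the previous iterate. Data here are synthetic and noise-free; scale factors and alignments, which the full method also estimates, are omitted:

```python
def solve3(A, y):
    # Naive Gaussian elimination with partial pivoting for a 3x3 system.
    M = [A[i][:] + [y[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def estimate_bias(meas, field_mag, iters=20):
    # Iterate normal equations for 2 m_k . b = |m_k|^2 - |B|^2 + |b|^2,
    # lagging |b|^2 from the previous estimate.
    b = [0.0, 0.0, 0.0]
    for _ in range(iters):
        bb = sum(v * v for v in b)
        A = [[0.0] * 3 for _ in range(3)]
        y = [0.0] * 3
        for m in meas:
            row = [2.0 * v for v in m]
            rhs = sum(v * v for v in m) - field_mag ** 2 + bb
            for i in range(3):
                y[i] += row[i] * rhs
                for j in range(3):
                    A[i][j] += row[i] * row[j]
        b = solve3(A, y)
    return b

# Synthetic spin data: field magnitude 50, true bias (10, -5, 3).
s = 3 ** -0.5
dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0),
        (0, 0, 1), (0, 0, -1), (s, s, s), (-s, s, -s)]
true_b = (10.0, -5.0, 3.0)
meas = [tuple(50.0 * d[i] + true_b[i] for i in range(3)) for d in dirs]
bias = estimate_bias(meas, 50.0)
```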

  7. New in-situ, non-intrusive calibration

    Science.gov (United States)

    Zunino, Heather; Adrian, Ronald; Ding, Liuyang; Prestridge, Kathy

    2014-11-01

    Tomographic particle image velocimetry (PIV) experiments require precise and accurate camera calibration. Standard techniques make assumptions about hard-to-measure camera parameters (i.e. optical axis angle, distortions, etc.), reducing the calibration accuracy. Additionally, vibrations and slight movements after calibration may cause significant errors, particularly for tomographic PIV. These problems are exacerbated when a calibration target cannot be placed within the test section. A new PIV camera calibration method has been developed to permit precise calibration without placing a calibration target inside the test section or scanning the target over a volume. The method is capable of correcting for dynamic calibration changes occurring between PIV laser pulses. A transparent calibration plate with fine marks on both sides is positioned on the test section window. Dual-plane mapping makes it possible to determine a mapping function containing both position and angular direction of central rays from particles. From this information, central rays can be traced into the test section with high accuracy. Image distortion by the lens and refraction at various air-glass-liquid interfaces are accounted for, and no information about the position or angle of the camera(s) is required.
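The dual-plane idea can be illustrated with elementary geometry: a ray located at one point on each face of a plate of known thickness is fixed in both position and direction. This sketch ignores refraction and lens distortion, which the actual method accounts for, and all coordinates are invented:

```python
# Hedged sketch: given where the same central ray crosses the front and back
# faces of a transparent calibration plate, recover its origin and unit
# direction for tracing into the test section.

def central_ray(p_front, p_back):
    """Return (origin, unit direction) of the ray through both face points."""
    d = [b - a for a, b in zip(p_front, p_back)]
    n = sum(v * v for v in d) ** 0.5
    return p_front, [v / n for v in d]

# Illustrative crossing points (mm) on the two faces of the plate.
origin, direction = central_ray((1.0, 2.0, 0.0), (1.3, 2.4, 1.0))
```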

  8. Technical Report Series on Global Modeling and Data Assimilation. Volume 42; Soil Moisture Active Passive (SMAP) Project Calibration and Validation for the L4_C Beta-Release Data Product

    Science.gov (United States)

    Koster, Randal D. (Editor); Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima (Editor); Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas

    2015-01-01

    During the post-launch Cal/Val Phase of SMAP there are two objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements according to the Cal/Val timeline. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product specifically for the beta release. The beta-release version of the SMAP L4_C algorithms utilizes a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily NEE and component carbon fluxes, particularly vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (<10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape FT controls on GPP and Reco (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and underlying freeze/thaw and soil moisture constraints to these processes, 2) documenting primary connections between terrestrial water, energy and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems.

  9. Control volume based hydrocephalus research; a phantom study

    Science.gov (United States)

    Cohen, Benjamin; Voorhees, Abram; Madsen, Joseph; Wei, Timothy

    2009-11-01

    Hydrocephalus is a complex spectrum of neurophysiological disorders involving perturbation of the intracranial contents; primarily increased intraventricular cerebrospinal fluid (CSF) volume and intracranial pressure are observed. CSF dynamics are highly coupled to the cerebral blood flows and pressures as well as the mechanical properties of the brain. Hydrocephalus, as such, is a very complex biological problem. We propose integral control volume analysis as a method of tracking these important interactions using mass and momentum conservation principles. As a first step in applying this methodology in humans, an in vitro phantom is used as a simplified model of the intracranial space. The phantom's design consists of a rigid container filled with a compressible gel. Within the gel a hollow spherical cavity represents the ventricular system and a cylindrical passage represents the spinal canal. A computer controlled piston pump supplies sinusoidal volume fluctuations into and out of the flow phantom. MRI is used to measure fluid velocity and volume change as functions of time. Independent pressure measurements and momentum flow rate measurements are used to calibrate the MRI data. These data are used as a framework for future work with live patients and normal individuals. Flow and pressure measurements on the flow phantom will be presented through the control volume framework.
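For the phantom described above, the control-volume bookkeeping reduces, for an incompressible supply, to integrating the pump flow rate; a minimal sketch with an assumed sinusoidal flow Q(t) = Q0 sin(2π f t), so dV/dt = Q(t):

```python
import math

# Hedged sketch of control-volume mass conservation for the flow phantom:
# the cavity volume change is the time integral of the pump's sinusoidal
# volume flow. Forward-Euler integration over one pump period.

def volume_history(q0, freq, dt, steps, v0=0.0):
    v, out = v0, [v0]
    for i in range(steps):
        t = i * dt
        v += q0 * math.sin(2 * math.pi * freq * t) * dt
        out.append(v)
    return out

# Q0 = 1 ml/s, 1 Hz pump, integrated over exactly one period.
vols = volume_history(q0=1.0, freq=1.0, dt=1e-4, steps=10000)
```

Over a full period the net injected volume is zero, so the cavity volume returns to its starting value; the peak excursion is Q0/(π f) relative to the mean.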

  10. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Reda, Ibrahim; Robinson, Justin

    2016-07-01

    Accurate solar radiation data sets are critical to reducing the expenses associated with mitigating performance risk for solar energy conversion systems, and they help utility planners and grid system operators understand the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of calibration methodologies and the resulting calibration responsivities provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these radiometers are calibrated indoors, and some are calibrated outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The reference radiometer calibrations are traceable to the World Radiometric Reference. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately assist in determining the uncertainties of the radiometer data and will assist in developing consensus on a standard for calibration.
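The effect of a responsivity difference is direct, since irradiance is recovered by dividing the radiometer signal by its calibration responsivity; the signal and responsivity values below are invented to show how a 1% calibration difference propagates into the measured irradiance:

```python
# Hedged sketch: a pyranometer-style radiometer reports microvolts, and the
# calibration responsivity (uV per W/m2) converts that to irradiance. A 1%
# responsivity difference between two calibrations maps directly to a 1%
# difference in reported irradiance.

def irradiance(signal_uv, responsivity_uv_per_wm2):
    return signal_uv / responsivity_uv_per_wm2

sig = 8000.0                       # microvolts (illustrative)
indoor = irradiance(sig, 8.00)     # responsivity from one calibration
outdoor = irradiance(sig, 8.08)    # 1% higher responsivity from another
diff_pct = 100.0 * (indoor - outdoor) / outdoor
```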

  11. Calibration of line-scan cameras for precision measurement.

    Science.gov (United States)

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Niu, Zhiyuan

    2016-09-01

    Calibration of line-scan cameras for precision measurement should have large calibration volume and be flexible in the actual measurement field. In this paper, we present a high-precision calibration method. Instead of using a large 3D pattern, we use a small planar pattern and a precalibrated matrix camera to obtain plenty of points with a suitable distribution, which would ensure the precision of the calibration results. The matrix camera removes the necessity of precise adjustment and movement and links the line-scan camera to the world easily, both of which enhance flexibility in the measurement field. The method has been verified by experiments. The experimental results demonstrated that the proposed method gives a practical solution to calibrating line-scan cameras for precision measurement.

  12. Smart Calibration of Excavators

    DEFF Research Database (Denmark)

    Bro, Marie; Døring, Kasper; Ellekilde, Lars-Peter

    2005-01-01

    Excavators dig holes. But where is the bucket? The purpose of this report is to treat four different problems concerning calibrations of position indicators for excavators in operation at concrete construction sites. All four problems are related to the question of how to determine the precise ge...

  13. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    is suggested to cope with the singular design matrix most often seen in chemometric calibration. Furthermore, the proposed algorithm may be generalized to all convex penalties of the form Σ|β_j|^γ where γ ≥ 1, i.e. a method that continuously varies from ridge regression...
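The convex penalty family referred to here, Σ|β_j|^γ with γ ≥ 1 (often called the bridge penalty), interpolates between the lasso (γ = 1) and ridge regression (γ = 2). A minimal sketch that just evaluates the penalized least-squares objective on made-up data, not the authors' fitting algorithm:

```python
# Hedged sketch: the bridge-penalized calibration objective
#   RSS(beta) + lam * sum_j |beta_j|^gamma,  gamma >= 1.
# gamma = 1 gives the lasso penalty, gamma = 2 the ridge penalty.

def bridge_objective(beta, xs, ys, lam, gamma):
    rss = sum((y - sum(b * x for b, x in zip(beta, row))) ** 2
              for row, y in zip(xs, ys))
    penalty = lam * sum(abs(b) ** gamma for b in beta)
    return rss + penalty

xs = [[1.0, 0.5], [0.5, 1.0]]   # toy design matrix
ys = [1.0, 1.0]
obj = bridge_objective([0.6, 0.6], xs, ys, lam=0.1, gamma=1.5)
```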

  14. Calibrating Communication Competencies

    Science.gov (United States)

    Surges Tatum, Donna

    2016-11-01

    The Many-faceted Rasch measurement model is used in the creation of a diagnostic instrument by which communication competencies can be calibrated, the severity of observers/raters can be determined, the ability of speakers measured, and comparisons made between various groups.

  15. NVLAP calibration laboratory program

    Energy Technology Data Exchange (ETDEWEB)

    Cigler, J.L.

    1993-12-31

    This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).

  16. CALIBRATION OF PHOSWICH DETECTORS

    NARCIS (Netherlands)

    LEEGTE, HKW; KOLDENHOF, EE; BOONSTRA, AL; WILSCHUT, HW

    1992-01-01

    Two important aspects for the calibration of phoswich detector arrays have been investigated. It is shown that common gate ADCs can be used: The loss in particle identification due to fluctuations in the gate timing in multi-hit events can be corrected for by a simple procedure using the measured ti

  17. Measurement System & Calibration report

    DEFF Research Database (Denmark)

    Kock, Carsten Weber; Vesth, Allan

    This Measurement System & Calibration report is describing DTU’s measurement system installed at a specific wind turbine. A major part of the sensors has been installed by others (see [1]) the rest of the sensors have been installed by DTU. The results of the measurements, described in this report...

  18. Entropic calibration revisited

    Energy Technology Data Exchange (ETDEWEB)

    Brody, Dorje C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)]. E-mail: d.brody@imperial.ac.uk; Buckley, Ian R.C. [Centre for Quantitative Finance, Imperial College, London SW7 2AZ (United Kingdom); Constantinou, Irene C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom); Meister, Bernhard K. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)

    2005-04-11

    The entropic calibration of the risk-neutral density function is effective in recovering the strike dependence of options, but encounters difficulties in determining the relevant greeks. By use of put-call reversal we apply the entropic method to the time reversed economy, which allows us to obtain the spot price dependence of options and the relevant greeks.
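A generic sketch of the entropic idea (not the authors' algorithm): the minimum relative-entropy density consistent with a single moment constraint is an exponential tilt of the prior, and the Lagrange multiplier can be found by bisection since the tilted mean is increasing in it. States, prior, and the target price below are invented:

```python
import math

# Hedged sketch of maximum-entropy calibration: q_i ∝ prior_i * exp(lam * x_i)
# is the minimum relative-entropy density satisfying E_q[x] = target.

def tilted_mean(xs, prior, lam):
    w = [p * math.exp(lam * x) for p, x in zip(prior, xs)]
    z = sum(w)
    return sum(wi * x for wi, x in zip(w, xs)) / z

def maxent_tilt(xs, prior, target, lo=-0.5, hi=0.5, tol=1e-12):
    # tilted_mean is monotonically increasing in lam, so bisect.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilted_mean(xs, prior, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

states = [80.0, 90.0, 100.0, 110.0, 120.0]   # illustrative terminal prices
prior = [0.2] * 5                             # uniform prior density
lam = maxent_tilt(states, prior, 103.0)       # match a forward price of 103
```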

  19. The External Calibrator for Hydrogen Observatories

    CERN Document Server

    Jacobs, Daniel C; Bowman, Judd; Neben, Abraham R; Stinnett, Benjamin; Turner, Lauren

    2016-01-01

    Multiple instruments are pursuing constraints on dark energy, observing reionization and opening a window on the dark ages through the detection and characterization of the 21cm hydrogen line across the redshift spectrum, from nearby to z=25. These instruments, including CHIME in the sub-meter and HERA in the meter bands, are wide-field arrays with multiple-degree beams, typically operating in transit mode. Accurate knowledge of their primary beams is critical for separation of bright foregrounds from the desired cosmological signals, but difficult to achieve through astronomical observations alone. Previous beam calibration work has focused on model verification and does not address the need of 21cm experiments for routine beam mapping, to the horizon, of the as-built array. We describe the design and methodology of a drone-mounted calibrator, the External Calibrator for Hydrogen Observatories (ECHO), that aims to address this need. We report on a first set of trials to calibrate low-frequency dipoles and co...

  20. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 µg/m³, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.
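The vapor-pressure equation feeds a simple ideal-gas conversion from equilibrium pressure to mass concentration; the sketch below uses an approximate literature value for mercury's vapor pressure (~0.17 Pa at 20 °C) rather than the NIST-recommended equation discussed above:

```python
# Hedged sketch: saturated elemental mercury concentration in an equilibrium
# chamber from the ideal gas law, C = p*M/(R*T). The vapor pressure used is
# an approximate literature figure, not a NIST-traceable value; calibrators
# dilute this saturated vapor down to the 2-40 ug/m3 working range.

M_HG = 0.20059      # molar mass of mercury, kg/mol
R = 8.314           # gas constant, J/(mol K)

def vapor_concentration(p_pa, t_kelvin):
    """Saturated mercury mass concentration in g/m3."""
    return p_pa * M_HG / (R * t_kelvin) * 1000.0

c = vapor_concentration(0.17, 293.15)   # roughly 0.014 g/m3 at 20 C
```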

  1. Marine X-band Weather Radar Data Calibration

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2012-01-01

    Application of weather radar data in urban hydrology is evolving and radar data is now applied for both modelling, analysis, and real time control purposes. In these contexts, it is allimportant that the radar data is well calibrated and adjusted in order to obtain valid quantitative precipitation...... estimates. This paper presents some of the challenges in small marine X-band radar calibration by comparing three calibration procedures for assessing the relationship between radar and rain gauge data. Validation shows similar results for precipitation volumes but more diverse results on peak rain...
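One of the simplest radar-gauge adjustment procedures of the kind compared here is a mean-field bias correction, scaling the radar field so its accumulation matches the gauges; the paired accumulations below are illustrative, not from the paper:

```python
# Hedged sketch: mean-field bias adjustment of radar rainfall. The factor is
# the ratio of summed gauge to summed radar accumulations at paired sites,
# then applied uniformly to the radar field.

def mean_field_bias(gauge_mm, radar_mm):
    return sum(gauge_mm) / sum(radar_mm)

gauges = [12.0, 8.0, 20.0]       # gauge accumulations, mm (illustrative)
radar = [10.0, 6.0, 16.0]        # co-located radar accumulations, mm
f = mean_field_bias(gauges, radar)        # 40/32 = 1.25
adjusted = [f * r for r in radar]         # bias-corrected radar field
```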

  3. Spatial distributions and transient effects in molecular fluxes for the calibration of thermal desorption spectrometers

    Science.gov (United States)

    Jackson, Robert Howard

    The measurement of molecular fluxes with a differentially pumped mass spectrometer requires a method by which the mass spectrometer signal can be calibrated against a known molecular flux. This work presents a methodology for producing known molecular fluxes using a flow calibrated effusion source in conjunction with a glass capillary array molecular doser. The effusion source is comprised of a calibrated volume, a spinning rotor pressure gauge, and a positive shutoff capillary leak valve. The efficiency of calculating the spatial flux distribution from the doser was improved by reparameterization of the formalism of Winkler and Yates. This formalism was extended to arbitrarily shaped planar targets using Fourier convolution techniques. The flux distribution from the doser was measured close to the array at 0.635mm and at 1 cm distance. Using the effective diameter of the array as the only fitting parameter for the close data, the model correctly predicts the flux distribution at 1 cm. The loss of molecular flow due to sticking in the doser affects the calibration and, therefore was estimated from the transient response through the doser using a 1-dimensional diffusion model of the transient flow from the capillary valve through the doser. Three cases are considered: a long narrow tube without sticking, with sticking, and with a restricted exit. In each case, the partial differential equation for 1-D diffusion was solved for a step change in the inlet flow with a well-pumped exit. The transient exit flow measured for argon gas fits the non-sticking model accurately. The oxygen response included a non-sticking transient, consistent with the mass adjusted argon transient, and a slowly saturating exponential function. This slow function is not consistent with sticking in the doser tube but may be due to changes in the mass spectrometer multiplier or some other adsorption loss. Finally, the above methodology was used to calibrate the mass spectrometer for oxygen and
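The flux from such an effusion source follows kinetic theory: the molecular rate through a small orifice is N = p·A/√(2π m k T). A sketch with assumed (not measured) pressure, orifice area, and temperature:

```python
import math

# Hedged sketch: kinetic-theory effusion rate (molecules/s) through a small
# orifice, N = p * A / sqrt(2 * pi * m * k * T). All numbers are assumed.

K_B = 1.380649e-23   # Boltzmann constant, J/K

def effusion_rate(p_pa, area_m2, mass_kg, t_kelvin):
    return p_pa * area_m2 / math.sqrt(2 * math.pi * mass_kg * K_B * t_kelvin)

m_o2 = 32.0 * 1.66054e-27            # O2 molecular mass, kg
r1 = effusion_rate(1.0e-2, 1e-6, m_o2, 300.0)
r2 = effusion_rate(2.0e-2, 1e-6, m_o2, 300.0)   # doubled source pressure
```

Because the rate is linear in pressure, the flow-calibrated volume and pressure gauge upstream fix the delivered flux directly.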

  4. Mercury Calibration System

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Joseph Rovani; Mark Sanderson; Ryan Boysen; William Schuster

    2009-03-11

    U.S. Environmental Protection Agency (EPA) Performance Specification 12 in the Clean Air Mercury Rule (CAMR) states that a mercury CEM must be calibrated with National Institute of Standards and Technology (NIST)-traceable standards. As of early 2009, a NIST-traceable standard for elemental mercury CEM calibration still did not exist. Despite the vacatur of CAMR by a Federal appeals court in early 2008, a NIST-traceable standard is still needed for whatever regulation is implemented in the future. Thermo Fisher is a major vendor providing complete integrated mercury continuous emissions monitoring (CEM) systems to the industry. WRI is participating with EPA, EPRI, NIST, and Thermo Fisher in the development of the criteria that will be used in the traceability protocols to be issued by EPA. An initial draft of an elemental mercury calibration traceability protocol was distributed for comment to the participating research groups and vendors on a limited basis in early May 2007. In August 2007, EPA issued an interim traceability protocol for elemental mercury calibrators. Various working drafts of the new interim traceability protocols were distributed in late 2008 and early 2009 to participants in the Mercury Standards Working Committee project. The protocols include sections on qualification and certification. The qualification section describes in general terms the tests that must be conducted by calibrator vendors to demonstrate that their calibration equipment meets the minimum requirements to be established by EPA for use in CAMR monitoring. Variables to be examined include linearity, ambient temperature, back pressure, ambient pressure, line voltage, and the effects of shipping. None of the procedures is described in detail in the draft interim documents; however, they indicate what EPA would like eventually to develop.
WRI is providing the data and results to EPA for use in developing revised experimental procedures and realistic acceptance criteria based on

  5. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. The challenges in transferring the methodology from an RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required to calibrate an NWP model. Three free model parameters, mainly affecting turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
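The meta-model idea can be sketched in a few lines: evaluate the (expensive) model at a modest design of parameter settings, fit a cheap quadratic surrogate of the forecast-error score, and minimize the surrogate instead of the model. The sketch below uses a synthetic noisy quadratic bowl as a stand-in for the NWP scoring step; all names and numbers are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive evaluation step: in reality each call is a full
# NWP run scored against observations. Here, a noisy bowl with its optimum
# at (0.3, -0.5) in a 2-parameter space.
def forecast_error(p):
    return 1.0 + 4*(p[0] - 0.3)**2 + 2*(p[1] + 0.5)**2 \
           + 0.01 * rng.standard_normal()

# 1. Evaluate the model at a modest design of parameter settings.
P = rng.uniform(-1, 1, size=(40, 2))
y = np.array([forecast_error(p) for p in P])

# 2. Fit the quadratic meta-model  e(p) ~ c0 + b.p + p'Ap  by least squares.
A = np.column_stack([np.ones(len(P)), P[:, 0], P[:, 1],
                     P[:, 0]**2, P[:, 1]**2, P[:, 0] * P[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# 3. Minimize the cheap meta-model on a fine grid instead of the model.
grid = np.mgrid[-1:1:201j, -1:1:201j].reshape(2, -1).T
G = np.column_stack([np.ones(len(grid)), grid[:, 0], grid[:, 1],
                     grid[:, 0]**2, grid[:, 1]**2, grid[:, 0] * grid[:, 1]])
best = grid[np.argmin(G @ coef)]
print(best)  # close to the true optimum (0.3, -0.5)
```

The computational saving is that step 1 is the only place the full model runs; the number of required runs grows only quadratically with the number of free parameters.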

  6. The Calibration Reference Data System

    Science.gov (United States)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
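The core idea of such a rules-based system can be sketched as a table of match criteria with a best-match (most-specific-wins) lookup. This is only an illustration of the concept; the keyword names and file names are hypothetical and this is not the real CRDS rmap format:

```python
# Minimal sketch of rules-based reference-file selection in the spirit of
# CRDS. RULES pairs match criteria with a reference file; '*' matches any
# value. All filenames and rule entries here are invented for illustration.
RULES = [
    ({"INSTRUME": "NIRCAM", "FILTER": "F200W"}, "nircam_f200w_flat_007.fits"),
    ({"INSTRUME": "NIRCAM", "FILTER": "*"},     "nircam_generic_flat_003.fits"),
    ({"INSTRUME": "*",      "FILTER": "*"},     "fallback_flat_001.fits"),
]

def select_reference(header):
    """Return the reference file whose criteria best match the observation
    header; rules with more exact (non-wildcard) matches take precedence."""
    candidates = []
    for criteria, ref in RULES:
        if all(v == "*" or header.get(k) == v for k, v in criteria.items()):
            exact = sum(v != "*" for v in criteria.values())
            candidates.append((exact, ref))
    if not candidates:
        raise LookupError("no applicable reference file")
    return max(candidates, key=lambda c: c[0])[1]

print(select_reference({"INSTRUME": "NIRCAM", "FILTER": "F200W"}))
# -> nircam_f200w_flat_007.fits
print(select_reference({"INSTRUME": "MIRI", "FILTER": "F770W"}))
# -> fallback_flat_001.fits
```

Keeping the rules as data rather than code is what makes the system generalizable to other observatories, as the abstract notes: a new instrument only needs a new rule table, not new selection logic.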

  7. Lidar calibration experiments

    DEFF Research Database (Denmark)

    Ejsing Jørgensen, Hans; Mikkelsen, T.; Streicher, J.

    1997-01-01

    A series of atmospheric aerosol diffusion experiments combined with lidar detection was conducted to evaluate and calibrate an existing retrieval algorithm for aerosol backscatter lidar systems. The calibration experiments made use of two (almost) identical mini-lidar systems for aerosol cloud...... detection to test the reproducibility and uncertainty of lidars. Lidar data were obtained from both single-ended and double-ended Lidar configurations. A backstop was introduced in one of the experiments and a new method was developed where information obtained from the backstop can be used in the inversion...... algorithm. Independent in-situ aerosol plume concentrations were obtained from a simultaneous tracer gas experiment with SF6, and comparisons with the two lidars were made. The study shows that the reproducibility of the lidars is within 15%, including measurements from both sides of a plume...

  8. HIRDLS monochromator calibration equipment

    Science.gov (United States)

    Hepplewhite, Christopher L.; Barnett, John J.; Djotni, Karim; Whitney, John G.; Bracken, Justain N.; Wolfenden, Roger; Row, Frederick; Palmer, Christopher W. P.; Watkins, Robert E. J.; Knight, Rodney J.; Gray, Peter F.; Hammond, Geoffory

    2003-11-01

    A specially designed and built monochromator was developed for the spectral calibration of the HIRDLS instrument. The High Resolution Dynamics Limb Sounder (HIRDLS) is a precision infrared remote sensing instrument with very tight requirements on the knowledge of its response to received radiation. A high-performance, vacuum-compatible monochromator was developed with a wavelength range from 4 to 20 microns to encompass that of the HIRDLS instrument. The monochromator is integrated into a collimating system which is shared with a set of tiny broad-band sources used for independent spatial response measurements (reported elsewhere). This paper describes the design and implementation of the monochromator and the performance obtained during the period of calibration of the HIRDLS instrument at Oxford University in 2002.

  9. Optical tweezers absolute calibration

    CERN Document Server

    Dutra, R S; Neto, P A Maia; Nussenzveig, H M

    2014-01-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past fifteen years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spo...

  10. Calibration Facilities for NIF

    Energy Technology Data Exchange (ETDEWEB)

    Perry, T.S.

    2000-06-15

    The calibration facilities will be dynamic and will change to meet the needs of experiments. Small sources, such as the Manson Source should be available to everyone at any time. Carrying out experiments at Omega is providing ample opportunity for practice in pre-shot preparation. Hopefully, the needs that are demonstrated in these experiments will assure the development of (or keep in service) facilities at each of the laboratories that will be essential for in-house preparation for experiments at NIF.

  11. Mesoscale hybrid calibration artifact

    Science.gov (United States)

    Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.

    2010-09-07

    A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and the method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.

  12. Astrid-2 SSC ASUMagnetic Calibration

    DEFF Research Database (Denmark)

    Primdahl, Fritz

    1997-01-01

    Report of the intercalibration between the star camera and the fluxgate magnetometer onboard the ASTRID-2 satellite. This calibration was performed on the night of 15-16 May 1997 at the Lovö magnetic observatory.

  13. Calibration of Underwater Sound Transducers

    Directory of Open Access Journals (Sweden)

    H.R.S. Sastry

    1983-07-01

    Full Text Available The techniques for calibrating underwater sound transducers under far-field, near-field and closed-environment conditions are reviewed in this paper. The design of an acoustic calibration tank is described. The facilities available at the Naval Physical & Oceanographic Laboratory, Cochin for the calibration of transducers are also listed.

  14. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities; Metodos de calibracao e de intercomparacao de calibradores de dose utilizados em servicos de medicina nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Alessandro Martins da

    1999-07-01

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment used presents good performance and is regularly calibrated at an authorized laboratory. This occurs if adequate quality assurance procedures are carried out. Some quality control tests should be performed daily, others biannually or yearly, testing, for example, accuracy and precision, reproducibility, and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the variation of the response with increasing source volume at constant activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)
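The response-linearity test mentioned above is commonly done with a decaying source: readings taken over time are compared with the activity predicted from pure radioactive decay of the first reading, and the percent deviations quantify the linearity. A minimal sketch with hypothetical Tc-99m numbers (half-life about 6 h):

```python
import math

def linearity_deviation(readings, half_life_h):
    """Percent deviations of dose-calibrator readings from the activity
    expected by decaying the first measurement.
    readings: list of (time_h, measured_activity_MBq), time-ordered."""
    t0, a0 = readings[0]
    lam = math.log(2) / half_life_h  # decay constant, 1/h
    devs = []
    for t, a in readings[1:]:
        expected = a0 * math.exp(-lam * (t - t0))
        devs.append(100.0 * (a - expected) / expected)
    return devs

# Hypothetical decay series: 800 MBq halves to 400 then 200 over two
# half-lives; the small offsets give +/-0.5% deviations.
readings = [(0.0, 800.0), (6.0, 402.0), (12.0, 199.0)]
print(linearity_deviation(readings, 6.0))  # -> [0.5, -0.5]
```

An instrument passing a typical acceptance criterion would keep all deviations within a few percent across its usable activity range.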

  15. ENHANCING SEISMIC CALIBRATION RESEARCH THROUGH SOFTWARE AUTOMATION AND SCIENTIFIC INFORMATION MANAGEMENT

    Energy Technology Data Exchange (ETDEWEB)

    Ruppert, S D; Dodge, D A; Ganzberger, M D; Hauk, T F; Matzel, E M

    2007-07-06

    The National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Engineering (GNEM R&E) Program at LLNL has made significant progress enhancing the process of deriving seismic calibrations and performing scientific integration, analysis, and information management with software automation tools. Several achievements in schema design, data visualization, synthesis, and analysis were completed this year. Our tool efforts address the problematic issues of very large datasets and varied formats encountered during seismic calibration research. As data volumes have increased, scientific information management issues such as data quality assessment, ontology mapping, and metadata collection that are essential for production and validation of derived calibrations have negatively impacted researchers' ability to produce products. New information management and analysis tools have resulted in demonstrated gains in the efficiency of producing scientific data products and improved accuracy of derived seismic calibrations. Significant software engineering and development efforts have produced an object-oriented framework that provides database-centric coordination between scientific tools, users, and data. Nearly a half billion parameters, signals, measurements, and metadata entries are all stored in a relational database accessed by an extensive object-oriented multi-technology software framework that includes elements of stored procedures, real-time transactional database triggers and constraints, as well as coupled Java and C++ software libraries to handle the information interchange and validation requirements. Significant resources were applied to schema design to enable recording of processing flow and metadata. A core capability is the ability to rapidly select and present subsets of related signals and measurements to the researchers for analysis and distillation, both visually (Java GUI client applications) and in batch mode

  16. Intercomparison of calibration procedures of high dose rate {sup 192} Ir sources in Brazil and a proposal of a new methodology; Intercomparacao de procedimientos de calibracao de fontes de {sup 192} Ir de alta taxa de dose no Brasil e proposta de uma nova metodologia

    Energy Technology Data Exchange (ETDEWEB)

    Marechal, M.H.; Almeida, C.E. de [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes IRD/CNEN. Caixa Postal 37750 CEP 22780-160 Rio de Janeiro (Brazil)

    1998-12-31

    The objective of this paper is to report the results of an intercomparison of the calibration procedures for {sup 192}Ir sources presently in use in Brazil and to propose a calibration procedure to derive the N{sub k} for a Farmer-type ionization chamber at the {sup 192}Ir energy by interpolating between {sup 60}Co gamma-ray and 250 kV x-ray calibration factors. The intercomparison results were all within {+-} 3.0%, except for one case where 4.6% was observed, later identified as a problem with the N{sub k} value for x-rays. The method proposed in the present work makes possible an improvement in the metrological coherence among the calibration laboratories and their users, since the N{sub k} values can then be provided by any of the members of the SSDL network. (Author)
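The paper's exact weighting scheme is not reproduced here; as an illustration only, an interpolation of this kind can be sketched as a linear interpolation in mean photon energy between the two calibration beams. The mean energies and N{sub k} values below are assumed, not taken from the paper:

```python
def nk_ir192(nk_250kv, nk_co60, e_250kv=0.146, e_ir192=0.397, e_co60=1.25):
    """Estimate the air-kerma calibration factor N_k for Ir-192 by linear
    interpolation in mean photon energy (MeV) between a 250 kV x-ray beam
    and Co-60. Energies are illustrative mean values; the actual protocol
    may weight the two beams differently."""
    w = (e_ir192 - e_250kv) / (e_co60 - e_250kv)  # interpolation weight
    return (1 - w) * nk_250kv + w * nk_co60

# Hypothetical chamber factors (arbitrary consistent units):
print(nk_ir192(nk_250kv=0.985, nk_co60=1.010))  # between the two inputs
```

Because the Ir-192 mean energy lies between the two calibration qualities, the interpolated N{sub k} always falls between the two measured factors, which is the property the intercomparison exploits.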

  17. Internet-based calibration of a multifunction calibrator

    Energy Technology Data Exchange (ETDEWEB)

    BUNTING BACA,LISA A.; DUDA JR.,LEONARD E.; WALKER,RUSSELL M.; OLDHAM,NILE; PARKER,MARK

    2000-04-17

    A new way of providing calibration services is evolving which employs the Internet to expand present capabilities and make the calibration process more interactive. Sandia National Laboratories and the National Institute of Standards and Technology are collaborating to set up and demonstrate a remote calibration of multifunction calibrators using this Internet-based technique that is becoming known as e-calibration. This paper describes the measurement philosophy and the Internet resources that can provide real-time audio/video/data exchange, consultation and training, as well as web-accessible test procedures, software and calibration reports. The communication system utilizes commercial hardware and software that should be easy to integrate into most calibration laboratories.

  18. Monomer consumption in MAGIC-type polymer gels in the Bragg-peak of proton beams observed by volume selective 1H MR-spectroscopy (MRS): proof of principle for high resolution MRS-methodology with a sensitive rf-detector

    Science.gov (United States)

    Schmid, A. I.; Laistler, E.; Sieg, J.; Dymerska, B.; Wieland, M.; Naumann, J.; Jaekel, O.; Berg, A.

    2013-06-01

    Mono-energetic proton and heavy ion beams for tumour therapy feature high dose gradients laterally and at their penetration depth, characterized by the Bragg-peak. The 3-dimensional dosimetry of such Hadron particle beams poses high demands on the spatial resolution of the imaging methodology and on the linearity of the polymer gel dose response over a wide dose range and at high linear energy transfer (LET). In almost all polymer gels the Bragg-peak dose response is therefore quenched. Volume selective MR-spectroscopy is in principle capable of delivering information on the polymerization process. We here present the MR-methodology to obtain MR-spectroscopic (MRS) data on the monomer consumption at the very small voxel volumes necessary for resolving e.g. the Bragg-peak area. Using additional hardware components, i.e. a strong gradient system and a very sensitive rf-detector at a high field human 7T scanner, MR-microimaging and MRS with 600 μm depth resolution can be implemented at very short measurement time. The vinyl groups of methacrylic acid in a MAGIC-type polymer gel can be resolved by volume selective MRS. Complete monomer consumption due to polymerization is demonstrated selectively in the Bragg-peak, indicating one main reason for Bragg-peak quenching in the investigated polymer gel.

  19. Natural Computing in Computational Finance Volume 4

    CERN Document Server

    O’Neill, Michael; Maringer, Dietmar

    2012-01-01

    This book follows on from Natural Computing in Computational Finance Volumes I, II and III. As in the previous volumes of this series, the book consists of a series of chapters, each of which was selected following a rigorous, peer-reviewed selection process. The chapters illustrate the application of a range of cutting-edge natural computing and agent-based methodologies in computational finance and economics. The applications explored include option model calibration, financial trend reversal detection, enhanced indexation, algorithmic trading, corporate payout determination and agent-based modeling of liquidity costs, and trade strategy adaptation. While describing cutting-edge applications, the chapters are written so that they are accessible to a wide audience. Hence, they should be of interest to academics, students and practitioners in the fields of computational finance and economics.

  20. An Aromatic Inventory of the Local Volume

    CERN Document Server

    Marble, A R; van Zee, L; Dale, D A; Smith, J D T; Gordon, K D; Wu, Y; Lee, J C; Kennicutt, R C; Skillman, E D; Johnson, L C; Block, M; Calzetti, D; Cohen, S A; Lee, H; Schuster, M D

    2010-01-01

    Using infrared photometry from the Spitzer Space Telescope, we perform the first inventory of aromatic feature emission (AFE, but also commonly referred to as PAH emission) for a statistically complete sample of star-forming galaxies in the local volume. The photometric methodology involved is calibrated and demonstrated to recover the aromatic fraction of the IRAC 8 micron flux with a standard deviation of 6% for a training set of 40 SINGS galaxies (ranging from stellar to dust dominated) with both suitable mid-infrared Spitzer IRS spectra and equivalent photometry. A potential factor of two improvement could be realized with suitable 5.5 and 10 micron photometry, such as what may be provided in the future by JWST. The resulting technique is then applied to mid-infrared photometry for the 258 galaxies from the Local Volume Legacy (LVL) survey, a large sample dominated in number by low-luminosity dwarf galaxies for which obtaining comparable mid-infrared spectroscopy is not feasible. We find the total LVL lum...

  1. Measurement of rill erosion through a new UAV-GIS methodology

    Directory of Open Access Journals (Sweden)

    Paolo Bazzoffi

    2015-11-01

    Full Text Available Photogrammetry from aerial pictures acquired by micro Unmanned Aerial Vehicles (UAVs), integrated with post-processing, is a promising methodology in terms of speed of data acquisition, degree of automation of data processing, and cost-effectiveness. The new UAV-GIS methodology has been developed for three main purposes: (i) quick measurement of rill erosion at field scale, combining simplicity of field survey with reliability of results at an affordable price; (ii) calibration of the RUSLE model to make it suitable for the purposes of the CAP common indicator; (iii) provision of an easy evaluation tool to Regions and to non-research professionals who use the very popular ESRI ArcGIS software, for assessing the effectiveness of soil conservation measures adopted under the CAP and for calibrating the common indicator "soil erosion by water". High-resolution stereo photo pairs, acquired close to the soil, are of crucial importance in order to produce high-resolution DEMs to be analysed under GIS. The GIS methodology consists of measuring the rill erosion that occurred in a plot from the total volume of the incisions, regardless of internal sediment redeposition, based on Plan Curvature analysis and Focal Statistics analysis, described in detail as they are the essential constituents of the new methodology. To determine the effectiveness and reliability of the new methodology, a comparison was made between rill depth measured manually in the field at 51 rill points and depth measured by the UAV-GIS methodology. The best calibration equation was obtained by using a 30 cm radius in the Focal Statistics analysis. The linear regression was highly significant, with R2 = 0.87. Two case studies are presented, solved step by step, in order to help the user overcome possible difficulties of interpretation in the application of the GIS procedure. The first solved exercise concerns a heavily eroded plot where only one DEM, derived
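The focal-statistics step above can be sketched numerically: a focal (moving-window) mean of the DEM acts as the pre-erosion reference surface, and the incision volume is the sum of the positive differences between that surface and the DEM. The sketch below is a numpy-only stand-in for the ArcGIS Focal Statistics tool, run on a synthetic plot with one rill; the radius and rill geometry are invented, and the radius must be large relative to rill width (the paper calibrated a 30 cm radius):

```python
import numpy as np

def focal_mean(dem, radius):
    """Mean over a (2*radius+1)^2 neighborhood, edge-padded -- a numpy-only
    stand-in for a GIS Focal Statistics (mean) operation."""
    padded = np.pad(dem, radius, mode="edge")
    out = np.zeros_like(dem, dtype=float)
    n = 2 * radius + 1
    for i in range(n):
        for j in range(n):
            out += padded[i:i + dem.shape[0], j:j + dem.shape[1]]
    return out / n**2

def rill_volume(dem, cell_size, radius=15):
    """Incision volume: positive differences between the smoothed reference
    surface and the DEM, summed over the plot (internal sediment
    redeposition ignored, as in the abstract)."""
    ref = focal_mean(dem, radius)
    depth = np.clip(ref - dem, 0.0, None)
    return depth.sum() * cell_size**2

# Synthetic 1 m x 1 m plot at 1 cm resolution with one 2 cm deep rill.
dem = np.zeros((100, 100))
dem[:, 48:52] -= 0.02  # rill: 100 cells long, 4 wide, 0.02 m deep
print(rill_volume(dem, cell_size=0.01))  # ~7e-4 m^3; true volume is 8e-4 m^3
```

The slight underestimate is the smoothing bias of the focal mean, which is why the paper calibrates the window radius against manual rill-depth measurements.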

  2. Quality of the neutron probe calibration curve; Qualidade da curva de calibracao da sonda de neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Libardi, Paulo Leonel; Moraes, Sergio Oliveira [Sao Paulo Univ., Piracicaba, SP (Brazil). Escola Superior de Agricultura Luiz de Queiroz. Dept. de Fisica e Meteorologia. E-mail: pllibardi@mandi.esalq.usp.br; somoraes@mandi.esalq.usp.br

    1997-07-01

    A neutron probe calibration experiment was performed involving soil samples of various volumes, collected at various distances from the access tubes. The experiment aimed to answer questions such as what physical sample volume is suitable, whether the same volume should always be used, and at what distance from the neutron probe access tube samples should be collected.

  3. Calibration of well-type ionization chambers; Calibracao de camaras de ionizacao do tipo poco

    Energy Technology Data Exchange (ETDEWEB)

    Alves, C.F.E.; Leite, S.P.; Pires, E.J.; Magalhaes, L.A.G.; David, M.G.; Almeida, C.E. de, E-mail: cfealves@gmail.com [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Lab. de Ciencias Radiologicas; Di Prinzio, R. [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2015-07-01

    This paper presents the methodology developed by the Laboratorio de Ciencias Radiologicas, presently in use for determining the calibration coefficient of well-type chambers used in the dosimetry of {sup 192}Ir high dose rate sources. The uncertainty analysis of the calibration procedure is discussed. (author)

  4. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Joseph Rovani; Mark Sanderson

    2008-02-29

    Mercury continuous emissions monitoring systems (CEMS) are being implemented in over 800 coal-fired power plant stacks. The power industry desires to conduct at least a full year of monitoring before the formal monitoring and reporting requirement begins on January 1, 2009. It is important for the industry to have available reliable, turnkey equipment from CEM vendors. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The generators are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005, requires that calibration be performed with NIST-traceable standards (Federal Register 2007). Traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued an interim traceability protocol for elemental mercury generators (EPA 2007). The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 {micro}g/m{sup 3} elemental mercury and in the future down to 0.2 {micro}g/m{sup 3}; this analysis will be directly traceable to analyses by NIST. The document is divided into two separate sections. The first deals with the qualification of generators by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the generator models that meet the qualification specifications. The NIST-traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma/mass spectrometry performed by NIST in Gaithersburg, MD. 
The

  5. Summary of KOMPSAT-5 Calibration and Validation

    Science.gov (United States)

    Yang, D.; Jeong, H.; Lee, S.; Kim, B.

    2013-12-01

    including pointing, relative and absolute calibration, as well as geolocation accuracy determination. The absolute calibration will be accomplished by determining absolute radiometric accuracy using trihedral corner reflectors already deployed on calibration and validation sites located southeast of Ulaanbaatar, Mongolia. To establish a measure for assessing the final image products, the geolocation accuracies of image products with different imaging modes will be determined by using deployed point targets and an available Digital Terrain Model (DTM), at different image processing levels. In summary, this paper presents the calibration and validation activities performed during the LEOP and IOT of KOMPSAT-5. The methodology and procedure of calibration and validation are explained, as well as the results. Based on the results, the application of SAR image products to geophysical processes will also be discussed.

  6. Self-Calibrating Pressure Transducer

    Science.gov (United States)

    Lueck, Dale E. (Inventor)

    2006-01-01

    A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus, the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus, the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.
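The calibration principle can be sketched with two well-known relations: a zirconia pump transports a charge-determined number of O2 molecules (Faraday's law, 4 electrons per O2 molecule), and the resulting pressure rise in a known cavity follows the ideal gas law. The current, time, volume and temperature below are illustrative values, not taken from the patent:

```python
# Sketch of the self-calibration arithmetic: charge -> moles of O2 -> Pa.
F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)

def pressure_rise(current_a, time_s, volume_m3, temp_k):
    """Pressure rise from pumping O2 electrochemically into a sealed cavity.
    O2 + 4e- -> 2 O^2- : four electrons per O2 molecule transported."""
    n_o2 = current_a * time_s / (4 * F)   # mol of O2 pumped across membrane
    return n_o2 * R * temp_k / volume_m3  # Pa, ideal gas law

# 10 mA for 60 s into a 1 cm^3 isolated cavity at 300 K:
dp = pressure_rise(10e-3, 60.0, 1e-6, 300.0)
print(dp)  # ~3.9 kPa
```

Because both the charge and the cavity volume are known, this computed pressure serves as the in-situ reference against which the transducer reading is calibrated.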

  7. CALIBRATED HYDRODYNAMIC MODEL

    Directory of Open Access Journals (Sweden)

    Sezar Gülbaz

    2015-01-01

    Full Text Available Land development and increasing urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes the adjustment of the geomorphic structure of streams and ultimately raises the peak flow rate, which causes flooding; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation is observed downstream of urban areas, which is not desirable for a longer dam life. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and watersheds should be brought under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied to managing storm water runoff in order to reduce flooding and simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed by using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and on pollutant buildup and washoff for TSS. Consequently, we observe the possible effects of LID on surface runoff and TSS in Sazlidere Watershed.

  8. Dynamic Torque Calibration Unit

    Science.gov (United States)

    Agronin, Michael L.; Marchetto, Carl A.

    1989-01-01

    Proposed dynamic torque calibration unit (DTCU) measures torque in rotary actuator components such as motors, bearings, gear trains, and flex couplings. Unique because designed specifically for testing components under low rates. Measures torque in device under test during controlled steady rotation or oscillation. Rotor oriented vertically, supported by upper angular-contact bearing and lower radial-contact bearing that floats axially to prevent thermal expansion from loading bearings. High-load capacity air bearing available to replace ball bearings when higher load capacity or reduction in rate noise required.

  9. ALTEA: The instrument calibration

    Energy Technology Data Exchange (ETDEWEB)

    Zaconte, V. [INFN and University of Rome Tor Vergata, Department of Physics, Via della Ricerca Scientifica 1, 00133 Rome (Italy)], E-mail: livio.narici@roma2.infn.it; Belli, F.; Bidoli, V.; Casolino, M.; Di Fino, L.; Narici, L.; Picozza, P.; Rinaldi, A. [INFN and University of Rome Tor Vergata, Department of Physics, Via della Ricerca Scientifica 1, 00133 Rome (Italy); Sannita, W.G. [DISM, University of Genova, Genova (Italy); Department of Psychiatry, SUNY, Stoony Brook, NY (United States); Finetti, N.; Nurzia, G.; Rantucci, E.; Scrimaglio, R.; Segreto, E. [Department of Physics, University and INFN, L' Aquila (Italy); Schardt, D. [GSI/Biophysik, Darmstadt (Germany)

    2008-05-15

    The ALTEA program is an international and multi-disciplinary project aimed at studying particle radiation in the space environment and its effects on astronauts' brain functions, such as the anomalous perception of light flashes first reported during the Apollo missions. The ALTEA space facility includes a particle detector composed of six silicon telescopes and has been onboard the International Space Station (ISS) since July 2006. In this paper, the detector calibration at the heavy-ion synchrotron SIS18 at GSI Darmstadt is presented and compared to a Geant 3 Monte Carlo simulation. Finally, the results of a neural network analysis used for ion discrimination on fragmentation data are also presented.

  10. Calibration of Super-Kamiokande Using an Electron Linac

    CERN Document Server

    Fukuda, Y; Ichihara, E; Inoue, K; Ishihara, K; Ishino, H; Itow, Y; Kajita, T; Kameda, J; Kasuga, S; Kobayashi, K; Kobayashi, Y; Koshio, Y; Martens, K; Miura, M; Nakayama, S; Okada, A; Okumura, K; Sakurai, N; Shiozawa, M; Suzuki, Y; Takeuchi, Y; Totsuka, Y; Yamada, S; Earl, M; Habig, A; Kearns, E; Messier, M D; Scholberg, K; Stone, J L; Sulak, L R; Walter, C W; Goldhaber, M; Barszczak, T; Casper, D; Gajewski, W; Halverson, P G; Hsu, J; Kropp, W R; Price, L R; Reines, F; Smy, M B; Sobel, H W; Vagins, M R; Ganezer, K S; Keig, W E; Ellsworth, R W; Tasaka, S; Flanagan, J W; Kibayashi, A; Learned, J G; Matsuno, S; Stenger, V J; Takemori, D; Ishii, T; Kanzaki, J; Kobayashi, T; Mine, S; Nakamura, K; Nishikawa, K; Oyama, Y; Sakai, A; Sakuda, M; Sasaki, O; Echigo, S; Kohama, M; Suzuki, A T; Haines, T J; Blaufuss, E; Kim, B K; Sanford, R; Svoboda, R; Chen, M L; Conner, Z; Goodman, J A; Sullivan, G W; Hill, J; Jung, C K; Mauger, C; McGrew, C; Sharkey, E; Viren, B; Yanagisawa, C; Doki, W; Miyano, K; Okazawa, H; Saji, C; Takahata, M; Nagashima, Y; Takita, M; Yamaguchi, T; Yoshida, M; Kim, S B; Etoh, M; Fujita, K; Hasegawa, A; Hasegawa, T; Hatakeyama, S; Iwamoto, T; Koga, M; Maruyama, T; Ogawa, H; Shirai, J; Suzuki, A; Tsushima, F; Koshiba, M; Nemoto, M; Nishijima, K; Futagami, T; Hayato, Y; Kanaya, Y; Kaneyuki, K; Watanabe, Y; Kielczewska, D; Doyle, R A; George, J S; Stachyra, A L; Wai, L L; Wilkes, R J; Young, K K; Kobayashi, H

    1999-01-01

    In order to calibrate the Super-Kamiokande experiment for solar neutrino measurements, a linear accelerator (LINAC) for electrons was installed at the detector. LINAC data were taken at various positions in the detector volume, tracking the detector response in the variables relevant to solar neutrino analysis. In particular, the absolute energy scale is now known with less than 1 percent uncertainty.

  11. A Simple Accelerometer Calibrator

    Science.gov (United States)

    Salam, R. A.; Islamy, M. R. F.; Munir, M. M.; Latief, H.; Irsyam, M.; Khairurrijal

    2016-08-01

    The high probability of earthquakes can lead to large numbers of victims, and earthquakes can also trigger other hazards such as tsunamis and landslides. A system that can detect earthquake occurrence is therefore required. One possible approach is a vibration sensor system based on an accelerometer, whose output is usually expressed as acceleration data. A calibrator is therefore needed for the accelerometer used to sense the vibration. In this study, a simple accelerometer calibrator has been developed using a 12 V DC motor, an optocoupler, a Liquid Crystal Display (LCD) and an AVR 328 microcontroller as the controller system. The system uses Pulse Width Modulation (PWM) from the microcontroller to control the motor rotational speed in response to the desired vibration frequency. The vibration frequency was read by the optocoupler and this reading was used as feedback to the system. The results show that the system could control the rotational speed and the vibration frequencies in accordance with the defined PWM.
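The loop described above (PWM drives the motor, the optocoupler's frequency reading closes the loop) can be sketched as a simple proportional controller. The gain and the toy duty-to-frequency plant model are assumptions for illustration, not the firmware of the paper.

```python
def control_step(duty, f_meas, f_target, kp=0.01):
    """One proportional update of the PWM duty cycle (clamped to 0..1).
    The gain kp is a hypothetical tuning value."""
    duty += kp * (f_target - f_meas)
    return min(max(duty, 0.0), 1.0)

def plant(duty):
    """Toy motor model: vibration frequency (Hz) proportional to duty cycle."""
    return 50.0 * duty

duty, target = 0.1, 20.0
for _ in range(200):              # closed loop: read frequency, adjust PWM
    duty = control_step(duty, plant(duty), target)
```

With this linear toy plant the duty cycle settles at the value whose frequency matches the target.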

  12. Effects of High Volume MOSFET Usage on Dosimetry in Pediatric CT, Pediatric Lens of the Eye Dose Reduction Using Siemens Care kV, & Designing Quality Assurance of a Cesium Calibration Source

    Science.gov (United States)

    Smith, Aaron Kenneth

    Project 1: Effects of High Volume MOSFET Usage on Dosimetry in Pediatric CT: Purpose: The objective of this study was to determine if using large numbers of Metal-Oxide-Semiconducting-Field-Effect Transistors (MOSFETs) affects the results of dosimetry studies done with pediatric phantoms, due to the attenuation properties of the MOSFETs. The two primary focuses of the study were, first, to experimentally determine the degree to which high numbers of MOSFET detectors attenuate an X-ray beam of Computed Tomography (CT) quality and, second, to experimentally verify the effect that the large number of MOSFETs has on dose in a pediatric phantom undergoing a routine CT examination. Materials and Methods: A Precision X-Ray X-Rad 320 set to 120 kVp with an effective half value layer of 7.30 mm aluminum was used in concert with a tissue-equivalent block phantom and several used MOSFET cables to determine the attenuation properties of the MOSFET cables, by measuring the dose (via a 0.18 cc ion chamber) given to a point in the center of the phantom in a 0.5 min exposure with a variety of MOSFET arrangements. After the attenuating properties of the cables were known, a GE Discovery 750 CT scanner was employed using a routine chest CT protocol in concert with a 10-year-old Atom Dosimetry Phantom and MOSFET dosimeters in 5 different locations in and on the phantom (upper left lung (ULL), upper right lung (URL), lower left lung (LLL), lower right lung (LRL), and the center of the chest to represent skin dose). Twenty-eight used MOSFET cables were arranged and taped on the chest of the phantom to cover 30% of the circumference of the phantom (19.2 cm). Scans using tube current modulation and not using tube current modulation were taken at 30, 20, 10, and 0% circumference coverage and with 28 MOSFETs bundled and laid to the side of the phantom. The dose to the various MOSFET locations in and on the chest was calculated and the image quality was assessed in several of these situations by

  13. Calibration of Correlation Radiometers Using Pseudo-Random Noise Signals

    Directory of Open Access Journals (Sweden)

    Sebastián Pantoja

    2009-08-01

    Full Text Available The calibration of correlation radiometers, and particularly aperture synthesis interferometric radiometers, is a critical issue to ensure their performance. Current calibration techniques are based on the measurement of the cross-correlation of receivers’ outputs when injecting noise from a common noise source, requiring a very stable distribution network. For large interferometric radiometers this centralized noise injection approach is very complex from the point of view of mass, volume and phase/amplitude equalization. Distributed noise injection techniques have been proposed as a feasible alternative, but are unable to correct for the so-called “baseline errors” associated with the particular pair of receivers forming the baseline. In this work, the use of centralized Pseudo-Random Noise (PRN) signals to calibrate correlation radiometers is proposed. PRNs are sequences of symbols with a long repetition period that have a flat spectrum over a bandwidth which is determined by the symbol rate. Since their spectrum resembles that of thermal noise, they can be used to calibrate correlation radiometers. At the same time, since these sequences are deterministic, new calibration schemes can be envisaged, such as the correlation of each receiver’s output with a baseband local replica of the PRN sequence, as well as new distribution schemes of calibration signals. This work analyzes the general requirements and performance of using PRN sequences for the calibration of microwave correlation radiometers, and particularizes the study to a potential implementation in a large aperture synthesis radiometer using an optical distribution network.
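The key property that lets PRNs stand in for thermal noise is their flat spectrum, equivalently their two-valued circular autocorrelation. A minimal sketch, assuming a maximal-length sequence generated by a 4-bit Fibonacci LFSR (primitive polynomial x^4 + x^3 + 1); a real radiometer would use a much longer sequence.

```python
import numpy as np

def mseq(nbits=4, taps=(3, 4)):
    """Maximal-length PRN sequence from a Fibonacci LFSR.
    taps=(3, 4) corresponds to the primitive polynomial x^4 + x^3 + 1."""
    state = [1] * nbits
    out = []
    for _ in range(2**nbits - 1):
        out.append(state[-1])                        # output the last register bit
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]                    # shift in the feedback bit
    return np.array(out)

prn = 1 - 2 * mseq()        # map bits {0,1} -> chips {+1,-1}
N = len(prn)                # period 2^4 - 1 = 15
# Circular autocorrelation: peak N at zero lag, flat value -1 at every other
# lag, i.e. a nearly flat, thermal-noise-like power spectrum.
acorr = [int(np.dot(prn, np.roll(prn, k))) for k in range(N)]
```

The sharp, deterministic correlation peak is what allows each receiver's output to be correlated against a local replica of the sequence.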

  14. Optical Tweezer Assembly and Calibration

    Science.gov (United States)

    Collins, Timothy M.

    2004-01-01

    An Optical Tweezer, as the name implies, is a useful tool for precision manipulation of micro- and nanoscale objects. Using the principle of electromagnetic radiation pressure, an optical tweezer employs a tightly focused laser beam to trap and position objects of various shapes and sizes. These devices can trap micrometer- and nanometer-sized objects. An exciting possibility for optical tweezers is their future potential to manipulate and assemble micro- and nanosized sensors. A typical optical tweezer makes use of the following components: laser, mirrors, lenses, a high-quality microscope, stage, Charge Coupled Device (CCD) camera, TV monitor and Position Sensitive Detectors (PSDs). The laser wavelength employed is typically in the visible or infrared spectrum. The laser beam is directed via mirrors and lenses into the microscope. It is then tightly focused by a high-magnification, high-numerical-aperture microscope objective into the sample slide, which is mounted on a translating stage. The sample slide contains a sealed, small volume of fluid that the objects are suspended in. The most common objects trapped by optical tweezers are dielectric spheres. When trapped, a sphere will literally snap into and center itself in the laser beam. The PSDs are mounted in such a way as to receive the backscatter after the beam has passed through the trap. PSDs used with the Differential Interference Contrast (DIC) technique provide highly precise data. Most optical tweezers employ lasers with power levels ranging from 10 to 100 milliwatts. Typical forces exerted on trapped objects are in the pico-newton range. When PSDs are employed, object movement can be resolved on a nanometer scale in a time range of milliseconds. Such accuracy, however, can only be utilized by calibrating the optical tweezer. Fortunately, an optical tweezer can be modeled accurately as a simple spring. This allows Hooke's law to be used.
My goal this summer at NASA Glenn Research Center is the assembly and
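The spring model mentioned above leads directly to the equipartition method of calibration: for a harmonic trap, the stiffness is k = k_B·T / var(x). A sketch under assumed conditions (room temperature, a hypothetical stiffness of 2e-5 N/m, and Gaussian position noise standing in for a real PSD record):

```python
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K

def trap_stiffness(x, temp_k=295.0):
    """Equipartition calibration: k = kB*T / var(x) for a harmonic trap.
    x is the recorded bead position in metres."""
    return KB * temp_k / np.var(x)

# Simulated position record for a trap of (hypothetical) stiffness 2e-5 N/m:
rng = np.random.default_rng(0)
k_true = 2e-5
sigma = np.sqrt(KB * 295.0 / k_true)    # ~14 nm rms excursion
x = rng.normal(0.0, sigma, 100_000)
k_est = trap_stiffness(x)
```

In practice the position record comes from the PSD, and drift or detector noise must be removed before the variance is taken.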

  15. Calibration procedure for plasma polarimetry based on the complex amplitude ratio measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bieg, Bohdan, E-mail: b.bieg@am.szczecin.pl [Maritime University, Szczecin (Poland); Kravtsov, Yury A.; Cieplik, Marek [Maritime University, Szczecin (Poland)

    2013-10-15

    A new methodology for the calibration of plasma polarimeters is suggested, based on complex amplitude ratio (CAR) measurements. This methodology reduces calibration to the determination of three complex parameters of the transfer matrix characterizing polarization changes in the optical system. Once the transfer matrix is obtained, the full characteristics of the optical system can be derived: the eigenstates, the phase shift, and the relative attenuation of the slow and fast waves. The polarization state of the sounding electromagnetic wave after the plasma is determined from the measured complex amplitude ratio by simple inversion of the transfer matrix. The calibration procedure under discussion is simpler, more transparent and more reliable than traditional procedures using the Stokes vector technique or the angular parameters of the polarization ellipse.
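The inversion step can be illustrated with Jones calculus: the complex amplitude ratio κ = E_y/E_x transforms under a 2×2 transfer matrix as a bilinear (Möbius) map, which is invertible in closed form. The matrix below is a hypothetical example, not one measured on a real polarimeter.

```python
import numpy as np

def car_through_system(kappa_in, m):
    """Complex amplitude ratio kappa = Ey/Ex after a 2x2 transfer matrix m."""
    return (m[1, 0] + m[1, 1] * kappa_in) / (m[0, 0] + m[0, 1] * kappa_in)

def invert_car(kappa_out, m):
    """Recover the input CAR by inverting the bilinear map above."""
    return (m[1, 0] - kappa_out * m[0, 0]) / (kappa_out * m[0, 1] - m[1, 1])

# Hypothetical transfer matrix (normalised so three complex parameters remain):
m = np.array([[1.0 + 0.0j, 0.1 - 0.2j],
              [0.05 + 0.1j, 0.9 + 0.3j]])
kappa_in = 0.4 + 0.7j
kappa_rec = invert_car(car_through_system(kappa_in, m), m)
```

Because only the ratio of field components matters, an overall complex scale of the matrix drops out, which is why three complex parameters suffice.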

  16. A normative price for energy from an electricity generation system: An Owner-dependent Methodology for Energy Generation (system) Assessment (OMEGA). Volume 2: Derivation of system energy price equations

    Science.gov (United States)

    Chamberlain, R. G.; Mcmaster, K. M.

    1981-01-01

    The methodology presented is a derivation of the utility owned solar electric systems model. The net present value of the system is determined by consideration of all financial benefits and costs including a specified return on investment. Life cycle costs, life cycle revenues, and residual system values are obtained. Break-even values of system parameters are estimated by setting the net present value to zero.
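Since revenues enter the net present value linearly, the break-even energy price can be solved for directly by setting NPV to zero. A sketch with hypothetical figures, not inputs of the OMEGA model itself:

```python
def npv(rate, cashflows):
    """Net present value of yearly cashflows, with cashflows[0] at t = 0."""
    return sum(cf / (1.0 + rate)**t for t, cf in enumerate(cashflows))

def breakeven_price(rate, capital, kwh_per_year, om_per_year, years):
    """Energy price at which NPV = 0. NPV(p) = -capital +
    sum_t (p*kwh - om)/(1+r)^t is linear in p, so p has a closed form.
    All figures are hypothetical."""
    annuity = sum(1.0 / (1.0 + rate)**t for t in range(1, years + 1))
    return (capital / annuity + om_per_year) / kwh_per_year

p = breakeven_price(0.08, 1_000_000.0, 500_000.0, 20_000.0, 20)
flows = [-1_000_000.0] + [p * 500_000.0 - 20_000.0] * 20
# npv(0.08, flows) is ~0 at the break-even price
```

The same pattern applies to any break-even system parameter that enters the NPV linearly.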

  17. Internal Water Vapor Photoacoustic Calibration

    Science.gov (United States)

    Pilgrim, Jeffrey S.

    2009-01-01

    Water vapor absorption is ubiquitous in the infrared wavelength range where photoacoustic trace gas detectors operate. This technique allows for discontinuous wavelength tuning by temperature-jumping a laser diode from one range to another within a time span suitable for photoacoustic calibration. The use of an internal calibration eliminates the need for external calibrated reference gases. Commercial applications include an improvement of photoacoustic spectrometers in all fields of use.

  18. Small-Volume Injections: Evaluation of Volume Administration Deviation From Intended Injection Volumes.

    Science.gov (United States)

    Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B

    2017-10-01

    In the perioperative period, anesthesiologists and postanesthesia care unit (PACU) nurses routinely prepare and administer small-volume IV injections, yet the accuracy of delivered medication volumes in this setting has not been described. In this ex vivo study, we sought to characterize the degree to which small-volume injections (≤0.5 mL) deviated from the intended injection volumes among a group of pediatric anesthesiologists and pediatric PACU nurses. We hypothesized that as the intended injection volumes decreased, the deviation from those intended injection volumes would increase. Ten attending pediatric anesthesiologists and 10 pediatric PACU nurses each performed a series of 10 injections into a simulated patient IV setup. Practitioners used separate 1-mL tuberculin syringes with removable 18-gauge needles (Becton-Dickinson & Company, Franklin Lakes, NJ) to aspirate 5 different volumes (0.025, 0.05, 0.1, 0.25, and 0.5 mL) of 0.25 mM Lucifer Yellow (LY) fluorescent dye constituted in saline (Sigma Aldrich, St. Louis, MO) from a rubber-stoppered vial. Each participant then injected the specified volume of LY fluorescent dye via a 3-way stopcock into IV tubing with free-flowing 0.9% sodium chloride (10 mL/min). The injected volume of LY fluorescent dye and 0.9% sodium chloride then drained into a collection vial for laboratory analysis. Microplate fluorescence wavelength detection (Infinite M1000; Tecan, Männedorf, Switzerland) was used to measure the fluorescence of the collected fluid. Administered injection volumes were calculated based on the fluorescence of the collected fluid using a calibration curve of known LY volumes and associated fluorescence. To determine whether deviation of the administered volumes from the intended injection volumes increased at lower injection volumes, we compared the proportional injection volume error (log_e[administered volume / intended volume]) for each of the 5 injection volumes using a linear
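The error metric used in the study, the natural logarithm of the administered-to-intended volume ratio, is easy to reproduce; the volumes below are illustrative, not data from the study:

```python
import math

def proportional_volume_error(administered_ml, intended_ml):
    """log_e(administered/intended): 0 means exact delivery, and over- and
    under-delivery by the same factor give errors of equal magnitude."""
    return math.log(administered_ml / intended_ml)

# A hypothetical 0.020 mL delivery against an intended 0.025 mL:
err = proportional_volume_error(0.020, 0.025)    # log(0.8), about -0.22
```

The log transform makes the error multiplicative, which is why it suits a comparison across intended volumes spanning a factor of twenty.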

  19. Volume Entropy

    CERN Document Server

    Astuti, Valerio; Rovelli, Carlo

    2016-01-01

    Building on a technical result by Brunnemann and Rideout on the spectrum of the Volume operator in Loop Quantum Gravity, we show that the space of the quadrivalent states (with finite-volume individual nodes) describing a region with total volume smaller than V has finite dimension, bounded by V log V. This allows us to introduce the notion of "volume entropy": the von Neumann entropy associated to the measurement of volume.

  20. RX130 Robot Calibration

    Science.gov (United States)

    Fugal, Mario

    2012-10-01

    In order to create precision magnets for an experiment at Oak Ridge National Laboratory, a new reverse-engineering method has been proposed that uses the magnetic scalar potential to solve for the currents necessary to produce the desired field. To make the magnet, it is proposed to use a copper-coated G10 form upon which a drill, mounted on a robotic arm, will carve wires. The accuracy required in the manufacturing of the wires exceeds nominal robot capabilities. However, thanks to their rigidity and their precision servo motors and harmonic gear drives, some robots are capable of meeting this requirement with proper calibration. The goal of this project is to improve the accuracy of an RX130 to within 35 microns, the accuracy necessary for the wires. Using feedback from a displacement sensor or camera, together with inverse kinematics, it is possible to achieve this accuracy.
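The feedback-correction idea can be sketched as an iterative loop that shifts the commanded pose by the measured error; the linear plant with an offset and scale error is a toy stand-in for the real robot, not the project's actual kinematic model.

```python
def correct_pose(target, measure, n_iter=10):
    """Iteratively refine a commanded position using sensor feedback:
    shift the command by the observed error until it vanishes."""
    cmd = target
    for _ in range(n_iter):
        cmd += target - measure(cmd)
    return cmd

# Toy robot: 0.1% scale error plus a 0.5 mm offset (hypothetical values).
plant = lambda cmd: 1.001 * cmd + 0.5
cmd = correct_pose(200.0, plant)          # aim for 200 mm
residual_mm = abs(plant(cmd) - 200.0)
```

For a near-linear plant each pass shrinks the error by roughly the fractional scale error, so a handful of sensor-guided corrections reaches the tens-of-microns regime.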

  1. Radiological Calibration and Standards Facility

    Data.gov (United States)

    Federal Laboratory Consortium — PNNL maintains a state-of-the-art Radiological Calibration and Standards Laboratory on the Hanford Site at Richland, Washington. Laboratory staff provide expertise...

  2. Field calibration of cup anemometers

    DEFF Research Database (Denmark)

    Kristensen, L.; Jensen, G.; Hansen, A.

    2001-01-01

    An outdoor calibration facility for cup anemometers, where the signals from 10 anemometers, of which at least one is a reference, can be recorded simultaneously, has been established. The results are discussed with special emphasis on the statistical significance of the calibration expressions. It is concluded that the method has the advantage that many anemometers can be calibrated accurately with a minimum of work and cost. The obvious disadvantage is that the calibration of a set of anemometers may take more than one month in order to have wind speeds covering a sufficiently large magnitude range.
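Cup anemometer calibrations are conventionally expressed as a linear relation between reference wind speed and the anemometer's output frequency, U = gain·f + offset, fitted by least squares; the data below are made up for illustration:

```python
import numpy as np

# Reference wind speeds (m/s) and the cup anemometer's pulse frequencies (Hz);
# the numbers are invented and happen to be exactly linear.
u_ref = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
freq  = np.array([5.1, 7.9, 10.7, 13.5, 16.3, 19.1])

# Linear calibration expression U = gain * f + offset, by least squares.
gain, offset = np.polyfit(freq, u_ref, 1)
u_pred = gain * freq + offset
residuals = u_ref - u_pred
```

In a field calibration the residuals, and confidence intervals on gain and offset, are what carry the statistical significance discussed above.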

  3. Correction volumes and densities in Vitrea Program; Correcao de volumes e densidades no Programa Vitrea

    Energy Technology Data Exchange (ETDEWEB)

    Abrantes, Marcos E.S.; Oliveira, A.H. de, E-mail: marcosabrantes2003@yahoo.com.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear; Abrantes, R.C., E-mail: abrantes.rafa1@gmail.com [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Eletrica; Magalhaes, M.J., E-mail: mjuliano100@yahoo.com.br [Ambienttal Protecao Radiologica, Belo Horizonte, MG (Brazil)

    2014-07-01

    Introduction: with the increased use of 3D reconstruction techniques to assist in diagnosis, the Vitrea® program is widely used. To use this program, one needs to know the correction values required to reproduce the real volumes and CT numbers of human tissues. Objective: provide correction values for the volumes and CT numbers, as read in the Vitrea program, of the tissues generated from DICOM images from CT. Methodology: this study used a PMMA chest phantom to generate the DICOM images on a CT scanner. To check the calibration of the scanner, a Catphan phantom was used and the results were compared with the manufacturer's values for its linearity. Results: the volume of the PMMA phantom was 11166.58 cm³ with a CT number of (123.5 ± 33.4) HU. The volumes found in the Vitrea program, according to the structures of interest, were 11897.29 cm³, 10901.65 cm³, 16906.49 cm³ and 11848.34 cm³, and the correction values are -6.14%, +2.43%, -6.94% and -5.75%, respectively, for the tissues: lung, bone, soft and full. The CT numbers found in this program were (97.60 ± 58.9) HU, (72.00 ± 176.00) HU, (143.20 ± 19.50) HU and (31.90 ± 239.10) HU, with corrections of +26.54%, +71.53%, -13.64% and +387.15%, respectively, for the tissues: lung, bone, soft and full. Conclusion: the procedure performed can be used in other 3D reconstruction programs wherever there are tools for reading CT numbers, observing the necessary corrections.
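The reported lung and bone volume corrections appear consistent with a correction defined as (reference − measured)/measured; a sketch reproducing two of them (the soft-tissue figure evidently uses a different reference and is not reproduced here):

```python
def correction_percent(reference, measured):
    """Correction to apply to a measured value so it matches the reference,
    expressed as a percentage of the measured value."""
    return (reference - measured) / measured * 100.0

# Phantom volume 11166.58 cm^3 versus the volumes read in the program:
lung = correction_percent(11166.58, 11897.29)   # about -6.14 %
bone = correction_percent(11166.58, 10901.65)   # about +2.43 %
```

The same function applies to the CT-number corrections once a reference HU value for each tissue is chosen.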

  4. Calibration of Nanopositioning Stages

    Directory of Open Access Journals (Sweden)

    Ning Tan

    2015-12-01

    Full Text Available Accuracy is one of the most important criteria for the performance evaluation of micro- and nanorobots or systems. Nanopositioning stages are used to achieve high positioning resolution and accuracy for a wide and growing scope of applications. However, their positioning accuracy and repeatability are not well known and difficult to guarantee, which induces many drawbacks for many applications. For example, in the mechanical characterisation of biological samples, it is difficult to perform several cycles in a repeatable way so as not to induce negative influences on the study. It also prevents one from controlling a tool accurately with respect to a sample without adding additional sensors for closed-loop control. This paper aims at quantifying the positioning repeatability and accuracy based on the ISO 9283:1998 standard, and analyzing factors influencing positioning accuracy, for a case study of a 1-DoF (Degree-of-Freedom) nanopositioning stage. The influence of thermal drift is notably quantified. Performance improvements of the nanopositioning stage are then investigated through robot calibration (i.e., an open-loop approach). Two models (static and adaptive) are proposed to compensate for both geometric errors and thermal drift. Validation experiments are conducted over a long period (several days), showing that the accuracy of the stage is improved from the typical micrometer range to 400 nm using the static model and even down to 100 nm using the adaptive model. In addition, we extend the 1-DoF calibration to multi-DoF with a case study of a 2-DoF nanopositioning robot. Results demonstrate that the model efficiently improved the 2D accuracy from 1400 nm to 200 nm.
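The static-plus-thermal compensation idea can be sketched as a linear error model, measured = a + b·cmd + c·ΔT, identified by least squares and then inverted for correction; the coefficients and data below are synthetic, not the models of the paper:

```python
import numpy as np

# Synthetic calibration data: commanded positions (um), thermal drift (K), and
# measured positions generated by a hypothetical offset/scale/thermal model.
rng = np.random.default_rng(1)
cmd = np.linspace(0.0, 100.0, 50)
dT = rng.uniform(-0.5, 0.5, 50)
measured = 0.2 + 1.001 * cmd + 0.8 * dT

# Identify measured = a + b*cmd + c*dT by least squares.
A = np.column_stack([np.ones_like(cmd), cmd, dT])
a, b, c = np.linalg.lstsq(A, measured, rcond=None)[0]

# Compensation: invert the model to find the command that realises a target
# position at the current drift (here +0.3 K).
target = 50.0
cmd_corr = (target - a - c * 0.3) / b
```

An adaptive model of the kind described above would re-estimate the thermal term online instead of fitting it once.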

  5. Tourism Methodologies - New Perspectives, Practices and Procedures

    DEFF Research Database (Denmark)

    This volume offers methodological discussions within the multidisciplinary field of tourism and shows how tourism researchers develop and apply new tourism methodologies. The book is presented as an anthology, giving voice to many diverse researchers who reflect on tourism methodology in different...... in interview and field work situations, and how do we engage with the performative aspects of tourism as a field of study? The book acknowledges that research is also performance and that it constitutes an aspect of intervention in the situations and contexts it is trying to explore. This is an issue dealt...

  6. A new phantom for image quality, geometric distortion, and HU calibration in MSCT and CBCT

    Science.gov (United States)

    Voigt, Johannes M.; Blendl, Christian; Selbach, Markus; Uphoff, Clemens; Fiebich, Martin

    2012-03-01

    Flat panel cone-beam computed tomography (CBCT) is developing into a state-of-the-art technique in several medical disciplines such as dental and otorhinolaryngological imaging. Dental and otorhinolaryngological CBCT systems offer a variety of field-of-view sizes from 6.0 to 17.0 cm. Standard phantoms are designed only for use in multi-slice CT (MSCT), and there is no phantom which provides detail structures for all common characteristic values and Hounsfield calibration. In this study we present a new phantom specially designed for use with MSCT and CBCT systems, providing detail structures for MTF, 3D MTF, NPS, SNR, geometric distortion and HU calibration. With this phantom, only one acquisition is needed for image quality investigation and assurance. Materials and methods: The phantom design is shown in figure 1. To investigate its practicability, the phantom was scanned using dedicated MSCT scanners, 3D C-arms and digital volume tomographs. The acquired axial image stacks were analyzed using a dedicated computer program, which is provided as an ImageJ plugin. The MTF was compared to other methodologies such as a thin wire, a sphere or noise response [10, 13, 14]. The HU values were also computed using other common methods. Results: These results are similar to the results of other studies [10, 13, 14]. The method has proven to be stable and delivers results comparable to other methodologies such as using a thin wire. The NPS was calculated for all materials. Furthermore, CT numbers for all materials were computed and compared to the desired values. The measurement of geometric deformation has proven to be accurate. Conclusion: A unique feature of this phantom is the ability to compute the geometric deformation of the 3D volume image. This offers the chance to improve accuracy, e.g. in dental implant planning. Another convenient feature is that the phantom needs to be scanned only once with otorhinolaryngological volume tomographs to be fully displayed. It is
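The noise power spectrum (NPS) mentioned above is commonly estimated from uniform-region ROIs as the scaled, ensemble-averaged squared magnitude of the 2D FFT of each mean-subtracted ROI. A sketch with a white-noise sanity check (the NPS integrated over frequency should return the noise variance); normalisation conventions vary between implementations:

```python
import numpy as np

def nps_2d(rois, px_mm):
    """2D noise power spectrum from a stack of uniform-region ROIs:
    NPS = (dx*dy / (Nx*Ny)) * <|FFT2(roi - mean)|^2>, averaged over ROIs."""
    rois = np.asarray(rois, dtype=float)
    n_roi, ny, nx = rois.shape
    spectra = [np.abs(np.fft.fft2(r - r.mean()))**2 for r in rois]
    return (px_mm * px_mm / (nx * ny)) * np.mean(spectra, axis=0)

# White-noise check on synthetic ROIs (sigma = 5 HU, hypothetical values):
rng = np.random.default_rng(2)
rois = rng.normal(0.0, 5.0, (16, 64, 64))
px = 0.5                                  # pixel pitch, mm
nps = nps_2d(rois, px)
df = 1.0 / (64 * px)                      # frequency bin width, 1/mm
variance_from_nps = nps.sum() * df * df   # Parseval: recovers the variance
```

Real CT noise is correlated, so the resulting NPS is shaped rather than flat, but the variance identity still holds.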

  7. Sub-daily runoff predictions using parameters calibrated on the basis of data with a daily temporal resolution

    Science.gov (United States)

    Reynolds, J. E.; Halldin, S.; Xu, C. Y.; Seibert, J.; Kauffeldt, A.

    2017-07-01

    Concentration times in small and medium-sized basins (∼10-1000 km²) are commonly less than 24 h. Flood-forecasting models are thus required to provide simulations at high temporal resolutions (1 h-6 h), although time series of input and runoff data of sufficient length are often only available at the daily temporal resolution, especially in developing countries. This has motivated studies of how parameter values estimated at the temporal resolutions where data are available relate to those needed at finer resolutions. This study presents a methodology to treat empirically model-parameter dependencies on the temporal resolution of data in two small basins, using a bucket-type hydrological model, HBV-light, and the generalised likelihood uncertainty estimation approach for selecting its parameters. To avoid artefacts due to the numerical resolution or numerical method of the differential equations within the model, the model was consistently run using modelling time steps of one hour regardless of the temporal resolution of the rainfall-runoff data. The distribution of the parameters calibrated at several temporal resolutions in the two basins did not show model-parameter dependencies on the temporal resolution of data, and the direct transfer of calibrated parameter sets (e.g., daily) to runoff simulations at other temporal resolutions for which they were not calibrated (e.g., 3 h or 6 h) resulted in a moderate (if any) decrease in model performance, in terms of Nash-Sutcliffe and volume-error efficiencies. The results of this study indicate that if sub-daily forcing data can be secured, flood forecasting in basins with sub-daily concentration times may be possible with model-parameter values calibrated from long time series of daily data. Further studies using more models and basins are required to test the generality of these results.
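The two performance measures named above, the Nash-Sutcliffe efficiency and the volume error, can be written compactly; the short discharge series below is illustrative only:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit, 0 for the
    mean-flow predictor, negative when worse than the mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated)**2) / \
        np.sum((observed - observed.mean())**2)

def volume_error(observed, simulated):
    """Relative volume (mass-balance) error of the simulated runoff."""
    return (np.sum(simulated) - np.sum(observed)) / np.sum(observed)

# Toy discharge series (m^3/s):
q_obs = np.array([1.0, 3.0, 8.0, 5.0, 2.0])
q_sim = np.array([1.2, 2.7, 7.5, 5.4, 2.2])
```

Judging transferred parameter sets on both measures is what separates a timing degradation from a water-balance degradation.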

  8. Tectonic calibrations in molecular dating

    Institute of Scientific and Technical Information of China (English)

    Ullasa KODANDARAMAIAH

    2011-01-01

    Molecular dating techniques require the use of calibrations, which are usually fossil or geological vicariance-based. Fossil calibrations have been criticised because they result only in minimum age estimates. Based on a historical biogeographic perspective, I suggest that vicariance-based calibrations are more dangerous. Almost all analytical methods in historical biogeography are strongly biased towards inferring vicariance, hence vicariance identified through such methods is unreliable. Other studies, especially of groups found on Gondwanan fragments, have simply assumed vicariance. Although it was previously believed that vicariance was the predominant mode of speciation, mounting evidence now indicates that speciation by dispersal is common, dominating vicariance in several groups. Moreover, the possibility of speciation having occurred before the said geological event cannot be precluded. Thus, geological calibrations can under- or overestimate times, whereas fossil calibrations always result in minimum estimates. Another major drawback of vicariant calibrations is the problem of circular reasoning when the resulting estimates are used to infer ages of biogeographic events. I argue that fossil-based dating is a superior alternative to vicariance, primarily because the strongest assumption in the latter, that speciation was caused by the said geological process, is more often than not the most tenuous. When authors prefer to use a combination of fossil and vicariant calibrations, one suggestion is to report results both with and without inclusion of the geological constraints. Relying solely on vicariant calibrations should be strictly avoided.

  9. UVIS G280 Wavelength Calibration

    Science.gov (United States)

    Bushouse, Howard

    2009-07-01

    Wavelength calibration of the UVIS G280 grism will be established using observations of the Wolf Rayet star WR14. Accompanying direct exposures will provide wavelength zeropoints for dispersed exposures. The calibrations will be obtained at the central position of each CCD chip and at the center of the UVIS field. No additional field-dependent variations will be obtained.

  10. Evaluation of methodologies for interpolation of data for hydrological modeling in glacierized basins with limited information

    Science.gov (United States)

    Muñoz, Randy; Paredes, Javier; Huggel, Christian; Drenkhan, Fabian; García, Javier

    2017-04-01

    of precipitation in high-altitudinal zones, and 2) ordinary Kriging (OK), whose variograms were calculated with the multi-annual monthly mean precipitation and applied to the whole study period. OK leads to better results in both low- and high-altitudinal zones. For ice volume, the aim was to estimate values from historical data: 1) with the GlabTop algorithm, which needs digital elevation models, although these are only available at an appropriate scale since 2009; 2) with a widely applied but controversially discussed glacier area-volume relation whose parameters were calibrated with results from the GlabTop model. Both methodologies provide reasonable results, but for historical data the area-volume scaling only requires the glacier area, which is easy to calculate from satellite images available since 1986. In conclusion, the simple correlation, the OK and the calibrated area-volume relation for ice volume proved to be the best ways to interpolate glacio-climatic information. However, these methods must be applied carefully and revisited for each specific situation of high complexity. This is a first step towards identifying the most appropriate methods to interpolate and extend observed data in glacierized basins with limited information. New research should be done evaluating other methodologies and meteorological data in order to improve hydrological models and water management policies.
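The area-volume relation referred to above is the power-law scaling V = c·A^γ. The defaults below are common literature values for valley glaciers (the study recalibrated its own against GlabTop output), so treat them as placeholders:

```python
def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    """Glacier area-volume scaling V = c * A**gamma (V in km^3, A in km^2).
    c and gamma here are common literature values, used as placeholders
    rather than the coefficients calibrated in the study."""
    return c * area_km2**gamma

v1 = glacier_volume_km3(1.0)     # 0.034 km^3 for a 1 km^2 glacier
v10 = glacier_volume_km3(10.0)   # superlinear growth with area
```

Because only the outlined glacier area is needed, the relation can be driven by satellite imagery reaching back well before digital elevation models exist.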

  11. Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems

    Directory of Open Access Journals (Sweden)

    Andreas König

    2009-11-01

    Full Text Available The emergence of novel sensing elements, computing nodes, wireless communication and integration technology provides unprecedented possibilities for the design and application of intelligent systems. Each new application system must be designed from scratch, employing sophisticated methods ranging from conventional signal processing to computational intelligence. Currently, a significant part of this overall algorithmic chain of the computational system model still has to be assembled manually by experienced designers in a time and labor consuming process. In this research work, this challenge is picked up and a methodology and algorithms for automated design of intelligent integrated and resource-aware multi-sensor systems employing multi-objective evolutionary computation are introduced. The proposed methodology tackles the challenge of rapid-prototyping of such systems under realization constraints and, additionally, includes features of system instance specific self-correction for sustained operation of a large volume and in a dynamically changing environment. The extension of these concepts to the reconfigurable hardware platform renders so called self-x sensor systems, which stands, e.g., for self-monitoring, -calibrating, -trimming, and -repairing/-healing systems. Selected experimental results prove the applicability and effectiveness of our proposed methodology and emerging tool. By our approach, competitive results were achieved with regard to classification accuracy, flexibility, and design speed under additional design constraints.
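At the core of the multi-objective evolutionary selection mentioned above is the Pareto-dominance test; a minimal sketch, assuming all objectives are minimised and using hypothetical (error, power) scores for candidate sensor-system designs:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (classification error, power in mW) trade-offs:
designs = [(0.10, 5.0), (0.08, 7.0), (0.10, 6.0), (0.15, 4.0)]
front = pareto_front(designs)
```

An evolutionary design loop keeps the front, generates variants of its members, and repeats, trading accuracy against resource constraints as described above.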

  12. Calibrating a large slab vessel: A battle of the bulge

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, I.R. [Westinghouse Idaho Nuclear Co., Inc., Idaho Falls, ID (United States). Safeguards and Security Section

    1993-12-31

    The accurate measurement of volume in slab vessels can be difficult because slab vessels expand--in spite of internal or external supports--as they are filled. One form of bulging is elastic deflection, a gradual expansion of the vessel wall resulting from an increased weight of contained solution. As part of an upgrade to the Idaho Chemical Processing Plant, slab tanks were proposed as accountability measurement vessels. A 1960 liter slab tank prototype was set up for preliminary calibrations. Two series of calibrations were conducted: the first using water, and the second using aluminum nitrate. It was conjectured that the increased weight of aluminum nitrate would cause the vessel walls to deflect more than they did for an equal level of water, resulting in a greater volume. As expected, a significant expansion was observed with the aluminum nitrate, but some of the deflection proved to be permanent rather than elastic. The consequence is that considerably more effort will be required to calibrate slab vessels for uranium accountability. Not only must a calibration curve (or family of curves) be developed giving volume as a function of both liquid level and density, but, if possible, a determination must be made as to when the deflection is no longer temporary.
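    The abstract calls for a calibration curve, or family of curves, giving volume as a function of both liquid level and density. One simple way to realize such a family (a sketch with invented data handling, not the facility's actual procedure) is to fit one curve per calibration liquid and interpolate between them in density:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line ys = a + b*xs."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    return ybar - b * xbar, b

def build_family(calib_runs):
    """calib_runs: {density: [(level, volume), ...]} -> {density: (a, b)}."""
    family = {}
    for rho, pts in calib_runs.items():
        hs = [h for h, _ in pts]
        vs = [v for _, v in pts]
        family[rho] = linear_fit(hs, vs)
    return family

def volume(level, density, family):
    """Interpolate curve coefficients linearly between bracketing densities."""
    rhos = sorted(family)
    if density <= rhos[0]:
        a, b = family[rhos[0]]
    elif density >= rhos[-1]:
        a, b = family[rhos[-1]]
    else:
        for lo, hi in zip(rhos, rhos[1:]):
            if lo <= density <= hi:
                w = (density - lo) / (hi - lo)
                a = (1 - w) * family[lo][0] + w * family[hi][0]
                b = (1 - w) * family[lo][1] + w * family[hi][1]
                break
    return a + b * level
```

    A real slab-tank curve would not be strictly linear in level, and, as the abstract warns, the family is only valid once the permanent (plastic) part of the deflection has stabilized.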

  13. Nitric Oxide Measurement Study. Volume I. Optical Calibration

    Science.gov (United States)

    1979-10-18

    Computation of the Doppler absorption coefficient: the line-center absorption coefficient for a Doppler-broadened line (k0) is given by Mitchell and Zemansky (1971). The theories of pressure broadening are discussed in some detail by Breene (1957), Hindmarsh and Farr (1972), and Mitchell and Zemansky (1971). The Lorentz half-width due to collisions, Δν_L, is given by Mitchell and Zemansky (1971) as Δν_L = Z_L/(πc) (Eq. 38), where Z_L is the collision frequency of a single molecule and c is the velocity of light.

  14. Nitric Oxide Measurement Study: Volume I. Optical Calibration,

    Science.gov (United States)

    1980-05-01

    The line-center absorption coefficient for a Doppler-broadened line (k0) is given by Mitchell and Zemansky (1971) as k0 = (2/Δν_D)·√(ln 2/π)·(πe²/(m_e c))·N(v″,J″)·f_{v′v″,J′J″} (Eq. 25), where e is the electron charge, m_e the electron mass, c the velocity of light, Δν_D the Doppler width, N(v″,J″) the population of the absorbing level and f the oscillator strength. The theories of pressure broadening are discussed in some detail by Breene (1957), Hindmarsh and Farr (1972), and Mitchell and Zemansky (1971). The Lorentz half-width due to collisions, Δν_L, is given by Mitchell and Zemansky (1971) as Δν_L = Z_L/(πc) (Eq. 38), where Z_L is the collision frequency of a single molecule and c is the velocity of light.

  15. Precision Measurement and Calibration. Volume 1. Statistical Concepts and Procedures

    Science.gov (United States)

    1969-02-01

    ... Sons, New York, N.Y. Shewhart, Walter A. (1939), Statistical Method from the Viewpoint of Quality Control. Shewhart, Walter A. (1941), Contribution of statistics to the science of engineering, University of Pennsylvania Bicentennial. Galilei, Galileo (1638; 1898 edition), Discorsi e Dimostrazioni Matematiche Intorno a Due Nuove Scienze, in Le Opere di Galileo Galilei (Edizione Nazionale) VIII, pp. 39-448, Firenze.

  16. Cobalt source calibration

    Energy Technology Data Exchange (ETDEWEB)

    Rizvi, H.M.

    1999-12-03

    The data obtained from these tests determine the dose rate of the two cobalt sources in SRTC. Building 774-A houses one of these sources while the other resides in room C-067 of Building 773-A. The data from this experiment show the following: (1) The dose rate of the No. 2 cobalt source in Building 774-A measured 1.073 x 10{sup 5} rad/h (June 17, 1999), and the dose rate of the Shepherd Model 109 Gamma cobalt source in Building 773-A measured 9.27 x 10{sup 5} rad/h (June 25, 1999); these rates come from placing the graduated cylinder containing the dosimeter solution in the center of the irradiation chamber. (2) Two calibration tests in the 774-A source placed the graduated cylinder with the dosimeter solution approximately 1.5 inches off center in the axial direction; this movement of the sample reduced the measured dose rate by 0.92%, from 1.083 x 10{sup 5} rad/h to 1.073 x 10{sup 5} rad/h. (3) A similar test in the cobalt source in 773-A placed the graduated cylinder approximately 2.0 inches off center in the axial direction; this change in position reduced the measured dose rate by 10.34%, from 1.036 x 10{sup 6} rad/h to 9.27 x 10{sup 5} rad/h. This testing used chemical dosimetry to measure the dose rate of a radioactive source. In this method, one determines the dose by the chemical change that takes place in the dosimeter. For this calibration experiment, the author used a Fricke (ferrous ammonium sulfate) dosimeter. This solution works well for dose rates up to 10{sup 7} rad/h. During irradiation of the Fricke dosimeter solution, the Fe{sup 2+} ions are oxidized to Fe{sup 3+}. When this occurs, the solution acquires a slightly darker tint (not visible to the human eye). To determine the magnitude of the change in Fe ions, one places the solution in a UV-VIS spectrophotometer, which measures the absorbance of the solution. Dividing the absorbance change by the total time (in minutes) of exposure yields the dose rate.
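    The conversion from absorbance change to absorbed dose in Fricke dosimetry is commonly written as D = ΔA/(ε·ℓ·ρ·G). A hedged sketch with approximate literature constants (the values are illustrative, not the report's calibration data):

```python
# Illustrative constants for the Fricke dosimeter (approximate literature values):
EPSILON = 2187.0   # molar absorptivity of Fe3+ at 304 nm, L mol^-1 cm^-1 (25 C)
G_FE3 = 1.607e-6   # radiation chemical yield of Fe3+, mol J^-1
RHO = 1.024        # density of the dosimeter solution, kg L^-1
PATH = 1.0         # optical path length of the cuvette, cm

def fricke_dose_gray(delta_absorbance):
    """Absorbed dose (Gy) from the change in absorbance of the Fricke solution."""
    conc = delta_absorbance / (EPSILON * PATH)   # Fe3+ concentration, mol/L
    return conc / (RHO * G_FE3)                  # dose = c / (rho * G), J/kg

def dose_rate_rad_per_hour(delta_absorbance, exposure_minutes):
    """Dose rate in rad/h (1 Gy = 100 rad)."""
    dose_rad = fricke_dose_gray(delta_absorbance) * 100.0
    return dose_rad * 60.0 / exposure_minutes
```

    With these constants an absorbance change of 0.5 corresponds to roughly 140 Gy, comfortably inside the usual working range of the Fricke system.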

  17. Automated calibration of multistatic arrays

    Energy Technology Data Exchange (ETDEWEB)

    Henderer, Bruce

    2017-03-14

    A method is disclosed for calibrating a multistatic array having a plurality of transmitter and receiver pairs spaced from one another along a predetermined path and relative to a plurality of bin locations, and further being spaced at a fixed distance from a stationary calibration implement. A clock reference pulse may be generated, and each of the transmitters and receivers of each said transmitter/receiver pair turned on at a monotonically increasing time delay interval relative to the clock reference pulse. Ones of the transmitters and receivers may be used such that a previously calibrated transmitter or receiver of a given one of the transmitter/receiver pairs is paired with a subsequently un-calibrated one of the transmitters or receivers of an immediately subsequently positioned transmitter/receiver pair, to calibrate the transmitter or receiver of the immediately subsequent transmitter/receiver pair.

  18. Liquid Krypton Calorimeter Calibration Software

    CERN Document Server

    Hughes, Christina Lindsay

    2013-01-01

    Calibration of the liquid krypton calorimeter (LKr) of the NA62 experiment is managed by a set of standalone programs, or an online calibration driver. These programs are similar to those used by NA48, but have been updated to utilize classes and translated to C++ while maintaining a common functionality. A set of classes developed to handle communication with hardware was used to develop the three standalone programs as well as the main driver program for online calibration between bursts. The main calibration driver has been designed to respond to run control commands and receive burst data, both transmitted via DIM. In order to facilitate the process of reading in calibration parameters, a serializable class has been introduced, allowing the replacement of standard text files with XML configuration files.
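    As a small illustration of the last point, reading calibration parameters from an XML configuration file instead of a flat text file might look like the following (the element and attribute names are invented for the sketch; they are not NA62's actual schema):

```python
import xml.etree.ElementTree as ET

XML = """<calibration>
  <param name="pedestal" value="400.5"/>
  <param name="gain" value="1.02"/>
</calibration>"""

def load_params(xml_text):
    """Parse named calibration parameters from an XML configuration string."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): float(p.get("value")) for p in root.findall("param")}
```

    A serializable parameter class, as described in the abstract, would wrap exactly this kind of read (and the corresponding write) behind one interface.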

  19. TIME CALIBRATED OSCILLOSCOPE SWEEP CIRCUIT

    Science.gov (United States)

    Smith, V.L.; Carstensen, H.K.

    1959-11-24

    An improved time calibrated sweep circuit is presented, which extends the range of usefulness of conventional oscilloscopes as utilized for time calibrated display applications in accordance with U. S. Patent No. 2,832,002. Principal novelty resides in the provision of a pair of separate signal paths, each of which is phase and amplitude adjustable, to connect a high-frequency calibration oscillator to the output of a sawtooth generator also connected to the respective horizontal deflection plates of an oscilloscope cathode ray tube. The amplitude and phase of the calibration oscillator signals in the two signal paths are adjusted to balance out feedthrough currents capacitively coupled at high frequencies of the calibration oscillator from each horizontal deflection plate to the vertical plates of the cathode ray tube.

  20. The Advanced LIGO Photon Calibrators

    CERN Document Server

    Karki, S; Kandhasamy, S; Abbott, B P; Abbott, T D; Anders, E H; Berliner, J; Betzwieser, J; Daveloza, H P; Cahillane, C; Canete, L; Conley, C; Gleason, J R; Goetz, E; Kissel, J S; Izumi, K; Mendell, G; Quetschke, V; Rodruck, M; Sachdev, S; Sadecki, T; Schwinberg, P B; Sottile, A; Wade, M; Weinstein, A J; West, M; Savage, R L

    2016-01-01

    The two interferometers of the Laser Interferometer Gravitational-Wave Observatory (LIGO) recently detected gravitational waves from the mergers of binary black hole systems. Accurate calibration of the output of these detectors was crucial for the observation of these events, and the extraction of parameters of the sources. The principal tools used to calibrate the responses of the second-generation (Advanced) LIGO detectors to gravitational waves are systems based on radiation pressure and referred to as Photon Calibrators. These systems, which were completely redesigned for Advanced LIGO, include several significant upgrades that enable them to meet the calibration requirements of second-generation gravitational wave detectors in the new era of gravitational-wave astronomy. We report on the design, implementation, and operation of these Advanced LIGO Photon Calibrators that are currently providing fiducial displacements on the order of $10^{-18}$ m/$\sqrt{\textrm{Hz}}$ with accuracy and precision of better ...

  1. The Advanced LIGO photon calibrators

    Science.gov (United States)

    Karki, S.; Tuyenbayev, D.; Kandhasamy, S.; Abbott, B. P.; Abbott, T. D.; Anders, E. H.; Berliner, J.; Betzwieser, J.; Cahillane, C.; Canete, L.; Conley, C.; Daveloza, H. P.; De Lillo, N.; Gleason, J. R.; Goetz, E.; Izumi, K.; Kissel, J. S.; Mendell, G.; Quetschke, V.; Rodruck, M.; Sachdev, S.; Sadecki, T.; Schwinberg, P. B.; Sottile, A.; Wade, M.; Weinstein, A. J.; West, M.; Savage, R. L.

    2016-11-01

    The two interferometers of the Laser Interferometer Gravitational-Wave Observatory (LIGO) recently detected gravitational waves from the mergers of binary black hole systems. Accurate calibration of the output of these detectors was crucial for the observation of these events and the extraction of parameters of the sources. The principal tools used to calibrate the responses of the second-generation (Advanced) LIGO detectors to gravitational waves are systems based on radiation pressure and referred to as photon calibrators. These systems, which were completely redesigned for Advanced LIGO, include several significant upgrades that enable them to meet the calibration requirements of second-generation gravitational wave detectors in the new era of gravitational-wave astronomy. We report on the design, implementation, and operation of these Advanced LIGO photon calibrators that are currently providing fiducial displacements on the order of 10^{-18} m/√Hz with accuracy and precision of better than 1%.

  2. Antenna Calibration and Measurement Equipment

    Science.gov (United States)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. These data include continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data include antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to the improved RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  3. A calibrated Franklin chimes

    Science.gov (United States)

    Gonta, Igor; Williams, Earle

    1994-05-01

    Benjamin Franklin devised a simple yet intriguing device to measure electrification in the atmosphere during conditions of foul weather. He constructed a system of bells, one of which was attached to a conductor that was suspended vertically above his house. The device is illustrated in a well-known painting of Franklin (Cohen, 1985). The elevated conductor acquired a potential due to the electric field in the atmosphere and caused a brass ball to oscillate between two bells. The purpose of this study is to extend Franklin's idea by constructing a set of 'chimes' which will operate both in fair and in foul weather conditions. In addition, a mathematical relationship will be established between the frequency of oscillation of a metallic sphere in a simplified geometry and the potential on one plate due to the electrification of the atmosphere. Thus it will be possible to calibrate the 'Franklin Chimes' and to obtain a nearly instantaneous measurement of the potential of the elevated conductor in both fair and foul weather conditions.
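    The frequency-potential relationship sought above can be sketched under strong simplifying assumptions: the ball picks up a charge q = C·V at each contact and then crosses the gap under the constant force qV/d. All parameter values and the contact-capacitance model are illustrative, not taken from the study:

```python
import math

def shuttle_frequency(V, m=1e-3, d=0.05, C=1e-12):
    """Oscillation frequency of the ball for plate potential V (volts).

    Assumptions (illustrative): charge q = C*V acquired at each contact,
    uniform field E = V/d, ball of mass m (kg) starting from rest each
    transit across the gap d (m); C is an effective contact capacitance (F).
    """
    q = C * V
    force = q * V / d                         # F = qE with E = V/d
    transit = math.sqrt(2.0 * d * m / force)  # time to cross the gap from rest
    return 1.0 / (2.0 * transit)              # two transits per oscillation
```

    Under these assumptions the frequency scales linearly with V, which is what makes a calibrated, nearly instantaneous potential readout plausible.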

  4. Site characterization for calibration of radiometric sensors using vicarious method

    Science.gov (United States)

    Parihar, Shailesh; Rathore, L. S.; Mohapatra, M.; Sharma, A. K.; Mitra, A. K.; Bhatla, R.; Singh, R. S.; Desai, Yogdeep; Srivastava, Shailendra S.

    2016-05-01

    The radiometric performance of Earth-observation satellite sensors changes from the pre-launch ground calibration campaign to the post-launch period, which extends over the lifetime of the satellite, owing in part to launch vibrations. Therefore, calibration is carried out worldwide through various methods throughout the satellite lifetime. In India, the Indian Space Research Organisation (ISRO) calibrates the sensors of the Resourcesat-2 satellite by vicarious methods. One of these vicarious calibration methods is the reflectance-based approach, which is applied in this study for the radiometric calibration of sensors on board the Resourcesat-2 satellite. Ground-based measurements of atmospheric conditions and surface reflectance were made at the Bap, Rajasthan Calibration/Validation (Cal/Val) site. Cal/Val observations at the site were carried out with a hyper-spectral spectroradiometer covering the spectral range 350-2500 nm for radiometric characterization of the site. A Sunphotometer/Ozonometer was also used for measuring the atmospheric parameters. The calibrated radiance is converted to absolute at-sensor spectral reflectance and Top-Of-Atmosphere (TOA) radiance. TOA radiance was computed using the radiative transfer model 'Second Simulation of the Satellite Signal in the Solar Spectrum' (6S), which can accurately simulate the effects introduced by the presence of the atmosphere along the path from Sun to target (surface) to sensor. The methodology for band-averaged reflectance retrieval and the spectral reflectance fitting process are described. The spectral reflectance and atmospheric parameters are then fed into the 6S code to predict TOA radiance, which is compared with the Resourcesat-2 radiance. The spectral signature and its reflectance ratio indicate the uniformity of the site. Thus the study shows that the selected site is suitable for vicarious calibration of the Resourcesat-2 sensors. Further, the study demonstrates the procedure for similar site-selection exercises for Cal/Val analysis of other satellites over India.
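    The band-averaged reflectance retrieval mentioned above amounts to weighting the measured spectrum by the sensor's relative spectral response. A minimal sketch (trapezoidal integration on a shared wavelength grid; not the study's own code):

```python
def band_averaged(wavelengths_nm, spectrum, response):
    """Band-average a measured surface reflectance spectrum with a sensor's
    relative spectral response.

    wavelengths_nm, spectrum, response: equal-length sequences sampled on
    the same wavelength grid (e.g. across the 350-2500 nm range).
    """
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) * (wavelengths_nm[i + 1] - wavelengths_nm[i]) / 2.0
                   for i in range(len(ys) - 1))
    num = trapz([s * r for s, r in zip(spectrum, response)])
    den = trapz(response)
    return num / den
```

    The same weighting would be applied to the 6S-predicted TOA radiance before comparing it with the sensor's band radiance.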

  5. Challenges in X-band Weather Radar Data Calibration

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Rasmussen, Michael R.

    2009-01-01

    Application of weather radar data in urban hydrology is evolving, and radar data are now applied for modelling, analysis and real-time control purposes. In these contexts, it is all-important that the radar data are well calibrated and adjusted in order to obtain valid quantitative precipitation...... estimates. This paper compares two calibration procedures for a small marine X-band radar by comparing radar data with rain gauge data. Validation shows a very good consensus with regard to precipitation volumes, but more diverse results on peak rain intensities....
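    A common way to adjust radar rainfall against gauges, and one plausible reading of the volume comparison above, is a mean-field bias correction (a standard technique, not necessarily the paper's exact procedure):

```python
def mean_field_bias(gauge_mm, radar_mm):
    """Adjustment factor: ratio of total gauge accumulation to total radar
    accumulation over co-located gauge/radar-pixel pairs."""
    return sum(gauge_mm) / sum(radar_mm)

def adjust(radar_field, factor):
    """Apply the factor uniformly to the radar rainfall field."""
    return [r * factor for r in radar_field]
```

    Matching accumulated volumes this way leaves peak intensities untouched in relative terms, which is consistent with volumes agreeing better than peaks.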

  6. New methods applicable for calibration of indicator electrodes.

    Science.gov (United States)

    Michałowski, Tadeusz; Pilarski, Bogusław; Ponikvar-Svet, Maja; Asuero, Agustin G; Kukwa, Agata; Młodzianowski, Janusz

    2011-02-15

    New methods applicable to the calibration of indicator electrodes, based on standard addition and standard subtraction, are suggested. Some of the methods enable the slope of an indicator electrode and the equivalence volume V(eq) to be determined simultaneously from a single set of potentiometric titration data. Some other previously known methods were also taken into account. A new model based on the standard addition method, applicable also in the nonlinear range of the ISE slope (S), is suggested, and its applicability was confirmed experimentally in the calibration of a calcium ISE. Copyright © 2010 Elsevier B.V. All rights reserved.
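    For the linear (Nernstian) range, the classical single known-addition formula recovers the sample concentration from the potential shift; it is shown here as background to the methods above, not as the paper's new model:

```python
def conc_by_standard_addition(E1_mV, E2_mV, Vx_mL, Vs_mL, cs, slope_mV):
    """Single known-addition formula for an ISE with a calibrated slope.

    E1_mV: potential in sample volume Vx_mL of unknown concentration cx;
    E2_mV: potential after adding Vs_mL of standard of concentration cs.
    From dE = S*log10((cx*Vx + cs*Vs) / ((Vx + Vs)*cx)):
        cx = cs*Vs / ((Vx + Vs)*10**(dE/S) - Vx)
    """
    ratio = 10 ** ((E2_mV - E1_mV) / slope_mV)
    return cs * Vs_mL / ((Vx_mL + Vs_mL) * ratio - Vx_mL)
```

    With a second addition, the same relation can be solved numerically for the slope as well, which is the kind of simultaneous determination the abstract describes.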

  7. THE AGILE METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Charul Deewan

    2012-09-01

    Full Text Available The technologies are numerous and software is the one which is most widely used. Some companies have their own customized methodology for developing their software, but the majority speak of two kinds of methodologies: Traditional and Agile methodologies. In this paper, we will discuss some of the aspects of what Agile methodology is, how it can be used to get the best result from a project, and how we get it to work in an organization.

  8. Health and safety impacts of nuclear, geothermal, and fossil-fuel electric generation in California. Volume 9. Methodologies for review of the health and safety aspects of proposed nuclear, geothermal, and fossil-fuel sites and facilities

    Energy Technology Data Exchange (ETDEWEB)

    Nero, A.V.; Quinby-Hunt, M.S.

    1977-01-01

    This report sets forth methodologies for review of the health and safety aspects of proposed nuclear, geothermal, and fossil-fuel sites and facilities for electric power generation. The review is divided into a Notice of Intention process and an Application for Certification process, in accordance with the structure to be used by the California Energy Resources Conservation and Development Commission, the first emphasizing site-specific considerations, the second examining the detailed facility design as well. The Notice of Intention review is divided into three possible stages: an examination of emissions and site characteristics, a basic impact analysis, and an assessment of public impacts. The Application for Certification review is divided into five possible stages: a review of the Notice of Intention treatment, review of the emission control equipment, review of the safety design, review of the general facility design, and an overall assessment of site and facility acceptability.

  9. Language Policy and Methodology

    Science.gov (United States)

    Liddicoat, Antony J.

    2004-01-01

    The implementation of a language policy is crucially associated with questions of methodology. This paper explores approaches to language policy, approaches to methodology and the impact that these have on language teaching practice. Language policies can influence decisions about teaching methodologies either directly, by making explicit…

  10. Mercury Continuous Emmission Monitor Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Ryan Boysen; William Schuster; Joseph Rovani

    2009-03-12

    Mercury continuous emissions monitoring systems (CEMs) are being implemented in over 800 coal-fired power plant stacks throughout the U.S. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor calibrators/generators. These devices are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005 and vacated by a Federal appeals court in early 2008, required that calibration be performed with NIST-traceable standards. Despite the vacatur, mercury emissions regulations in the future will require NIST-traceable calibration standards, and EPA does not want to interrupt the effort towards developing NIST traceability protocols. The traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued a conceptual interim traceability protocol for elemental mercury calibrators. The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 {micro}g/m{sup 3} elemental mercury, and in the future down to 0.2 {micro}g/m{sup 3}, and this analysis will be directly traceable to analyses by NIST. The EPA traceability protocol document is divided into two separate sections. The first deals with the qualification of calibrator models by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the calibrators that meet the qualification specifications. The NIST-traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma

  11. Calibration procedure for a laser triangulation scanner with uncertainty evaluation

    Science.gov (United States)

    Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio

    2016-11-01

    Most of the low-cost 3D scanning devices that are nowadays available on the market are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper details a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and is applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by the application of the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.

  12. Adaptive calibration of (u,v)‐wind ensemble forecasts

    DEFF Research Database (Denmark)

    Pinson, Pierre

    2012-01-01

    Ensemble forecasts of (u,v)‐wind are of crucial importance for a number of decision‐making problems related to e.g. air traffic control, ship routeing and energy management. The skill of these ensemble forecasts as generated by NWP‐based models can be maximised by correcting for their lack...... of sufficient reliability. The original framework introduced here allows for an adaptive bivariate calibration of these ensemble forecasts. The originality of this methodology lies in the fact that calibrated ensembles still consist of a set of (space–time) trajectories, after translation and dilation...... on the adaptive calibration of ECMWF ensemble forecasts of (u,v)‐wind at 10 m above ground level over Europe over a three‐year period between December 2006 and December 2009. Substantial improvements in (bivariate) reliability and in various deterministic/probabilistic scores are observed. Finally, the maps...
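    The translation-and-dilation idea can be sketched per wind component: subtract an estimated bias, then scale the members about their mean so they remain a coherent set of trajectories. This is a simplified, static illustration, not the paper's adaptive (recursive) estimator:

```python
def calibrate_component(members, bias, spread_ratio):
    """Translate then dilate one wind component of an ensemble.

    bias: estimated mean error to subtract (translation);
    spread_ratio: target/ensemble spread ratio (dilation about the mean).
    Applying the same affine map to every member keeps the ensemble a
    consistent set of space-time trajectories.
    """
    n = len(members)
    mean = sum(members) / n
    return [mean - bias + spread_ratio * (m - mean) for m in members]

def calibrate_uv(u_members, v_members, u_bias, v_bias, ratio):
    """Bivariate calibration applied component-wise (a simplification; the
    paper treats (u,v) jointly)."""
    return (calibrate_component(u_members, u_bias, ratio),
            calibrate_component(v_members, v_bias, ratio))
```

    In the adaptive setting, `bias` and `spread_ratio` would be updated recursively from recent forecast-observation pairs rather than fixed.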

  13. Capital Structure Arbitrage under a Risk-Neutral Calibration

    Directory of Open Access Journals (Sweden)

    Peter J. Zeitsch

    2017-01-01

    Full Text Available By reinterpreting the calibration of structural models, a reassessment of the importance of the input variables is undertaken. The analysis shows that volatility is the key parameter to any calibration exercise, by several orders of magnitude. To maximize the sensitivity to volatility, a simple formulation of Merton’s model is proposed that employs deep out-of-the-money option implied volatilities. The methodology also eliminates the use of historic data to specify the default barrier, thereby leading to a full risk-neutral calibration. Subsequently, a new technique for identifying and hedging capital structure arbitrage opportunities is illustrated. The approach seeks to hedge the volatility risk, or vega, as opposed to the exposure from the underlying equity itself, or delta. The results question the efficacy of the common arbitrage strategy of only executing the delta hedge.
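    As background, a common fixed-point scheme for calibrating Merton's model from observed equity value and equity volatility is sketched below. Note that the paper instead specifies volatility from deep out-of-the-money option implied vols and avoids historical default-barrier data; this textbook sketch does not implement that refinement:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity(V, sigma_v, D, r, T):
    """Equity value and equity volatility implied by Merton's model
    (equity as a call on firm assets V with debt face value D)."""
    d1 = (math.log(V / D) + (r + 0.5 * sigma_v ** 2) * T) / (sigma_v * math.sqrt(T))
    d2 = d1 - sigma_v * math.sqrt(T)
    E = V * norm_cdf(d1) - D * math.exp(-r * T) * norm_cdf(d2)
    sigma_e = norm_cdf(d1) * V * sigma_v / E
    return E, sigma_e

def calibrate_merton(E_obs, sigma_e_obs, D, r, T, n_iter=200):
    """Recover asset value and asset volatility by fixed-point iteration."""
    V = E_obs + D * math.exp(-r * T)              # starting guesses
    sigma_v = sigma_e_obs * E_obs / V
    for _ in range(n_iter):
        d1 = (math.log(V / D) + (r + 0.5 * sigma_v ** 2) * T) / (sigma_v * math.sqrt(T))
        d2 = d1 - sigma_v * math.sqrt(T)
        V = (E_obs + D * math.exp(-r * T) * norm_cdf(d2)) / norm_cdf(d1)
        sigma_v = sigma_e_obs * E_obs / (norm_cdf(d1) * V)
    return V, sigma_v
```

    The abstract's point, that the calibration is dominated by the volatility input, is visible here: sigma_e_obs enters both update equations directly.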

  14. Calibration of circular aperture area using a vision probe at Inmetro

    Directory of Open Access Journals (Sweden)

    Costa Pedro Bastos

    2016-01-01

    Full Text Available Circular aperture areas are standards of high importance for the realization of photometric and radiometric measurements, where the accuracy of these measurements is related to the accuracy of the circular aperture area calibrations. In order to meet the requirement for traceability, a methodology for circular aperture area measurement was developed at the Brazilian metrology institute, driven by requirements from radiometric and photometric measurements. In the developed methodology, apertures are measured without contact through images of the aperture edges captured by a camera. These images are processed using computer vision techniques, and the values of the circular aperture area are then determined.
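    After edge detection, the aperture area can be obtained from a circle fit to the extracted edge points. A hedged sketch using the simple algebraic (Kåsa) least-squares fit, one of several possible fits; the paper does not specify this particular one:

```python
import math

def fit_circle_area(points):
    """Algebraic (Kasa) circle fit to edge points extracted from an image.

    Solves least squares for x^2 + y^2 + D*x + E*y + F = 0 and returns
    (cx, cy, radius, area)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in points:
        z = -(x * x + y * y)
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * z
    # Gaussian elimination with partial pivoting on the 3x3 normal equations.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        s = b[r] - sum(A[r][c] * sol[c] for c in range(r + 1, 3))
        sol[r] = s / A[r][r]
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    radius = math.sqrt(cx * cx + cy * cy - F)
    return cx, cy, radius, math.pi * radius * radius
```

    In practice the edge points would come from subpixel edge extraction, and the pixel-to-millimetre scale from a calibrated target.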

  15. A phantom-based method to standardize dose-calibrators for new β+-emitters.

    Science.gov (United States)

    Morgat, Clément; Mazère, Joachim; Fernandez, Philippe; Buj, Sébastien; Vimont, Delphine; Schulz, Jürgen; Lamare, Frédéric

    2015-02-01

    Quantitative imaging with PET requires accurate measurements of the amount of radioactivity injected into the patient and the concentration of radioactivity in a given region. Recently, new positron emitters, such as (124)I, (89)Zr, (82)Rb, (68)Ga, and (64)Cu, have emerged to promote PET development, but standards are still largely lacking. Therefore, we propose to validate a simple, robust, and replicable methodology, not requiring the use of any standards, to accurately calibrate a dose-calibrator for any β(+)-emitter. On the basis of the (18)F cross-calibration routinely performed with fluorine-18-fluorodeoxyglucose ((18)F-FDG) in nuclear medicine departments, a methodology was developed using β(+)-emitting phantoms to cross-calibrate the dose-calibrator for measuring the activity of positron emitters and quantifying the standardized uptake value (SUV). (68)Ga phantoms filled with activities measured at various dose-calibrator settings were imaged to establish calibration curves (SUV values as a function of the dose-calibrator setting) and to identify the setting value yielding an SUV value of 1.00 g/ml, reflecting an accurate measurement of (68)Ga activity. Activities measured with the identified setting were finally checked with a γ-counter. The setting of 772±1 was identified as ensuring that the studied dose-calibrator is correctly calibrated for (68)Ga, giving an SUV value of 1.00±0.01 g/ml. γ-ray spectrometry confirmed the accurate measurement of (68)Ga activities by the dose-calibrator (relative error of 2.9±1.5%). We have developed a phantom-based method to accurately standardize dose-calibrators for any β(+)-emitter, without any standards.
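    Once the calibration curve (SUV versus dial setting) is measured, the setting giving SUV = 1.00 g/ml can be read off by interpolation. A minimal sketch (the data points are invented for illustration, not the paper's measurements):

```python
def setting_for_unit_suv(settings, suvs, target=1.0):
    """Linearly interpolate the dose-calibrator dial setting that yields the
    target SUV from measured (setting, SUV) calibration pairs."""
    pairs = sorted(zip(settings, suvs))
    for (s0, v0), (s1, v1) in zip(pairs, pairs[1:]):
        if (v0 - target) * (v1 - target) <= 0 and v0 != v1:
            return s0 + (target - v0) * (s1 - s0) / (v1 - v0)
    raise ValueError("target SUV not bracketed by calibration data")
```

    The independent γ-counter check described in the abstract then validates the activity measured at the interpolated setting.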

  16. Mexican national pyranometer network calibration

    Science.gov (United States)

    VAldes, M.; Villarreal, L.; Estevez, H.; Riveros, D.

    2013-12-01

    In order to take advantage of solar radiation as an alternative energy source it is necessary to evaluate its spatial and temporal availability. The Mexican National Meteorological Service (SMN) has a network with 136 meteorological stations, each coupled with a pyranometer for measuring the global solar radiation. Some of these stations had not been calibrated in several years. The Mexican Department of Energy (SENER), in order to obtain a reliable evaluation of the solar resource, funded this project to calibrate the SMN pyranometer network and validate the data. The calibration of the 136 pyranometers by the intercomparison method recommended by the World Meteorological Organization (WMO) requires lengthy observations and specific environmental conditions such as clear skies and a stable atmosphere, circumstances that determine the site and season of the calibration. The Solar Radiation Section of the Instituto de Geofísica of the Universidad Nacional Autónoma de México is a Regional Center of the WMO and is certified to carry out the calibration procedures and issue certificates. We are responsible for the recalibration of the pyranometer network of the SMN. A continuous-emission solar simulator with an exposed area of 30 cm diameter was acquired to reduce the calibration time and remove the dependence on atmospheric conditions. We present the results of the calibration of 10 thermopile pyranometers and one photovoltaic cell by the intercomparison method, with more than 10000 observations each, and those obtained with the solar simulator.
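    In an intercomparison, the test pyranometer's sensitivity (responsivity) follows from simultaneous readings against a reference instrument. A minimal sketch; the ratio-of-sums estimator is one common choice, and the units and values are illustrative:

```python
def responsivity_samples(signal_uV, reference_irradiance_Wm2):
    """Instantaneous responsivity samples (uV per W/m^2) from simultaneous
    readings of the test pyranometer and the reference irradiance."""
    return [v / g for v, g in zip(signal_uV, reference_irradiance_Wm2)]

def calibration_factor(signal_uV, reference_irradiance_Wm2):
    """Sensitivity taken as the ratio of summed test-instrument signal to
    summed reference irradiance over many simultaneous observations,
    which reduces the weight of noisy low-irradiance samples."""
    return sum(signal_uV) / sum(reference_irradiance_Wm2)
```

    With thousands of clear-sky observations, as in the abstract, the scatter of the instantaneous samples also provides an uncertainty estimate for the factor.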

  17. Reachable volume RRT

    KAUST Repository

    McMahon, Troy

    2015-05-01

    © 2015 IEEE. Reachable volumes are a new technique that allows one to efficiently restrict sampling to feasible/reachable regions of the planning space even for high degree of freedom and highly constrained problems. However, they have so far only been applied to graph-based sampling-based planners. In this paper we develop the methodology to apply reachable volumes to tree-based planners such as Rapidly-Exploring Random Trees (RRTs). In particular, we propose a reachable volume RRT called RVRRT that can solve high degree of freedom problems and problems with constraints. To do so, we develop a reachable volume stepping function, a reachable volume expand function, and a distance metric based on these operations. We also present a reachable volume local planner to ensure that local paths satisfy constraints for methods such as PRMs. We show experimentally that RVRRTs can solve constrained problems with as many as 64 degrees of freedom and unconstrained problems with as many as 134 degrees of freedom. RVRRTs can solve problems more efficiently than existing methods, requiring fewer nodes and collision detection calls. We also show that it is capable of solving difficult problems that existing methods cannot.
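    The reachable-volume stepping function described above restricts tree growth to feasible regions. The toy sketch below shows the shape of such a constrained RRT step in 2-D, with a simple annulus standing in for a reachable volume; the actual reachable-volume computation for articulated, constrained robots is not reproduced here:

```python
import math
import random

def constrained_step(frm, to, step, feasible):
    """Move from `frm` toward `to` by at most `step`; keep the result only
    if it lies in the feasible (stand-in 'reachable') region."""
    dx, dy = to[0] - frm[0], to[1] - frm[1]
    d = math.hypot(dx, dy)
    if d < 1e-12:
        return None
    s = min(step, d) / d
    new = (frm[0] + s * dx, frm[1] + s * dy)
    return new if feasible(new) else None

def rrt(start, goal, feasible, step=0.3, iters=3000, seed=3):
    """Basic RRT whose extend step is filtered by the feasibility test."""
    rng = random.Random(seed)
    nodes = [start]
    parents = {0: None}
    for _ in range(iters):
        sample = goal if rng.random() < 0.1 else (rng.uniform(-2, 2), rng.uniform(-2, 2))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        new = constrained_step(nodes[i], sample, step, feasible)
        if new is None:
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = i
        if math.dist(new, goal) < step:
            break
    return nodes

# Stand-in for a reachable volume: an annulus in the plane.
annulus = lambda p: 0.5 <= math.hypot(*p) <= 1.5
```

    In the paper's setting, `feasible` would be replaced by reachable-volume membership per joint, and the straight-line step by the reachable-volume stepping function.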

  18. Solar Cell Calibration and Measurement Techniques

    Science.gov (United States)

    Bailey, Sheila; Brinker, Dave; Curtis, Henry; Jenkins, Phillip; Scheiman, Dave

    2004-01-01

    The increasing complexity of space solar cells and the increasing international markets for both cells and arrays have resulted in workshops jointly sponsored by NASDA, ESA and NASA. These workshops are designed to obtain international agreement on standardized values for the AM0 spectrum and solar constant, recommend laboratory measurement practices, and establish a set of protocols for international comparison of laboratory measurements. A working draft of an ISO standard, WD15387, "Requirements for Measurement and Calibration Procedures for Space Solar Cells", was discussed with a focus on the scope of the document, a definition of the primary standard cell, and the required error analysis for all measurement techniques. Working groups addressed the issues of the Air Mass Zero (AM0) solar constant and spectrum, laboratory measurement techniques, and the international round-robin methodology. A summary is presented of the current state of each area and the formulation of the ISO document.

  19. Calibrating System for Vacuum Gauges

    Institute of Scientific and Technical Information of China (English)

    MengJun; YangXiaotian; HaoBinggan; HouShengjun; HuZhenjun

    2003-01-01

In order to measure the vacuum degree, a lot of vacuum gauges will be used in the CSR vacuum system. We bought several types of vacuum gauges. Different types of vacuum gauges, or even gauges of the same type, give different measurement results under the same conditions, so they must be calibrated. But it is impractical for us to send so many gauges to an outside calibrating station because of the high price. So the best choice is to build a second-class calibrating station for vacuum gauges by ourselves (Fig. 1).

  20. Jet energy calibration in ATLAS

    CERN Document Server

    Schouten, Doug

A correct energy calibration for jets is essential to the success of the ATLAS experiment. In this thesis I study a method for deriving an in situ jet energy calibration for the ATLAS detector. In particular, I show the applicability of the missing transverse energy projection fraction method. This method is shown to set the correct mean energy for jets. Pileup effects due to the high luminosities at ATLAS are also studied. I study the correlations in lateral distributions of pileup energy, as well as the luminosity dependence of the in situ calibration method.

  1. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work, family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and are easy to implement.

  2. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Reda, Ibrahim; Robinson, Justin

    2016-06-02

This study addresses the effect of calibration methodologies on calibration responsivities and the resulting impact on radiometric measurements. The calibration responsivities used in this study are provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle and responsivity as a function of solar zenith angle determined by clear-sky comparisons to reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison of the test radiometer under calibration to a reference radiometer of the same type. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately enable a reduction in radiometric measurement uncertainties and assist in developing consensus on a standard for calibration.
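The responsivity-based calibration chain can be sketched as follows. All signal, irradiance and offset values are made up, and the 1.5% indoor/outdoor responsivity difference is merely illustrative of the 1% to 2% range reported.

```python
def responsivity(signal_uV, reference_irradiance_W_m2, thermal_offset_uV=0.0):
    # Outdoor-style calibration: responsivity in uV per (W/m^2) from the
    # radiometer signal against a reference irradiance, with an optional
    # thermal-offset correction (all numbers below are illustrative).
    return (signal_uV - thermal_offset_uV) / reference_irradiance_W_m2

def irradiance(signal_uV, R_uV_per_W_m2):
    # Field measurement: signal divided by the calibration responsivity
    return signal_uV / R_uV_per_W_m2

R_outdoor = responsivity(8000.0, 1000.0, thermal_offset_uV=-20.0)  # 8.02
R_indoor = R_outdoor * 0.985  # hypothetical indoor lamp result, ~1.5% lower

sig = 8020.0  # same field signal interpreted with each responsivity
print(irradiance(sig, R_outdoor), irradiance(sig, R_indoor))
```

The same field signal interpreted with the two responsivities differs by the same 1.5%, which is how a calibration-method difference propagates directly into the irradiance record.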

  3. Vacuum gage calibration system for 10^-8 torr to 10 torr

    Science.gov (United States)

    Holanda, R.

    1969-01-01

    Calibration system consists of a gas source, a source pressure gage, source volume, transfer volume and test chamber, plus appropriate piping, valves and vacuum source. It has been modified to cover as broad a range as possible while still providing accuracy and convenience.
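A system with a source volume, transfer volume and test chamber typically realizes low calibration pressures by static (volume) expansion. The sketch below assumes an isothermal ideal gas and uses illustrative volumes; the article does not state the actual volume ratios.

```python
def expanded_pressure(p_fill_torr, v_transfer_l, v_chamber_l, stages=1):
    """Static expansion: a small transfer volume filled to a measurable
    pressure is expanded into the evacuated test chamber, so
    p_new = p_fill * V_t / (V_t + V_c). For multiple stages, the transfer
    volume is assumed to be re-filled from the chamber and the chamber
    re-evacuated between expansions (idealized, isothermal ideal gas)."""
    p = p_fill_torr
    ratio = v_transfer_l / (v_transfer_l + v_chamber_l)
    for _ in range(stages):
        p *= ratio
    return p

# e.g. 10 torr source pressure, 0.1 l transfer volume, 99.9 l chamber:
# each stage divides the pressure by 1000
print(expanded_pressure(10.0, 0.1, 99.9, stages=1))
print(expanded_pressure(10.0, 0.1, 99.9, stages=4))
```

Repeated expansions are what let a gauge readable only near the source pressure anchor calibrations many decades lower, which is how a single system can span 10^-8 to 10 torr.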

  4. Prognosis value of the active tumoral volume in {sup 18}F-F.D.G. for the esophagus cancer and influence of the tumor delimitation methodology; Valeur pronostique du volume tumoral actif en {sup 18}F-FDG pour le cancer de l'oesophage et influence de la methodologie de contourage de la tumeur

    Energy Technology Data Exchange (ETDEWEB)

    Hatt, M. [LaTIM Inserm U650, 29 - Brest (France); Cheze Le Rest, C. [CHU Morvan, departement de medecine nucleaire, 29 - Brest (France); Albarghach, M.N. [CHU Morvan, departement de radiotherapie, 29 - Brest (France)

    2010-07-01

Purpose: to compare the predictive value for survival and response to treatment of the active tumor volume, automatically measured on {sup 18}F-F.D.G. PET images by different methods, with that of the S.U.V., in esophagus cancer. Conclusions: our results suggest that the tumor volume is pertinent information whose prognostic value in esophagus cancer is clearly superior to that of the S.U.V. (maximum or average), provided it is measured accurately, which the fuzzy locally adaptive Bayesian (F.L.A.B.) method allows, unlike thresholding methods. The predictive value of the total glycolysis volume (T.G.V.) is higher still and is less influenced by the method used, with F.L.A.B. offering a better differentiation between the different responses to treatment and for survival. (N.C.)

  5. DART II documentation. Volume III. Appendices

    Energy Technology Data Exchange (ETDEWEB)

    1979-05-23

The DART II is a data acquisition system that can be used with air pollution monitoring equipment. This volume contains appendices that deal with the following topics: adjustment and calibration procedures (power supply adjustment procedure, ADC calibration procedure, analog multiplexer calibration procedure); mother board signature list; schematic diagrams; device specification sheets (microprocessor, asynchronous receiver/transmitter, analog-to-digital converter, arithmetic processing unit, 5-volt power supply, ±15-volt power supply, 24-volt power supply, floppy disk formatter/controller, random access static memory); ROM program listing; 6800 microprocessor instruction set, octal listing; and cable lists. (RR)

  6. A practical implementation of microphone free-field comparison calibration according to the standard IEC 61094-8

    DEFF Research Database (Denmark)

    Barrera Figueroa, Salvador; Torras Rosell, Antoni; Rasmussen, Knud;

    2012-01-01

An international standard concerned with the calibration of microphones in a free field by comparison has recently been published. The standard contemplates two main calibration methodologies for determining the sensitivity of a microphone under test when compared against a reference microphone. A third method, consisting of a combination of the sequential and simultaneous methodologies, has also been investigated. Though the application of time selective techniques is not discussed, the experimental results indicate immunity to unwanted reflections in the sequential and combined approaches.

  7. VIIRS reflective solar bands on-orbit calibration and performance: a three-year update

    Science.gov (United States)

    Sun, Junqiang; Wang, Menghua

    2014-11-01

The on-orbit calibration of the reflective solar bands (RSBs) of VIIRS and the results from the analysis of the three years of mission data to date are presented. The VIIRS solar diffuser (SD) and lunar calibration methodology are discussed, and the calibration coefficients, called F-factors, for the RSBs are given for the latest revision. The coefficients derived from the two calibrations are compared and the uncertainties of the calibrations are discussed. Numerous improvements are made, with the major improvements to the calibration results coming mainly from the improved bidirectional reflectance factor (BRF) of the SD and the vignetting functions of both the SD screen and the sun-view screen. The very clean results, devoid of many previously known noises and artifacts, assure that VIIRS has performed well for the three years on orbit since launch, and in particular that the solar diffuser stability monitor (SDSM) is functioning essentially without flaws. The SD degradation, or H-factors, for the most part shows the expected decline, except for a surprising rise on day 830, lasting 75 days, signaling a new degradation phenomenon. Nevertheless, the SDSM and the calibration methodology have successfully captured the SD degradation for RSB calibration. The overall improvement has the most significant and direct impact on the ocean color products, which demand high accuracy from RSB observations.

  10. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

In this book, the design of two new planar patterns for camera calibration of intrinsic parameters is addressed and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated. Also, the 3D Euclidean reconstruction by using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homographic matrix and fundamental matrix are extensively studied. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can easily be incorporated to improve the reliability of the estimations. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision and related subjects.

  11. The Dark Energy Survey Data Processing and Calibration System

    CERN Document Server

    Mohr, Joseph J; Bertin, Emmanuel; Daues, Gregory E; Desai, Shantanu; Gower, Michelle; Gruendl, Robert; Hanlon, William; Kuropatkin, Nikolay; Lin, Huan; Marriner, John; Petravick, Don; Sevilla, Ignacio; Swanson, Molly; Tomashek, Todd; Tucker, Douglas; Yanny, Brian

    2012-01-01

    The Dark Energy Survey (DES) is a 5000 deg2 grizY survey reaching characteristic photometric depths of 24th magnitude (10 sigma) and enabling accurate photometry and morphology of objects ten times fainter than in SDSS. Preparations for DES have included building a dedicated 3 deg2 CCD camera (DECam), upgrading the existing CTIO Blanco 4m telescope and developing a new high performance computing (HPC) enabled data management system (DESDM). The DESDM system will be used for processing, calibrating and serving the DES data. The total data volumes are high (~2PB), and so considerable effort has gone into designing an automated processing and quality control system. Special purpose image detrending and photometric calibration codes have been developed to meet the data quality requirements, while survey astrometric calibration, coaddition and cataloging rely on new extensions of the AstrOmatic codes which now include tools for PSF modeling, PSF homogenization, PSF corrected model fitting cataloging and joint mode...

  12. Coast guard STD calibration procedures

    National Research Council Canada - National Science Library

    Freeman, R.H; Krug, W.S

    1973-01-01

    This manual describes the procedures used by the Coast Guard Oceanographic UNIT (CGOU) to calibrate several Model 9040 STD systems, manufactured by Plessey Environmental Systems, currently in use within the Coast Guard...

  13. Calibration of "Babyline" RP instruments

    CERN Multimedia

    2015-01-01

      If you have old RP instrumentation of the “Babyline” type, as shown in the photo, please contact the Radiation Protection Group (Joffrey Germa, 73171) to have the instrument checked and calibrated. Thank you. Radiation Protection Group

  14. Astrid-2 EMMA Magnetic Calibration

    DEFF Research Database (Denmark)

    Merayo, José M.G.; Brauer, Peter; Risbo, Torben

    1998-01-01

experiment built as a collaboration between the DTU, Department of Automation and the Department of Plasma Physics, The Alfvenlaboratory, Royal Institute of Technology (RIT), Stockholm. The final magnetic calibration of the Astrid-2 satellite was done at the Lovoe Magnetic Observatory under the Geological ... of the magnetometer readings in each position were related to the field magnitudes from the Observatory, and a least squares fit for the 9 magnetometer calibration parameters was performed (3 offsets, 3 scale values and 3 inter-axes angles). After corrections for the magnetometer digital-to-analogue converters ... fit calibration parameters. Owing to time shortage, we did not evaluate the temperature coefficients of the flight sensor calibration parameters. However, this was done for an identical flight spare magnetometer sensor at the magnetic coil facility belonging to the Technical University of Braunschweig ...

  15. Field calibration of cup anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Kristensen, L.; Jensen, G.; Hansen, A.; Kirkegaard, P.

    2001-01-01

    An outdoor calibration facility for cup anemometers, where the signals from 10 anemometers of which at least one is a reference can be recorded simultaneously, has been established. The results are discussed with special emphasis on the statistical significance of the calibration expressions. It is concluded that the method has the advantage that many anemometers can be calibrated accurately with a minimum of work and cost. The obvious disadvantage is that the calibration of a set of anemometers may take more than one month in order to have wind speeds covering a sufficiently large magnitude range in a wind direction sector where we can be sure that the instruments are exposed to identical, simultaneous wind flows. Another main conclusion is that statistical uncertainty must be carefully evaluated since the individual 10 minute wind-speed averages are not statistically independent. (au)
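The calibration expressions mentioned above are, in the simplest case, two-parameter linear fits of the reference wind speed against each test anemometer's reading. A sketch with synthetic 10-minute means (the data are made up; real fits would use a month of simultaneous sector-filtered records):

```python
def fit_calibration(ref, test):
    # Ordinary least squares for U_ref = a + b * U_test, the usual
    # two-parameter cup-anemometer calibration expression.
    n = len(ref)
    mx = sum(test) / n
    my = sum(ref) / n
    sxx = sum((x - mx) ** 2 for x in test)
    sxy = sum((x - mx) * (y - my) for x, y in zip(test, ref))
    b = sxy / sxx          # slope (gain correction)
    a = my - b * mx        # offset
    return a, b

# synthetic 10-min means: the test anemometer reads 2% low with a 0.1 m/s offset
ref = [4.0, 6.0, 8.0, 10.0, 12.0]
test = [(u - 0.1) * 0.98 for u in ref]
a, b = fit_calibration(ref, test)
print(round(a, 3), round(b, 3))
```

The abstract's caution about statistical significance applies here: consecutive 10-minute averages are correlated, so the effective sample size for the fit is smaller than the raw count of averages.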

  16. Bayesian Calibration of Microsimulation Models.

    Science.gov (United States)

    Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E

    2009-12-01

Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models.
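The proposed MCMC calibration can be illustrated with a toy example: a single unknown event probability calibrated to one data source by random-walk Metropolis under a uniform prior. All names and data here are hypothetical; a real microsimulation model would replace the analytic likelihood with simulated outputs compared to multiple calibration targets.

```python
import math
import random

def metropolis_calibrate(n_obs, k_obs, n_iter=5000, prop_sd=0.05, seed=1):
    """Random-walk Metropolis for an event probability p given k events in
    n trials, with a Uniform(0,1) prior. A stand-in for calibrating a
    microsimulation parameter to observed calibration data."""
    rng = random.Random(seed)

    def log_post(p):
        if not 0.0 < p < 1.0:
            return float("-inf")  # outside the prior support
        return k_obs * math.log(p) + (n_obs - k_obs) * math.log(1.0 - p)

    p = 0.5
    samples = []
    for _ in range(n_iter):
        q = p + rng.gauss(0.0, prop_sd)        # propose a nearby value
        log_ratio = log_post(q) - log_post(p)  # acceptance ratio
        if rng.random() < math.exp(min(0.0, log_ratio)):
            p = q
        samples.append(p)
    return samples

samples = metropolis_calibrate(200, 60)  # data consistent with p near 0.30
burned = samples[1000:]                  # discard burn-in
print(round(sum(burned) / len(burned), 2))
```

The posterior draws give exactly the interval estimates and identifiability diagnostics the abstract highlights; with several parameters, `log_post` would simply sum log-likelihood contributions from each data source plus log-priors.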

  17. Pressures Detector Calibration and Measurement

    CERN Document Server

    AUTHOR|(CDS)2156315

    2016-01-01

This is a report on my first and second projects (of 3) in NA61. I did data taking and analysis in order to calibrate pressure detectors, and verified the calibration. I analyzed the data with the ROOT software using the C++ programming language. The first part of my project was the determination of the calibration factor of the pressure sensors. Based on that result, I examined the relation between the pressure drop, the gas flow rate in a paper filter, and its diameter.

  18. UVIS G280 Flux Calibration

    Science.gov (United States)

    Bushouse, Howard

    2009-07-01

Flux calibration, image displacement, and spectral trace of the UVIS G280 grism will be established using observations of the HST flux standard star GD71. Accompanying direct exposures will provide the image displacement measurements and wavelength zeropoints for dispersed exposures. The calibrations will be obtained at the central position of each CCD chip and at the center of the UVIS field. No additional field-dependent variations will be derived.

  19. Infrasound Sensor Calibration and Response

    Science.gov (United States)

    2012-09-01

functions with faster rise times. SUMMARY: We have documented past work on the determination of the calibration constant of the LANL infrasound sensor. Los Alamos National Laboratory (LANL) has operated an infrasound sensor calibration chamber that operates over a frequency range of 0.02 to 4 Hz. This chamber has ...

  20. Beam Imaging and Luminosity Calibration

    CERN Document Server

    Klute, Markus; Salfeld-Nebgen, Jakob

    2016-01-01

We discuss a method to reconstruct two-dimensional proton bunch densities using vertex distributions accumulated during LHC beam-beam scans. The $x$-$y$ correlations in the beam shapes are studied and an alternative luminosity calibration technique is introduced. We demonstrate the method on simulated beam-beam scans and estimate the uncertainty on the luminosity calibration associated with the beam-shape reconstruction to be below 1%.
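For Gaussian bunches, the scan curve traced out in a single-axis beam-beam (van der Meer) scan is itself Gaussian, with a width equal to the quadrature sum of the two single-beam widths. A numerical sketch with illustrative widths (the actual LHC bunch shapes and units are not taken from the paper):

```python
import math

def overlap(delta, s1, s2, n=4001, span=10.0):
    """Numerical overlap integral of two 1-D Gaussian bunch densities whose
    centres are separated by `delta` (one axis of a beam-beam scan).
    Simple fixed-grid quadrature over [-span, span]; tails are negligible."""
    lo = -span
    h = 2.0 * span / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        g1 = math.exp(-x * x / (2 * s1 * s1)) / (s1 * math.sqrt(2 * math.pi))
        g2 = math.exp(-(x - delta) ** 2 / (2 * s2 * s2)) / (s2 * math.sqrt(2 * math.pi))
        total += g1 * g2 * h
    return total

s1, s2 = 0.8, 0.6                       # illustrative single-beam widths
cap_sigma = math.sqrt(s1 ** 2 + s2 ** 2)  # convolved scan-curve width

r0 = overlap(0.0, s1, s2)         # head-on rate (peak of the scan curve)
r1 = overlap(cap_sigma, s1, s2)   # rate at one convolved width of separation
print(round(r1 / r0, 4))          # expect about exp(-1/2) ~ 0.6065
```

Non-Gaussian shapes and $x$-$y$ correlations break this simple factorization, which is why reconstructing the full two-dimensional bunch density matters for the sub-1% calibration uncertainty quoted above.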

  1. Is your system calibrated? MRI gradient system calibration for pre-clinical, high-resolution imaging.

    Directory of Open Access Journals (Sweden)

    James O'Callaghan

    Full Text Available High-field, pre-clinical MRI systems are widely used to characterise tissue structure and volume in small animals, using high resolution imaging. Both applications rely heavily on the consistent, accurate calibration of imaging gradients, yet such calibrations are typically only performed during maintenance sessions by equipment manufacturers, and potentially with acceptance limits that are inadequate for phenotyping. To overcome this difficulty, we present a protocol for gradient calibration quality assurance testing, based on a 3D-printed, open source, structural phantom that can be customised to the dimensions of individual scanners and RF coils. In trials on a 9.4 T system, the gradient scaling errors were reduced by an order of magnitude, and displacements of greater than 100 µm, caused by gradient non-linearity, were corrected using a post-processing technique. The step-by-step protocol can be integrated into routine pre-clinical MRI quality assurance to measure and correct for these errors. We suggest that this type of quality assurance is essential for robust pre-clinical MRI experiments that rely on accurate imaging gradients, including small animal phenotyping and diffusion MR.
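A gradient scaling correction of the kind described amounts to a ratio of known to measured phantom feature spacing, applied to image coordinates. The 5.00/5.06 mm numbers below are hypothetical, not from the paper:

```python
def gradient_scale_factor(known_mm, measured_mm):
    # Scaling correction from a structural phantom: ratio of the true
    # feature spacing to the spacing measured in the image.
    return known_mm / measured_mm

def correct_positions(positions_mm, factor):
    # Apply the scaling correction along one gradient axis
    return [round(p * factor, 3) for p in positions_mm]

# hypothetical phantom: rungs every 5.00 mm, imaged spacing 5.06 mm (1.2% error)
f = gradient_scale_factor(5.00, 5.06)
print(round(f, 4), correct_positions([0.0, 5.06, 10.12], f))
```

Gradient non-linearity, by contrast, is position dependent and cannot be removed by a single factor; that is the part the paper handles with a post-processing displacement correction.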

  2. Calibration of shaft alignment instruments

    Science.gov (United States)

    Hemming, Bjorn

    1998-09-01

Correct shaft alignment is vital for most rotating machines. Several shaft alignment instruments, ranging from dial-indicator based to laser based, are commercially available. At VTT Manufacturing Technology a device for the calibration of shaft alignment instruments was developed during 1997. A feature of the developed device is its similarity to the typical use of shaft alignment instruments, i.e. the rotation of two shafts during the calibration. The benefit of the rotation is that all errors of the shaft alignment instrument, for example the deformations of the suspension bars, are included. However, the rotation significantly increases the uncertainty of calibration because of errors in the suspension of the shafts in the developed device. Without rotation the uncertainty of calibration is 0.001 mm for the parallel offset scale and 0.003 mm/m for the angular scale. With rotation the uncertainty of calibration is 0.002 mm for the parallel offset scale and 0.004 mm/m for the angular scale.

  3. Calibration and validation of DRAINMOD to model bioretention hydrology

    Science.gov (United States)

    Brown, R. A.; Skaggs, R. W.; Hunt, W. F.

    2013-04-01

Summary: Previous field studies have shown that the hydrologic performance of bioretention cells varies greatly because of factors such as underlying soil type, physiographic region, drainage configuration, surface storage volume, drainage area to bioretention surface area ratio, and media depth. To more accurately describe bioretention hydrologic response, a long-term hydrologic model that generates a water balance is needed. Some current bioretention models lack the ability to perform long-term simulations and others have never been calibrated from field monitored bioretention cells with underdrains. All peer-reviewed models lack the ability to simultaneously perform both of the following functions: (1) model an internal water storage (IWS) zone drainage configuration and (2) account for soil-water content using the soil-water characteristic curve. DRAINMOD, a widely-accepted agricultural drainage model, was used to simulate the hydrologic response of runoff entering a bioretention cell. The concepts of water movement in bioretention cells are very similar to those of agricultural fields with drainage pipes, so many bioretention design specifications corresponded directly to DRAINMOD inputs. Detailed hydrologic measurements were collected from two bioretention field sites in Nashville and Rocky Mount, North Carolina, to calibrate and test the model. Each field site had two sets of bioretention cells with varying media depths, media types, drainage configurations, underlying soil types, and surface storage volumes. After 12 months, one of these characteristics was altered - surface storage volume at Nashville and IWS zone depth at Rocky Mount. At Nashville, during the second year (post-repair period), the Nash-Sutcliffe coefficients for drainage and exfiltration/evapotranspiration (ET) both exceeded 0.8 during the calibration and validation periods. During the first year (pre-repair period), the Nash-Sutcliffe coefficients for drainage, overflow, and exfiltration
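The Nash-Sutcliffe coefficient used to judge the calibration is one minus the ratio of the residual sum of squares to the variance of the observations. A self-contained sketch with made-up drainage volumes (a perfect fit gives 1; the 0.8 threshold is the level reported above):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: NSE = 1 - SSE / SST, where SSE is the
    sum of squared model residuals and SST the total sum of squares of
    the observations about their mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# hypothetical monthly drainage volumes (observed vs DRAINMOD-style simulated)
obs = [12.0, 30.0, 18.0, 44.0, 26.0]
sim = [14.0, 28.0, 17.0, 45.0, 24.0]
print(round(nash_sutcliffe(obs, sim), 3))
```

An NSE of 0 means the model predicts no better than the observed mean, and negative values mean worse, which is why exceeding 0.8 for both drainage and exfiltration/ET indicates a strong calibration.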

  4. Effective radiation attenuation calibration for breast density: compression thickness influences and correction

    Directory of Open Access Journals (Sweden)

    Thomas Jerry A

    2010-11-01

Full Text Available Abstract. Background: Calibrating mammograms to produce a standardized breast density measurement for breast cancer risk analysis requires an accurate spatial measure of the compressed breast thickness. Thickness inaccuracies due to the nominal system readout value and compression paddle orientation induce unacceptable errors in the calibration. Method: A thickness correction was developed and evaluated using a fully specified two-component surrogate breast model. A previously developed calibration approach based on effective radiation attenuation coefficient measurements was used in the analysis. Water and oil were used to construct phantoms to replicate the deformable properties of the breast. Phantoms consisting of measured proportions of water and oil were used to estimate calibration errors without correction, evaluate the thickness correction, and investigate the reproducibility of the various calibration representations under compression thickness variations. Results: The average thickness uncertainty due to compression paddle warp was characterized to within 0.5 mm. The relative calibration error was reduced to 7% from 48-68% with the correction. The normalized effective radiation attenuation coefficient (planar representation) was reproducible under intra-sample compression thickness variations compared with calibrated volume measures. Conclusion: Incorporating this thickness correction into the rigid breast tissue equivalent calibration method should improve the calibration accuracy of mammograms for risk assessments using the reproducible planar calibration measure.

  5. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  6. Scenario development methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Eng, T. [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Hudson, J. [Rock Engineering Consultants, Welwyn Garden City, Herts (United Kingdom); Stephansson, O. [Royal Inst. of Tech., Stockholm (Sweden). Div. of Engineering Geology; Skagius, K.; Wiborgh, M. [Kemakta, Stockholm (Sweden)

    1994-11-01

    In the period 1981-1994, SKB has studied several methodologies to systematize and visualize all the features, events and processes (FEPs) that can influence a repository for radioactive waste in the future. All the work performed is based on the terminology and basic findings in the joint SKI/SKB work on scenario development presented in the SKB Technical Report 89-35. The methodologies studied are (a) Event tree analysis, (b) Influence diagrams and (c) Rock Engineering Systems (RES) matrices. Each one of the methodologies is explained in this report as well as examples of applications. One chapter is devoted to a comparison between the two most promising methodologies, namely: Influence diagrams and the RES methodology. In conclusion a combination of parts of the Influence diagram and the RES methodology is likely to be a promising approach. 26 refs.

  7. LANGUAGE POLICY AND METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Antony J. Liddicoat

    2004-06-01

Full Text Available The implementation of a language policy is crucially associated with questions of methodology. This paper explores approaches to language policy, approaches to methodology and the impact that these have on language teaching practice. Language policies can influence decisions about teaching methodologies either directly, by making explicit recommendations about the methods to be used in classroom practice, or indirectly, through the conceptualisation of language learning which underlies the policy. It can be argued that all language policies have the potential to influence teaching methodologies indirectly and that those policies which have explicit recommendations about methodology are actually functioning on two levels. This allows for the possibility of conflict between the direct and indirect dimensions of the policy which results from an inconsistency between the explicitly recommended methodology and the underlying conceptualisation of language teaching and learning which informs the policy.

  8. Calibration system for measuring the radon flux density.

    Science.gov (United States)

    Onishchenko, A; Zhukovsky, M; Bastrikov, V

    2015-06-01

    The measurement of radon flux from soil surface is the useful tool for the assessment of radon-prone areas and monitoring of radon releases from uranium mining and milling residues. The accumulation chambers with hollow headspace and chambers with activated charcoal are the most used devices for these purposes. Systematic errors of the measurements strongly depend on the geometry of the chamber and diffusion coefficient of the radon in soil. The calibration system for the attestation of devices for radon flux measurements was constructed. The calibration measurements of accumulation chambers and chambers with activated charcoal were conducted. The good agreement between the results of 2D modelling of radon flux and measurements results was observed. It was demonstrated that reliable measurements of radon flux can be obtained by chambers with activated charcoal (equivalent volume ~75 l) or by accumulation chambers with hollow headspace of ~7-10 l and volume/surface ratio (height) of >15 cm.
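The accumulation-chamber measurement rests on the flux estimate J = (dC/dt)·V/A during the initial linear growth of the chamber concentration, before back-diffusion and leakage matter. A sketch with illustrative chamber dimensions in the range quoted above:

```python
def radon_flux(dC_dt_Bq_m3_s, volume_m3, area_m2):
    """Accumulation-chamber estimate of the radon flux density:
    J = (dC/dt) * V / A, in Bq m^-2 s^-1. Valid only for the initial
    linear growth phase (idealized: no leakage or back-diffusion)."""
    return dC_dt_Bq_m3_s * volume_m3 / area_m2

# illustrative chamber: 7 l headspace (0.007 m^3) over 0.035 m^2 of soil,
# with a measured concentration growth rate of 0.10 Bq/(m^3 s)
J = radon_flux(0.10, 0.007, 0.035)
print(round(J, 4))  # Bq m^-2 s^-1
```

The V/A ratio in this formula is exactly the chamber "height" the abstract constrains to >15 cm: a taller headspace slows the concentration build-up and keeps the growth linear for longer.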

  9. Conceptual and methodological issues in epidemiology: An overview.

    Science.gov (United States)

    Broadbent, Alex

    2011-10-01

    In 2010 a series of workshops on philosophical and methodological issues in epidemiology was held at the University of Cambridge. The papers in this volume arise from those workshops. This paper represents an effort to identify, in broad brush, some of the major conceptual and methodological issues in epidemiology, which form the basis of an emerging focus on the philosophy of epidemiology.

  10. Open verification methodology cookbook

    CERN Document Server

    Glasser, Mark

    2009-01-01

    Functional verification is an art as much as a science. It requires not only creativity and cunning, but also a clear methodology to approach the problem. The Open Verification Methodology (OVM) is a leading-edge methodology for verifying designs at multiple levels of abstraction. It brings together ideas from electrical, systems, and software engineering to provide a complete methodology for verifying large scale System-on-Chip (SoC) designs. OVM defines an approach for developing testbench architectures so they are modular, configurable, and reusable. This book is designed to help both novic

  11. Calibration of the Capintec CRC-712M dose calibrator for (18)F.

    Science.gov (United States)

    Mo, L; Reinhard, M I; Davies, J B; Alexiev, D; Baldock, C

    2006-04-01

    Primary standardisation was performed on a solution of (18)F using the 4pibeta-gamma coincidence counting efficiency-tracing extrapolation method with (60)Co used as a tracer nuclide. The result was used to calibrate the ANSTO secondary standard ionisation chamber, which is used to disseminate Australian activity standards for gamma emitters. Using the secondary activity standard for (18)F, the Capintec CRC-712M dose calibrator at the Australian National Medical Cyclotron (NMC) Positron Emission Tomography (PET) Quality Control (QC) Section was calibrated. The dial setting number recommended by the manufacturer for the measurement of the activity of (18)F is 439. In this work, the dial setting numbers for the activity measurement of the solution of (18)F in Wheaton vials were experimentally determined to be 443+/-12, 446+/-12, 459+/-11 and 473+/-15 for 0.1, 1, 4.5 and 9 ml solution volumes, respectively. The uncertainties given above are expanded uncertainties (k=2), giving an estimated level of confidence of 95%. The activities determined using the manufacturer-recommended setting number 439 are 0.8%, 1.4%, 4.0% and 6.5% higher than the standardised activities, respectively. It is recommended that the single dial setting number of 459, determined for 4.5 ml, be used for 0.1-9 ml solutions in Wheaton vials in order to simplify the operation procedure. With this setting, the expanded uncertainty (k=2) in the activity readout from the Capintec dose calibrator would be less than 6.2%.
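
    The volume dependence of the dial setting reported above lends itself to a simple interpolation between the published points (443, 446, 459 and 473 at 0.1, 1, 4.5 and 9 ml). The sketch below is only a convenience lookup over those published values, not a substitute for the recommended single setting or the manufacturer's procedure.

```python
import numpy as np

# Published dial settings vs. Wheaton-vial solution volume (from the abstract)
volumes_ml = np.array([0.1, 1.0, 4.5, 9.0])
dial_settings = np.array([443.0, 446.0, 459.0, 473.0])

def dial_for_volume(v_ml):
    """Linearly interpolated dial setting for a given solution volume (ml)."""
    return float(np.interp(v_ml, volumes_ml, dial_settings))
```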

  12. On the calibration of polarimetric Thomson scattering by Raman polarimetry

    Science.gov (United States)

    Giudicotti, L.; Pasqualotto, R.

    2015-12-01

    Polarimetric Thomson scattering (TS) is an alternative method for the analysis of Thomson scattering spectra in which the plasma temperature T e is determined from the depolarization of the TS radiation. This is a relativistic effect and therefore the technique is suitable only for very hot plasmas (T e  >  10 keV) such as those of ITER. The practical implementation of polarimetric TS requires a method to calibrate the polarimetric response of the collection optics carrying the TS light to the detection system, and in particular to measure the additional depolarization of the TS radiation introduced by the plasma-exposed first mirror. Rotational Raman scattering of laser light from diatomic gases such as H2, D2, N2 and O2 can provide a radiation source of predictable intensity and polarization state from a well-defined volume inside the vacuum vessel and is therefore suitable for these calibrations. In this paper we discuss Raman polarimetry as a technique for the calibration of a hypothetical polarimetric TS system operating in the same conditions of the ITER core TS system and suggest two calibration methods for the measurement of the additional depolarization introduced by the plasma-exposed first mirror, and in general for calibrating the polarimetric response of the detection system.

  13. Top-down methodology for rainfall-runoff modelling and evaluation of hydrological extremes

    Science.gov (United States)

    Willems, Patrick

    2014-05-01

    A top-down methodology is presented for the implementation and calibration of a lumped conceptual catchment rainfall-runoff model that aims to produce high model performance (depending on the quality and availability of data) in terms of rainfall-runoff discharges over the full range from low to high discharges, including the peak and low flow extremes. The model is to be used to support water engineering applications, which most often deal with high and low flows as well as cumulative runoff volumes. With this application in mind, the paper aims to contribute to the above-mentioned problems and advancements in model evaluation, model-structure selection, the overparameterization problem, and the long time the modeller needs to invest or the difficulties one encounters when building and calibrating a lumped conceptual model for a river catchment. The methodology is an empirical and step-wise technique that includes examination of the various model components step by step through a data-based analysis of response characteristics. The approach starts from a generalized lumped conceptual model structure. In this structure, only the general components of a lumped conceptual model, such as the existence of storage and routing elements and their inter-links, are pre-defined. The detailed specifications of model equations and parameters are supported by advanced time series analysis of the empirical response between the rainfall and evapotranspiration inputs and the river flow output. Subresponses are separated, and submodel components and related subsets of parameters are calibrated as independently as possible. At the same time, the model-structure identification process aims to reach parsimonious submodel structures and accounts for the serial dependency of runoff values, which typically is higher for low flows than for high flows. It also accounts for the heteroscedasticity and dependency of model residuals when evaluating the model performance. 
It is shown that this step
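
    A minimal example of one such storage-and-routing element is a single linear reservoir, in which outflow is proportional to storage. The sketch below is a generic illustration of that submodel class; the recession constant and the inputs are arbitrary, not values identified by the paper's time series analysis.

```python
def linear_reservoir(rain, k=10.0, s0=0.0):
    """Route a rainfall series (one value per time step) through a single
    linear reservoir: outflow q = S / k, with storage S updated each step."""
    s, flows = s0, []
    for p in rain:
        s += p              # add rainfall input to storage
        q = s / k           # linear outflow, recession constant k
        s -= q
        flows.append(q)
    return flows

# A 5 mm pulse followed by dry steps produces an exponential recession
flows = linear_reservoir([5.0, 0.0, 0.0, 0.0])
```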

  14. A Kinematic Calibration Process for Flight Robotic Arms

    Science.gov (United States)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.

  15. Drift-insensitive distributed calibration of probe microscope scanner in nanometer range: Virtual mode

    Science.gov (United States)

    Lapshin, Rostislav V.

    2016-08-01

    A method of distributed calibration of a probe microscope scanner is suggested. The main idea consists in a search for a net of local calibration coefficients (LCCs) in the process of automatic measurement of a standard surface, whereby each point of the movement space of the scanner can be characterized by a unique set of scale factors. Feature-oriented scanning (FOS) methodology is used as a basis for implementation of the distributed calibration permitting to exclude in situ the negative influence of thermal drift, creep and hysteresis on the obtained results. Possessing the calibration database enables correcting in one procedure all the spatial systematic distortions caused by nonlinearity, nonorthogonality and spurious crosstalk couplings of the microscope scanner piezomanipulators. To provide high precision of spatial measurements in nanometer range, the calibration is carried out using natural standards - constants of crystal lattice. One of the useful modes of the developed calibration method is a virtual mode. In the virtual mode, instead of measurement of a real surface of the standard, the calibration program makes a surface image "measurement" of the standard, which was obtained earlier using conventional raster scanning. The application of the virtual mode permits simulation of the calibration process and detail analysis of raster distortions occurring in both conventional and counter surface scanning. Moreover, the mode allows to estimate the thermal drift and the creep velocities acting while surface scanning. Virtual calibration makes possible automatic characterization of a surface by the method of scanning probe microscopy (SPM).
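
    The idea of a net of local calibration coefficients can be illustrated with a bilinear lookup over a regular grid of scale factors. The grid, its pitch and the interpolation scheme below are illustrative assumptions, not the feature-oriented-scanning implementation described in the paper.

```python
import numpy as np

def corrected_point(x, y, kx_grid, ky_grid, pitch):
    """Scale a raw scanner position (x, y) by local calibration
    coefficients bilinearly interpolated from a regular grid."""
    i, j = int(y // pitch), int(x // pitch)
    ty, tx = (y % pitch) / pitch, (x % pitch) / pitch

    def bilerp(g):
        return ((1 - ty) * ((1 - tx) * g[i, j] + tx * g[i, j + 1])
                + ty * ((1 - tx) * g[i + 1, j] + tx * g[i + 1, j + 1]))

    return x * bilerp(kx_grid), y * bilerp(ky_grid)

kx = np.ones((3, 3))            # toy 3x3 net: no correction in x
ky = np.full((3, 3), 1.02)      # uniform 2% scale error in y
cx, cy = corrected_point(10.0, 10.0, kx, ky, pitch=50.0)
```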

  16. The KLOE Online Calibration System

    Institute of Scientific and Technical Information of China (English)

    E.Pasqualucci

    2001-01-01

    Based on all the features of the KLOE online software, the online calibration system checks the quality of the current calibration in real time and automatically starts new calibration procedures when needed. A calibration manager process controls the system, implementing the interface to the online system, receiving information from the run control and translating its state transitions to a separate state machine. It acts as a "calibration run controller" and performs failure recovery when requested by a set of process checkers. The core of the system is a multi-threaded OO histogram server that receives histogramming commands from remote processes and operates on local ROOT histograms. A client library and C, Fortran and C++ application interface libraries allow users to connect and define their own histograms, or read histograms owned by others, through an HBOOK-like interface. Several calibration processes running in parallel in a distributed, multi-platform environment can fill the same histograms, allowing fast external information checks. A monitor thread allows remote browsing for visual inspection. Pre-filtered data are read in non-privileged spy mode from the data acquisition system via the KLOE Integrated Dataflow. The main characteristics of the system are presented.

  17. Data Centric Development Methodology

    Science.gov (United States)

    Khoury, Fadi E.

    2012-01-01

    Data centric applications, an important effort of software development in large organizations, have been mostly adopting a software methodology, such as a waterfall or Rational Unified Process, as the framework for its development. These methodologies could work on structural, procedural, or object oriented based applications, but fails to capture…

  18. Creativity in phenomenological methodology

    DEFF Research Database (Denmark)

    Dreyer, Pia; Martinsen, Bente; Norlyk, Annelise

    2014-01-01

    on the methodologies of van Manen, Dahlberg, Lindseth & Norberg, the aim of this paper is to argue that the increased focus on creativity and arts in research methodology is valuable to gain a deeper insight into lived experiences. We illustrate this point through examples from empirical nursing studies, and discuss...

  19. The Methodology of Magpies

    Science.gov (United States)

    Carter, Susan

    2014-01-01

    Arts/Humanities researchers frequently do not explain methodology overtly; instead, they "perform" it through their use of language, textual and historic cross-reference, and theory. Here, methodologies from literary studies are shown to add to Higher Education (HE) an exegetical and critically pluralist approach. This includes…

  20. Menopause and Methodological Doubt

    Science.gov (United States)

    Spence, Sheila

    2005-01-01

    Menopause and methodological doubt begins by making a tongue-in-cheek comparison between Descartes' methodological doubt and the self-doubt that can arise around menopause. A hermeneutic approach is taken in which Cartesian dualism and its implications for the way women are viewed in society are examined, both through the experiences of women…

  1. The Methodology of Magpies

    Science.gov (United States)

    Carter, Susan

    2014-01-01

    Arts/Humanities researchers frequently do not explain methodology overtly; instead, they "perform" it through their use of language, textual and historic cross-reference, and theory. Here, methodologies from literary studies are shown to add to Higher Education (HE) an exegetical and critically pluralist approach. This includes…

  3. VEM: Virtual Enterprise Methodology

    DEFF Research Database (Denmark)

    Tølle, Martin; Vesterager, Johan

    2003-01-01

    This chapter presents a virtual enterprise methodology (VEM) that outlines activities to consider when setting up and managing virtual enterprises (VEs). As a methodology the VEM helps companies to ask the right questions when preparing for and setting up an enterprise network, which works as a b...

  4. Menopause and Methodological Doubt

    Science.gov (United States)

    Spence, Sheila

    2005-01-01

    Menopause and methodological doubt begins by making a tongue-in-cheek comparison between Descartes' methodological doubt and the self-doubt that can arise around menopause. A hermeneutic approach is taken in which Cartesian dualism and its implications for the way women are viewed in society are examined, both through the experiences of women…

  5. Rapid Dialogue Prototyping Methodology

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Sojka, P.; Rajman, M.; Kopecek, I.; Melichar, M.; Pala, K.

    2004-01-01

    This paper is about the automated production of dialogue models. The goal is to propose and validate a methodology that allows the production of finalized dialogue models (i.e. dialogue models specific for given applications) in a few hours. The solution we propose for such a methodology, called the

  6. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board.

    Science.gov (United States)

    Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae

    2014-03-17

    Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.
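
    The "meeting point of two projected sides" construction can be illustrated in the board plane: fit a line to the scanned points on each of two adjacent sides and intersect the fitted lines. This 2D sketch with made-up points only shows the geometric idea; the paper works with 3D laser returns and known side lengths.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit: returns (centroid, unit direction)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]

def intersect(c1, d1, c2, d2):
    """Intersection of the 2D lines c1 + t*d1 and c2 + s*d2."""
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), c2 - c1)
    return c1 + t * d1

side_a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # points on y = x
side_b = np.array([[4.0, 2.0], [5.0, 1.0], [6.0, 0.0]])  # points on y = 6 - x
vertex = intersect(*fit_line(side_a), *fit_line(side_b))  # near (3, 3)
```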

  7. The RAAF Logistics Study. Volume 4,

    Science.gov (United States)

    1986-10-01

    Use of Issue-Based Root Definitions; Application of Soft Systems Methodology to Information Systems Analysis; Conclusion; List of Abbreviations ... 'Management Control Systems', Journal of Applied Systems Analysis, Volume 6, 1979, pages 51 to 67. The soft systems methodology was developed to tackle ... while the soft systems methodology has many advantages which recommend it to this type of study area, it does not model the time evolution of a system ...

  8. On chromatic and geometrical calibration

    DEFF Research Database (Denmark)

    Folm-Hansen, Jørgen

    1999-01-01

    The main subject of the present thesis is different methods for the geometrical and chromatic calibration of cameras in various environments. For the monochromatic issues of the calibration we present the acquisition of monochrome images, the classic monochrome aberrations and the various sources of non-uniformity of the illumination of the image plane. Only the image-deforming aberrations and the non-uniformity of illumination are included in the calibration models. The topics of the pinhole camera model and the extension to the Direct Linear Transform (DLT) are described. It is shown how the DLT can be extended with non-linear models of the common lens aberrations/errors, some of them caused by manufacturing defects like decentering and thin prism distortion. The relation between a warping and the non-linear defects is shown. The issue of making a good resampling of an image by using...
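
    The basic DLT estimates the 3x4 projection matrix from world/image point pairs by solving a homogeneous linear system; the non-linear aberration terms discussed in the thesis are extensions on top of this. A minimal sketch with a hypothetical synthetic camera:

```python
import numpy as np

def dlt(world, image):
    """Estimate the 3x4 projection matrix P from n >= 6 correspondences
    by taking the least-squares null vector of the stacked DLT equations."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with P, returning pixel coordinates."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]

# Hypothetical synthetic camera and six non-coplanar points
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 5.0]])
world = [(0, 0, 2), (1, 0, 3), (0, 1, 3), (1, 1, 2), (0.5, 0.5, 4), (-1, 1, 3)]
image = [project(P_true, X) for X in world]
P_est = dlt(world, image)
```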

  9. Reduced Ambiguity Calibration for LOFAR

    CERN Document Server

    Yatawatta, Sarod

    2012-01-01

    Interferometric calibration always yields non-unique solutions. It is therefore essential to remove these ambiguities before the solutions can be used in any further modeling of the sky, the instrument or propagation effects such as the ionosphere. We present a method for LOFAR calibration which does not yield a unitary ambiguity, especially under ionospheric distortions. We also present the exact ambiguities we get in our solutions, in closed form. Casting this as an optimization problem, we also present conditions for this approach to work. The proposed method enables us to use the solutions obtained via calibration for further modeling of instrumental and propagation effects. We provide extensive simulation results on the performance of our method. Moreover, we also give cases where, due to degeneracy, this method fails to perform as expected, and in such cases we suggest exploiting diversity in time, space and frequency.

  10. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values....... Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for ultimate...... and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes....
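
    The link between a FORM result and a partial safety factor mentioned above can be sketched for a normally distributed variable, where the design value is x_d = mu*(1 - alpha*beta*V) and the safety factor is the ratio of design to characteristic value. All numbers below are illustrative, not JCSS-recommended values.

```python
def partial_safety_factor(mean, cov, alpha, beta, x_char):
    """Partial safety factor gamma = x_d / x_k for a normal variable.

    FORM design value: x_d = mean * (1 - alpha * beta * cov), with a
    negative sensitivity factor alpha for a load variable."""
    x_d = mean * (1.0 - alpha * beta * cov)
    return x_d / x_char

# Illustrative load: mean 100, 10% coefficient of variation, a high-fractile
# characteristic value of 120.5, alpha = -0.7, annual target beta = 4.2
gamma = partial_safety_factor(100.0, 0.10, -0.7, 4.2, 120.5)
```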

  11. Flexible calibration procedure for fringe projection profilometry

    OpenAIRE

    Vargas, Javier; Quiroga Mellado, Juan Antonio; Terrón López, María José

    2007-01-01

    A novel calibration method for whole-field three-dimensional shape measurement by means of fringe projection is presented. Standard calibration techniques, polynomial- and model-based, have practical limitations such as the difficulty of measuring large fields of view, the need to use precise z stages, and bad calibration results due to inaccurate calibration points. The proposed calibration procedure is a mixture of the two main standard techniques, sharing their benefits and avoiding their m...

  12. A Study of IR Loss Correction Methodologies for Commercially Available Pyranometers

    Energy Technology Data Exchange (ETDEWEB)

    Long, Chuck; Andreas, Afshin; Augustine, John; Dooraghi, Mike; Habte, Aron; Hall, Emiel; Kutchenreiter, Mark; McComiskey, Allison; Reda, Ibrahim; Sengupta, Manajit

    2017-03-24

    This presentation provides a high-level overview of a study of IR loss correction methodologies for commercially available pyranometers. The IR Loss Corrections Study is investigating how various correction methodologies work for several makes and models of commercially available pyranometers in common use, both when operated in ventilators with DC fans and without ventilators, as they are typically calibrated.

  13. Comparison and uncertainty evaluation of different calibration protocols and ionization chambers for low-energy surface brachytherapy dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Candela-Juan, C., E-mail: ccanjuan@gmail.com [Radiation Oncology Department, La Fe University and Polytechnic Hospital, Valencia 46026 (Spain); Vijande, J. [Department of Atomic, Molecular, and Nuclear Physics, University of Valencia, Burjassot 46100, Spain and Instituto de Física Corpuscular (UV-CSIC), Paterna 46980 (Spain); García-Martínez, T. [Radiation Oncology Department, Hospital La Ribera, Alzira 46600 (Spain); Niatsetski, Y.; Nauta, G.; Schuurman, J. [Elekta Brachytherapy, Veenendaal 3905 TH (Netherlands); Ouhib, Z. [Radiation Oncology Department, Lynn Regional Cancer Center, Boca Raton Community Hospital, Boca Raton, Florida 33486 (United States); Ballester, F. [Department of Atomic, Molecular, and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Perez-Calatayud, J. [Radiation Oncology Department, La Fe University and Polytechnic Hospital, Valencia 46026, Spain and Department of Radiotherapy, Clínica Benidorm, Benidorm 03501 (Spain)

    2015-08-15

    Purpose: A surface electronic brachytherapy (EBT) device is in fact an x-ray source collimated with specific applicators. Low-energy (<100 kVp) x-ray beam dosimetry faces several challenges that need to be addressed. A number of calibration protocols have been published for x-ray beam dosimetry. The media in which measurements are performed are the fundamental difference between them. The aim of this study was to evaluate the surface dose rate of a low-energy x-ray source with small field applicators using different calibration standards and different small-volume ionization chambers, comparing the values and uncertainties of each methodology. Methods: The surface dose rate of the EBT unit Esteya (Elekta Brachytherapy, The Netherlands), a 69.5 kVp x-ray source with applicators of 10, 15, 20, 25, and 30 mm diameter, was evaluated using the AAPM TG-61 (based on air kerma) and International Atomic Energy Agency (IAEA) TRS-398 (based on absorbed dose to water) dosimetry protocols for low-energy photon beams. A plane parallel T34013 ionization chamber (PTW Freiburg, Germany) calibrated in terms of both absorbed dose to water and air kerma was used to compare the two dosimetry protocols. Another PTW chamber of the same model was used to evaluate the reproducibility between these chambers. Measurements were also performed with two different Exradin A20 (Standard Imaging, Inc., Middleton, WI) chambers calibrated in terms of air kerma. Results: Differences between surface dose rates measured in air and in water using the T34013 chamber range from 1.6% to 3.3%. No field size dependence has been observed. Differences are below 3.7% when measurements with the A20 and the T34013 chambers calibrated in air are compared. Estimated uncertainty (with coverage factor k = 1) for the T34013 chamber calibrated in water is 2.2%–2.4%, whereas it increases to 2.5% and 2.7% for the A20 and T34013 chambers calibrated in air, respectively. 
The output factors, measured with the PTW chambers

  14. Tank calibration; Arqueacao de tanques

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Ana [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2003-07-01

    This work presents an analysis of the ISO (International Organization for Standardization) standards for the calibration of vertical cylindrical tanks used in fiscal measurement, established in Joint Regulation no. 1 of June 19, 2000 between the ANP (National Agency of Petroleum) and INMETRO (National Institute of Metrology, Normalization and Industrial Quality). A comparison was made between the ISO standards and the standards published by the API (American Petroleum Institute) and the IP (Institute of Petroleum) up to 2001. It was concluded that the ISO standards are broader than the API and IP standards and the INMETRO methods for the calibration of vertical cylindrical tanks. (author)

  15. Instrument Calibration and Certification Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Davis, R. Wesley [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-31

    The Amptec 640SL-2 is a 4-wire Kelvin failsafe resistance meter, designed to reliably use very low-test currents for its resistance measurements. The 640SL-1 is a 2-wire version, designed to support customers using the Reynolds Industries type 311 connector. For both versions, a passive (analog) dual function DC Milliameter/Voltmeter allows the user to verify the actual 640SL output current level and the open circuit voltage on the test leads. This procedure includes tests of essential performance parameters. Any malfunction noticed during calibration, whether specifically tested for or not, shall be corrected before calibration continues or is completed.

  16. Performance standard for dose Calibrator

    CERN Document Server

    Darmawati, S

    2002-01-01

    A dose calibrator is an instrument used in hospitals to determine the activity of radionuclides for nuclear medicine purposes. The International Electrotechnical Commission (IEC) has published the IEC 1303:1994 standard, which can be used as guidance to test the performance of the instrument. This paper briefly describes the content of the document and explains the assessment carried out in Indonesia to test instrument accuracy through intercomparison measurements. It is suggested that hospitals engage a medical physicist to perform the test on their dose calibrators. The need for a performance standard in the form of an Indonesian Standard is also touched upon.

  17. An attempt to calibrate the UHF strato-tropospheric radar at Arecibo using NexRad radar and disdrometer data

    Directory of Open Access Journals (Sweden)

    P. Kafando

    2004-12-01

    Full Text Available The goal of this paper is to present a methodology to calibrate the reflectivity of the UHF Strato-Tropospheric (ST) radar located at NAIC in Puerto Rico. The lowest relevant UHF altitude is 5.9 km, the melting layer being at around 4.8 km. The data used for the calibration came from observations of clouds carried out with Strato-Tropospheric dual-wavelength (UHF and VHF) radars and a disdrometer, these instruments being located at the NAIC site in Arecibo, Puerto Rico. The National Weather Service operates other instruments, such as radiosondes and the NexRad radar, at other sites.

    The proposed method proceeds in two steps. The first consists of a comparison between the NexRad reflectivity and the reflectivity computed from the drop size distributions measured by the disdrometer for one day with a noticeable rainfall rate. In spite of the distance between the two instruments, the agreement between their reflectivities is good enough to be used as a reference for the UHF ST radar. The error relative to each data set is found to be 2.75 dB for the disdrometer and 4 dB for the NexRad radar, following the approach of Hocking et al. (2001). The mismatch between the two sampled volumes is an important contribution to the errors.
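
    The disdrometer side of this first comparison is the classic sixth-moment reflectivity computed from the measured drop size distribution, Z = sum N(D) D^6 dD in the Rayleigh regime, expressed in dBZ. The bin values below are made up for illustration.

```python
import numpy as np

def reflectivity_dbz(diam_mm, n_per_m3_mm, bin_width_mm):
    """Radar reflectivity factor Z (mm^6 m^-3) from DSD bins via the
    sixth moment of the drop size distribution, returned in dBZ."""
    z = np.sum(n_per_m3_mm * diam_mm ** 6 * bin_width_mm)
    return 10.0 * np.log10(z)

d = np.array([0.5, 1.0, 1.5, 2.0, 2.5])         # bin centres, mm
n = np.array([800.0, 300.0, 80.0, 15.0, 2.0])   # N(D), m^-3 mm^-1
z_dbz = reflectivity_dbz(d, n, bin_width_mm=0.5)
```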

    The second step consists of a comparison between the NexRad radar reflectivity and the UHF non-calibrated reflectivity at the 4 altitudes of common observation during one event on 15 October 1998. Similar features are observed and a coefficient is deduced. An offset of around 4.7 dB is observed and the correlation factor lies between 0.628 and 0.730. According to the errors of the data sets, the precision of the calibration is of the order of 2 dB. This method works only when there are precipitation hydrometeors above the NAIC site. However, the result of the calibration could be applied to other data obtained during the campaign, the only

  18. The Calibration of a Low-Frequency Calibrating System

    Science.gov (United States)

    1952-08-20

    to measure the pressure changes, because such changes were too small to be measured with a mercury manometer or conventional gage. ... the volume displacement of the water. If the chamber is infinitely stiff, all the volume displacement will be passed through the mercury manometer, causing a

  19. Novel calibration method for structured-light system with an out-of-focus projector.

    Science.gov (United States)

    Li, Beiwen; Karpinsky, Nikolaus; Zhang, Song

    2014-06-01

    A structured-light system with a binary defocusing technique has the potential for more extensive application due to its high speed, gamma-calibration-free nature, and lack of rigid synchronization requirements between the camera and projector. However, the existing calibration methods fail to achieve high accuracy for a structured-light system with an out-of-focus projector. This paper proposes a method that can accurately calibrate a structured-light system even when the projector is not in focus, making high-accuracy and high-speed measurement with the binary defocusing method possible. Experiments demonstrate that our calibration approach performs consistently under different degrees of defocusing, and a root-mean-square error of about 73 μm can be achieved with a calibration volume of 150 (H) mm × 250 (W) mm × 200 (D) mm.

  20. Single Image Camera Calibration in Close Range Photogrammetry for Solder Joint Analysis

    Science.gov (United States)

    Heinemann, D.; Knabner, S.; Baumgarten, D.

    2016-06-01

    Printed Circuit Boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct function of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close range photogrammetry allows for determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured, which allows for single image camera calibration.

  1. DART II documentation. Volume III. Appendices

    Energy Technology Data Exchange (ETDEWEB)

    1979-10-01

The DART II is a remote, interactive, microprocessor-based data acquisition system suitable for use with air monitors. This volume of DART II documentation contains the following appendices: adjustment and calibration procedures; mother board signature list; schematic diagrams; device specification sheets; ROM program listing; 6800 microprocessor instruction list, octal listing; and cable lists. (RWR)

  2. Practical intraoperative stereo camera calibration.

    Science.gov (United States)

    Pratt, Philip; Bergeles, Christos; Darzi, Ara; Yang, Guang-Zhong

    2014-01-01

    Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Invalidating any prior calibration procedure, this presents a significant problem for image guidance applications as they typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.

  3. Scalar Calibration of Vector Magnetometers

    DEFF Research Database (Denmark)

    Merayo, José M.G.; Brauer, Peter; Primdahl, Fritz;

    2000-01-01

    The calibration parameters of a vector magnetometer are estimated only by the use of a scalar reference magnetometer. The method presented in this paper differs from those previously reported in its linearized parametrization. This allows the determination of three offsets or signals in the absence...

  4. Laboratory panel and radiometer calibration

    CSIR Research Space (South Africa)

    Deadman, AJ

    2011-07-01

Full Text Available A.J. Deadman, I.D. Behnert, N.P. Fox (National Physical Laboratory (NPL), United Kingdom) and D. Griffith (Council for Scientific and Industrial Research (CSIR), South Africa). This paper presents the results...

  5. CALIBRATION OF THE INFRARED OPTOMETER

    Science.gov (United States)

    An infrared optometer for measuring the absolute status of accommodation is subject to a constant error not associated with chromatic aberration or...on optometer accuracy as long as the pupil does not vignette the optometer beam. A modification is described for calibrating the infrared optometer ...for an individual subject without using trial lenses or a subjective optometer . (Author)

  6. Measurement System and Calibration report

    DEFF Research Database (Denmark)

    Kock, Carsten Weber; Vesth, Allan

This Measurement System & Calibration report describes DTU’s measurement system installed at a specific wind turbine. A major part of the sensors has been installed by others (see [1]); the rest of the sensors have been installed by DTU. The results of the measurements, described in this report

  7. Measurement System and Calibration report

    DEFF Research Database (Denmark)

    Vesth, Allan; Kock, Carsten Weber

The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance with Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present

  8. Measurement System and Calibration report

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Villanueva, Héctor

This Measurement System & Calibration report describes DTU’s measurement system installed at a specific wind turbine. A major part of the sensors has been installed by others (see [1]); the rest of the sensors have been installed by DTU. The results of the measurements, described in this report

  9. Design Methodology - Design Synthesis

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup

    2003-01-01

ABSTRACT Design Methodology shall be seen as our understanding of how to design; it is an early (emerging in the late 1960s) and original articulation of teachable and learnable methodics. The insight is based upon two sources: the nature of the designed artefacts and the nature of human designing. Today...... Design Methodology is part of our practice and our knowledge about designing, and it has been strongly supported by the establishment and work of a design research community. The aim of this article is to broaden the reader's view of designing and Design Methodology. This is done by sketching...... the development of Design Methodology through time and sketching some important approaches and methods. The development is mainly forced by changing industrial conditions, by the growth of IT support for designing, but also by the growth of insight into designing created by design researchers....

  10. Transparent Guideline Methodology Needed

    DEFF Research Database (Denmark)

    Lidal, Ingeborg; Norén, Camilla; Mäkelä, Marjukka

    2013-01-01

    Group.2 Similar criteria for guideline quality have been suggested elsewhere.3 Our conclusion was that this much needed guideline is currently unclear about several aspects of the methodology used in developing the recommendations. This means potential users cannot be certain that the recommendations...... are based on best currently available evidence. Our concerns are in two main categories: the rigor of development, including methodology of searching, evaluating, and combining the evidence; and editorial independence, including funding and possible conflicts of interest....

  11. Timing calibration and spectral cleaning of LOFAR time series data

    Science.gov (United States)

    Corstanje, A.; Buitink, S.; Enriquez, J. E.; Falcke, H.; Hörandel, J. R.; Krause, M.; Nelles, A.; Rachen, J. P.; Schellart, P.; Scholten, O.; ter Veen, S.; Thoudam, S.; Trinh, T. N. G.

    2016-05-01

    We describe a method for spectral cleaning and timing calibration of short time series data of the voltage in individual radio interferometer receivers. It makes use of phase differences in fast Fourier transform (FFT) spectra across antenna pairs. For strong, localized terrestrial sources these are stable over time, while being approximately uniform-random for a sum over many sources or for noise. Using only milliseconds-long datasets, the method finds the strongest interfering transmitters, a first-order solution for relative timing calibrations, and faulty data channels. No knowledge of gain response or quiescent noise levels of the receivers is required. With relatively small data volumes, this approach is suitable for use in an online system monitoring setup for interferometric arrays. We have applied the method to our cosmic-ray data collection, a collection of measurements of short pulses from extensive air showers, recorded by the LOFAR radio telescope. Per air shower, we have collected 2 ms of raw time series data for each receiver. The spectral cleaning has a calculated optimal sensitivity corresponding to a power signal-to-noise ratio of 0.08 (or -11 dB) in a spectral window of 25 kHz, for 2 ms of data in 48 antennas. This is well sufficient for our application. Timing calibration across individual antenna pairs has been performed at 0.4 ns precision; for calibration of signal clocks across stations of 48 antennas the precision is 0.1 ns. Monitoring differences in timing calibration per antenna pair over the course of the period 2011 to 2015 shows a precision of 0.08 ns, which is useful for monitoring and correcting drifts in signal path synchronizations. A cross-check method for timing calibration is presented, using a pulse transmitter carried by a drone flying over the array. Timing precision is similar, 0.3 ns, but is limited by transmitter position measurements, while requiring dedicated flights.
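The phase-stability test at the heart of this method can be sketched in a few lines: for each antenna pair and frequency bin, average the unit phasors of the cross-spectrum over time blocks; a localized transmitter yields a mean resultant length near 1, while noise averages toward roughly 1/sqrt(n_blocks). The function and the synthetic two-antenna example below are an illustrative assumption, not LOFAR pipeline code.

```python
import numpy as np

def find_rfi_bins(x1, x2, block_len=1024, threshold=0.5):
    """Flag frequency bins whose cross-antenna phase difference is stable
    across time blocks -- the signature of a localized transmitter, as
    opposed to noise, whose phase differences are uniform-random."""
    n_blocks = len(x1) // block_len
    phasors = []
    for b in range(n_blocks):
        s = slice(b * block_len, (b + 1) * block_len)
        d = np.fft.rfft(x1[s]) * np.conj(np.fft.rfft(x2[s]))
        phasors.append(d / (np.abs(d) + 1e-30))  # keep only the phase
    # Mean resultant length: ~1 for a stable phase, ~1/sqrt(n_blocks) for noise.
    stability = np.abs(np.mean(phasors, axis=0))
    return stability > threshold

# Synthetic check (illustrative, not LOFAR data): two noisy antennas
# sharing one coherent transmitter tone centred exactly on FFT bin 102.
rng = np.random.default_rng(0)
n, block_len = 32 * 1024, 1024
t = np.arange(n)
tone = np.sin(2 * np.pi * (102 / block_len) * t)
x1 = tone + rng.normal(size=n)
x2 = np.roll(tone, 3) + rng.normal(size=n)  # same tone, fixed cable delay
flags = find_rfi_bins(x1, x2, block_len)
```

Because no knowledge of receiver gains is needed, only phase ratios, this mirrors the paper's point that the method works without calibrated gain response or noise levels.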

  12. Calibrating Alonso's General Theory of Movement: the Case of Inter-Provincial Migration Flows in Canada

    OpenAIRE

    J Ledent

    1980-01-01

    First, it is shown that Alonso's general theory of movement relies on a standard doubly-constrained spatial interaction model which subsumes the usual gravity and entropy-derived formulations. Such a finding then suggests the use of a biproportional adjustment method (RAS method) to adequately estimate the systemic variables specified in the underlying model. This eventually leads to the development of a complete and precise methodology for calibrating the Alonso model. This methodology is il...
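The RAS (biproportional) adjustment referred to above alternates row and column scalings of a seed matrix until its margins match the target totals. A minimal sketch with hypothetical flow numbers (the function and data are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def ras(seed, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) adjustment: alternately scale rows and
    columns of a positive seed matrix until its margins match the
    target totals (which must share the same grand total)."""
    m = seed.astype(float).copy()
    for _ in range(max_iter):
        m *= (row_targets / m.sum(axis=1))[:, None]   # row (R) step
        m *= (col_targets / m.sum(axis=0))[None, :]   # column (S) step
        if np.abs(m.sum(axis=1) - row_targets).max() < tol:
            break
    return m

# Hypothetical 3x3 inter-provincial flow seed with required margins.
seed = np.array([[10.0, 5.0, 2.0],
                 [3.0, 8.0, 4.0],
                 [6.0, 1.0, 9.0]])
origin_totals = np.array([20.0, 15.0, 13.0])
destination_totals = np.array([14.0, 16.0, 18.0])
flows = ras(seed, origin_totals, destination_totals)
```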

  13. Calibration of Diamond As a Raman Spectroscopy Pressure Sensor

    Science.gov (United States)

    Ono, S.

    2014-12-01

At high pressures and high temperatures, the equations of state of reference materials, such as gold, platinum, and sodium chloride, have usually been used for the precise determination of the sample pressure. However, it is difficult to use this technique in laboratory-based experiments, because a synchrotron radiation source is often required. Although the fluorescence of ruby has been commonly used as the pressure sensor in previous laboratory-based experiments, it is impracticable at high temperatures. The first-order Raman mode of the diamond anvil has been considered a strong candidate because its Raman signal is intense and diamond is always present as the anvil material. The purpose of this study is to present the pressure and temperature dependences of the Raman shift at the culet face of the diamond anvil. Gold powder, mixed with NaCl powder, was used as the pressure reference. The high-pressure and high-temperature experiments were performed using a hydrothermal diamond anvil cell (HTDAC). The sample was probed using angle-dispersive X-ray diffraction and a Raman spectrometer system located at synchrotron beamline BL10XU of SPring-8. The pressure was determined from the unit-cell volume of gold using the equation of state for gold. The pressure and temperature dependences of the Raman shift were investigated [1]. The difference between our study and previous studies increased rapidly with increasing pressure above 50 GPa, which is a fatal uncertainty for the pressure calibration. One possible explanation for this inconsistency is the influence of the stress condition in the sample chamber, because a significant deviatoric stress accumulates during compression. The stress condition of the DAC experiment on the generated pressure is complicated because of several factors (e.g., the crystallographic orientation, design of the anvil, size of the culet, pressure-transmitting medium, gasket material, and

  14. An estimate of global glacier volume

    Directory of Open Access Journals (Sweden)

    A. Grinsted

    2013-01-01

    Full Text Available I assess the feasibility of using multivariate scaling relationships to estimate glacier volume from glacier inventory data. Scaling laws are calibrated against volume observations optimized for the specific purpose of estimating total global glacier ice volume. I find that adjustments for continentality and elevation range improve skill of area–volume scaling. These scaling relationships are applied to each record in the Randolph Glacier Inventory, which is the first globally complete inventory of glaciers and ice caps. I estimate that the total volume of all glaciers in the world is 0.35 ± 0.07 m sea level equivalent, including ice sheet peripheral glaciers. This is substantially less than a recent state-of-the-art estimate. Area–volume scaling bias issues for large ice masses, and incomplete inventory data are offered as explanations for the difference.
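Area-volume scaling of this kind takes the power-law form V = c·A^γ per glacier, summed over the inventory and converted to sea-level equivalent. The sketch below uses illustrative coefficient values of the kind fitted in scaling studies; it is not the paper's calibrated multivariate relationship (which also adjusts for continentality and elevation range).

```python
import numpy as np

def glacier_volume_km3(area_km2, c=0.0433, gamma=1.29):
    """Power-law area-volume scaling V = c * A**gamma.
    The coefficients are illustrative, not the paper's calibrated fit."""
    return c * np.asarray(area_km2, dtype=float) ** gamma

def sea_level_equivalent_m(volume_km3, ice_density=900.0,
                           water_density=1000.0, ocean_area_km2=3.62e8):
    """Convert an ice volume (km^3) to metres of sea-level equivalent."""
    water_km3 = volume_km3 * ice_density / water_density
    return 1000.0 * water_km3 / ocean_area_km2  # km of sea level -> m

# Hypothetical inventory records (areas in km^2), summed to a global total.
areas = np.array([0.5, 3.2, 12.0, 145.0])
total_km3 = glacier_volume_km3(areas).sum()
sle_m = sea_level_equivalent_m(total_km3)
```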

  15. Radiation calibration for LWIR Hyperspectral Imager Spectrometer

    Science.gov (United States)

    Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong

    2014-11-01

The radiometric calibration of a LWIR hyperspectral imaging spectrometer is presented. A LWIR interferometric hyperspectral imaging spectrometer prototype (CHIPED-I) was developed to study laboratory radiometric calibration, and a two-point linear calibration was carried out for the spectrometer using blackbody sources. First, the measured relative intensity is converted to the absolute radiance of the object. Then, the radiance of the object is converted to a brightness-temperature spectrum by the brightness-temperature method. The results indicate that this radiometric calibration method performs well.

  16. Meta analysis a guide to calibrating and combining statistical evidence

    CERN Document Server

    Kulinskaya, Elena; Staudte, Robert G

    2008-01-01

Meta Analysis: A Guide to Calibrating and Combining Statistical Evidence acts as a source of basic methods for scientists wanting to combine evidence from different experiments. The authors aim to promote a deeper understanding of the notion of statistical evidence. The book is comprised of two parts - The Handbook, and The Theory. The Handbook is a guide for combining and interpreting experimental evidence to solve standard statistical problems. This section allows someone with a rudimentary knowledge in general statistics to apply the methods. The Theory provides the motivation, theory and results of simulation experiments to justify the methodology. This is a coherent introduction to the statistical concepts required to understand the authors' thesis that evidence in a test statistic can often be calibrated when transformed to the right scale.
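As a flavor of combining evidence expressed on a common scale, the weighted Stouffer method pools per-study z-scores into one overall z-score. This is a standard textbook combination rule, shown here as an illustration rather than the book's specific methodology:

```python
import math

def stouffer(z_scores, weights=None):
    """Weighted Stouffer combination: pool per-study z-scores into a
    single overall z-score (larger |z| = stronger combined evidence)."""
    if weights is None:
        weights = [1.0] * len(z_scores)
    numerator = sum(w * z for w, z in zip(weights, z_scores))
    denominator = math.sqrt(sum(w * w for w in weights))
    return numerator / denominator

# Three studies with modest positive effects combine to stronger evidence.
z_combined = stouffer([1.2, 2.0, 0.8])
```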

  17. Earth observation sensor calibration using a global instrumented and automated network of test sites (GIANTS)

    Science.gov (United States)

    Teillet, Phil M.; Thome, Kurtis J.; Fox, Nigel P.; Morisette, Jeffrey T.

    2001-12-01

    Calibration is critical for useful long-term data records, as well as independent data quality control. However, in the context of Earth observation sensors, post-launch calibration and the associated quality assurance perspective are far from operational. This paper explores the possibility of establishing a global instrumented and automated network of test sites (GIANTS) for post-launch radiometric calibration of Earth observation sensors. It is proposed that a small number of well-instrumented benchmark test sites and data sets for calibration be supported. A core set of sensors, measurements, and protocols would be standardized across all participating test sites and the measurement data sets would undergo identical processing at a central secretariat. The network would provide calibration information to supplement or substitute for on-board calibration, would reduce the effort required by individual agencies, and would provide consistency for cross-platform studies. Central to the GIANTS concept is the use of automation, communication, coordination, visibility, and education, all of which can be facilitated by greater use of advanced in-situ sensor and telecommunication technologies. The goal is to help ensure that the resources devoted to remote sensing calibration benefit the intended user community and facilitate the development of new calibration methodologies (research and development) and future specialists (education and training).

  18. Calibration belt for quality-of-care assessment based on dichotomous outcomes.

    Directory of Open Access Journals (Sweden)

    Stefano Finazzi

Full Text Available Prognostic models applied in medicine must be validated on independent samples before their use can be recommended. The assessment of calibration, i.e., the model's ability to provide reliable predictions, is crucial in external validation studies. Besides having several shortcomings, statistical techniques such as the computation of the standardized mortality ratio (SMR) and its confidence intervals, the Hosmer-Lemeshow statistics, and the Cox calibration test are all non-informative with respect to calibration across risk classes. Accordingly, calibration plots reporting expected versus observed outcomes across risk subsets have been used for many years. Erroneously, the points in the plot (frequently representing deciles of risk) have been connected with lines, generating false calibration curves. Here we propose a methodology to create a confidence band for the calibration curve based on a function that relates expected to observed probabilities across classes of risk. The calibration belt allows the ranges of risk to be spotted where there is a significant deviation from the ideal calibration, and the direction of the deviation to be indicated. This method thus offers a more analytical view in the assessment of quality of care, compared to other approaches.
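The risk-class grouping that underlies such calibration plots can be sketched as follows: sort predictions, split them into deciles, and compare the mean expected probability with the observed outcome rate per decile. This is generic grouping code, not the authors' confidence-band computation:

```python
import numpy as np

def calibration_points(pred, outcome, n_groups=10):
    """Split predictions into risk groups (deciles by default) and
    return (mean expected probability, observed event rate) per group
    -- the points a calibration plot displays."""
    order = np.argsort(pred)
    groups = np.array_split(order, n_groups)
    expected = np.array([pred[g].mean() for g in groups])
    observed = np.array([outcome[g].mean() for g in groups])
    return expected, observed

# Simulated, perfectly calibrated model: outcomes are drawn with the
# predicted probabilities, so expected and observed should agree.
rng = np.random.default_rng(1)
pred = rng.uniform(0.05, 0.95, size=5000)
outcome = (rng.uniform(size=5000) < pred).astype(float)
expected, observed = calibration_points(pred, outcome)
```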

  19. Calibration of hydrological models using flow-duration curves

    Directory of Open Access Journals (Sweden)

    I. K. Westerberg

    2011-07-01

Full Text Available The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested – based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of
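A minimal sketch of the FDC construction and the volume-based selection of evaluation points (EPs) described above, assuming synthetic daily flows (the gamma-distributed series and the function names are illustrative, not the paper's implementation):

```python
import numpy as np

def flow_duration_curve(q):
    """Sort discharges high-to-low and attach exceedance probabilities
    (Weibull plotting positions)."""
    q_sorted = np.sort(np.asarray(q))[::-1]
    exceedance = np.arange(1, len(q_sorted) + 1) / (len(q_sorted) + 1)
    return q_sorted, exceedance

def volume_evaluation_points(q_sorted, n_points=10):
    """Choose EP indices so each interval between successive points
    carries an equal share of the total flow volume (the 'volume
    method' of EP selection)."""
    cum_volume = np.cumsum(q_sorted)
    targets = np.linspace(0.0, cum_volume[-1], n_points + 2)[1:-1]
    return np.searchsorted(cum_volume, targets)

# Ten years of synthetic daily flows (gamma-distributed, illustrative).
rng = np.random.default_rng(2)
q = rng.gamma(shape=2.0, scale=5.0, size=3650)
q_sorted, exceedance = flow_duration_curve(q)
ep_idx = volume_evaluation_points(q_sorted)
```

Because high flows carry most of the volume, equal-volume spacing naturally concentrates EPs toward the high-flow end of the FDC, which matches the paper's observation that EP placement can be tuned to the modelling aims.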

  20. Calibration of Models Using Groundwater Age (Invited)

    Science.gov (United States)

    Sanford, W. E.

    2009-12-01

    Water-resource managers are frequently concerned with the long-term ability of a groundwater system to deliver volumes of water for both humans and ecosystems under natural and anthropogenic stresses. Analysis of how a groundwater system responds to such stresses usually involves the construction and calibration of a numerical groundwater-flow model. The calibration procedure usually involves the use of both groundwater-level and flux observations. Water-level data are often more abundant, and thus the availability of flux data can be critical, with well discharge and base flow to streams being most often available. Lack of good flux data however is a common occurrence, especially in more arid climates where the sustainability of the water supply may be even more in question. Environmental tracers are frequently being used to estimate the “age” of a water sample, which represents the time the water has been in the subsurface since its arrival at the water table. Groundwater ages provide flux-related information and can be used successfully to help calibrate groundwater models if porosity is well constrained, especially when there is a paucity of other flux data. As several different methods of simulating groundwater age and tracer movement are possible, a review is presented here of the advantages, disadvantages, and potential pitfalls of the various numerical and tracer methods used in model calibration. The usefulness of groundwater ages for model calibration depends on the ability both to interpret a tracer so as to obtain an apparent observed age, and to use a numerical model to obtain an equivalent simulated age observation. Different levels of simplicity and assumptions accompany different methods for calculating the equivalent simulated age observation. The advantages of computational efficiency in certain methods can be offset by error associated with the underlying assumptions. Advective travel-time calculation using path-line tracking in finite

  1. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Reda, Ibrahim; Robinson, Justin

    2016-11-21

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). These

  2. SMAP RADAR Calibration and Validation

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.

    2015-12-01

The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three-month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive-only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues, which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. During the

  3. Histogram-Based Calibration Method for Pipeline ADCs

    Science.gov (United States)

    Son, Hyeonuk; Jang, Jaewon; Kim, Heetae; Kang, Sungho

    2015-01-01

Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method requires a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates error sources. The digitized outputs of a stage are influenced directly by the operation of the prior stage, so the results of the histogram provide information about errors in the prior stage. The composed histograms reduce the required samples and thus the calibration time, and the method can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively. PMID:26070196
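The histogram (code-density) principle behind such calibration can be illustrated with the standard DNL/INL computation: with a uniformly distributed input, every code should be equally likely, and deviations of the per-code histogram from the average give the non-linearities. This generic sketch is not the paper's per-stage algorithm:

```python
import numpy as np

def dnl_inl_from_histogram(codes, n_bits):
    """Code-density test: with a uniformly distributed input every code
    is equally likely, so per-code histogram deviations from the mean
    give DNL (in LSB) and their running sum gives INL."""
    hist = np.bincount(codes, minlength=2 ** n_bits).astype(float)
    h = hist[1:-1]                # drop the saturated end codes
    dnl = h / h.mean() - 1.0
    inl = np.cumsum(dnl)
    return dnl, inl

# Ideal 8-bit ADC digitizing a full-scale ramp: DNL/INL should vanish.
ramp = np.linspace(0.0, 1.0, 200000, endpoint=False)
codes = np.clip((ramp * 256).astype(int), 0, 255)
dnl, inl = dnl_inl_from_histogram(codes, 8)
```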

  4. SIMULTANEOUS CALIBRATION OF MOLECULAR WEIGHT SEPARATION AND COLUMN DISPERSION OF SEC WITH CHARACTERIZED POLYMER STANDARDS

    Institute of Scientific and Technical Information of China (English)

    CHENG Rongshi; BO Shuqin

    1983-01-01

With the aid of the theoretical relationship between the calibration relation of a SEC column for monodisperse polymer species under ideal working conditions and the effective relations between molecular weight and elution volume for characterized polymer samples, a computational procedure for simultaneous calibration of molecular weight separation and column dispersion is proposed. From the experimental chromatograms of narrow-MWD polystyrene standards and broad-MWD 1,2-polybutadiene fractions, the spreading factor of a SEC column was deduced by the proposed method. The variation of the spreading factor with elution volume is independent of the polymer sample used.

  5. VOLUMNECT: measuring volumes with Kinect

    Science.gov (United States)

    Quintino Ferreira, Beatriz; Griné, Miguel; Gameiro, Duarte; Costeira, João. Paulo; Sousa Santos, Beatriz

    2014-03-01

This article presents a solution for measuring the volume of packed objects using 3D cameras (such as the Microsoft KinectTM). We target application scenarios, such as warehouses or distribution and logistics companies, where it is important to compute package volumes promptly, yet high accuracy is not pivotal. Our application automatically detects cuboid objects using the depth camera data, computes their volumes, and sorts them to allow space optimization. The proposed methodology applies simple computer vision and image processing methods to a point cloud, such as connected components, morphological operations and the Harris corner detector, producing encouraging results, namely an accuracy in volume measurement of 8 mm. Aspects that can be further improved are identified; nevertheless, the current solution is already promising and turns out to be cost effective for the envisaged scenarios.
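As a minimal stand-in for the cuboid-detection pipeline, the sketch below estimates a box volume from the axis-aligned bounding box of a synthetic point cloud (the data and function names are illustrative assumptions; the article's method additionally uses connected components and corner detection to handle arbitrary orientations):

```python
import numpy as np

def cuboid_volume(points):
    """Volume of the axis-aligned bounding box around a 3-D point cloud
    -- adequate for roughly axis-aligned packages."""
    extents = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extents))

# Synthetic cloud filling a 0.30 m x 0.20 m x 0.15 m box (volume 0.009 m^3).
rng = np.random.default_rng(3)
dims = np.array([0.30, 0.20, 0.15])
cloud = rng.uniform(0.0, 1.0, size=(20000, 3)) * dims
vol = cuboid_volume(cloud)
```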

  6. Partial volume correction using structural-functional synergistic resolution recovery: comparison with geometric transfer matrix method.

    Science.gov (United States)

    Kim, Euitae; Shidahara, Miho; Tsoumpas, Charalampos; McGinnity, Colm J; Kwon, Jun Soo; Howes, Oliver D; Turkheimer, Federico E

    2013-06-01

    We validated the use of a novel image-based method for partial volume correction (PVC), structural-functional synergistic resolution recovery (SFS-RR) for the accurate quantification of dopamine synthesis capacity measured using [(18)F]DOPA positron emission tomography. The bias and reliability of SFS-RR were compared with the geometric transfer matrix (GTM) method. Both methodologies were applied to the parametric maps of [(18)F]DOPA utilization rates (ki(cer)). Validation was first performed by measuring repeatability on test-retest scans. The precision of the methodologies instead was quantified using simulated [(18)F]DOPA images. The sensitivity to the misspecification of the full-width-half-maximum (FWHM) of the scanner point-spread-function on both approaches was also assessed. In the in-vivo data, the ki(cer) was significantly increased by application of both PVC procedures while the reliability remained high (intraclass correlation coefficients >0.85). The variability was not significantly affected by either PVC approach (<10% variability in both cases). The corrected ki(cer) was significantly influenced by the FWHM applied in both the acquired and simulated data. This study shows that SFS-RR can effectively correct for partial volume effects to a comparable degree to GTM but with the added advantage that it enables voxelwise analyses, and that the FWHM used can affect the PVC result indicating the importance of accurately calibrating the FWHM used in the recovery model.
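The GTM comparison method solves a small linear system: the observed regional means are the true means mixed by the scanner's resolution, o = W·t, so the correction inverts that mixing. A sketch with a hypothetical 3-region transfer matrix (the numbers are illustrative, not from the study):

```python
import numpy as np

def gtm_correct(observed_means, transfer_matrix):
    """GTM partial volume correction: observed ROI means o are the true
    regional means t mixed by the scanner resolution, o = W t, so the
    correction solves the linear system for t."""
    return np.linalg.solve(transfer_matrix, observed_means)

# Hypothetical 3-region transfer matrix W: entry (i, j) is the fraction
# of region j's signal recovered in ROI i (depends on the PSF FWHM,
# hence the paper's emphasis on calibrating the FWHM accurately).
W = np.array([[0.85, 0.10, 0.05],
              [0.12, 0.80, 0.08],
              [0.05, 0.10, 0.85]])
true_means = np.array([2.0, 1.0, 0.5])
observed = W @ true_means
corrected = gtm_correct(observed, W)
```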

  7. Analyzing Global Interdependence. Volume III. Methodological Perspectives and Research Implications,

    Science.gov (United States)

    1974-11-01

Deutsch, The Nerves of Government: Models of Political Communication and Control (New York: The Free Press of Glencoe, 1963); and Jürgen Habermas

  8. Wetlands Research Program. Wetland Evaluation Technique (WET). Volume 2. Methodology.

    Science.gov (United States)

    1987-10-01

    … Cockaded Woodpecker, Kirtland's Warbler. REPTILES AND AMPHIBIANS: American Alligator. FISH: Sockeye Salmon (Alaskan), Coho Salmon: Non-Alaskan U.S. Stock, Alaskan … for wetland-dependent furbearers and other mammals, reptiles, and amphibians (e.g., beaver, crayfish, alligator, etc.). Habitat suitability for … Carolina. Chat 43:10-16. Spaans, A. L. 1978. Status of terns along the Surinam coast. Bird Band. 49:66-76. Sparrowe, R. D. and H. M. Wight. 1975 …

  9. Liner Technology Program. Volume 3. Liner Development Methodology Manual

    Science.gov (United States)

    1982-05-01

    … alkenylsuccinic acids or anhydrides are available. As branched curing agents, DDI and GTRO are among the candidates. 4.1.3 Oxidatively-Stable Systems … polarity polymers are carboxyl- or hydroxyl-terminated hydrocarbon systems cured with isocyanates or epoxides. The use of imide epoxy-cured systems … ester linkages produced by the epoxy cure are the least subject to hydrolytic attack. 4.1.7 Branched Chain Systems: Polymers containing branched …

  10. Regional Shelter Analysis Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Michael B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dennison, Deborah [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kane, Jave [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walker, Hoyt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Miller, Paul [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-01

    The fallout from a nuclear explosion has the potential to injure or kill 100,000 or more people through exposure to external gamma (fallout) radiation. Existing buildings can reduce radiation exposure by placing material between fallout particles and exposed people. Lawrence Livermore National Laboratory was tasked with developing an operationally feasible methodology that could improve fallout casualty estimates. The methodology, called a Regional Shelter Analysis, combines the fallout protection that existing buildings provide civilian populations with the distribution of people in various locations. The Regional Shelter Analysis method allows the consideration of (a) multiple building types and locations within buildings, (b) country specific estimates, (c) population posture (e.g., unwarned vs. minimally warned), and (d) the time of day (e.g., night vs. day). The protection estimates can be combined with fallout predictions (or measurements) to (a) provide a more accurate assessment of exposure and injury and (b) evaluate the effectiveness of various casualty mitigation strategies. This report describes the Regional Shelter Analysis methodology, highlights key operational aspects (including demonstrating that the methodology is compatible with current tools), illustrates how to implement the methodology, and provides suggestions for future work.

  11. Calibration of the solar radio spectrometer

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper presents some improvements and new results in the calibration of the Chinese solar radio spectrometer, obtained by analyzing the daily calibration data recorded in the period 1997-2007. First, the calibration coefficient is fitted for three bands (1.0-2.0 GHz, 2.6-3.8 GHz, 5.2-7.6 GHz) of the spectrometer using a moving-average method constrained by the properties of the daily calibration data. With this calibration coefficient, the standard deviation of the calibration result was less than 10 sfu for 95% of frequencies in the 2.6-3.8 GHz band in 2003. This result is better than that obtained with a constant coefficient. Second, the calibration coefficient is found to correlate well with local air temperature for most frequencies of the 2.6-3.8 GHz band. These results are helpful in the study of quiet solar radio emission.
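
The moving-average fit described above can be sketched in a few lines; the seasonal drift and noise levels below are synthetic stand-ins for the daily calibration coefficients, not the spectrometer's actual data:

```python
import numpy as np

def moving_average(x, window):
    """Centered moving average via convolution (edges are biased low
    because fewer samples overlap the kernel there)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Synthetic daily calibration coefficients for one frequency channel:
# a slow seasonal drift plus day-to-day noise.
rng = np.random.default_rng(0)
days = np.arange(365)
coeff = 50 + 5 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)

smoothed = moving_average(coeff, window=31)
# Away from the edges, the smoothed series tracks the slow drift while
# suppressing the daily scatter.
```

A windowed average is a reasonable choice here because the coefficient varies slowly (seasonally, with air temperature) while the day-to-day scatter is uncorrelated.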

  12. Calibration and Validation of Measurement System

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Riemann, Sven; Knapp, Wilfried

    The report deals with the calibration of the measuring equipment on board the Wave Dragon, Nissum Bredning prototype.

  13. Crop physiology calibration in the CLM

    Directory of Open Access Journals (Sweden)

    I. Bilionis

    2015-04-01

    … scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement in simulated crop productivity with the newly calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.

  14. Calibration of the solar radio spectrometer

    Institute of Scientific and Technical Information of China (English)

    TAN ChengMing; YAN YiHua; TAN BaoLin; XU GuiRong

    2009-01-01

    This paper presents some improvements and new results in the calibration of the Chinese solar radio spectrometer, obtained by analyzing the daily calibration data recorded in the period 1997-2007. First, the calibration coefficient is fitted for three bands (1.0-2.0 GHz, 2.6-3.8 GHz, 5.2-7.6 GHz) of the spectrometer using a moving-average method constrained by the properties of the daily calibration data. With this calibration coefficient, the standard deviation of the calibration result was less than 10 sfu for 95% of frequencies in the 2.6-3.8 GHz band in 2003. This result is better than that obtained with a constant coefficient. Second, the calibration coefficient is found to correlate well with local air temperature for most frequencies of the 2.6-3.8 GHz band. These results are helpful in the study of quiet solar radio emission.

  15. Astrid-2 EMMA Magnetic Calibration

    DEFF Research Database (Denmark)

    Merayo, José M.G.; Brauer, Peter; Risbo, Torben

    1998-01-01

    The Swedish micro-satellite Astrid-2 contains a tri-axial fluxgate magnetometer with the sensor co-located with a Technical University of Denmark (DTU) star camera for absolute attitude, and extended about 0.9 m on a hinged boom. The magnetometer is part of the RIT EMMA electric and magnetic fields experiment built as a collaboration between the DTU Department of Automation and the Department of Plasma Physics, The Alfvén Laboratory, Royal Institute of Technology (RIT), Stockholm. The final magnetic calibration of the Astrid-2 satellite was done at the Lovoe Magnetic Observatory under the Geological … the magnetometer orthogonalized axes and the star camera optical axes was determined from the observed stellar coordinates related to the Earth magnetic field from the Magnetic Observatory. The magnetic calibration of the magnetometer integrated into the flight-configured satellite was done in the (almost …

  16. Calibrating thermal behavior of electronics

    Science.gov (United States)

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2016-05-31

    A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
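
As a hedged sketch of the two-phase scheme this record describes (all readings below are invented, and the relationship is assumed linear purely for simplicity):

```python
import numpy as np

# Calibration phase: indirect thermal data (e.g., an on-die sensor proxy)
# recorded alongside temperatures from a reference measurement.
indirect = np.array([10.0, 14.0, 18.0, 22.0, 26.0])
measured_temp = np.array([40.1, 50.2, 59.8, 70.3, 79.9])

# Determine the relationship between the two by least squares.
slope, intercept = np.polyfit(indirect, measured_temp, deg=1)

# Operation phase: only the indirect data are available, so the stored
# relationship converts a reading into an inferred temperature.
def inferred_temp(reading):
    return slope * reading + intercept
```

In practice the relationship need not be linear; the point is that it is determined once, during calibration against a trusted sensor, then reused during actual operation.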

  17. Calibrating thermal behavior of electronics

    Energy Technology Data Exchange (ETDEWEB)

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2017-07-11

    A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.

  18. Nonlinear Observers for Gyro Calibration

    Science.gov (United States)

    Thienel, Julie; Sanner, Robert M.

    2003-01-01

    Nonlinear observers for gyro calibration are presented. The first observer estimates a constant gyro bias. The second observer estimates scale factor errors. The third observer estimates the gyro alignment for three orthogonal gyros. The convergence properties of all three observers are discussed. Additionally, all three observers are coupled with a nonlinear control algorithm. The stability of each of the resulting closed loop systems is analyzed. Simulated test results are presented for each system.

  19. Calibrating thermal behavior of electronics

    Energy Technology Data Exchange (ETDEWEB)

    Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.

    2017-01-03

    A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.

  20. Calibration of a Parallel Kinematic Machine Tool

    Institute of Scientific and Technical Information of China (English)

    HE Xiao-mei; DING Hong-sheng; FU Tie; XIE Dian-huang; XU Jin-zhong; LI Hua-feng; LIU Hui-lin

    2006-01-01

    A calibration method is presented to enhance the static accuracy of a parallel kinematic machine tool by using a coordinate measuring machine and a laser tracker. From the established calibration model and the calibration experiment, the actual values of the 42 kinematic parameters of the BKX-I parallel kinematic machine tool are obtained. Circular tests comparing the calibrated and uncalibrated parameters show an 80% improvement in the accuracy of the machine tool.

  1. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, M. H.

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is also considered, and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown.
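
The optimization described above can be reduced to a toy problem: pick one partial safety factor γ so that the weighted squared deviation of each structure's reliability index from the target is minimized. The linear-in-ln(γ) sensitivities below are hypothetical, chosen only for illustration:

```python
import numpy as np

beta_target = 4.0
a = np.array([3.2, 3.5, 3.8])   # baseline reliability indices (hypothetical)
b = np.array([2.0, 1.6, 1.2])   # sensitivity of beta_i to ln(gamma)
w = np.array([0.5, 0.3, 0.2])   # weights of the structures in the class

def penalty(gamma):
    """Weighted squared deviation of the class reliabilities from target."""
    beta = a + b * np.log(gamma)
    return np.sum(w * (beta - beta_target) ** 2)

# Calibrate by a simple grid search over the admissible range of gamma.
gammas = np.linspace(1.0, 3.0, 2001)
gamma_opt = gammas[np.argmin([penalty(g) for g in gammas])]
```

Real code calibration evaluates each β_i with a reliability method (e.g., FORM) rather than a closed-form sensitivity, but the outer minimization has exactly this shape.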

  2. A Careful Consideration of the Calibration Concept

    Science.gov (United States)

    Phillips, S. D.; Estler, W. T.; Doiron, T.; Eberhardt, K. R.; Levenson, M. S.

    2001-01-01

    This paper presents a detailed discussion of the technical aspects of the calibration process with emphasis on the definition of the measurand, the conditions under which the calibration results are valid, and the subsequent use of the calibration results in measurement uncertainty statements. The concepts of measurement uncertainty, error, systematic error, and reproducibility are also addressed as they pertain to the calibration process. PMID:27500027

  3. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    Science.gov (United States)

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
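
The automated-calibration and confidence-interval steps of the proposed methodology can be illustrated on a deliberately tiny stand-in model (a first-order decay rather than an ASM, with invented data):

```python
import numpy as np

# Synthetic "observations" of a first-order process y = y0 * exp(-k t).
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
y_true = 8.0 * np.exp(-0.3 * t)
y_obs = y_true + rng.normal(0, 0.1, t.size)

# Automated calibration: linear least squares on log-transformed data,
# log(y) = log(y0) - k*t.
A = np.vstack([np.ones_like(t), -t]).T
coef, res_ss, *_ = np.linalg.lstsq(A, np.log(y_obs), rcond=None)
log_y0, k_hat = coef

# Approximate 95% confidence interval for k from the residual variance.
dof = t.size - 2
sigma2 = res_ss[0] / dof
cov = sigma2 * np.linalg.inv(A.T @ A)
half_width = 1.96 * np.sqrt(cov[1, 1])
k_ci = (k_hat - half_width, k_hat + half_width)
```

The abstract's point about data frequency maps directly onto this sketch: fewer rows in `A` (lower-frequency influent data) widen `k_ci`, while autocorrelated high-frequency data shrink it optimistically.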

  4. Exploring the Effects of Sampling Locations for Calibrating the Huff Model Using Mobile Phone Location Data

    Directory of Open Access Journals (Sweden)

    Shiwei Lu

    2017-01-01

    The introduction of the Huff model is of critical significance in many fields, including urban transport, optimal location planning, economics and business analysis. Parameter calibration is a crucial procedure before using the model. Previous studies have paid much attention to calibrating spatial interaction models for human mobility research. However, is using all sampling locations always the best choice for model calibration? We use active tracking data of over 16 million cell phones in Shenzhen, a metropolitan city in China, to evaluate the calibration accuracy of the Huff model. Specifically, we choose five business areas in this city as destinations and then randomly select a fixed number of cell phone towers to calibrate the parameters in this spatial interaction model. We vary the selected number of cell phone towers in multiples of 30 until we reach the total number of towers with flows to the five destinations. We apply the least squares method for model calibration. The distribution of the final sum of squared errors between the observed and estimated flows indicates that using all sampling locations is not always better for this spatial interaction model. Instead, fewer sampling locations with a higher volume of trips can improve the calibration results. Finally, we discuss the implications of this finding and suggest an approach to achieve high-accuracy model calibration.
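
The least-squares calibration step can be sketched with the standard Huff form P_ij = S_j^α d_ij^(−β) / Σ_k S_k^α d_ik^(−β). Everything below (attractiveness values, distances, flows) is synthetic, with "observed" flows generated from known parameters so the recovery can be checked:

```python
import numpy as np

rng = np.random.default_rng(2)
n_towers, n_dest = 40, 5
attract = np.array([5.0, 3.0, 8.0, 2.0, 4.0])       # destination attractiveness
dist = rng.uniform(1.0, 10.0, (n_towers, n_dest))    # tower-destination distances

def huff_prob(alpha, beta):
    """Huff probabilities: utility S^alpha * d^-beta, row-normalized."""
    u = attract ** alpha * dist ** (-beta)
    return u / u.sum(axis=1, keepdims=True)

# Synthetic "observed" flows generated with known parameters plus noise.
totals = rng.integers(100, 1000, n_towers)[:, None]   # trips per tower
observed = totals * huff_prob(1.0, 2.0) + rng.normal(0, 1.0, (n_towers, n_dest))

# Least-squares calibration by grid search over (alpha, beta).
alphas = np.linspace(0.5, 1.5, 21)
betas = np.linspace(1.0, 3.0, 21)
sse = np.array([[np.sum((totals * huff_prob(a, b) - observed) ** 2)
                 for b in betas] for a in alphas])
i, j = np.unravel_index(sse.argmin(), sse.shape)
alpha_hat, beta_hat = alphas[i], betas[j]
```

Subsampling the tower rows before computing `sse` reproduces the paper's experiment: calibration quality depends less on how many towers are used than on how much flow those towers carry.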

  5. A Functional HAZOP Methodology

    DEFF Research Database (Denmark)

    Liin, Netta; Lind, Morten; Jensen, Niels

    2010-01-01

    A HAZOP methodology is presented where a functional plant model assists in a goal-oriented decomposition of the plant purpose into the means of achieving the purpose. This approach leads to nodes with simple functions from which the selection of process and deviation variables follows directly. The functional HAZOP methodology lends itself directly to implementation in a computer-aided reasoning tool to perform root cause and consequence analysis. Such a tool can facilitate finding causes and/or consequences far away from the site of the deviation. A functional HAZOP assistant is proposed and investigated in a HAZOP study of an industrial scale Indirect Vapor Recompression Distillation pilot Plant (IVaRDiP) at DTU-Chemical and Biochemical Engineering. The study shows that the functional HAZOP methodology provides a very efficient paradigm for facilitating HAZOP studies and for enabling reasoning …

  6. A quick telemanipulator calibration and repeatability method with applications

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, J.F.; Haley, D.C. [Oak Ridge National Lab., TN (United States). Robotics & Process Systems Div.

    1994-09-01

    This paper presents a methodology that was used to calibrate and measure the repeatability of two telemanipulators at Oak Ridge National Laboratory. The global accuracy of the method was 0.05 in. (≈1.3 mm), and the orientation accuracy was approximately 6 arcmin (≈0.002 rad). For most teleoperator systems, these accuracies are more than adequate because of the construction of the mechanism and sensor capabilities (e.g., typically 12 bits of resolution). Although industrial robots require accuracies of about 0.05 mm or better, telemanipulators do not.

  7. Variability among polysulphone calibration curves

    Energy Technology Data Exchange (ETDEWEB)

    Casale, G R [University of Rome 'La Sapienza', Physics Department, P.le A. Moro 2, I-00185, Rome (Italy); Borra, M [ISPESL - Istituto Superiore per la Prevenzione e la Sicurezza del Lavoro, Occupational Hygiene Department, Via Fontana Candida 1, I-0040 Monteporzio Catone (RM) (Italy); Colosimo, A [University of Rome 'La Sapienza', Department of Human Physiology and Pharmacology, P.le A. Moro 2, I-00185, Rome (Italy); Colucci, M [ISPESL - Istituto Superiore per la Prevenzione e la Sicurezza del Lavoro, Occupational Hygiene Department, Via Fontana Candida 1, I-0040 Monteporzio Catone (RM) (Italy); Militello, A [ISPESL - Istituto Superiore per la Prevenzione e la Sicurezza del Lavoro, Occupational Hygiene Department, Via Fontana Candida 1, I-0040 Monteporzio Catone (RM) (Italy); Siani, A M [University of Rome 'La Sapienza', Physics Department, P.le A. Moro 2, I-00185, Rome (Italy); Sisto, R [ISPESL - Istituto Superiore per la Prevenzione e la Sicurezza del Lavoro, Occupational Hygiene Department, Via Fontana Candida 1, I-0040 Monteporzio Catone (RM) (Italy)

    2006-09-07

    Within an epidemiological study regarding the correlation between skin pathologies and personal ultraviolet (UV) exposure due to solar radiation, 14 field campaigns using polysulphone (PS) dosemeters were carried out at three different Italian sites (urban, semi-rural and rural) in every season of the year. A polysulphone calibration curve for each field experiment was obtained by measuring the ambient UV dose under almost clear sky conditions and the corresponding change in the PS film absorbance before and after exposure. Ambient UV doses were measured by well-calibrated broad-band radiometers and by electronic dosemeters. The dose-response relation was represented by the typical best fit to a third-degree polynomial, parameterized as a coefficient multiplying a cubic polynomial function. It was observed that the fit curves differed from each other in the coefficient only. It was assessed that the multiplying coefficient was affected by the solar UV spectrum at the Earth's surface whilst the polynomial factor depended on the photoinduced reaction of the polysulphone film. The mismatch between the polysulphone spectral curve and the CIE erythemal action spectrum was responsible for the variability among polysulphone calibration curves. The variability of the coefficient was related to the total ozone amount and the solar zenith angle. A mathematical explanation of this parameterization was also discussed.
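
The parameterization described, a campaign-specific coefficient multiplying a fixed cubic in the absorbance change, admits a one-parameter least-squares fit. The film-shape coefficients and dose values below are invented for illustration:

```python
import numpy as np

# Fixed cubic shape capturing the film's photoinduced response
# (coefficients are illustrative, not the paper's).
def film_shape(delta_a):
    return 1.0 * delta_a + 5.0 * delta_a ** 2 + 20.0 * delta_a ** 3

delta_a = np.array([0.05, 0.10, 0.15, 0.20, 0.25])  # change in film absorbance
dose = np.array([1.1, 2.9, 5.7, 10.2, 16.4])        # ambient UV dose (arb. units)

# One-parameter least squares for the campaign-specific multiplier c:
# dose ≈ c * film_shape(delta_a), so c = <x, dose> / <x, x>.
x = film_shape(delta_a)
c_hat = np.dot(x, dose) / np.dot(x, x)
```

Refitting only `c_hat` per campaign, while holding the cubic fixed, mirrors the paper's finding that the curves differ in the multiplying coefficient alone.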

  8. Unassisted 3D camera calibration

    Science.gov (United States)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
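
A sketch of the estimation step, assuming keypoints have already been detected and matched: with a small-rotation similarity model, roll, scale, and the vertical offset follow from linear least squares. The matches below are simulated, and this simplified model deliberately ignores the pitch/yaw keystone effects the paper also estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
# Matched keypoint coordinates in the left image (hypothetical detections).
left = rng.uniform(0, 1000, (200, 2))

# Right image differs by a small roll, a scale change and a vertical shift
# (the miscalibration we want to estimate), plus matching noise.
theta, scale, dy = np.deg2rad(0.5), 1.01, 4.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
right = scale * left @ R.T + np.array([0.0, dy]) + rng.normal(0, 0.3, left.shape)

# Similarity transform right ≈ s*R(θ)*left + t, linearized with
# a = s*cosθ, b = s*sinθ, so each match contributes two linear equations.
x, y = left[:, 0], left[:, 1]
A = np.zeros((2 * len(left), 4))
A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
A[1::2] = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])
rhs = right.reshape(-1)
a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]

scale_hat = np.hypot(a, b)                 # recovered scale difference
roll_hat = np.degrees(np.arctan2(b, a))    # recovered roll (degrees)
# ty is the vertical offset driving the residual vertical disparity.
```

Once the transform is known, warping one view by its inverse removes the systematic vertical disparity; residual disparity after this step is the paper's evaluation metric.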

  9. Model Calibration in Watershed Hydrology

    Science.gov (United States)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.

  10. PACS photometer calibration block analysis

    CERN Document Server

    Moór, A; Kiss, Cs; Balog, Z; Billot, N; Marton, G

    2013-01-01

    The absolute stability of the PACS bolometer response over the entire mission lifetime without applying any corrections is about 0.5% (standard deviation) or about 8% peak-to-peak. This fantastic stability allows us to calibrate all scientific measurements by a fixed and time-independent response file, without using any information from the PACS internal calibration sources. However, the analysis of calibration block observations revealed clear correlations of the internal source signals with the evaporator temperature and a signal drift during the first half hour after the cooler recycling. These effects are small, but can be seen in repeated measurements of standard stars. From our analysis we established corrections for both effects which push the stability of the PACS bolometer response to about 0.2% (stdev) or 2% in the blue, 3% in the green and 5% in the red channel (peak-to-peak). After both corrections we still see a correlation of the signals with PACS FPU temperatures, possibly caused by parasitic h...

  11. Extended Commissioning and Calibration of the Dual-Beam Imaging Polarimeter

    CERN Document Server

    Masiero, Joseph; Harrington, David; Lin, Haosheng

    2008-01-01

    In our previous paper (Masiero et al. 2007) we presented the design and initial calibrations of the Dual-Beam Imaging Polarimeter (DBIP), a new optical instrument for the University of Hawaii's 2.2 m telescope on the summit of Mauna Kea, Hawaii. In this followup work we discuss our full-Stokes mode commissioning including crosstalk determination and our typical observing methodology.

  12. Recent developments of in-vessel calibration of mid-IR cameras at JET

    Science.gov (United States)

    Balboa, I.; Silburn, S.; Drewelow, P.; Huber, V.; Huber, A.; Kinna, D.; Price, M.; Matthews, G. F.; Collins, S.; Fessey, J.; Rack, M.; Trimble, P.; Zastrow, K.-D.

    2016-11-01

    Recent improvements in software tools and methodology have allowed us to perform a more comprehensive in-vessel calibration for all mid-infrared camera systems at JET. A comparison of experimental methods to calculate the non-uniformity correction is described as well as the linearity for the different camera systems. Measurements of the temperature are assessed for the different diagnostics.

  13. Agile vs Traditional Methodologies in Developing Information Systems

    Directory of Open Access Journals (Sweden)

    Pere Tumbas

    2006-12-01

    After a review of the principles and concepts of structural and object-oriented development of information systems, the work points to the elements of agile approaches and gives a short description of selected agile methodologies. These are then compared against three criteria. The first criterion reviews the extent to which project management is incorporated into the methodology for developing information systems. The second criterion shows whether the processes defined by the methodology cover the appropriate phases of the life cycle. The last criterion shows whether the methodology promotes the use of skills and tools across the life cycle phases of developing information systems. Finally, the work compares traditional (structural and object-oriented) methodologies with agile methodologies according to the key elements of development.

  14. Renormalized Volume

    Science.gov (United States)

    Gover, A. Rod; Waldron, Andrew

    2017-09-01

    We develop a universal distributional calculus for regulated volumes of metrics that are suitably singular along hypersurfaces. When the hypersurface is a conformal infinity we give simple integrated distribution expressions for the divergences and anomaly of the regulated volume functional valid for any choice of regulator. For closed hypersurfaces or conformally compact geometries, methods from a previously developed boundary calculus for conformally compact manifolds can be applied to give explicit holographic formulæ for the divergences and anomaly expressed as hypersurface integrals over local quantities (the method also extends to non-closed hypersurfaces). The resulting anomaly does not depend on any particular choice of regulator, while the regulator dependence of the divergences is precisely captured by these formulæ. Conformal hypersurface invariants can be studied by demanding that the singular metric obey, smoothly and formally to a suitable order, a Yamabe type problem with boundary data along the conformal infinity. We prove that the volume anomaly for these singular Yamabe solutions is a conformally invariant integral of a local Q-curvature that generalizes the Branson Q-curvature by including data of the embedding. In each dimension this canonically defines a higher dimensional generalization of the Willmore energy/rigid string action. Recently, Graham proved that the first variation of the volume anomaly recovers the density obstructing smooth solutions to this singular Yamabe problem; we give a new proof of this result employing our boundary calculus. Physical applications of our results include studies of quantum corrections to entanglement entropies.
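
Schematically, the regulated-volume expansion behind the divergences and anomaly discussed above takes the standard form (dimension d, regulator ε; the coefficient labels are generic, not the paper's notation):

```latex
\mathrm{Vol}_\varepsilon
  \;=\; \frac{c_{d-1}}{\varepsilon^{d-1}} \;+\; \cdots \;+\; \frac{c_1}{\varepsilon}
  \;+\; \mathcal{A}\,\log\frac{1}{\varepsilon} \;+\; V_{\mathrm{ren}} \;+\; o(1)
```

Here the anomaly coefficient \(\mathcal{A}\) is independent of the choice of regulator, while the pole coefficients \(c_k\) shift under a change of regulator, matching the abstract's statement that only the divergences carry regulator dependence.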

  15. Model calibration and validation of an impact test simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, F. M. (François M.); Wilson, A. C. (Amanda C.); Havrilla, G. N. (George N.)

    2001-01-01

    This paper illustrates the methodology being developed at Los Alamos National Laboratory for the validation of numerical simulations for engineering structural dynamics. The application involves the transmission of a shock wave through an assembly that consists of a steel cylinder and a layer of elastomeric (hyper-foam) material. The assembly is mounted on an impact table to generate the shock wave. The input acceleration and three output accelerations are measured. The main objective of the experiment is to develop a finite element representation of the system capable of reproducing the test data with acceptable accuracy. Foam layers of various thicknesses and several drop heights are considered during impact testing. Each experiment is replicated several times to estimate the experimental variability. Instead of focusing on the calibration of input parameters for a single configuration, the numerical model is validated for its ability to predict the response of three different configurations (various combinations of foam thickness and drop height). Design of Experiments is implemented to perform parametric and statistical variance studies. Surrogate models are developed to replace the computationally expensive numerical simulation. Variables of the finite element model are separated into calibration variables and control variables. The models are calibrated to provide numerical simulations that correctly reproduce the statistical variation of the test configurations. The calibration step also provides inference for the parameters of a high strain-rate dependent material model of the hyper-foam. After calibration, the validity of the numerical simulation is assessed through its ability to predict the response of a fourth test setup.

  16. Calibration using constrained smoothing with applications to mass spectrometry data.

    Science.gov (United States)

    Feng, Xingdong; Sedransk, Nell; Xia, Jessie Q

    2014-06-01

    Linear regressions are commonly used to calibrate the signal measurements in proteomic analysis by mass spectrometry. However, with or without a monotone (e.g., log) transformation, data from such functional proteomic experiments are not necessarily linear or even monotone functions of protein (or peptide) concentration except over a very restricted range. A computationally efficient spline procedure improves upon linear regression. However, mass spectrometry data are not necessarily homoscedastic; more often the variation of measured concentrations increases disproportionately near the boundaries of the instrument's measurement capability (dynamic range), that is, the upper and lower limits of quantitation. These calibration difficulties exist with other applications of mass spectrometry as well as with other broad-scale calibrations. Therefore the method proposed here uses a functional data approach to define the calibration curve and also the limits of quantitation under two assumptions: (i) that the variance is a bounded, convex function of concentration; and (ii) that the calibration curve itself is monotone at least between the limits of quantitation, but not necessarily outside these limits. Within this paradigm, the limit of detection, where the signal is definitely present but not measurable with any accuracy, is also defined. An iterative approach draws on existing smoothing methods to account simultaneously for both restrictions and is shown to achieve the global optimal convergence rate under weak conditions. This approach can also be implemented when convexity is replaced by other (bounded) restrictions. Examples from Addona et al. (2009, Nature Biotechnology 27, 633-641) both motivate and illustrate the effectiveness of this functional data methodology when compared with the simpler linear regressions and spline techniques.
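
The monotonicity half of the constrained-smoothing problem can be illustrated with a minimal pool-adjacent-violators (PAVA) fit, a standard building block for isotonic regression; this is a sketch of the constraint only, not the authors' full iterative algorithm with the convex-variance restriction:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    level, weight, count = [], [], []
    for yi in np.asarray(y, dtype=float):
        level.append(yi); weight.append(1.0); count.append(1)
        # Merge adjacent blocks while the monotone constraint is violated.
        while len(level) > 1 and level[-2] > level[-1]:
            tw = weight[-2] + weight[-1]
            lv = (weight[-2] * level[-2] + weight[-1] * level[-1]) / tw
            tc = count[-2] + count[-1]
            del level[-1], weight[-1], count[-1]
            level[-1], weight[-1], count[-1] = lv, tw, tc
    return np.repeat(level, count)

# Noisy signal-vs-concentration calibration points, ordered by concentration:
signal = np.array([1.0, 2.5, 2.0, 3.0, 2.8, 4.0])
fit = pava(signal)  # non-decreasing fit: [1.0, 2.25, 2.25, 2.9, 2.9, 4.0]
```

Between the limits of quantitation, a fit like this (smoothed, in the paper's method) defines the calibration curve; outside them, the monotonicity constraint is dropped, which is what lets the procedure delimit the usable dynamic range.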

  17. An Overview of MODIS Radiometric Calibration and Characterization

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) is one of the key instruments for NASA's Earth Observing System (EOS), currently operating on both the Terra and Aqua satellites. The MODIS is a major advance over the previous generation of sensors in terms of its spectral, spatial, and temporal resolutions. It has 36 spectral bands: 20 reflective solar bands (RSB) with center wavelengths from 0.41 to 2.1 μm and 16 thermal emissive bands (TEB) with center wavelengths from 3.7 to 14.4 μm, making observations at three spatial resolutions: 250 m (bands 1-2), 500 m (bands 3-7), and 1 km (bands 8-36). MODIS is a cross-track scanning radiometer with a wide field of view, providing complete global coverage of the Earth in less than 2 days. Both Terra and Aqua MODIS went through extensive pre-launch calibration and characterization at various levels. In orbit, the calibration and characterization tasks are performed using its on-board calibrators (OBCs), which include a solar diffuser (SD) with a solar diffuser stability monitor (SDSM), a v-grooved flat panel blackbody (BB), and a spectro-radiometric calibration assembly (SRCA). In this paper, we present an overview of MODIS calibration and characterization activities, methodologies, and lessons learned from pre-launch characterization and in-orbit operation. Key issues discussed in this paper include in-orbit efforts of monitoring the noise characteristics of the detectors, tracking the solar diffuser and optics degradations, and updating the sensor's response versus scan angle. The experiences and lessons learned through MODIS have played and will continue to play major roles in the design and characterization of future sensors.

  18. HCAL Calibration Status in Summer 2017

    CERN Document Server

    CMS Collaboration

    2017-01-01

    This note presents the status of the HCAL calibration in Summer 2017. In particular, results on the aging of the hadron endcap (HE) detector measured using the laser calibration system and the calibration of the hadron forward (HF) detector using electrons from Z boson decays are discussed.

  19. Net analyte signal calculation for multivariate calibration

    NARCIS (Netherlands)

    Ferre, J.; Faber, N.M.

    2003-01-01

    A unifying framework for calibration and prediction in multivariate calibration is shown based on the concept of the net analyte signal (NAS). From this perspective, the calibration step can be regarded as the calculation of a net sensitivity vector, whose length is the amount of net signal when the

  20. Code Calibration as a Decision Problem

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, M. H.

    1993-01-01

    Calibration of partial coefficients for a class of structures where no code exists is considered. The partial coefficients are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision theoretical basis is discussed. Results from code calibration for rubble mound breakwater designs are shown.

  1. Backscatter nephelometer to calibrate scanning lidar

    Science.gov (United States)

    Cyle E. Wold; Vladmir A. Kovalev; Wei Min Hao

    2008-01-01

    The general concept of an open-path backscatter nephelometer, its design, principles of calibration and operational use are discussed. The research-grade instrument, which operates at a wavelength of 355 nm, will be co-located with a scanning lidar at measurement sites near wildfires and used for lidar calibration. Such a near-end calibration has significant...

  2. 14 CFR 33.45 - Calibration tests.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Calibration tests. 33.45 Section 33.45... STANDARDS: AIRCRAFT ENGINES Block Tests; Reciprocating Aircraft Engines § 33.45 Calibration tests. (a) Each engine must be subjected to the calibration tests necessary to establish its power characteristics...

  3. 14 CFR 33.85 - Calibration tests.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Calibration tests. 33.85 Section 33.85... STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.85 Calibration tests. (a) Each engine must be subjected to those calibration tests necessary to establish its power characteristics and...

  4. Systems and methods of eye tracking calibration

    DEFF Research Database (Denmark)

    2014-01-01

    Methods and systems to facilitate eye tracking control calibration are provided. One or more objects are displayed on a display of a device, where the one or more objects are associated with a function unrelated to a calculation of one or more calibration parameters. The one or more calibration...

  5. Timing calibration and spectral cleaning of LOFAR time series data

    CERN Document Server

    Corstanje, A; Enriquez, J E; Falcke, H; Hörandel, J R; Krause, M; Nelles, A; Rachen, J P; Schellart, P; Scholten, O; ter Veen, S; Thoudam, S; Trinh, T N G

    2016-01-01

    We describe a method for spectral cleaning and timing calibration of short voltage time series data from individual radio interferometer receivers. It makes use of the phase differences in Fast Fourier Transform (FFT) spectra across antenna pairs. For strong, localized terrestrial sources these are stable over time, while being approximately uniform-random for a sum over many sources or for noise. Using only milliseconds-long datasets, the method finds the strongest interfering transmitters, a first-order solution for relative timing calibrations, and faulty data channels. No knowledge of gain response or quiescent noise levels of the receivers is required. With relatively small data volumes, this approach is suitable for use in an online system monitoring setup for interferometric arrays. We have applied the method to our cosmic-ray data collection, a collection of measurements of short pulses from extensive air showers, recorded by the LOFAR radio telescope. Per air shower, we have collected 2 ms of raw tim...

  6. Changing methodologies in TESOL

    CERN Document Server

    Spiro, Jane

    2013-01-01

    Covering core topics from vocabulary and grammar to teaching writing, speaking and listening, this textbook shows you how to link research to practice in TESOL methodology. It emphasises how current understandings have impacted on the language classroom worldwide and investigates the meaning of 'methods' and 'methodology' and the importance of these for the teacher, as well as the underlying assumptions and beliefs teachers bring to bear in their practice. By introducing you to language teaching approaches, you will explore the way these are influenced by developments in our understanding of l...

  7. Methodology for research I.

    Science.gov (United States)

    Garg, Rakesh

    2016-09-01

    The conduct of research requires a systematic approach involving diligent planning and its execution as planned. It comprises various essential predefined components such as aims, population, conduct/technique, outcome and statistical considerations. These need to be objective, reliable and in a repeatable format. Hence, the understanding of the basic aspects of methodology is essential for any researcher. This is a narrative review and focuses on various aspects of the methodology for conduct of a clinical research. The relevant keywords were used for literature search from various databases and from bibliographies of the articles.

  8. Making the Grade? Globalisation and the Training Market in Australia. Volume 1 [and] Volume 2.

    Science.gov (United States)

    Hall, Richard; Buchanan, John; Bretherton, Tanya; van Barneveld, Kristin; Pickersgill, Richard

    This two-volume document reports on a study of globalization and Australia's training market. Volume 1 begins by examining debate on globalization and industry training in Australia. Discussed next is the study methodology, which involved field studies of the metals and engineering industry in South West Sydney and the Hunter and the information…

  9. 42 CFR 493.1255 - Standard: Calibration and calibration verification procedures.

    Science.gov (United States)

    2010-10-01

    ..., if possible, traceable to a reference method or reference material of known value; and (ii) Including... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Calibration and calibration verification... for Nonwaived Testing Analytic Systems § 493.1255 Standard: Calibration and calibration...

  10. Spectral calibration for convex grating imaging spectrometer

    Science.gov (United States)

    Zhou, Jiankang; Chen, Xinhua; Ji, Yiqun; Chen, Yuheng; Shen, Weimin

    2013-12-01

    Spectral calibration of an imaging spectrometer plays an important role in acquiring accurate target spectra. There are, in essence, two types of spectral calibration: wavelength scanning and characteristic line sampling. In the wavelength scanning methods, only the calibrated pixel is used and its spectral response function (SRF) is constructed from that pixel itself, with the different wavelengths generated by a monochromator. In the characteristic line sampling methods, the SRF is constructed from the pixels adjacent to the calibrated one; the pixels are illuminated by a narrow spectral line whose center wavelength is exactly known. The calibration result from the scanning method is precise, but it takes much time and data to process, and the method cannot be used in field or space environments. The characteristic line sampling method is simple, but its calibration precision is not easy to confirm. A standard spectroscopic lamp is used to calibrate our manufactured convex grating imaging spectrometer, which has an Offner concentric structure and supplies a high-resolution, uniform spectral signal. A Gaussian fitting algorithm is used to determine the center position and full width at half maximum (FWHM) of each characteristic spectral line. The central wavelengths and FWHMs of the spectral pixels are calibrated by cubic polynomial fitting. By setting a fitting-error threshold and discarding the maximum-deviation point, an optimized calculation is achieved. Integrated calibration equipment for spectral calibration was developed to enhance calibration efficiency. The spectral calibration results from the lamp method are verified by the monochromator wavelength-scanning calibration technique. The results show that the spectral calibration uncertainties of the FWHM and center wavelength are both less than 0.08 nm, or 5.2% of the spectral FWHM.
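
    The Gaussian fitting and cubic polynomial steps described in this record can be sketched in a few lines of numpy. This is a hypothetical illustration, not the authors' code: a log-parabola fit (exact for a noiseless Gaussian profile) stands in for a full nonlinear Gaussian fit, and all function names are assumptions.

    ```python
    import numpy as np

    def gaussian_line_fit(pixels, counts):
        """Line centre and FWHM from a parabola fitted to log(counts)
        around the peak (exact for a noiseless Gaussian profile)."""
        i = int(np.argmax(counts))
        sl = slice(max(i - 2, 0), i + 3)
        c2, c1, _ = np.polyfit(pixels[sl], np.log(counts[sl]), 2)
        centre = -c1 / (2.0 * c2)                  # vertex of the parabola
        sigma = np.sqrt(-1.0 / (2.0 * c2))         # since c2 = -1/(2*sigma**2)
        fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
        return centre, fwhm

    def wavelength_solution(centres_px, lamp_wavelengths_nm):
        """Cubic pixel-to-wavelength map fitted through calibrated lamp-line
        centres, as in the record's cubic polynomial calibration step."""
        return np.poly1d(np.polyfit(centres_px, lamp_wavelengths_nm, 3))
    ```

    In practice one would fit each lamp line with `gaussian_line_fit`, then pass the recovered pixel centres and the known lamp wavelengths to `wavelength_solution`, discarding the worst-deviation line if the fit residual exceeds a threshold, as the record describes.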

  11. Gemini Planet Imager Observational Calibrations II: Detector Performance and Calibration

    CERN Document Server

    Ingraham, Patrick; Sadakuni, Naru; Ruffio, Jean-Baptiste; Maire, Jerome; Chilcote, Jeff; Larkin, James; Marchis, Franck; Galicher, Raphael; Weiss, Jason

    2014-01-01

    The Gemini Planet Imager is a newly commissioned facility instrument designed to measure the near-infrared spectra of young extrasolar planets in the solar neighborhood and obtain imaging polarimetry of circumstellar disks. GPI's science instrument is an integral field spectrograph that utilizes a HAWAII-2RG detector with a SIDECAR ASIC readout system. This paper describes the detector characterization and calibrations performed by the GPI Data Reduction Pipeline to compensate for effects including bad/hot/cold pixels, persistence, non-linearity, vibration induced microphonics and correlated read noise.

  12. The methodological cat

    Directory of Open Access Journals (Sweden)

    Marin Dinu

    2014-03-01

    Economics understands action as having the connotation of here and now, the proof being that it excessively uses, for explicative purposes, two limitations of sense: space is seen as the place with a private destination (through the cognitive dissonance of methodological individualism), and time is seen as the short term (through the dystopia of rational markets).

  13. Video: Modalities and Methodologies

    Science.gov (United States)

    Hadfield, Mark; Haw, Kaye

    2012-01-01

    In this article, we set out to explore what we describe as the use of video in various modalities. For us, modality is a synthesizing construct that draws together and differentiates between the notion of "video" both as a method and as a methodology. It encompasses the use of the term video as both product and process, and as a data collection…

  14. Methodological Advances in Dea

    NARCIS (Netherlands)

    L. Cherchye (Laurens); G.T. Post (Thierry)

    2001-01-01

    textabstractWe survey the methodological advances in DEA over the last 25 years and discuss the necessary conditions for a sound empirical application. We hope this survey will contribute to the further dissemination of DEA, the knowledge of its relative strengths and weaknesses, and the tools

  15. Precise Astronomical Flux Calibration and its Impact on Studying the Nature of Dark Energy

    CERN Document Server

    Stubbs, Christopher W

    2016-01-01

    Measurements of the luminosity of type Ia supernovae vs. redshift provided the original evidence for the accelerating expansion of the Universe and the existence of dark energy. Despite substantial improvements in survey methodology, systematic uncertainty in flux calibration dominates the error budget for this technique, exceeding both statistics and other systematic uncertainties. Consequently, any further collection of type Ia supernova data will fail to refine the constraints on the nature of dark energy unless we also improve the state of the art in astronomical flux calibration to the order of 1%. We describe how these systematic errors arise from calibration of instrumental sensitivity, atmospheric transmission, and Galactic extinction, and discuss ongoing efforts to meet the 1% precision challenge using white dwarf stars as celestial standards, exquisitely calibrated detectors as fundamental metrologic standards, and real-time atmospheric monitoring.

  16. Photometric Calibrations for the SIRTF Infrared Spectrograph

    CERN Document Server

    Morris, P W; Herter, T L; Armus, L; Houck, J; Sloan, G

    2002-01-01

    The SIRTF InfraRed Spectrograph (IRS) is faced with many of the same calibration challenges that were experienced in the ISO SWS calibration program, owing to similar wavelength coverage and overlapping spectral resolutions of the two instruments. Although the IRS is up to ~300 times more sensitive and without moving parts, imposing unique calibration challenges on their own, an overlap in photometric sensitivities of the high-resolution modules with the SWS grating sections allows lessons, resources, and certain techniques from the SWS calibration programs to be exploited. We explain where these apply in an overview of the IRS photometric calibration planning.

  17. Control volume based hydrocephalus research

    Science.gov (United States)

    Cohen, Benjamin; Voorhees, Abram; Wei, Timothy

    2008-11-01

    Hydrocephalus is a disease involving excess amounts of cerebrospinal fluid (CSF) in the brain. Recent research has shown correlations to the pulsatility of blood flow through the brain. However, the problem to date has presented as too complex for much more than statistical analysis and understanding. This talk will highlight progress on developing a fundamental control volume approach to studying hydrocephalus. The specific goals are to select physiologically meaningful control volume(s) and to develop conservation equations along with the experimental capabilities to accurately quantify the terms in those equations. To this end, an in vitro phantom is used as a simplified model of the human brain. The phantom consists of a rigid container filled with a compressible gel. The gel has a hollow spherical cavity representing a ventricle and a cylindrical passage representing the aqueducts. A computer-controlled piston pump supplies pulsatile volume fluctuations into and out of the flow phantom. MRI is used to measure fluid velocity and volume change as functions of time. Independent pressure and flow rate measurements are used to calibrate the MRI data. These data are used as a framework for future work with live patients.

  18. Muon Energy Calibration of the MINOS Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Miyagawa, Paul S. [Somerville College, Oxford (United Kingdom)

    2004-01-01

    MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~ 10%, which is equivalent to increasing the amount of data by 20%.

  19. Muon Energy Calibration of the MINOS Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Miyagawa, Paul S.

    2004-09-01

    MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~10%, which is equivalent to increasing the amount of data by 20%.
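
    The flat-fielding idea behind per-channel calibration constants from cosmic muons reduces to a simple normalization: each channel's constant scales its mean muon response to the detector-wide mean. A minimal simulated sketch (not MINOS code; all numbers are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated cosmic-muon responses: 64 readout channels with
    # channel-dependent gains, 400 muon hits per channel.
    true_gain = rng.uniform(0.8, 1.2, 64)
    calib_hits = true_gain[:, None] * rng.normal(100.0, 10.0, (64, 400))

    # One multiplicative constant per channel scales its mean muon
    # response to the detector-wide mean response.
    chan_mean = calib_hits.mean(axis=1)
    constants = chan_mean.mean() / chan_mean

    # Apply the constants to an independent dataset and check uniformity.
    test_hits = true_gain[:, None] * rng.normal(100.0, 10.0, (64, 400))
    raw_spread = test_hits.mean(axis=1).std() / test_hits.mean()
    corrected = test_hits * constants[:, None]
    cal_spread = corrected.mean(axis=1).std() / corrected.mean()
    ```

    With ~400 hits per channel and 10% hit-to-hit spread, the statistical error on each constant is of order 0.5%, consistent with the 1.8% average statistical error the record quotes for a real detector with fewer tracks per channel.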

  20. "Calibration-on-the-spot": How to calibrate an EMCCD camera from its images.

    Science.gov (United States)

    Mortensen, Kim I; Flyvbjerg, Henrik

    2016-07-06

    In order to count photons with a camera, the camera must be calibrated. Photon counting is necessary, e.g., to determine the precision of localization-based super-resolution microscopy. Here we present a protocol that calibrates an EMCCD camera from information contained in isolated, diffraction-limited spots in any image taken by the camera, thus making dedicated calibration procedures redundant by enabling calibration post festum, from images filed without calibration information.

  1. “Calibration-on-the-spot”: How to calibrate an EMCCD camera from its images

    DEFF Research Database (Denmark)

    Mortensen, Kim; Flyvbjerg, Henrik

    2016-01-01

    In order to count photons with a camera, the camera must be calibrated. Photon counting is necessary, e.g., to determine the precision of localization-based super-resolution microscopy. Here we present a protocol that calibrates an EMCCD camera from information contained in isolated, diffraction-limited spots in any image taken by the camera, thus making dedicated calibration procedures redundant by enabling calibration post festum, from images filed without calibration information.
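
    The record above refers to the authors' post festum protocol based on diffraction-limited spots. As a simpler point of comparison, the conventional photon-transfer way to extract a camera's gain from its own images is to regress per-pixel variance on per-pixel mean over a stack of frames; the sketch below simulates that (for a real EMCCD the slope additionally carries the EM-register excess-noise factor of roughly 2, which this simulation ignores):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated camera: counts = gain * Poisson(photon flux) + offset + read noise.
    gain, offset, read_sigma = 12.0, 100.0, 2.0
    flux = rng.uniform(5.0, 50.0, 500)              # expected photons per pixel
    frames = (gain * rng.poisson(flux, (200, 500))
              + offset
              + rng.normal(0.0, read_sigma, (200, 500)))

    # Photon-transfer relation: var ≈ gain * (mean - offset) + read_sigma**2,
    # so the slope of variance vs. mean estimates the gain.
    mean = frames.mean(axis=0)
    var = frames.var(axis=0, ddof=1)
    slope, intercept = np.polyfit(mean, var, 1)
    ```

    The advantage of the calibration-on-the-spot protocol over this photon-transfer approach is precisely that it needs no dedicated frame stack of a constant source: the statistics of spots already present in archived images suffice.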

  2. Calibration of the Cherenkov Telescope Array

    CERN Document Server

    Gaug, Markus; Berge, David; Reyes, Raquel de los; Doro, Michele; Foerster, Andreas; Maccarone, Maria Concetta; Parsons, Dan; van Eldik, Christopher

    2015-01-01

    The construction of the Cherenkov Telescope Array is expected to start soon. We present the baseline methods, and their extensions currently foreseen, to calibrate the observatory. These must meet stringent requirements on the allowed systematic uncertainties of the reconstructed gamma-ray energy and flux scales, as well as on the pointing resolution and the overall duty cycle of the observatory. Onsite calibration activities are designed to include a robust and efficient calibration of the telescope cameras, and various methods and instruments to calibrate the overall optical throughput of each telescope, leading both to inter-telescope calibration and to an absolute calibration of the entire observatory. One important aspect of the onsite calibration is a correct understanding of the atmosphere above the telescopes, which constitutes the calorimeter of this detection technique. It is planned to be constantly monitored with state-of-the-art instruments to obtain a full molecular and...

  3. Verification of L-band SAR calibration

    Science.gov (United States)

    Larson, R. W.; Jackson, P. L.; Kasischke, E.

    1985-01-01

    Absolute calibration of a digital L-band SAR system to an accuracy of better than 3 dB has been verified. This was accomplished with a calibration signal generator that produces the phase history of a point target. This signal relates calibration values to various SAR data sets. Values of radar cross-section (RCS) of reference reflectors were obtained using a derived calibration relationship for the L-band channel on the ERIM/CCRS X-C-L SAR system. Calibrated RCS values were compared to known RCS values of each reference reflector for verification and to obtain an error estimate. The calibration was based on the radar response to 21 calibrated reference reflectors.

  4. Radio Interferometric Calibration Using The SAGE Algorithm

    CERN Document Server

    Kazemi, S; Zaroubi, S; de Bruyn, A G; Koopmans, L V E; Noordam, J

    2010-01-01

    The aim of the new generation of radio synthesis arrays such as LOFAR and SKA is to achieve much higher sensitivity, resolution and frequency coverage than what is available now. To accomplish this goal, the accuracy of the calibration techniques used is of considerable importance. Moreover, since these telescopes produce huge amounts of data, speed of convergence of calibration is a major bottleneck. The errors in calibration are due to system noise (sky and instrumental) as well as the estimation errors introduced by the calibration technique itself, which we call "solver noise". We define solver noise as the "distance" between the optimal solution (the true value of the unknowns corrupted by the system noise) and the solution obtained by calibration. We present the Space Alternating Generalized Expectation Maximization (SAGE) calibration technique, which is a modification of the Expectation Maximization algorithm, and compare its performance with the traditional Least Squares calibration based on the level...

  5. Infantry Weapons Test Methodology Study. Volume 3. Light Machine Gun Test Methodology

    Science.gov (United States)

    1972-06-01

    additional target array. These are described in the following section. 5. RANGE CONCEPTS (a) Firing Positions - The defense test facility currently... coincides or nearly coincides with the long axis of the target... ground to low ground, and when firing into abruptly rising ground. This type of fire is...

  6. Comparison of a priori calibration models for respiratory inductance plethysmography during running.

    Science.gov (United States)

    Leutheuser, Heike; Heyde, Christian; Gollhofer, Albert; Eskofier, Bjoern M

    2014-01-01

    Respiratory inductive plethysmography (RIP) has been introduced as an alternative for measuring ventilation by means of body surface displacement (diameter changes in the rib cage and abdomen). Using a posteriori calibration, it has been shown that RIP may provide accurate measurements of ventilatory tidal volume under exercise conditions. Methods for a priori calibration would facilitate the application of RIP. Currently, to the best knowledge of the authors, none of the existing ambulant procedures for RIP calibration can be used a priori for valid subsequent measurements of ventilatory volume under exercise conditions. The purpose of this study is to develop and validate a priori calibration algorithms for ambulant application of RIP data recorded in running exercise. We calculated volume motion coefficients (VMCs) using seven different models on resting data and compared the root mean squared error (RMSE) of each model applied to running data. Least squares approximation (LSQ) without offset of a two-degree-of-freedom model achieved the lowest RMSE value. In this work, we showed that a priori calibration of RIP exercise data is possible using VMCs calculated from a 5 min resting phase during which RIP and flowmeter measurements were performed simultaneously. The results demonstrate that RIP has the potential for usage in ambulant applications.
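
    The two-degree-of-freedom RIP model computes volume as a weighted sum of the rib-cage and abdominal band signals. The sketch below mimics the LSQ-without-offset calibration on resting data and applies the coefficients to independent "running" data; all signals and coefficient values are simulated assumptions, not the paper's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def breathing(n, a=0.6, b=0.4, noise=0.02):
        """Simulated RIP band signals plus flowmeter tidal volume (litres)."""
        rib = rng.uniform(0.5, 1.5, n)              # rib-cage excursion (a.u.)
        abd = rng.uniform(0.5, 1.5, n)              # abdominal excursion (a.u.)
        vol = a * rib + b * abd + rng.normal(0.0, noise, n)
        return rib, abd, vol

    # Calibrate the volume motion coefficients on resting data:
    # least squares without an offset term (two-degree-of-freedom model).
    rib_r, abd_r, vol_r = breathing(300)
    vmc, *_ = np.linalg.lstsq(np.column_stack([rib_r, abd_r]), vol_r, rcond=None)

    # Apply the a priori coefficients to independent "running" data.
    rib_e, abd_e, vol_e = breathing(500)
    pred = np.column_stack([rib_e, abd_e]) @ vmc
    rmse = np.sqrt(np.mean((pred - vol_e) ** 2))
    ```

    The design point of the paper is exactly this workflow: the VMCs are fixed from the resting phase alone, so the exercise measurement needs no simultaneous flowmeter.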

  7. Calibration of TOB+ Thermometer's Cards

    CERN Document Server

    Banitt, Daniel

    2014-01-01

    Motivation - Under the new upgrade of the CMS detector, the working temperature of the trackers has been reduced to -27 degrees Celsius. Though the thermal sensors themselves (Murata and Fenwal thermistors) are effective at these temperatures, the max1542 PLC (programmable logic controller) cards, which interpret the resistance of the thermal sensors into DC counts usable by the DCS (detector control system), are not designed for these temperatures, at which the counts exceed their saturation, and therefore had to be replaced. In my project I was in charge of the installation and calibration of the new PLC cards in the TOB (tracker outer barrel) control system.

  8. AFFTC Standard Airspeed Calibration Procedures

    Science.gov (United States)

    1981-06-01

    Results of groundspeed course calibration are normally presented in the following plots: 1. ΔVpc vs Vic 2. ΔHpc vs Vic 3. ΔMpc vs Mic. The remainder of this record is garbled OCR; legible fragments concern average position corrections (ΔMpc/ΔVpc, ΔMpc/ΔHpc), Mach number obtained from instrument error via Chart 8.5 of reference 1 (AFTR 6273), and pacer position error calibration.

  9. Methodology for the use of proportional counters in pulsed fast neutron yield measurements

    OpenAIRE

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E.; Pavez, Cristian; Soto, Leopoldo

    2011-01-01

    This paper introduces in full detail a methodology for the measurement of neutron yield and the necessary efficiency calibration, to be applied to the intensity measurement of neutron bursts where individual neutrons are not resolved in time, for any given moderated neutron proportional counter array. The method allows efficiency calibration employing the detection of neutrons arising from an isotopic neutron source. A full statistical study of the procedure is described, taking into account cont...

  10. Expanding the Methodological Imagination

    Science.gov (United States)

    Fine, Michelle

    2007-01-01

    This article contains reflections provoked by the articles in this volume of "The Counseling Psychologist." As a relative outsider to counseling psychology, the author thoroughly enjoyed immersing herself in these contributions and then extracting a set of thoughts inspired by the writers.

  11. Calibration of Gamma-ray Burst Polarimeter POLAR

    CERN Document Server

    Xiao, H L; Bao, T W; Batsch, T; Bernasconi, T; Cernuda, I; Chai, J Y; Dong, Y W; Gauvin, N; Kole, M; Kong, M N; Kong, S W; Li, L; Liu, J T; Liu, X; Marcinkowski, R; Orsi, S; Pohl, M; Produit, N; Rapin, D; Rutczynska, A; Rybka, D; Shi, H L; Song, L M; Sun, J C; Szabelski, J; Wu, B B; Wang, R J; Wen, X; Xu, H H; Zhang, L; Zhang, L Y; Zhang, S N; Zhang, X F; Zhang, Y J; Zwolinska, A

    2015-01-01

    Gamma Ray Bursts (GRBs) are the strongest explosions in the universe, which might be associated with the creation of black holes. Magnetic field structure and burst dynamics may influence the polarization of the emitted gamma-rays. Precise polarization detection can be an ultimate tool to unveil the true GRB mechanism. POLAR is a space-borne Compton scattering detector for precise measurements of GRB polarization. It consists of a 40$\times$40 array of plastic scintillator bars read out by 25 multi-anode PMTs (MaPMTs). It is scheduled to be launched into space in 2016 on board the Chinese space laboratory TG2. We present a dedicated methodology for POLAR calibration and some calibration results based on the combined use of laboratory radioactive sources and polarized X-ray beams from the European Synchrotron Radiation Facility. They include calibration of the energy response, computation of the energy conversion factor vs. high voltage, as well as determination of the threshold values, crosstalk contributions...

  12. Pulse-based internal calibration of polarimetric SAR

    DEFF Research Database (Denmark)

    Dall, Jørgen; Skou, Niels; Christensen, Erik Lintz

    1994-01-01

    Internal calibration greatly diminishes the dependence on calibration target deployment compared to external calibration. Therefore the Electromagnetics Institute (EMI) at the Technical University of Denmark (TUD) has equipped its polarimetric SAR, EMISAR, with several calibration loops...

  13. A List of Bright Interferometric Calibrators measured at the ESO VLTI

    CERN Document Server

    Richichi, A; Davis, J

    2009-01-01

    In a previous publication (Richichi & Percheron 2005) we described a program of observations of candidate calibrator stars at the ESO Very Large Telescope Interferometer (VLTI), and presented the main results from a statistical point of view. In the present paper, we concentrate on establishing a new homogeneous group of bright interferometric calibrators, based entirely on publicly available K-band VLTI observations carried out with the VINCI instrument up to July 2004. For this, we have defined a number of selection criteria for the quality and volume of the observations, and we have accordingly selected a list of 17 primary and 47 secondary calibrators. We have developed an approach to a robust global fit for the angular diameters using the whole volume of quality-controlled data, largely independent of a priori assumptions. Our results have been compared with direct measurements, and indirect estimates based on spectrophotometric methods, and general agreement is found within the combined uncertaintie...

  14. Methodology, Meditation, and Mindfulness

    Directory of Open Access Journals (Sweden)

    Balveer Singh Sikh

    2016-04-01

    Full Text Available Understanding the nondualistic nature of mindfulness is a complex and challenging task, particularly when most clinical psychology draws from Western methodologies and methods. In this article, we argue that the integration of philosophical hermeneutics with Eastern philosophy and practices may provide a methodology and methods to research mindfulness practice. Mindfulness hermeneutics brings together the nondualistically aligned Western philosophies of Heidegger and Gadamer and selected Eastern philosophies and practices in an effort to bridge the gap between these differing worldviews. Based on the following: (1) fusion of horizons, (2) being in a hermeneutic circle, (3) understanding as intrinsic to awareness, and (4) the ongoing practice of meditation, a mindfulness hermeneutic approach was used to illuminate deeper understandings of mindfulness practice in ways that are congruent with its underpinning philosophies.

  15. METHODOLOGICAL BASES OF OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Lanskaya D. V.

    2014-09-01

    Full Text Available Outsourcing is investigated, within the institutional theory, as a means for a public corporation to gain stable and unique competitive advantages by attracting the carriers of the unique intellectual and social capital of specialized companies. Key researchers and events in the history of outsourcing are identified, and existing approaches to defining the concept of outsourcing, together with the advantages and risks of applying outsourcing technology, are considered. It is established that the differences between outsourcing, subcontracting and cooperation lie not in the nature of the functional relations but in the depth of the economic terms and phenomena considered. The methodology of outsourcing is treated as part of the methodology of cooperation among the enterprise innovative structures of the emerging knowledge-economy sector.

  16. Transparent Guideline Methodology Needed

    DEFF Research Database (Denmark)

    Lidal, Ingeborg; Norén, Camilla; Mäkelä, Marjukka

    2013-01-01

    As part of learning at the Nordic Workshop of Evidence-based Medicine, we have read with interest the practice guidelines for central venous access, published in your Journal in 2012.1 We appraised the quality of this guideline using the checklist developed by The Evidence-Based Medicine Working Group.2 Similar criteria for guideline quality have been suggested elsewhere.3 Our conclusion was that this much needed guideline is currently unclear about several aspects of the methodology used in developing the recommendations. This means potential users cannot be certain that the recommendations are based on best currently available evidence. Our concerns are in two main categories: the rigor of development, including methodology of searching, evaluating, and combining the evidence; and editorial independence, including funding and possible conflicts of interest.

  17. Soft Systems Methodology

    Science.gov (United States)

    Checkland, Peter; Poulter, John

    Soft systems methodology (SSM) is an approach for tackling problematical, messy situations of all kinds. It is an action-oriented process of inquiry into problematic situations in which users learn their way from finding out about the situation to taking action to improve it. The learning emerges via an organised process in which the situation is explored using a set of models of purposeful action (each built to encapsulate a single worldview) as intellectual devices, or tools, to inform and structure discussion about a situation and how it might be improved. This paper, written by the original developer Peter Checkland and practitioner John Poulter, gives a clear and concise account of the approach that covers SSM's specific techniques, the learning cycle process of the methodology and the craft skills which practitioners develop. This concise but theoretically robust account nevertheless includes the fundamental concepts and techniques, with its core tenets described through a wide range of settings.

  18. Tobacco documents research methodology.

    Science.gov (United States)

    Anderson, Stacey J; McCandless, Phyra M; Klausner, Kim; Taketa, Rachel; Yerger, Valerie B

    2011-05-01

    Tobacco documents research has developed into a thriving academic enterprise since its inception in 1995. The technology supporting tobacco documents archiving, searching and retrieval has improved greatly since that time, and consequently tobacco documents researchers have considerably more access to resources than was the case when researchers had to travel to physical archives and/or electronically search poorly and incompletely indexed documents. The authors of the papers presented in this supplement all followed the same basic research methodology. Rather than leave the reader of the supplement to read the same discussion of methods in each individual paper, presented here is an overview of the methods all authors followed. In the individual articles that follow in this supplement, the authors present the additional methodological information specific to their topics. This brief discussion also highlights technological capabilities in the Legacy Tobacco Documents Library and updates methods for organising internal tobacco documents data and findings.

  19. Land evaluation methodology

    OpenAIRE

    Lustig, Thomas

    1998-01-01

    This paper reviews non-computerised and computerised land evaluation methods and methodologies, and highlights the difficulty of incorporating biophysical and socioeconomic factors from different levels. The paper therefore theorises an alternative land evaluation approach, which is tested and elaborated in an agricultural community in the north of Chile. The approach rests on holistic thinking and attempts to evaluate the potential for improving assumed unsustainable goat manage...

  20. Pipeline ADC Design Methodology

    OpenAIRE

    Zhao, Hui

    2012-01-01

    Demand for high-performance analog-to-digital converter (ADC) integrated circuits (ICs) with an optimal combination of resolution, sampling rate and power consumption has become dominant due to emerging applications in wireless communications, broadband transceivers, digital intermediate-frequency (IF) receivers and countless digital devices. This research is dedicated to developing a pipeline ADC design methodology with minimum power dissipation, while keeping relatively high speed an...

  1. Albert Einstein's Methodology

    OpenAIRE

    Weinstein, Galina

    2012-01-01

    This paper discusses Einstein's methodology. 1. Einstein characterized his work as a theory of principle and reasoned that beyond kinematics, the 1905 heuristic relativity principle could offer new connections between non-kinematical concepts. 2. Einstein's creativity and inventiveness and process of thinking; invention or discovery. 3. Einstein considered his best friend Michele Besso as a sounding board and his class-mate from the Polytechnic Marcel Grossman, as his active partner. Yet, Ein...

  3. Cross-Wire Calibration for Freehand 3D Ultrasonography: Measurement and Numerical Issues

    Directory of Open Access Journals (Sweden)

    J. Jan

    2005-06-01

    Full Text Available 3D freehand ultrasound is an imaging technique which is gradually finding clinical applications. A position sensor is attached to a conventional ultrasound probe, so that B-scans are acquired along with their relative locations. This allows the B-scans to be inserted into a 3D regular voxel array, which can then be visualized using arbitrary-plane slicing, and volume or surface rendering. A key requirement for correct reconstruction is the calibration: determining the position and orientation of the B-scans with respect to the position sensor's receiver. Following calibration, interpolation in the set of irregularly spaced B-scans is required to reconstruct a regular-voxel array. This text describes a freehand measurement of 2D ultrasonic data, an approach to the calibration problem and several numerical issues concerned with the calibration and reconstruction.
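    The calibration described above amounts to a fixed rigid transform from the B-scan image plane to the position sensor's receiver. The sketch below (all transforms, scale factors and poses are hypothetical, not taken from the paper) shows how a calibrated pixel is mapped into the 3D world/voxel frame:

    ```python
    import numpy as np

    def transform(rx, ry, rz, tx, ty, tz):
        """Homogeneous 4x4 transform from Euler angles (rad) and translation."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
             np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
             np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = [tx, ty, tz]
        return T

    # Calibration result: fixed transform from the B-scan image plane to the
    # position sensor's receiver, plus pixel-to-mm scales (values hypothetical).
    T_receiver_image = transform(0.0, 0.1, 0.0, 5.0, -2.0, 0.0)
    sx_mm, sy_mm = 0.2, 0.2          # mm per pixel in the B-scan

    def pixel_to_world(u, v, T_world_receiver):
        """Map B-scan pixel (u, v) into the 3D voxel/world frame."""
        p_image = np.array([sx_mm * u, sy_mm * v, 0.0, 1.0])  # scan plane z = 0
        return (T_world_receiver @ T_receiver_image @ p_image)[:3]

    # One tracked probe pose reported by the position sensor (hypothetical).
    T_world_receiver = transform(0.0, 0.0, 0.3, 100.0, 50.0, 20.0)
    print(pixel_to_world(128, 256, T_world_receiver))
    ```

    Each pixel of each tracked B-scan is pushed through this chain and binned into the regular voxel array before interpolation.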

  4. A universal calibration function for determination of soil moisture with cosmic-ray neutrons

    Directory of Open Access Journals (Sweden)

    T. E. Franz

    2012-09-01

    Full Text Available A cosmic-ray soil moisture probe is usually calibrated locally using soil samples collected within its support volume. But such calibration may be difficult or impractical, for example when soil contains stones, in presence of bedrock outcrops, in urban environments, or when the probe is used as a rover. Here we use the neutron transport code MCNPx with observed soil chemistries and pore water distribution to derive a universal calibration function to be used in such environments. Comparisons with independent soil moisture measurements at one cosmic-ray probe site and, separately, at thirty-five sites, show that the universal calibration function explains more than 75% of the total variation within each dataset, permitting accurate isolation of the soil moisture signal from the measured neutron signal.
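    For context, a widely used shape for the neutron-to-soil-moisture calibration function is that of Desilets et al. (2010); the universal function derived in this paper refines that approach and is not reproduced here. A minimal sketch of the standard local calibration, under that assumption:

    ```python
    # Standard cosmic-ray neutron calibration shape (Desilets et al., 2010).
    A0, A1, A2 = 0.0808, 0.372, 0.115

    def soil_moisture(N, N0):
        """Gravimetric water content from neutron count rate N, given the
        site-specific count rate over dry soil, N0."""
        return A0 / (N / N0 - A1) - A2

    def calibrate_N0(N, theta):
        """Invert the function for N0 from one neutron/soil-moisture pair,
        e.g. from soil samples collected within the probe's support volume."""
        return N / (A1 + A0 / (theta + A2))

    N0 = calibrate_N0(N=2000.0, theta=0.20)   # one local calibration sample
    print(soil_moisture(1800.0, N0))          # moisture at a later count rate
    ```

    The point of the paper's universal function is precisely to avoid this local sampling step where it is impractical.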

  5. Input calibration for negative originals

    Science.gov (United States)

    Tuijn, Chris

    1995-04-01

    One of the major challenges in the prepress environment consists of controlling the electronic color reproduction process such that a perfect match of any original can be realized. Whether this goal can be reached depends on many factors, such as the dynamic range of the input device (scanner, camera), the color gamut of the output device (dye-sublimation printer, ink-jet printer, offset), the color management software, etc. The characterization of the color behavior of the peripheral devices is therefore very important. Photographs and positive transparencies reflect the original scene fairly well; for negative originals, however, there is no obvious link to either the original scene or a particular print of the negative under consideration. In this paper, we establish a method to scan negatives and to convert the scanned data to a calibrated RGB space, which is known colorimetrically. The method is based on reconstructing the original exposure conditions (i.e., the original scene) that generated the negative. Since the characteristics of negative film are quite diverse, a separate calibration is required for each combination of scanner and film type.

  6. Calibration of atmospheric hydrogen measurements

    Directory of Open Access Journals (Sweden)

    A. Jordan

    2011-03-01

    Full Text Available Interest in atmospheric hydrogen (H2) has been growing in recent years with the prospect of H2 being a potential alternative to fossil fuels as an energy carrier. This has intensified research toward a quantitative understanding of the atmospheric hydrogen cycle and its total budget, including the expansion of the global atmospheric measurement network. However, inconsistencies in published observational data constitute a major limitation in exploring such data sets. The discrepancies can be mainly attributed to difficulties in the calibration of the measurements. In this study various factors that may interfere with accurate quantification of atmospheric H2 were investigated, including drifts of standard gases in high-pressure cylinders. As an experimental basis, a procedure to generate precise mixtures of H2 within the atmospheric concentration range was established. Application of this method has enabled a thorough linearity characterization of the commonly used GC-HgO reduction detector. We discovered that the detector response was sensitive to the composition of the matrix gas. Addressing these systematic errors, a new calibration scale has been generated, defined by thirteen standards with dry-air mole fractions ranging from 139–1226 nmol mol−1. This new scale has been accepted as the official World Meteorological Organisation (WMO) Global Atmospheric Watch (GAW) H2 mole fraction scale.

  7. Crop physiology calibration in CLM

    Directory of Open Access Journals (Sweden)

    I. Bilionis

    2014-10-01

    Full Text Available Farming occupies an increasing share of terrestrial land as population grows and agriculture is increasingly used for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs governing plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of gross primary productivity and net ecosystem exchange from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper we calibrate these parameters for one crop type, soybean, in order to provide a faithful projection in terms of both plant development and net carbon exchange. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC).
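    A sequential Monte Carlo calibration of this kind can be sketched in a few lines. The toy growth model, observation noise and prior below are illustrative stand-ins, not the CLM-Crop model or the paper's adaptive scheme:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a crop model: logistic biomass growth with rate r.
    def grow(r, days=60):
        b, out = 0.1, []
        for _ in range(days):
            b += r * b * (1.0 - b)
            out.append(b)
        return np.array(out)

    r_true = 0.25
    obs = grow(r_true) + rng.normal(0.0, 0.02, 60)   # synthetic observations

    # SMC over the parameter: reweight particles as each batch of
    # observations arrives, then resample.
    particles = rng.uniform(0.05, 0.6, 5000)          # prior on r
    sims = np.array([grow(r) for r in particles])
    weights = np.full(particles.size, 1.0 / particles.size)

    for t0 in range(0, 60, 10):                       # assimilate 10 days at a time
        err = sims[:, t0:t0 + 10] - obs[t0:t0 + 10]
        loglik = -0.5 * np.sum((err / 0.02) ** 2, axis=1)
        weights *= np.exp(loglik - loglik.max())
        weights /= weights.sum()
        idx = rng.choice(particles.size, particles.size, p=weights)  # resample
        particles, sims = particles[idx], sims[idx]
        weights = np.full(particles.size, 1.0 / particles.size)

    print(particles.mean())   # posterior mean sits near r_true
    ```

    The real calibration replaces the toy model with CLM-Crop runs and the Gaussian likelihood with one built on GPP and NEE observations.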

  8. Respiratory inductance plethysmography in healthy infants: a comparison of three calibration methods.

    Science.gov (United States)

    Poole, K A; Thompson, J R; Hallinan, H M; Beardsmore, C S

    2000-12-01

    Respiratory inductance plethysmography (RIP) measures respiration from body surface movements. Various techniques have been proposed for calibration in order that RIP may be used quantitatively. These include calculation of the proportionality constant of ribcage to abdominal volume change (K). The aims of this study were to 1) establish whether a fixed value of K could be used for calibration, and 2) compare this technique with multiple linear regression (MLR) and qualitative diagnostic calibration (QDC) in normal healthy infants. Recordings of pneumotachograph (PNT) flow and RIP were made during quiet (QS) and active sleep (AS) in 12 infants. The first 5 min in a sleep state were used to calculate calibration factors, which were applied to subsequent validation data. The absolute percentage error between RIP and PNT tidal volumes was calculated. The percentage error was similar over a wide range of K during QS. However, K became more critical when breathing was out of phase. A standard for K of 0.5 was chosen. There was good agreement between calibration methods during QS and AS. In the first minute following calibration during QS, the mean absolute errors were 3.5, 4.1 and 5.3% for MLR, QDC and fixed K respectively. The equivalent errors in AS were 11.5, 13.1 and 13.7% respectively. The simple fixed ratio method can be used to measure tidal volume with similar accuracy to multiple linear regression and qualitative diagnostic calibration in healthy unsedated sleeping infants, although it remains to be validated in other groups of infants, such as those with respiratory disease.
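    The fixed-K technique combines ribcage (RC) and abdominal (AB) excursions as a weighted sum, K*RC + AB, scaled to agree with the pneumotachograph during the calibration epoch. A minimal sketch with synthetic signals (all values and the least-squares gain step are illustrative assumptions, not the paper's exact procedure):

    ```python
    import numpy as np

    def rip_volume(rc, ab, K=0.5, gain=1.0):
        """Fixed-K RIP tidal volume: weighted sum of RC and AB excursions."""
        return gain * (K * rc + ab)

    def calibrate_gain(rc, ab, v_pnt, K=0.5):
        """Scale K*RC + AB to pneumotachograph volume by least squares."""
        x = K * rc + ab
        return float(np.dot(x, v_pnt) / np.dot(x, x))

    def abs_pct_error(v_rip, v_pnt):
        return 100.0 * abs(v_rip - v_pnt) / v_pnt

    # Synthetic 5-min calibration epoch (hypothetical units).
    rng = np.random.default_rng(1)
    rc = rng.uniform(0.5, 1.5, 50)          # ribcage tidal excursions
    ab = rng.uniform(1.0, 2.0, 50)          # abdominal tidal excursions
    v_pnt = 1.1 * (0.5 * rc + ab) + rng.normal(0, 0.02, 50)

    g = calibrate_gain(rc, ab, v_pnt)
    print(abs_pct_error(rip_volume(rc[0], ab[0], gain=g), v_pnt[0]))
    ```

    Validation then consists of applying the fixed gain to subsequent breaths and reporting the absolute percentage error against the pneumotachograph, as in the study.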

  9. Mass flow-rate control unit to calibrate hot-wire sensors

    Energy Technology Data Exchange (ETDEWEB)

    Durst, F.; Uensal, B. [FMP Technology GmbH, Erlangen (Germany); Haddad, K. [FMP Technology GmbH, Erlangen (Germany); Friedrich-Alexander-Universitaet Erlangen-Nuernberg, LSTM-Erlangen, Institute of Fluid Mechanics, Erlangen (Germany); Al-Salaymeh, A.; Eid, Shadi [University of Jordan, Mechanical Engineering Department, Faculty of Engineering and Technology, Amman (Jordan)

    2008-02-15

    Hot-wire anemometry is a measuring technique that is widely employed in fluid mechanics research to study the velocity fields of gas flows. It is general practice to calibrate hot-wire sensors against velocity. Calibrations are usually carried out under atmospheric pressure conditions and these suggest that the wire is sensitive to the instantaneous local volume flow rate. It is pointed out, however, that hot wires are sensitive to the instantaneous local mass flow rate and, of course, also to the gas heat conductivity. To calibrate hot wires with respect to mass flow rates per unit area, i.e., with respect to (ρU), requires special calibration test rigs. Such a device is described and its application is summarized within the (ρU) range 0.1–25 kg/m² s. Calibrations are shown to yield the same hot-wire response curves for density variations in the range 1–7 kg/m³. The application of the calibrated wires to measure pulsating mass flows is demonstrated, and suggestions are made for carrying out extensive calibrations to yield the (ρU) wire response as a basis for advanced fluid mechanics research on (ρU) data in density-varying flows. (orig.)
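    Calibration pairs of this kind are commonly fitted with the classical King's law response, E^2 = A + B*(ρU)^n; the paper does not prescribe this particular model, so the sketch below is illustrative, fitting that form to synthetic data over the (ρU) range quoted above:

    ```python
    import numpy as np

    # Synthetic "measured" bridge voltages from a King's-law response.
    A_true, B_true, n_true = 1.3, 0.9, 0.45
    rho_u = np.linspace(0.1, 25.0, 40)              # kg/(m^2 s)
    E = np.sqrt(A_true + B_true * rho_u ** n_true)

    def fit_kings_law(rho_u, E, n_grid=np.linspace(0.3, 0.7, 401)):
        """Grid-search the exponent n; solve A, B by linear least squares
        at each candidate n and keep the best fit."""
        best = None
        for n in n_grid:
            X = np.column_stack([np.ones_like(rho_u), rho_u ** n])
            coef, res, *_ = np.linalg.lstsq(X, E ** 2, rcond=None)
            sse = float(res[0]) if res.size else 0.0
            if best is None or sse < best[0]:
                best = (sse, coef[0], coef[1], n)
        return best[1:]          # A, B, n

    A, B, n = fit_kings_law(rho_u, E)
    print(A, B, n)
    ```

    With a calibrated response curve, an inverse lookup of E then yields (ρU) directly, including in pulsating or density-varying flows.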

  10. Third COS FUV Lifetime Calibration Program: Flatfield and Flux Calibrations

    Science.gov (United States)

    Debes, J. H.; Becker, G.; Roman-Duval, J.; Ely, J.; Massa, D.; Oliveira, C.; Plesha, R.; Proffitt, C.; Taylor, J.

    2016-10-01

    As part of the calibration of the third lifetime position (LP3) of the Cosmic Origins Spectrograph (COS) Far-Ultraviolet (FUV) detector, observations of WD 0308-565 were obtained with the G130M, G160M, and G140L gratings, and observations of GD 71 were obtained in the G160M grating through the Point Source Aperture (PSA), to derive low-order flatfields (L-flats) and sensitivities at LP3. Observations were executed for all CENWAVEs and all FP-POS with the exception of G130M/1055 and G130M/1096, which remained at LP2. The derivation of the L-flats and sensitivities at LP3 differed from their LP1 and LP2 counterparts in a few key ways, which we describe in this report. First, we quantified a cut-off in the spatial frequency that we assigned to the L-flats. Second, we derived a new method for simultaneously fitting the L-flats, pixel-to-pixel flats (P-flats), and sensitivities, which we compared to our previous method of separately fitting L-flats and sensitivities. These new methods produce comparable results, but provide us with an external test on the robustness of each approach individually. The results of our work show that with the new profile extraction routines, sensitivities, and L-flats, the relative and absolute flux calibration accuracies (1% and 2%, respectively) at LP3 are slightly improved relative to previous locations on the COS FUV detector.

  11. Directional reflectance characterization facility and measurement methodology

    Science.gov (United States)

    McGuckin, B. T.; Haner, D. A.; Menzies, R. T.; Esproles, C.; Brothers, A. M.

    1996-08-01

    A precision reflectance characterization facility, constructed specifically for the measurement of the bidirectional reflectance properties of Spectralon panels planned for use as in-flight calibrators on the NASA Multiangle Imaging Spectroradiometer (MISR) instrument, is described. The incident linearly polarized radiation is provided at three laser wavelengths: 442, 632.8, and 859.9 nm. Each beam is collimated when incident on the Spectralon. The illuminated area of the panel is viewed with a silicon photodetector that revolves around the panel (360°) on a 30-cm boom extending from a common rotational axis. The reflected-radiance detector signal is ratioed with the signal from a reference detector to minimize the effect of amplitude instabilities in the laser sources. This and other measures adopted to reduce noise have resulted in a bidirectional reflection function (BRF) calibration facility with a measurement precision, with regard to a BRF measurement, of 0.002 at the 1σ confidence level. The Spectralon test-piece panel is held in a computer-controlled three-axis rotational assembly capable of a full 360° rotation in the horizontal plane and 90° in the vertical. The angular positioning system has repeatability and resolution of 0.001°. Design details and an outline of the measurement methodology are presented.
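    The ratioing step that suppresses laser amplitude instability can be illustrated directly: a multiplicative drift common to both detectors cancels when the radiance signal is divided by the reference-detector signal. All numbers below are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Slow laser amplitude drift multiplies both detector signals equally.
    drift = 1.0 + 0.05 * np.sin(np.linspace(0, 3, 200))
    true_brf = 0.985                                   # panel reflectance factor
    ref = drift * (1.0 + rng.normal(0, 1e-3, 200))     # reference detector
    sig = true_brf * drift * (1.0 + rng.normal(0, 1e-3, 200))

    raw = sig.mean()                    # drift-contaminated estimate
    ratioed = (sig / ref).mean()        # drift cancels in the ratio
    print(abs(raw - true_brf), abs(ratioed - true_brf))
    ```

    The residual error of the ratioed estimate is set by detector noise alone, which is what makes the quoted 0.002 precision achievable despite source instability.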

  12. Chemometric modelling based on 2D-fluorescence spectra without a calibration measurement.

    Science.gov (United States)

    Solle, D; Geissler, D; Stärk, E; Scheper, T; Hitzmann, B

    2003-01-22

    2D fluorescence spectra provide information on intracellular compounds. Fluorophores like tryptophan, tyrosine and phenylalanine, as well as NADH and flavins, make the corresponding measurement systems very important for bioprocess supervision and control. The evaluation is usually based on chemometric modelling, using off-line measurements of the desired process variables for the calibration procedure. Due to the data-driven approach, many off-line measurements are required. Here a methodology is presented that enables the calibration of chemometric models without any further measurement. The necessary information for the calibration procedure is provided by a priori knowledge about the process, i.e. a mathematical model whose parameters are estimated during the calibration procedure, as well as the fact that the substrate should be consumed at the end of the process run. The new methodology for chemometric calibration is applied to a batch cultivation of aerobically grown S. cerevisiae on the glucose Schatzmann medium. As will be presented, the chemometric models determined by this method can be used for prediction during new process runs. The MATLAB routine is freely available on request from the authors.

  13. Calibration Monitor for Dark Energy Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, M. E.

    2009-11-23

    The goal of this program was to design, build, test, and characterize a flight-qualified calibration source and monitor for a Dark Energy related experiment: ACCESS - 'Absolute Color Calibration Experiment for Standard Stars'. This calibration source, the On-board Calibration Monitor (OCM), is a key component of our ACCESS spectrophotometric calibration program. The OCM will be flown as part of the ACCESS sub-orbital rocket payload in addition to monitoring instrument sensitivity on the ground. The objective of the OCM is to minimize systematic errors associated with any potential changes in the ACCESS instrument sensitivity. Importantly, the OCM will be used to monitor instrument sensitivity immediately after astronomical observations while the instrument payload is parachuting to the ground. Through monitoring, we can detect, track, characterize, and thus correct for any changes in instrument sensitivity over the proposed 5-year duration of the assembled and calibrated instrument.

  14. Herschel SPIRE FTS Relative Spectral Response Calibration

    CERN Document Server

    Fulton, Trevor; Baluteau, Jean-Paul; Benielli, Dominique; Imhof, Peter; Lim, Tanya; Lu, Nanyao; Marchili, Nicola; Naylor, David; Polehampton, Edward; Swinyard, Bruce; Valtchanov, Ivan

    2014-01-01

    Herschel/SPIRE Fourier transform spectrometer (FTS) observations contain emission from both the Herschel Telescope and the SPIRE Instrument itself, both of which are typically orders of magnitude greater than the emission from the astronomical source, and must be removed in order to recover the source spectrum. The effects of the Herschel Telescope and the SPIRE Instrument are removed during data reduction using relative spectral response calibration curves and emission models. We present the evolution of the methods used to derive the relative spectral response calibration curves for the SPIRE FTS. The relationship between the calibration curves and the ultimate sensitivity of calibrated SPIRE FTS data is discussed and the results from the derivation methods are compared. These comparisons show that the latest derivation methods result in calibration curves that impart a factor of between 2 and 100 less noise to the overall error budget, which results in calibrated spectra for individual observations whose n...

  15. New method to calibrate a spinner anemometer

    DEFF Research Database (Denmark)

    Demurtas, Giorgio; Friis Pedersen, Troels

    2014-01-01

    The spinner anemometer is a wind sensor based on three one-dimensional sonic sensor probes, mounted on the wind turbine spinner, and an algorithm to convert the wind speeds measured by the three sonic sensors to horizontal wind speed, yaw misalignment and flow inclination angle. The wind turbine has to be stopped during calibration in order for the rotor induction not to influence the calibration, so that the spinner anemometer measures "free" wind values in the stopped condition. The calibration of the flow angle measurements is made by calibrating the ratio of the two algorithm constants, k2/k1 = kα. The calibration of kα is made by relating the spinner anemometer yaw misalignment measurements to the yaw position when yawing the wind turbine in and out of the wind several times. The calibration of the constant k1 is made by comparing the spinner anemometer wind speed measurement with a free met mast or lidar...

  16. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    Full Text Available To take advantage of the high efficiency and stability of DSPs in data processing, together with the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The transplantation of EMCV to DSP is completed, and the calibration algorithm is migrated and optimized based on the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on visual localization based on DSP embedded systems.
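    The camera model with lens distortion that underlies OpenCV-style calibration can be sketched as a pinhole projection followed by a radial distortion factor. The intrinsics and distortion coefficients below are hypothetical, and this numpy sketch only illustrates the forward model that calibration inverts:

    ```python
    import numpy as np

    # Pinhole camera with radial lens distortion (hypothetical intrinsics).
    fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0   # focal lengths, principal point
    k1, k2 = -0.20, 0.05                          # radial distortion coefficients

    def project(points_cam):
        """Project 3D camera-frame points to distorted pixel coordinates."""
        x = points_cam[:, 0] / points_cam[:, 2]   # normalized coordinates
        y = points_cam[:, 1] / points_cam[:, 2]
        r2 = x * x + y * y
        d = 1.0 + k1 * r2 + k2 * r2 * r2          # radial distortion factor
        return np.column_stack([fx * x * d + cx, fy * y * d + cy])

    pts = np.array([[0.1, -0.05, 1.0], [0.0, 0.0, 2.0]])
    print(project(pts))
    ```

    Calibration (e.g. cv2.calibrateCamera on chessboard corner detections) estimates fx, fy, cx, cy and the distortion coefficients by minimizing the reprojection error of this model.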

  18. GIFTS SM EDU Radiometric and Spectral Calibrations

    Science.gov (United States)

    Tian, J.; Reisse, R. A.; Johnson, D. G.; Gazarik, J. J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three categories: the pre-calibration stage, the calibration stage, and finally, the post-calibration stage. Detailed derivations for each stage are presented in this paper.
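    FTS radiometric calibration is commonly performed as a two-point (hot/cold blackbody) calibration in the style of Revercomb et al.; the sketch below shows that principle with synthetic spectra and a hidden gain/offset, and is not the GIFTS pipeline itself:

    ```python
    import numpy as np

    H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

    def planck(wn_cm, T):
        """Planck radiance at wavenumber wn_cm (cm^-1), W/(m^2 sr cm^-1)."""
        nu = wn_cm * 100.0                         # convert to m^-1
        B = 2 * H * C**2 * nu**3 / (np.exp(H * C * nu / (KB * T)) - 1.0)
        return B * 100.0                           # per cm^-1

    # Two-point calibration: hot and cold blackbody views bracket the scene.
    wn = np.linspace(600.0, 1100.0, 6)             # LWIR wavenumber grid
    gain, offset = 0.8, 5e-4                       # hidden instrument response
    T_hot, T_cold, T_scene = 320.0, 260.0, 290.0
    S = {T: gain * planck(wn, T) + offset for T in (T_hot, T_cold, T_scene)}

    # Linear interpolation between the two known blackbody radiances
    # removes both gain and offset from the scene spectrum.
    L = ((S[T_scene] - S[T_cold]) / (S[T_hot] - S[T_cold])
         * (planck(wn, T_hot) - planck(wn, T_cold)) + planck(wn, T_cold))
    print(np.max(np.abs(L - planck(wn, T_scene))))   # ~0
    ```

    In a real FTS the spectra are complex-valued and the same two-point algebra is applied to the complex spectra before taking the real part, which also removes instrument phase.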

  19. Biogeographic calibrations for the molecular clock.

    Science.gov (United States)

    Ho, Simon Y W; Tong, K Jun; Foster, Charles S P; Ritchie, Andrew M; Lo, Nathan; Crisp, Michael D

    2015-09-01

    Molecular estimates of evolutionary timescales have an important role in a range of biological studies. Such estimates can be made using methods based on molecular clocks, including models that are able to account for rate variation across lineages. All clock models share a dependence on calibrations, which enable estimates to be given in absolute time units. There are many available methods for incorporating fossil calibrations, but geological and climatic data can also provide useful calibrations for molecular clocks. However, a number of strong assumptions need to be made when using these biogeographic calibrations, leading to wide variation in their reliability and precision. In this review, we describe the nature of biogeographic calibrations and the assumptions that they involve. We present an overview of the different geological and climatic events that can provide informative calibrations, and explain how such temporal information can be incorporated into dating analyses.

  20. Calibration of surface temperature on rocky exoplanets

    Science.gov (United States)

    Kashyap Jagadeesh, Madhu

    2016-07-01

    The study of exoplanets and the search for life elsewhere have become a fascinating area in recent years, with much effort channelled in this direction in the form of space exploration and the search for habitable planets. One parametric method to analyse the data available from missions such as Kepler, CoRoT, etc., is the Earth Similarity Index (ESI), defined as a number between zero (no similarity) and one (identical to Earth), introduced to assess the Earth-likeness of exoplanets. The multi-parameter ESI scale depends on the radius, density, escape velocity and surface temperature of exoplanets. Our objective is to establish, using graphical analysis, how exactly the individual parameters entering the interior ESI and surface ESI contribute to the global ESI. Present surface temperature estimates apply a correction factor of 30 K, based on the Earth's greenhouse effect. The main objective of this work, in calculating the global ESI using the HabCat data, is to introduce a new method to better estimate the surface temperature of exoplanets from a theoretical formula with fixed albedo factor and emissivity (Earth values). From the graphical analysis of the known data for the Solar System objects, we established a calibration relation between surface and equilibrium temperatures for the Solar System objects. Using extrapolation, we found that a power function is the closest description of the trend in surface temperature. We conclude that the correction term becomes a very effective way to calculate an accurate value of the surface temperature for further analysis with our graphical methodology.
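    The ESI referred to above is conventionally computed as a weighted product of similarity terms (Schulze-Makuch et al. 2011). The weight exponents below are the published values quoted from memory and worth verifying; the Mars-like inputs are illustrative:

    ```python
    # Earth Similarity Index: a weighted product of per-parameter similarity
    # terms, each in Earth units except temperature (K).
    WEIGHTS = {"radius": 0.57, "density": 1.07, "escape_v": 0.70, "temp_K": 5.58}
    EARTH = {"radius": 1.0, "density": 1.0, "escape_v": 1.0, "temp_K": 288.0}

    def esi(planet):
        n = len(WEIGHTS)
        value = 1.0
        for key, w in WEIGHTS.items():
            x, x0 = planet[key], EARTH[key]
            value *= (1.0 - abs((x - x0) / (x + x0))) ** (w / n)
        return value

    print(esi(EARTH))                                    # identical to Earth -> 1.0
    print(esi({"radius": 0.53, "density": 0.71,          # Mars-like values
               "escape_v": 0.45, "temp_K": 227.0}))
    ```

    Because the temperature term carries by far the largest exponent, improving the surface-temperature estimate, as this work proposes, has the strongest effect on the global ESI.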

  1. Transient Inverse Calibration of Hanford Site-Wide Groundwater Model to Hanford Operational Impacts - 1943 to 1996

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Charles R.; Bergeron, Marcel P.; Wurstner, Signe K.; Thorne, Paul D.; Orr, Samuel; Mckinley, Mathew I.

    2001-05-31

    This report describes a new initiative to strengthen the technical defensibility of predictions made with the Hanford site-wide groundwater flow and transport model. The focus is on characterizing major uncertainties in the current model. PNNL will develop and implement a calibration approach and methodology that can be used to evaluate alternative conceptual models of the Hanford aquifer system. The calibration process will involve a three-dimensional transient inverse calibration of each numerical model to historical observations of hydraulic and water quality impacts to the unconfined aquifer system from Hanford operations since the mid-1940s.

  3. First Demonstration of ECHO: an External Calibrator for Hydrogen Observatories

    Science.gov (United States)

    Jacobs, Daniel C.; Burba, Jacob; Bowman, Judd D.; Neben, Abraham R.; Stinnett, Benjamin; Turner, Lauren; Johnson, Kali; Busch, Michael; Allison, Jay; Leatham, Marc; Serrano Rodriguez, Victoria; Denney, Mason; Nelson, David

    2017-03-01

    Multiple instruments are pursuing constraints on dark energy, observing reionization and opening a window on the dark ages through the detection and characterization of the 21 cm hydrogen line for redshifts ranging from ∼1 to 25. These instruments, including CHIME in the sub-meter and HERA in the meter bands, are wide-field arrays with multiple-degree beams, typically operating in transit mode. Accurate knowledge of their primary beams is critical for separation of bright foregrounds from the desired cosmological signals, but difficult to achieve through astronomical observations alone. Previous beam calibration work at low frequencies has focused on model verification and does not address the need of 21 cm experiments for routine beam mapping, to the horizon, of the as-built array. We describe the design and methodology of a drone-mounted calibrator, the External Calibrator for Hydrogen Observatories (ECHO), that aims to address this need. We report on a first set of trials to calibrate low-frequency dipoles at 137 MHz and compare ECHO measurements to an established beam-mapping system based on transmissions from the Orbcomm satellite constellation. We create beam maps of two dipoles at a 9° resolution and find sample noise ranging from 1% at the zenith to 100% in the far sidelobes. Assuming this sample noise represents the error in the measurement, the higher end of this range is not yet consistent with the desired requirement but is an improvement on Orbcomm. The overall performance of ECHO suggests that the desired precision and angular coverage are achievable in practice with modest improvements. We identify the main sources of systematic error and uncertainty in our measurements and describe the steps needed to overcome them.

  4. Calibration of acoustic sensors in ice using the reciprocity method

    Energy Technology Data Exchange (ETDEWEB)

    Meures, Thomas; Bissok, Martin; Laihem, Karim; Paul, Larissa; Wiebusch, Christopher; Zierke, Simon [III. Physikalisches Institut, RWTH Aachen (Germany); Semburg, Benjamin [Bergische Universitaet Wuppertal (Germany). Fachbereich C

    2010-07-01

    Within the IceCube experiment at the South Pole, an R and D program investigates new ways of detecting ultra-high-energy neutrinos. In particular, when aiming for detector volumes of the order of 100 km{sup 3}, acoustic or radio detectors are promising approaches. The acoustic detection method relies on the thermo-acoustic effect that occurs when highly energetic particles interact and deposit heat within a detection medium. This effect is investigated in the Aachen Acoustic Laboratory (AAL). The high-energy particle interaction is simulated by a powerful pulsed Nd:YAG laser shooting into a 3 m{sup 3} tank of clear ice (or water). Eighteen acoustic sensors, arranged on three rings at different depths, record the generated signals. These sensors serve as references for later measurements of other devices. The reciprocity method, used for the absolute calibration of these sensors, is independent of an absolutely calibrated reference. This method and its application to the calibration of the AAL sensors are presented, and first results are shown.
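
    The reciprocity principle behind such a calibration can be sketched as follows. This assumes the textbook three-transducer scheme with spherical spreading; the function names and the reciprocity parameter J = 2d/(ρf) are generic assumptions, not details of the AAL setup. The receive sensitivity of one sensor then follows from three pairwise transfer measurements alone, with no absolutely calibrated reference.

```python
# Three-transducer reciprocity calibration, minimal sketch (generic textbook
# form, not the AAL procedure). Z_ab is the measured transfer impedance:
# open-circuit receive voltage of transducer b divided by drive current of a.

import math

def reciprocity_sensitivity(Z_12, Z_13, Z_23, d, rho, f):
    """Receive sensitivity of transducer 3 (V/Pa) from three pairwise
    transfer measurements at separation d, medium density rho, frequency f."""
    J = 2.0 * d / (rho * f)          # spherical-wave reciprocity parameter
    return math.sqrt(J * Z_13 * Z_23 / Z_12)

# Illustrative numbers only (ice density ~916 kg/m^3):
M3 = reciprocity_sensitivity(Z_12=0.8, Z_13=1.1, Z_23=0.9,
                             d=0.5, rho=916.0, f=10000.0)
print(M3)
```

    The key design point is that only ratios of measured voltages and currents enter, so the absolute response of each individual transducer never needs to be known beforehand.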

  5. Can we calibrate simultaneously groundwater recharge and aquifer hydrodynamic parameters ?

    Science.gov (United States)

    Hassane Maina, Fadji; Ackerer, Philippe; Bildstein, Olivier

    2017-04-01

    By groundwater model calibration we mean fitting the measured piezometric heads by estimating the hydrodynamic parameters (storage term and hydraulic conductivity) and the recharge. It is traditionally recommended to avoid the simultaneous calibration of groundwater recharge and flow parameters because of the correlation between them: from a physical point of view, little recharge associated with low hydraulic conductivity can produce piezometric changes very similar to those produced by higher recharge and higher hydraulic conductivity. While this correlation holds under steady-state conditions, we assume that it is much weaker under transient conditions, because the recharge varies in time whereas the parameters do not. Moreover, under many climatic conditions the recharge is negligible during summer, owing to reduced precipitation and increased evaporation and transpiration by the vegetation cover. We analyse our hypothesis through global sensitivity analysis (GSA) in conjunction with the polynomial chaos expansion (PCE) methodology. We perform GSA by calculating the Sobol indices, which provide a variance-based 'measure' of the effects of the uncertain parameters (storage and hydraulic conductivity) and the recharge on the piezometric heads computed by the flow model. The choice of PCE has two benefits: (i) it provides the global sensitivity indices in a straightforward manner, and (ii) the PCE can serve as a surrogate model for the calibration of the parameters. The coefficients of the PCE are computed by probabilistic collocation. We perform the GSA on simplified real conditions drawn from an existing groundwater model dedicated to a subdomain of the Upper-Rhine aquifer (geometry, boundary conditions, climatic data). The GSA shows that the simultaneous calibration of recharge and flow parameters is possible if the calibration is performed over at least one year. It also provides valuable information on the sensitivity versus time, depending on
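
    Benefit (i), reading global sensitivity indices straight off the expansion, can be illustrated with a minimal sketch. For an orthonormal PCE the output variance is the sum of the squared non-constant coefficients, and the first-order Sobol index of input i collects only the terms whose multi-index is nonzero in position i alone. The coefficient values below are invented for illustration, not taken from the Upper-Rhine model.

```python
# First-order Sobol indices from an orthonormal polynomial chaos expansion.
# `coeffs` maps multi-index tuples (one entry per input: storage,
# conductivity, recharge) to PCE coefficients; the values are illustrative.

def sobol_first_order(coeffs):
    """coeffs: dict mapping multi-index tuples -> coefficient (orthonormal basis)."""
    # Total variance: every term except the constant (all-zero) multi-index
    total_var = sum(c * c for a, c in coeffs.items() if any(a))
    n_inputs = len(next(iter(coeffs)))
    indices = []
    for i in range(n_inputs):
        # Terms whose multi-index is nonzero only in position i
        vi = sum(c * c for a, c in coeffs.items()
                 if a[i] > 0 and all(a[j] == 0 for j in range(n_inputs) if j != i))
        indices.append(vi / total_var)
    return indices

coeffs = {(0, 0, 0): 5.0,   # mean head, does not enter the variance
          (1, 0, 0): 0.2,   # storage
          (0, 1, 0): 0.6,   # hydraulic conductivity
          (0, 0, 1): 1.0,   # recharge
          (0, 1, 1): 0.1}   # conductivity-recharge interaction
print(sobol_first_order(coeffs))
```

    The first-order indices do not sum to one here; the shortfall is the variance carried by the interaction term, which is exactly the information used to judge whether parameters can be calibrated simultaneously.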

  6. Metrology of radon-in-air volume activity at the Italian radon reference chamber

    Energy Technology Data Exchange (ETDEWEB)

    Sciocchetti, G.; Cotellessa, G.; Soldano, E.; Pagliari, M. [Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti, ENEA Centro Ricerche Casaccia Roma (Italy)

    2006-07-01

    The approach of the Italian National Institute of Ionising Radiations (I.N.M.R.I.-ENEA) to radon metrology has been based on a complete and integrated system which can be used to calibrate the main types of {sup 222}Rn-in-air measuring instruments with international traceability. The Italian radon reference chamber is a research and calibration facility developed at the Casaccia Research Center in Rome. The facility has an inner volume of one m{sup 3}; the wall is a cylindrical stainless steel vessel coupled with an automated climate apparatus operated under both steady and dynamic conditions. The control and data acquisition equipment is based on the Radotron system, developed to automate the multitasking management of different sets of radon monitors and climatic sensors. A novel approach for testing passive radon monitors with an alpha track detector exposure standard has been developed. It is based on the direct measurement of radon exposure with a set of passive integrating monitors built around the new ENEA piston radon exposure meter. This paper describes the methodological approach to radon metrology, the state of the art of the experimental apparatus and the standardization procedures. (authors)

  7. Radiocarbon calibration - past, present and future

    Energy Technology Data Exchange (ETDEWEB)

    Plicht, J. van der E-mail: plicht@phys.rug.nl

    2004-08-01

    Calibration of the Radiocarbon timescale is traditionally based on tree-rings dated by dendrochronology. At present, the tree-ring curve dates back to about 9900 BC. Beyond this limit, marine datasets extend the present calibration curve INTCAL98 to about 15 600 years ago. Since 1998, a wealth of AMS measurements became available, covering the complete {sup 14}C dating range. No calibration curve can presently be recommended for the older part of the dating range until discrepancies are resolved.
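
    The way such a curve is used in practice can be sketched with a toy example: a measured {sup 14}C age is converted into a calendar-age probability distribution by comparing it, point by point, with the calibration curve, combining the measurement error with the curve error. The curve values below are synthetic stand-ins, not INTCAL98 data.

```python
# Calibrating a single 14C determination against a calibration curve (sketch).
# At each calendar age t the likelihood combines measurement and curve errors:
# L(t) ~ exp(-(r - mu(t))^2 / (2 * (sigma^2 + sigma_curve(t)^2))).

import math

# (calendar age BP, curve 14C age BP, curve sigma) -- synthetic example values
curve = [(1000, 1050, 15), (1010, 1060, 15), (1020, 1075, 16),
         (1030, 1085, 16), (1040, 1100, 17)]

def calibrate(r_measured, sigma_measured):
    """Return a normalized probability for each calendar age in `curve`."""
    weights = []
    for t, mu, sc in curve:
        var = sigma_measured ** 2 + sc ** 2
        weights.append(math.exp(-(r_measured - mu) ** 2 / (2.0 * var)))
    total = sum(weights)
    return [(t, w / total) for (t, _, _), w in zip(curve, weights)]

for t, p in calibrate(1075, 20):
    print(t, round(p, 3))
```

    Because the real curve has wiggles and plateaus, the resulting distribution is often multi-modal, which is why calibrated dates are reported as probability ranges rather than single values.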

  8. Calibration Procedure for 3D Turning Dynamometer

    DEFF Research Database (Denmark)

    Axinte, Dragos Aurelian; Belluci, Walter

    1999-01-01

    The aim of the static calibration of the dynamometer is to obtain the matrix for evaluating cutting forces from the output voltage of the piezoelectric cells and charge amplifiers. At the same time, it is worth evaluating the linearity of the dependencies between applied forces and output...... of the piezoelectric cells;5. Mounting of the dynamometer;6. Calibration of the dynamometer;7. Data analysis;8. Uncertainty budget of the calibration.
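
    The data-analysis step typically amounts to a least-squares estimate of the calibration matrix under the linear model F = C·V. A minimal sketch follows; the numbers are illustrative, not values from this calibration, and the real procedure adds the uncertainty budget of step 8.

```python
# Estimate the 3x3 calibration matrix C mapping charge-amplifier voltages to
# cutting forces, F = C @ V, from a series of known applied loads (sketch
# with synthetic, noise-free data).

import numpy as np

rng = np.random.default_rng(0)
C_true = np.array([[50.0, 1.0, 0.5],
                   [0.8, 48.0, 0.3],
                   [0.2, 0.5, 52.0]])        # N per volt, small cross-talk

V = rng.uniform(-1.0, 1.0, size=(3, 40))     # 40 calibration loadings (volts)
F = C_true @ V                                # known applied forces (N)

# Least-squares estimate: F = C V  <=>  V.T @ C.T = F.T
C_est, *_ = np.linalg.lstsq(V.T, F.T, rcond=None)
C_est = C_est.T
print(np.allclose(C_est, C_true, atol=1e-8))  # True (noise-free data)
```

    With real measurements the off-diagonal terms quantify cross-talk between force channels, and the residuals of the fit feed directly into the linearity check and the uncertainty budget.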

  9. Calibration of Avent Wind IRIS SN 01030167

    DEFF Research Database (Denmark)

    Courtney, Michael

    This report presents the result of the lidar calibration performed for a two-beam nacelle based lidar at DTU’s test site for large wind turbines at Høvsøre, Denmark. Calibration is here understood as the establishment of a relation between the reference wind speed measurements with measurement...... uncertainties provided by measurement standard and corresponding lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements....
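
    The relation referred to above is, in its simplest reading, a regression of lidar 10-minute mean wind speeds on the reference (e.g. cup anemometer) means. A hedged sketch with synthetic data follows; the gain, offset and noise level are invented, and the actual procedure also propagates the reference measurement uncertainties.

```python
# Lidar calibration sketch: ordinary least-squares fit of lidar 10-minute
# means against reference 10-minute means, with a simple standard error on
# the gain. Synthetic stand-in data, not Hoevsoere measurements.

import numpy as np

rng = np.random.default_rng(1)
v_ref = rng.uniform(4.0, 16.0, 200)               # reference means (m/s)
v_lidar = 0.995 * v_ref + 0.05 + rng.normal(0.0, 0.1, 200)

# Linear fit y = a*x + b (forced-through-origin fits are also common):
A = np.vstack([v_ref, np.ones_like(v_ref)]).T
(a, b), res, *_ = np.linalg.lstsq(A, v_lidar, rcond=None)
n = len(v_ref)
s2 = res[0] / (n - 2)                             # residual variance
se_a = np.sqrt(s2 / np.sum((v_ref - v_ref.mean()) ** 2))
print(f"gain = {a:.4f} +/- {se_a:.4f}, offset = {b:.3f} m/s")
```

    Binning the residuals by wind speed, as calibration reports usually do, would expose any speed-dependent deviation that a single gain/offset pair hides.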

  10. Calibration of Nacelle-based Lidar instrument

    DEFF Research Database (Denmark)

    Yordanova, Ginka; Courtney, Michael

    This report presents the result of the lidar calibration performed for a two-beam nacelle based lidar at DTU’s test site for large wind turbines at Høvsøre, Denmark. Calibration is here understood as the establishment of a relation between the reference wind speed measurements with measurement...... uncertainties provided by measurement standard and corresponding lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements....

  11. Calibration of Nacelle-based Lidar instrument

    DEFF Research Database (Denmark)

    Georgieva Yankova, Ginka; Courtney, Michael

    This report presents the result of the lidar calibration performed for a four-beam nacelle based lidar at DTU’s test site for large wind turbines at Høvsøre, Denmark. Calibration is here understood as the establishment of a relation between the reference wind speed measurements...... with measurement uncertainties provided by measurement standard and corresponding lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements....

  12. Calibration biases in logical reasoning tasks

    OpenAIRE

    Guillermo Macbeth; Alfredo López Alonso; Eugenia Razumiejczyk; Rodrigo Sosa; Carolina Pereyra; Humberto Fernández

    2013-01-01

    The aim of this contribution is to present an experimental study about calibration in deductive reasoning tasks. Calibration is defined as the empirical convergence or divergence between the objective and the subjective success. The underconfidence bias is understood as the dominance of the former over the latter. The hypothesis of this study states that the form of the propositions presented in the experiment is critical for calibration phenomena. Affirmative and negative propositions are...
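
    The convergence between objective and subjective success described above can be reduced to a simple over/underconfidence score: mean reported confidence minus observed accuracy, where negative values correspond to the underconfidence bias. The responses below are invented for illustration, not data from this study.

```python
# Minimal calibration score for a confidence-judgment task (sketch):
# mean confidence minus accuracy; negative = underconfidence bias.

def calibration_bias(confidences, correct):
    """confidences in [0, 1]; correct as booleans. Returns mean(conf) - accuracy."""
    n = len(confidences)
    return sum(confidences) / n - sum(correct) / n

# A participant who solves 4 of 5 problems but reports modest confidence:
conf = [0.6, 0.5, 0.7, 0.6, 0.5]
hits = [True, True, True, False, True]
print(round(calibration_bias(conf, hits), 2))  # -0.22 -> underconfidence
```

    Full calibration analyses go further, plotting accuracy against confidence bins (a calibration curve), but the sign of this single score already separates over- from underconfidence.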

  13. 1987 calibration of the TFTR neutron spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, C.W.; Strachan, J.D. (Los Alamos National Lab., NM (USA); Princeton Univ., NJ (USA). Plasma Physics Lab.)

    1989-12-01

    The {sup 3}He neutron spectrometer used for measuring ion temperatures and the NE213 proton recoil spectrometer used for triton burnup measurements were absolutely calibrated with DT and DD neutron generators placed inside the TFTR vacuum vessel. The details of the detector response and calibration are presented. Comparisons are made to the neutron source strengths measured from other calibrated systems. 23 refs., 19 figs., 6 tabs.

  15. Observatory Magnetometer In-Situ Calibration

    Directory of Open Access Journals (Sweden)

    A Marusenkov

    2011-07-01

    An experimental validation of the in-situ calibration procedure, which allows estimating parameters of observatory magnetometers (scale factors, sensor misalignment) without interrupting their operation, is presented. In order to check the validity of the procedure, the records provided by two magnetometers calibrated independently in a coil system have been processed. The in-situ estimates of the parameters are in very good agreement with the values provided by the coil system calibration.
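
    One simplified way to phrase the estimation problem, not necessarily the author's exact procedure, is a least-squares fit of a single 3×3 matrix (scale factors on the diagonal, misalignment terms off-diagonal) mapping the tested magnetometer's raw triples onto a co-located reference record. The data below are synthetic.

```python
# Fit a 3x3 calibration matrix A such that B_ref ~ A @ B_raw, from paired
# vector samples of two co-located magnetometers (sketch; synthetic data).

import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[1.002, 0.010, -0.004],
                   [-0.008, 0.997, 0.006],
                   [0.003, -0.005, 1.001]])   # scales ~1, small misalignments

# Reference field: a mean geomagnetic vector (nT) plus natural variations
B_ref = rng.normal(0.0, 50.0, size=(500, 3)) + [20000.0, 0.0, 44000.0]
B_raw = B_ref @ np.linalg.inv(A_true).T        # what the raw sensor reports

# Least squares: B_raw @ X = B_ref  =>  X = A.T
A_est, *_ = np.linalg.lstsq(B_raw, B_ref, rcond=None)
A_est = A_est.T                                # so that B_ref ~ A_est @ B_raw
print(np.allclose(A_est, A_true, atol=1e-8))
```

    The appeal of such an in-situ scheme is exactly what the abstract highlights: the fit uses the natural field variations already being recorded, so no interruption of the observatory's operation is needed.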

  16. Optical Calibration For Jefferson Lab HKS Spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    L. Yuan; L. Tang

    2005-11-04

    In order to accept particles scattered at very forward angles, the Jefferson Lab HKS experiment uses an on-target zero-degree dipole magnet. The usual spectrometer optics calibration procedure has to be modified because of this on-target field. This paper describes a new method to calibrate the HKS spectrometer system. Simulation of the calibration procedure shows that the required resolution can be achieved starting from an initially inaccurate optical description.

  17. Calibration strategy of CMS electromagnetic calorimeter

    CERN Document Server

    Paramatti, R

    2004-01-01

    Calibration is one of the main factors that set limits on the ultimate performance of the CMS electromagnetic calorimeter at the LHC. Raw crystal intercalibration from laboratory measurements during assembly and from the CERN-SPS test beam of Supermodules will provide the precalibration at start-up. In situ calibration with physics events will be the main tool for reducing the constant term to the design goal of 0.5%. The calibration strategy will be described in detail.

  18. ATLAS FCal Diagnostics using the Calibration Pulse

    CERN Document Server

    Rutherfoord, J

    2004-01-01

    The calibration pulser in the ATLAS Forward Calorimeter electronics is used to 1) directly calibrate the warm, active electronics and 2) diagnose the cold, passive electronics chain all the way to the liquid argon electrodes. The study presented here shows that reflections of the calibration pulse coming from discontinuities located at or between the warm preamplifier and the electrode can differentiate and identify all known defects so far observed in this chain.

  19. Evolving Intelligent Systems Methodology and Applications

    CERN Document Server

    Angelov, Plamen; Kasabov, Nik

    2010-01-01

    From theory to techniques, the first all-in-one resource for EIS. There is a clear demand in advanced process industries, defense, and Internet and communication (VoIP) applications for intelligent yet adaptive/evolving systems. Evolving Intelligent Systems is the first self-contained volume that covers this newly established concept in its entirety, from a systematic methodology to case studies to industrial applications. Featuring chapters written by leading world experts, it addresses the progress, trends, and major achievements in this emerging research field, with a strong emphasis on th

  20. Fractured reservoir modeling: From well data to dynamic flow. Methodology and application to a real case study in Illizi Basin (Algeria)

    Science.gov (United States)

    Felici, Fabrizio; Alemanni, Annalisa; Bouacida, Djamil; de Montleau, Pierre

    2016-10-01

    Fault arrays and the distribution of natural fractures strongly influence subsurface fluid migration, trapping and production. As present-day exploration and appraisal campaigns become increasingly focused on tight or low-porosity reservoirs, it is critical to develop methodologies that can accurately characterize reservoir volumes. A common method used to model the distribution and intensity of subsurface fracture sets is the Discrete Fracture Network (DFN) technique. Shortcomings of the DFN technique include the evaluation of fracture attributes, computational aspects in the case of large fields and, most importantly, issues related to upscaling. The aim of this work is therefore to present a simplified methodology for fractured reservoir characterization based on the distribution of fracture intensity as a continuous property. Fracture intensity was calculated from image well-log data and then distributed in the reservoir according to specific fracture drivers. The case study concerns a large appraisal gas field located in the Illizi Basin, Southern Algeria, where Late Ordovician glacial deposits are the primary reservoir levels and the presence of faults and fractures strongly enhances well performance. The final fracture intensity model was obtained by implementing a workflow in a commonly used commercial geomodeling software package and was calibrated by means of well test data analysis. The implemented methodology is a useful tool for the characterization of large fractured reservoirs when the DFN technique is hardly applicable for computational reasons or the level of uncertainty does not support a discrete analysis.
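
    The first step of such a workflow, turning discrete image-log fracture picks into a continuous intensity property, can be sketched as a windowed P10 count (fractures per metre of borehole). The depths, window length and step below are invented; a real workflow would also apply an orientation (Terzaghi-type) correction for the angle between fractures and the borehole.

```python
# Continuous fracture-intensity (P10) profile from image-log fracture picks,
# computed in sliding depth windows (sketch with invented pick depths).

def p10_profile(fracture_depths, top, base, window=10.0, step=5.0):
    """Return (window centre, fractures per metre) pairs along the interval."""
    out = []
    z = top
    while z + window <= base:
        n = sum(1 for d in fracture_depths if z <= d < z + window)
        out.append((z + window / 2.0, n / window))
        z += step
    return out

picks = [1002.1, 1003.5, 1004.0, 1011.2, 1011.9, 1018.4, 1025.0, 1025.3, 1025.9]
for centre, p10 in p10_profile(picks, 1000.0, 1030.0):
    print(f"{centre:.1f} m : P10 = {p10:.2f} 1/m")
```

    The resulting curve is exactly the kind of per-well continuous property that can then be distributed in 3D against fracture drivers and calibrated to well-test permeability, avoiding an explicit DFN.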