WorldWideScience

Sample records for instrument framing camera

  1. Solid-state framing camera with multiple time frames

    Energy Technology Data Exchange (ETDEWEB)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.; Vernon, S. P.; Hsing, W. W.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  2. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  3. 100-ps framing-camera tube

    International Nuclear Information System (INIS)

    Kalibjian, R.

    1978-01-01

    The optoelectronic framing-camera tube described is capable of recording two-dimensional image frames with high spatial resolution in the <100-ps range. Framing is performed by streaking a two-dimensional electron image across narrow slits. The resulting dissected electron line images from the slits are restored into framed images by a restorer deflector operating synchronously with the dissector deflector. The number of framed images on the tube's viewing screen equals the number of dissecting slits in the tube. Performance has been demonstrated in a prototype tube by recording 135-ps-duration framed images of 2.5-mm patterns at the cathode. The limitation in the framing speed is in the external drivers for the deflectors and not in the tube design characteristics. Faster frame speeds in the <100-ps range can be obtained by use of faster deflection drivers

  4. Gain attenuation of gated framing camera

    International Nuclear Information System (INIS)

    Xiao Shali; Liu Shenye; Cao Zhurong; Li Hang; Zhang Haiying; Yuan Zheng; Wang Liwei

    2009-01-01

    The theoretical model of the framing camera's gain attenuation is analyzed, and the exponential attenuation of the gain along the pulse propagation time is simulated. An experiment to measure the gain attenuation coefficient, based on this theory, is designed. The experimental results show that the gain follows an exponential attenuation rule with a coefficient of 0.0249 nm⁻¹, while the attenuation coefficient of the pulse is 0.00356 mm⁻¹. Loss of the pulse as it propagates along the MCP stripline is the main cause of the gain attenuation. For a single stripline, however, the gain does not follow the exponential attenuation rule completely; instead, there is a gain increase at the stripline bottom, caused by reflection of the pulse, with a reflectance of about 24.2%. Combining experiment and theory, the stripline MCP design can be improved to mitigate the gain attenuation. (authors)
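
    As a rough illustration of the exponential attenuation rule quoted above, the Python sketch below evaluates a gain profile of the form G(x) = G0·exp(-kx) with a reflected contribution near the stripline bottom; the initial gain, the stripline length, and the reuse of the abstract's coefficients are illustrative assumptions, not values from the paper.

        # Illustrative sketch of an exponential gain-attenuation profile along an
        # MCP stripline with a partially reflected pulse at the far end.
        # G0, L, and the reuse of the quoted coefficients are assumptions.
        import numpy as np

        G0 = 1.0             # relative gain at the stripline input (assumed)
        k = 0.0249           # gain attenuation coefficient quoted in the abstract
        L = 40.0             # stripline length, in the same units as 1/k (assumed)
        reflectance = 0.242  # pulse reflectance at the stripline bottom (abstract)

        x = np.linspace(0.0, L, 200)                 # position along the stripline
        gain_direct = G0 * np.exp(-k * x)            # exponential attenuation of the gain
        gain_reflected = reflectance * G0 * np.exp(-k * (2 * L - x))
        gain_total = gain_direct + gain_reflected    # gain rise near the bottom from the reflection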

  5. Triggered streak and framing rotating-mirror cameras

    International Nuclear Information System (INIS)

    Huston, A.E.; Tabrar, A.

    1975-01-01

    A pulse motor has been developed which enables a mirror to be rotated to speeds in excess of 20,000 rpm within 10⁻⁴ s. High-speed cameras of both streak and framing type have been assembled which incorporate this mirror drive, giving streak writing speeds up to 2,000 m s⁻¹ and framing speeds up to 500,000 frames s⁻¹, in each case with the capability of triggering the camera from the event under investigation. (author)

  6. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently shadowed and permanently sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  7. A novel simultaneous streak and framing camera without principle errors

    Science.gov (United States)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access has been developed; such complete information is important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for the framing record, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%~-0.277% for the streak record. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously acquires frames and a streak record that are parallax-free and share an identical time base, is characterized by a plane optical system at oblique incidence (as opposed to a spatial system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is useful and reliable for taking high-quality pictures of detonation events.

  8. Framing-camera tube developed for sub-100-ps range

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    A new framing-camera tube, developed by Electronics Engineering, is capable of recording two-dimensional image frames with high spatial resolution in the sub-100-ps range. Framing is performed by streaking a two-dimensional electron image across narrow slits; the resulting electron-line images from the slits are restored into a framed image by a restorer deflector operating synchronously with the dissector deflector. We have demonstrated its performance in a prototype tube by recording 125-ps-duration framed images of 2.5-mm patterns. The limitation in the framing speed is in the external electronic drivers for the deflectors and not in the tube design characteristics. Shorter frame durations (below 100 ps) can be obtained by use of faster deflection drivers

  9. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  10. Cheetah: A high frame rate, high resolution SWIR image camera

    Science.gov (United States)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  11. 100ps UV/x-ray framing camera

    International Nuclear Information System (INIS)

    Eagles, R.T.; Freeman, N.J.; Allison, J.M.; Sibbett, W.; Sleat, W.E.; Walker, D.R.

    1988-01-01

    The requirement for a sensitive two-dimensional imaging diagnostic with picosecond time resolution, particularly in the study of laser-produced plasmas, has previously been discussed. A temporal sequence of framed images would provide useful supplementary information to that provided by time resolved streak images across a spectral region of interest from visible to x-ray. To fulfill this requirement the Picoframe camera system has been developed. Results pertaining to the operation of a camera having S20 photocathode sensitivity are reviewed and the characteristics of an UV/x-ray sensitive version of the Picoframe system are presented

  12. Ceres Photometry and Albedo from Dawn Framing Camera Images

    Science.gov (United States)

    Schröder, S. E.; Mottola, S.; Keller, H. U.; Li, J.-Y.; Matz, K.-D.; Otto, K.; Roatsch, T.; Stephan, K.; Raymond, C. A.; Russell, C. T.

    2015-10-01

    The Dawn spacecraft is in orbit around dwarf planet Ceres. The onboard Framing Camera (FC) [1] is mapping the surface through a clear filter and 7 narrow-band filters at various observational geometries. Generally, Ceres' appearance in these images is affected by shadows and shading, effects which become stronger for larger solar phase angles, obscuring the intrinsic reflective properties of the surface. By means of photometric modeling we attempt to remove these effects and reconstruct the surface albedo over the full visible wavelength range. Knowledge of the albedo distribution will contribute to our understanding of the physical nature and composition of the surface.

  13. Imaging Asteroid 4 Vesta Using the Framing Camera

    Science.gov (United States)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Besides its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The 1024 by 1024 pixel CCD detector provides the stability needed for a multiyear mission and meets the high photometric accuracy requirements over the wavelength band from 400 to 1000 nm covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface at medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach, the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture, the FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. Vesta is a planet-like differentiated body, but its surface

  14. REFLECTANCE CALIBRATION SCHEME FOR AIRBORNE FRAME CAMERA IMAGES

    Directory of Open Access Journals (Sweden)

    U. Beisl

    2012-07-01

    Full Text Available The image quality of photogrammetric images is influenced by various effects from outside the camera. One effect is scattered light from the atmosphere, which lowers contrast in the images and creates a colour shift towards the blue. Another is the changing illumination during the day, which results in changing image brightness within an image block. In addition, the so-called bidirectional reflectance distribution function (BRDF) of the ground gives rise to a view- and sun-angle-dependent brightness gradient within the image itself. To correct for the first two effects, an atmospheric correction with reflectance calibration is chosen. These effects have been corrected successfully for ADS linescan sensor data by using a parametrization of the atmospheric quantities. Following Kaufman et al., the actual atmospheric condition is estimated from the brightness of a dark pixel taken from the image. The BRDF effects are corrected using a semi-empirical modelling of the brightness gradient. Both methods are now extended to frame cameras. Linescan sensors have a viewing geometry that depends only on the cross-track view zenith angle. The difference for frame cameras is the need to include the extra dimension of the view azimuth in the modelling. Since both the atmospheric correction and the BRDF correction require a model inversion with the help of image data, a different image sampling strategy is necessary which includes the azimuth angle dependence. For the atmospheric correction, a sixth variable is added to the existing five variables (visibility, view zenith angle, sun zenith angle, ground altitude, and flight altitude), thus multiplying the number of modelling input combinations for the offline inversion. The parametrization has to reflect the view azimuth angle dependence. The BRDF model already contains the view azimuth dependence and is combined with a new sampling strategy.
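
    As a loose illustration of the two correction steps described above, the following Python sketch applies a dark-pixel (dark-object) estimate of the atmospheric offset and then fits and removes a simple view-angle brightness gradient; the percentile threshold and the quadratic gradient model are illustrative assumptions, not the semi-empirical parametrization actually used for ADS or frame-camera data.

        # Illustrative sketch only: dark-object atmospheric offset removal and a
        # simple empirical brightness-gradient normalization across the frame.
        # The quadratic model in view zenith/azimuth is an assumption for illustration.
        import numpy as np

        def dark_object_subtract(band, percentile=0.1):
            """Estimate the atmospheric path-radiance offset from the darkest pixels and remove it."""
            offset = np.percentile(band, percentile)
            return np.clip(band - offset, 0.0, None)

        def fit_brightness_gradient(band, view_zenith, view_azimuth):
            """Least-squares fit of brightness ~ a + b*theta_v**2 + c*theta_v*cos(phi_v)."""
            A = np.column_stack([
                np.ones(band.size),
                view_zenith.ravel() ** 2,
                view_zenith.ravel() * np.cos(view_azimuth.ravel()),
            ])
            coeffs, *_ = np.linalg.lstsq(A, band.ravel(), rcond=None)
            return coeffs

        def remove_brightness_gradient(band, view_zenith, view_azimuth):
            a, b, c = fit_brightness_gradient(band, view_zenith, view_azimuth)
            model = a + b * view_zenith ** 2 + c * view_zenith * np.cos(view_azimuth)
            return band * (a / model)   # normalize to a nadir-equivalent brightness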

  15. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17º-19º pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
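
    As an illustration of the triangulation step mentioned above, the short Python/OpenCV sketch below reconstructs one matched cloud feature from a calibrated stereo pair; the camera matrix, the baseline, and the pixel coordinates are placeholder values, not the actual SGP stereo calibration.

        # Illustrative triangulation of a single matched feature from one stereo pair.
        # K, the 0.5 m baseline, and the pixel coordinates are placeholders.
        import numpy as np
        import cv2

        K = np.array([[1000.0, 0.0, 320.0],
                      [0.0, 1000.0, 240.0],
                      [0.0, 0.0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera 1 at the origin
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # camera 2, 0.5 m baseline

        pts1 = np.array([[320.0], [240.0]])   # matched pixel in camera 1 (2 x N)
        pts2 = np.array([[310.0], [240.0]])   # matched pixel in camera 2 (2 x N)

        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4 x N homogeneous coordinates
        X = (X_h[:3] / X_h[3]).ravel()                    # 3D point in the camera-1 frame
        print("reconstructed point:", X)                  # ~ (0, 0, 50) with these placeholders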

  16. Development of an all-optical framing camera and its application on the Z-pinch.

    Science.gov (United States)

    Song, Yan; Peng, Bodong; Wang, Hong-Xing; Song, Guzhou; Li, Binkang; Yue, Zhiqin; Li, Yang; Sun, Tieping; Xu, Qing; Ma, Jiming; Sheng, Liang; Han, Changcai; Duan, Baojun; Yao, Zhiming; Yan, Weipeng

    2017-12-11

    An all-optical framing camera has been developed which measures the spatial profile of the photon flux by utilizing a laser beam to probe the refractive index change in an indium phosphide semiconductor. This framing camera acquires two frames with a time resolution of about 1.5 ns and an inter-frame separation time of about 13 ns by angularly multiplexing the probe beam onto the semiconductor. The spatial resolution of this camera has been estimated to be about 140 μm, and the spectral response of the camera has also been theoretically investigated over the 5 eV-100 keV range. This camera has been applied to investigating the imploding dynamics of a molybdenum planar wire array Z-pinch on the 1-MA "QiangGuang-1" facility. This framing camera can provide an alternative scheme for high energy density physics experiments.

  17. Development and Performance of Bechtel Nevada's Nine-Frame Camera System

    International Nuclear Information System (INIS)

    S. A. Baker; M. J. Griffith; J. L. Tybo

    2002-01-01

    Bechtel Nevada, Los Alamos Operations, has developed a high-speed, nine-frame camera system that records a sequence from a changing or dynamic scene. The system incorporates an electrostatic image tube with custom gating and deflection electrodes. The framing tube is shuttered with high-speed gating electronics, yielding frame rates of up to 5MHz. Dynamic scenes are lens-coupled to the camera, which contains a single photocathode gated on and off to control each exposure time. Deflection plates and drive electronics move the frames to different locations on the framing tube output. A single charge-coupled device (CCD) camera then records the phosphor image of all nine frames. This paper discusses setup techniques to optimize system performance. It examines two alternate philosophies for system configuration and respective performance results. We also present performance metrics for system evaluation, experimental results, and applications to four-frame cameras

  18. Students' Framing of Laboratory Exercises Using Infrared Cameras

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  19. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    Science.gov (United States)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  20. Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains

    Energy Technology Data Exchange (ETDEWEB)

    Lumpkin, A. H. [Fermilab; Edstrom Jr., D. [Fermilab; Ruan, J. [Fermilab

    2016-10-09

    We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.

  1. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  2. X-ray streak and framing camera techniques

    International Nuclear Information System (INIS)

    Coleman, L.W.; Attwood, D.T.

    1975-01-01

    This paper reviews recent developments and applications of ultrafast diagnostic techniques for x-ray measurements. These techniques, based on applications of image converter devices, already provide significant resolution capabilities. Techniques capable of time resolution in the sub-nanosecond regime are being considered. Mechanical cameras are excluded from consideration, as are devices using phosphors or fluors as x-ray converters

  3. IMAGE ACQUISITION CONSTRAINTS FOR PANORAMIC FRAME CAMERA IMAGING

    Directory of Open Access Journals (Sweden)

    H. Kauhanen

    2012-07-01

    Full Text Available The paper describes an approach to quantify the amount of projective error produced by an offset of projection centres in a panoramic imaging workflow. We have limited this research to panoramic workflows in which several sub-images from a planar image sensor are taken and then stitched together as a large panoramic image mosaic. The aim is to simulate how large the offset can be before it introduces significant error into the dataset. The method uses geometrical analysis to calculate the error in various cases. Constraints on shooting distance, focal length, and the depth of the area of interest are taken into account. Considering these constraints, it is possible to safely use even a poorly calibrated panoramic camera rig with a noticeable offset between projection centre locations. The aim is to create datasets suited for photogrammetric reconstruction. Similar constraints can also be used to find recommended areas of the image planes for automatic feature matching and thus improve stitching of sub-images into full panoramic mosaics. The results are mainly intended for long focal length cameras, where the offset of the projection centres of the sub-images can appear significant but the shooting distance is also long. We show that in such situations the error introduced by the offset of the projection centres is negligible when stitching a metric panorama. Although the main use of the results is with cameras of long focal length, they are applicable to all focal lengths.
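
    As a rough worked example of the geometric analysis described above, the Python sketch below estimates the differential image displacement (in pixels) that a projection-centre offset introduces between the near and far ends of the area of interest; all numerical values are illustrative assumptions, not the paper's cases.

        # Illustrative estimate of the stitching error caused by an offset between
        # sub-image projection centres; values are assumed, not from the paper.
        f = 0.4          # focal length in m (long lens, assumed)
        pixel = 5e-6     # pixel pitch in m (assumed)
        e = 0.01         # offset between projection centres in m (assumed)
        d_near = 500.0   # nearest object distance in m (assumed)
        d_far = 600.0    # farthest object distance in m (assumed)

        # differential parallax between near and far objects caused by the offset e
        parallax_m = f * e * (1.0 / d_near - 1.0 / d_far)
        parallax_px = parallax_m / pixel
        print(f"projective error ~ {parallax_px:.2f} px")   # sub-pixel at long shooting distances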

  4. High image quality sub 100 picosecond gated framing camera development

    International Nuclear Information System (INIS)

    Price, R.H.; Wiedwald, J.D.

    1983-01-01

    A major challenge for laser fusion is the study of the symmetry and hydrodynamic stability of imploding fuel capsules. Framed x-radiographs of 10-100 ps duration, excellent image quality, minimum geometrical distortion (< 1%), dynamic range greater than 1000, and more than 200 x 200 pixels are required for this application. Recent progress on a gated proximity focused intensifier which meets these requirements is presented

  5. X-ray framing cameras for > 5 keV imaging

    International Nuclear Information System (INIS)

    Landen, O.L.; Bell, P.M.; Costa, R.; Kalantar, D.H.; Bradley, D.K.

    1995-01-01

    Recent and proposed improvements in spatial resolution, temporal resolution, contrast, and detection efficiency for x-ray framing cameras are discussed in light of present and future laser-plasma diagnostic needs. In particular, improved image contrast above hard x-ray background levels is demonstrated by using high-aspect-ratio tapered pinholes

  6. POINT CLOUD DERIVED FROM VIDEO FRAMES: ACCURACY ASSESSMENT IN RELATION TO TERRESTRIAL LASER SCANNING AND DIGITAL CAMERA DATA

    Directory of Open Access Journals (Sweden)

    P. Delis

    2017-02-01

    Full Text Available The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using an SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired using a high resolution camera, the NIKON D800. The research performed has shown that the highest accuracies are obtained for point clouds generated from video frames for which high-pass filtering and histogram equalization had been performed. Studies have shown that to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85 % or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
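
    As a minimal illustration of the frame selection and image enhancement steps described above, the Python/OpenCV sketch below keeps every n-th frame (a simplified stand-in for the authors' automatic overlap-based selection equation) and applies histogram equalization plus unsharp-mask sharpening; the step size and filter parameters are assumptions.

        # Illustrative preprocessing of video frames before dense matching:
        # fixed-step frame selection (stand-in for the automatic overlap-based rule),
        # histogram equalization, and simple unsharp-mask sharpening.
        import cv2

        def preprocess_video(path, step=15):
            cap = cv2.VideoCapture(path)
            frames, i = [], 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if i % step == 0:
                    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                    eq = cv2.equalizeHist(gray)                      # histogram equalization
                    blur = cv2.GaussianBlur(eq, (0, 0), sigmaX=3)
                    sharp = cv2.addWeighted(eq, 1.5, blur, -0.5, 0)  # unsharp-mask sharpening
                    frames.append(sharp)
                i += 1
            cap.release()
            return frames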

  7. Development of a dual MCP framing camera for high energy x-rays

    Energy Technology Data Exchange (ETDEWEB)

    Izumi, N., E-mail: izumi2@llnl.gov; Hall, G. N.; Carpenter, A. C.; Allen, F. V.; Cruz, J. G.; Felker, B.; Hargrove, D.; Holder, J.; Lumbard, A.; Montesanti, R.; Palmer, N. E.; Piston, K.; Stone, G.; Thao, M.; Vern, R.; Zacharias, R.; Landen, O. L.; Tommasini, R.; Bradley, D. K.; Bell, P. M. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); and others

    2014-11-15

    Recently developed diagnostic techniques at LLNL require recording backlit images of extremely dense imploded plasmas using hard x-rays, and require the detector to be sensitive to photons with energies higher than 50 keV [R. Tommasini et al., Phys. Plasmas 18, 056309 (2011); G. N. Hall et al., “AXIS: An instrument for imaging Compton radiographs using ARC on the NIF,” Rev. Sci. Instrum. (these proceedings)]. To increase the sensitivity in the high energy region, we propose to use a combination of two MCPs. The first MCP is operated in a low gain regime and works as a thick photocathode, and the second MCP works as a high gain electron multiplier. We tested the concept of this dual MCP configuration and succeeded in obtaining a detective quantum efficiency of 4.5% for 59 keV x-rays, 3 times larger than with a single plate of the thickness typically used in NIF framing cameras.

  8. Overview of the ARGOS X-ray framing camera for Laser MegaJoule

    Energy Technology Data Exchange (ETDEWEB)

    Trosseille, C., E-mail: clement.trosseille@cea.fr; Aubert, D.; Auger, L.; Bazzoli, S.; Brunel, P.; Burillo, M.; Chollet, C.; Jasmin, S.; Maruenda, P.; Moreau, I.; Oudot, G.; Raimbourg, J.; Soullié, G.; Stemmler, P.; Zuber, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Beck, T. [CEA, DEN, CADARACHE, F-13108 St Paul lez Durance (France); Gazave, J. [CEA, DAM, CESTA, F-33116 Le Barp (France)

    2014-11-15

    Commissariat à l’Énergie Atomique et aux Énergies Alternatives has developed the ARGOS X-ray framing camera to perform two-dimensional, high-timing resolution imaging of an imploding target on the French high-power laser facility Laser MegaJoule. The main features of this camera are: a microchannel plate gated X-ray detector, a spring-loaded CCD camera that maintains proximity focus in any orientation, and electronics packages that provide remotely-selectable high-voltages to modify the exposure-time of the camera. These components are integrated into an “air-box” that protects them from the harsh environmental conditions. A miniaturized X-ray generator is also part of the device for in situ self-testing purposes.

  9. High-speed two-frame gated camera for parameters measurement of Dragon-Ⅰ LIA

    International Nuclear Information System (INIS)

    Jiang Xiaoguo; Wang Yuan; Zhang Kaizhi; Shi Jinshui; Deng Jianjun; Li Jin

    2012-01-01

    The time-resolved measurement system which can work at very high speed is necessary for electron beam parameter diagnosis on the Dragon-Ⅰ linear induction accelerator (LIA). A two-frame gated camera system has been developed and put into operation. The camera system adopts the optical principle of splitting the imaging light beam into two parts in the imaging space of a lens with a long focal length. It includes a lens-coupled gated image intensifier, a CCD camera, and a high-speed shutter trigger device based on a large-scale field programmable gate array. The minimum exposure time for each image is about 3 ns, and the interval time between two images can be adjusted with a step of about 0.5 ns. The exposure time and the interval time can be independently adjusted and can reach about 1 s. The camera system features good linearity, good response uniformity, an equivalent background illumination (EBI) as low as about 5 electrons per pixel per second, a large adjustment range of sensitivity, and excellent flexibility and adaptability in applications. The camera system can capture two frame images at one time with an image size of 1024 x 1024. It meets the measurement requirements for the Dragon-Ⅰ LIA. (authors)

  10. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
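
    As a minimal illustration of the block-match motion estimation building block referred to above, the Python sketch below computes, for each block of the current frame, the displacement within a small search window of the reference frame that minimizes the sum of absolute differences; block size, search range, and the synthetic frames are illustrative assumptions rather than the paper's configuration.

        # Illustrative exhaustive block-match motion estimation between two frames.
        import numpy as np

        def block_match(ref, cur, block=8, search=4):
            h, w = cur.shape
            motion = np.zeros((h // block, w // block, 2), dtype=int)
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
                    best, best_dy, best_dx = None, 0, 0
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y, x = by + dy, bx + dx
                            if y < 0 or x < 0 or y + block > h or x + block > w:
                                continue
                            ref_blk = ref[y:y + block, x:x + block].astype(np.int32)
                            sad = np.abs(cur_blk - ref_blk).sum()   # sum of absolute differences
                            if best is None or sad < best:
                                best, best_dy, best_dx = sad, dy, dx
                    motion[by // block, bx // block] = (best_dy, best_dx)
            return motion

        # Example: shift a random frame by (2, -1) and recover the displacement for an interior block.
        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        cur = np.roll(ref, (2, -1), axis=(0, 1))
        print(block_match(ref, cur)[2, 2])   # -> [-2  1]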

  11. Distant Measurement of Plethysmographic Signal in Various Lighting Conditions Using Configurable Frame-Rate Camera

    Directory of Open Access Journals (Sweden)

    Przybyło Jaromir

    2016-12-01

    Full Text Available Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is considered for application in several areas including telemedicine, sports, and assisted living, its dependence on lighting conditions and camera performance has not yet been investigated sufficiently. In this paper we report on research into various image acquisition aspects, including the lighting spectrum, frame rate, and compression. In the experimental part, we recorded five video sequences in various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable frame rate camera and a pulse oximeter as the reference. For the video sequence-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch’s technique. The results showed that lighting conditions and selected video camera settings, including compression and the sampling frequency, influence the heart rate detection accuracy. The average heart rate error also varies from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
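
    As a minimal sketch of the pulse-detection step described above, the Python example below estimates the power spectral density of a video-derived intensity trace with Welch's method (scipy.signal.welch) and takes the strongest peak in a physiological band as the heart rate; the frame rate, band limits, and synthetic signal are assumptions, not the authors' exact implementation.

        # Illustrative heart-rate estimation from a video-derived intensity signal
        # using Welch's power spectral density estimate.
        import numpy as np
        from scipy.signal import welch

        def estimate_heart_rate(signal, fs):
            """signal: mean skin-region intensity per frame; fs: frame rate in Hz."""
            f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 512))
            band = (f >= 0.7) & (f <= 3.5)           # ~42-210 bpm physiological range (assumed)
            f_peak = f[band][np.argmax(pxx[band])]   # dominant frequency in the band
            return 60.0 * f_peak                     # beats per minute

        # Example with a synthetic 1.2 Hz (72 bpm) signal sampled at 30 fps:
        fs = 30.0
        t = np.arange(0, 30, 1 / fs)
        sig = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
        print(estimate_heart_rate(sig, fs))   # ~72 bpm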

  12. Six-frame picosecond radiation camera based on hydrated electron photoabsorption phenomena

    International Nuclear Information System (INIS)

    Coutts, G.W.; Olk, L.B.; Gates, H.A.; St Leger-Barter, G.

    1977-01-01

    To obtain picosecond photographs of nanosecond radiation sources, a six-frame ultra-high speed radiation camera based on hydrated electron absorption phenomena has been developed. A time-dependent opacity pattern is formed in an acidic aqueous cell by a pulsed radiation source. Six time-resolved picosecond images of this changing opacity pattern are transferred to photographic film with the use of a mode-locked dye laser and six electronically gated microchannel plate image intensifiers. Because the lifetime of the hydrated electron absorption centers can be reduced to picoseconds, the opacity patterns represent time-space pulse profile images

  13. Synchronization of streak and framing camera measurements of an intense relativistic electron beam propagating through gas

    International Nuclear Information System (INIS)

    Weidman, D.J.; Murphy, D.P.; Myers, M.C.; Meger, R.A.

    1994-01-01

    The expansion of the radius of a 5 MeV, 20 kA, 40 ns electron beam from SuperIBEX during propagation through gas is being measured. The beam is generated, conditioned, equilibrated, and then passed through a thin foil that produces Cherenkov light, which is recorded by a streak camera. At a second location, the beam hits another Cherenkov emitter, which is viewed by a framing camera. Measurements at these two locations can provide a time-resolved measure of the beam expansion. The two measurements, however, must be synchronized with each other, because the beam radius is not constant throughout the pulse due to variations in beam current and energy. To correlate the timing of the two diagnostics, several shots have been taken with both diagnostics viewing Cherenkov light from the same foil. Experimental measurements of the Cherenkov light from one foil viewed by both diagnostics will be presented to demonstrate the feasibility of correlating the diagnostics with each other. Streak camera data showing the optical fiducial, as well as the final correlation of the two diagnostics, will also be presented. Preliminary beam radius measurements from Cherenkov light measured at two locations will be shown

  14. High-resolution Ceres Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images

    Science.gov (United States)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2017-06-01

    The Dawn spacecraft Framing Camera (FC) acquired over 31,300 clear filter images of Ceres with a resolution of about 35 m/pxl during the eleven cycles in the Low Altitude Mapping Orbit (LAMO) phase between December 16 2015 and August 8 2016. We ortho-rectified the images from the first four cycles and produced a global, high-resolution, uncontrolled photomosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 62 tiles mapped at a scale of 1:250,000. The nomenclature used in this atlas was proposed by the Dawn team and was approved by the International Astronomical Union (IAU). The full atlas is available to the public through the Dawn Geographical Information System (GIS) web page [http://dawngis.dlr.de/atlas] and will become available through the NASA Planetary Data System (PDS) (http://pdssbn.astro.umd.edu/).

  15. Modeling of neutron induced backgrounds in x-ray framing cameras

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C.; Izumi, N.; Bell, P.; Bradley, D.; Conder, A.; Eckart, M.; Khater, H.; Koch, J.; Moody, J.; Stone, G. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2010-10-15

    Fast neutrons from inertial confinement fusion implosions pose a severe background to conventional multichannel plate (MCP)-based x-ray framing cameras for deuterium-tritium yields >10¹³. Nuclear reactions of neutrons in photosensitive elements (charge coupled device or film) cause some of the image noise. In addition, inelastic neutron collisions in the detector and nearby components create a large gamma pulse. The background from the resulting secondary charged particles is twofold: (1) production of light through the Cherenkov effect in optical components and by excitation of the MCP phosphor and (2) direct excitation of the photosensitive elements. We give theoretical estimates of the various contributions to the overall noise and present mitigation strategies for operating in high yield environments.

  16. Accurate current synchronization trigger mode for multi-framing gated camera on YANG accelerator

    International Nuclear Information System (INIS)

    Jiang Xiaoguo; Huang Xianbin; Li Chenggang; Yang Libing; Wang Yuan; Zhang Kaizhi; Ye Yi

    2007-01-01

    The current synchronization trigger mode is important for Z-pinch experiments carried out on the YANG accelerator. This technique solves the problem of low synchronization precision. The inherent delay time between the load current waveform and the experimental phenomenon can be used to obtain the synchronization trigger time. A correlative timing precision at the ns level can be achieved in this way. A photoelectric isolator and optical fiber are used in the synchronization trigger system to eliminate electromagnetic interference, and many accurate measurements on the YANG accelerator can be realized. The application of this trigger mode to the multi-framing gated camera synchronization trigger system has proved effective. The evolution of the Z-pinch imploding plasma has been recorded with a 3 ns exposure time and a 10 ns inter-frame time. (authors)

  17. 3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System

    Science.gov (United States)

    Kang, J.; Lee, I.

    2016-06-01

    Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and these indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred into indoor spaces. Constant development of technology has a significant impact on people's knowledge of services such as location awareness services in indoor spaces. It is therefore necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we introduce a rotating stereo frame camera system that has two cameras, and we generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated a 3D model of the test site using commercial software with the previously chosen input data. The last part of the process is to evaluate the accuracy of the generated indoor model from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate a 3D model using images acquired by the system. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  18. 3D MODELLING OF AN INDOOR SPACE USING A ROTATING STEREO FRAME CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    Full Text Available Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and these indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred into indoor spaces. Constant development of technology has a significant impact on people's knowledge of services such as location awareness services in indoor spaces. It is therefore necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we introduce a rotating stereo frame camera system that has two cameras, and we generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated a 3D model of the test site using commercial software with the previously chosen input data. The last part of the process is to evaluate the accuracy of the generated indoor model from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate a 3D model using images acquired by the system. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  19. Location of frame overlap choppers on pulsed source instruments

    International Nuclear Information System (INIS)

    Narehood, D.G.; Pearce, J.V.; Sokol, P.E.

    2002-01-01

    A detailed study has been performed to investigate the effect of frame overlap in a cold neutron chopper spectrometer. The basic spectrometer is defined by two high-speed choppers, one near the moderator to shape the pulse from the moderator, and one near the sample to define energy resolution. Using ray-tracing timing diagrams, we have observed that there are regions along the guide where the trajectories of neutrons with different velocities converge temporally at characteristic points along the spectrometer. At these points of convergence, a frame overlap chopper would be totally ineffective, allowing neutrons of all velocities to pass through. Conversely, at points where trajectories of different velocity neutrons are divergent, a frame overlap chopper is most effective. An analytical model to describe this behaviour has been developed, and leads us to the counterintuitive conclusion that the optimum position for a frame overlap chopper is as close to the initial chopper as possible. We further demonstrate that detailed Monte Carlo simulations produce results which are consistent with this model
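
    As a small worked example of the temporal convergence described above, the Python sketch below computes the distance at which slow neutrons from one source pulse and fast neutrons from the following pulse (one period T later) arrive at the same time, which is where a frame overlap chopper would be least effective; the source period and the velocities are illustrative assumptions.

        # Illustrative calculation of where slow neutrons from pulse n and fast
        # neutrons from pulse n+1 converge in time along the guide:
        #   x / v_slow = T + x / v_fast  =>  x = T / (1/v_slow - 1/v_fast)
        T = 1.0 / 60.0     # source period in s (60 Hz pulsed source, assumed)
        v_slow = 400.0     # m/s, slow neutrons from pulse n (assumed)
        v_fast = 1500.0    # m/s, fast neutrons from pulse n+1 (assumed)

        x_converge = T / (1.0 / v_slow - 1.0 / v_fast)
        print(f"temporal convergence at {x_converge:.1f} m from the source")
        # A frame-overlap chopper near this distance passes both velocity classes;
        # close to the initial chopper the trajectories are still well separated in
        # time, consistent with the optimum position argued for in the abstract.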

  20. Colors and Photometry of Bright Materials on Vesta as Seen by the Dawn Framing Camera

    Science.gov (United States)

    Schroeder, S. E.; Li, J.-Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.

    2012-01-01

    The Dawn spacecraft has been in orbit around the asteroid Vesta since July 2011. The on-board Framing Camera has acquired thousands of high-resolution images of the regolith-covered surface through one clear and seven narrow-band filters in the visible and near-IR wavelength range. It has observed bright and dark materials that have a range of reflectance that is unusually wide for an asteroid. Material brighter than average is predominantly found on crater walls and in ejecta surrounding craters in the southern hemisphere. Most likely, the brightest material identified on the Vesta surface so far is located on the inside of a crater at 64.27º S, 1.54º. The apparent brightness of a regolith is influenced by factors such as particle size, mineralogical composition, and viewing geometry. As such, the presence of bright material can indicate differences in lithology and/or degree of space weathering. We retrieve the spectral and photometric properties of various bright terrains from false-color images acquired in the High Altitude Mapping Orbit (HAMO). We find that most bright material has a deeper 1-μm pyroxene band than average. However, the aforementioned brightest material appears to have a 1-μm band that is actually less deep, a result that awaits confirmation by the on-board VIR spectrometer. This site may harbor a class of material unique to Vesta. We discuss the implications of our spectral findings for the origin of bright materials.

  1. Gain uniformity, linearity, saturation and depletion in gated microchannel-plate x-ray framing cameras

    International Nuclear Information System (INIS)

    Landen, O.L.; Bell, P.M.; Satariano, J.J.; Oertel, J.A.; Bradley, D.K.

    1994-01-01

    The pulsed characteristics of gated, stripline configuration microchannel-plate (MCP) detectors used in X-ray framing cameras deployed on laser plasma experiments worldwide are examined in greater detail. The detectors are calibrated using short (20 ps) and long (500 ps) pulse X-ray irradiation and 3--60 ps, deep UV (202 and 213 nm), spatially-smoothed laser irradiation. Two-dimensional unsaturated gain profiles show 5 in irradiation and fitted using a discrete dynode model. Finally, a pump-probe experiment quantifying for the first time long-suspected gain depletion by strong localized irradiation was performed. The mechanism for the extra voltage and hence gain degradation is shown to be associated with intense MCP irradiation in the presence of the voltage pulse, at a fluence at least an order of magnitude above that necessary for saturation. Results obtained for both constant pump area and constant pump fluence are presented. The data are well modeled by calculating the instantaneous electrical energy loss due to the intense charge extraction at the pump site and then recalculating the gain downstream at the probe site given the pump-dependent degradation in voltage amplitude

  2. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    Science.gov (United States)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during the Survey orbit (approx. 3000 km) and the High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels), allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from the FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.

  3. 2D turbulence structure observed by a fast framing camera system in linear magnetized device PANTA

    International Nuclear Information System (INIS)

    Ohdachi, Satoshi; Inagaki, S.; Kobayashi, T.; Goto, M.

    2015-01-01

    Mesoscale structures, such as the zonal flow and the streamer, play an important role in drift-wave turbulence. The interaction of the mesoscale structure and the turbulence is not only an interesting phenomenon but also a key to understanding turbulence-driven transport in magnetically confined plasmas. In the cylindrical magnetized device PANTA, the interaction of the streamer and the drift wave has been found by bi-spectrum analysis of the turbulence. In order to study the mesoscale physics directly, the 2D turbulence is studied by a fast-framing visible camera system viewing from a window located at the end plate of the device. The plasma parameters are as follows: Te ∼ 3 eV, n ∼ 1×10¹⁹ m⁻³, Ti ∼ 0.3 eV, B = 900 G, neutral pressure Pn = 0.8 mTorr, a ∼ 6 cm, L = 4 m, helicon source (7 MHz, 3 kW). The fluctuating component of the visible image is decomposed by the Fourier-Bessel expansion method. Several rotating modes are observed simultaneously. From the images, the m = 1 (f ∼ 0.7 kHz) and m = 2, 3 (f ∼ -3.4 kHz) components, which rotate in opposite directions, can be easily distinguished. Although the modes rotate steadily most of the time, there are periods in which a radially complicated node structure forms (for example, the m = 3 component at t = 142.5∼6 in the figure) and the coherent mode structures are disturbed. A new rotation period then starts with a different initial phase, lasting until the next event occurs. The typical interval between events is 0.5 to 1.0 rotation periods of the slow m = 1 mode. The wave-wave interaction might be interrupted occasionally. A detailed analysis of the turbulence using the imaging technique will be discussed. (author)
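
    As a minimal illustration of the Fourier-Bessel expansion mentioned above, the Python sketch below projects a 2D fluctuation field sampled on a polar grid onto modes J_m(α_mn r/a)·exp(imθ); the grid, the number of modes, and the normalization convention are illustrative assumptions, not the analysis actually applied to the PANTA images.

        # Illustrative Fourier-Bessel decomposition of a camera frame f(r, theta)
        # on a disc of radius a: f = sum_{m,n} c_mn * J_m(alpha_mn * r / a) * exp(i*m*theta).
        import numpy as np
        from scipy.special import jv, jn_zeros

        def fourier_bessel_coeffs(f, r, theta, a, m_max=3, n_max=3):
            """f: 2D array on the (r, theta) grid, r along axis 0; returns {(m, n): c_mn}."""
            dr = r[1] - r[0]
            dth = theta[1] - theta[0]
            R, TH = np.meshgrid(r, theta, indexing="ij")
            coeffs = {}
            for m in range(m_max + 1):
                for n, alpha in enumerate(jn_zeros(m, n_max), start=1):
                    basis = jv(m, alpha * R / a) * np.exp(1j * m * TH)
                    norm = np.pi * a**2 * jv(m + 1, alpha) ** 2   # orthogonality norm on the disc
                    coeffs[(m, n)] = np.sum(f * np.conj(basis) * R) * dr * dth / norm
            return coeffs

        # Example: a pure m = 2, n = 1 mode is recovered as the dominant coefficient.
        a = 0.06                                  # plasma radius in m (assumed)
        r = np.linspace(0.0, a, 100)
        theta = np.linspace(0.0, 2 * np.pi, 180, endpoint=False)
        R, TH = np.meshgrid(r, theta, indexing="ij")
        field = np.real(jv(2, jn_zeros(2, 1)[0] * R / a) * np.exp(1j * 2 * TH))
        c = fourier_bessel_coeffs(field, r, theta, a)
        print(max(c, key=lambda k: abs(c[k])))    # -> (2, 1)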

  4. Resolved spectrophotometric properties of the Ceres surface from Dawn Framing Camera images

    Science.gov (United States)

    Schröder, S. E.; Mottola, S.; Carsenty, U.; Ciarniello, M.; Jaumann, R.; Li, J.-Y.; Longobardo, A.; Palmer, E.; Pieters, C.; Preusker, F.; Raymond, C. A.; Russell, C. T.

    2017-05-01

    We present a global spectrophotometric characterization of the Ceres surface using Dawn Framing Camera (FC) images. We identify the photometric model that yields the best results for photometrically correcting images. Corrected FC images acquired on approach to Ceres were assembled into global maps of albedo and color. Generally, albedo and color variations on Ceres are muted. The albedo map is dominated by a large, circular feature in Vendimia Planitia, known from HST images (Li et al., 2006), and dotted by smaller bright features mostly associated with fresh-looking craters. The dominant color variation over the surface is represented by the presence of "blue" material in and around such craters, which has a negative spectral slope over the visible wavelength range when compared to average terrain. We also mapped variations of the phase curve by employing an exponential photometric model, a technique previously applied to asteroid Vesta (Schröder et al., 2013b). The surface of Ceres scatters light differently from Vesta in the sense that the ejecta of several fresh-looking craters may be physically smooth rather than rough. High albedo, blue color, and physical smoothness all appear to be indicators of youth. The blue color may result from the desiccation of ejected material that is similar to the phyllosilicates/water ice mixtures in the experiments of Poch et al. (2016). The physical smoothness of some blue terrains would be consistent with an initially liquid condition, perhaps as a consequence of impact melting of subsurface water ice. We find red terrain (positive spectral slope) near Ernutet crater, where De Sanctis et al. (2017) detected organic material. The spectrophotometric properties of the large Vendimia Planitia feature suggest it is a palimpsest, consistent with the Marchi et al. (2016) impact basin hypothesis. The central bright area in Occator crater, Cerealia Facula, is the brightest on Ceres with an average visual normal albedo of about 0.6 at

  5. Development of a visible framing camera diagnostic for the study of current initiation in z-pinch plasmas

    International Nuclear Information System (INIS)

    Muron, D.J.; Hurst, M.J.; Derzon, M.S.

    1996-01-01

    The authors assembled and tested a visible framing camera system to take 5 ns FWHM images of the early-time emission from a z-pinch plasma. This diagnostic was used in conjunction with a visible streak camera, allowing early-time emission measurements to diagnose current initiation. Individual frames from gated image intensifiers were proximity coupled to charge injection device (CID) cameras and read out at video rate with 8-bit resolution. A mirror was used to view the pinch from a 90-degree angle. The authors observed the destruction of the mirror surface, due to the high surface heating, and the subsequent reduction in signal reflected from the mirror. This initial test of the equipment highlighted problems with the measurement. Images were obtained that showed early-time ejecta from the target and a nonuniform early-time emission, believed to be due to either spatially varying current density or heating of the foam. The results and suggestions for improvement are discussed in the text

  6. Flight Test Results From the Ultra High Resolution, Electro-Optical Framing Camera Containing a 9216 by 9216 Pixel, Wafer Scale, Focal Plane Array

    National Research Council Canada - National Science Library

    Mathews, Bruce; Zwicker, Theodore

    1999-01-01

    The details of the fabrication and results of laboratory testing of the Ultra High Resolution Framing Camera containing on-chip forward image motion compensation were presented to the SPIE at Airborne...

  7. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging.

    Science.gov (United States)

    Benedetti, L R; Holder, J P; Perkins, M; Brown, C G; Anderson, C S; Allen, F V; Petre, R B; Hargrove, D; Glenn, S M; Simanovskaia, N; Bradley, D K; Bell, P

    2016-02-01

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross-talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. We have developed a device that can be added to the framing camera head to prevent these artifacts.
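
    A hedged sketch of how two of the quoted gate-profile parameters (gate width and gate propagation velocity) could be extracted from sampled gain profiles; the data layout is an assumption, and the paper's actual analysis may differ.

        import numpy as np

        def gate_fwhm(t_ps, gain):
            """Gate width as the full width at half maximum of a sampled gain
            profile (assumes the profile is fully contained in the record)."""
            half = gain.max() / 2.0
            above = np.where(gain >= half)[0]
            i0, i1 = above[0], above[-1]
            t_lo = np.interp(half, [gain[i0 - 1], gain[i0]], [t_ps[i0 - 1], t_ps[i0]])
            t_hi = np.interp(half, [gain[i1 + 1], gain[i1]], [t_ps[i1 + 1], t_ps[i1]])
            return t_hi - t_lo

        def gate_velocity(x_mm, t_peak_ps):
            """Gate propagation velocity along a microstrip from the peak-arrival
            time measured at several positions (returned in mm/ns)."""
            slope_ps_per_mm = np.polyfit(x_mm, t_peak_ps, 1)[0]
            return 1.0e3 / slope_ps_per_mm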

  8. A study of fish behaviour in the extension of a demersal trawl using a multi-compartment separator frame and SIT camera system

    DEFF Research Database (Denmark)

    Krag, Ludvig Ahm; Madsen, Niels; Karlsen, Junita

    2009-01-01

    A rigid separator frame with three vertically stacked codends was used to study fish behaviour in the extension piece of a demersal trawl. A video camera recorded fish as they encountered the separator frame. Ten hauls were conducted in a mixed species fishery in the northern North Sea. Fish...

  9. The Infrared Camera for RATIR, a Rapid Response GRB Followup Instrument

    Science.gov (United States)

    Rapchun, David A.; Alardin, W.; Bigelow, B. C.; Bloom, J.; Butler, N.; Farah, A.; Fox, O. D.; Gehrels, N.; Gonzalez, J.; Klein, C.; Kutyrev, A. S.; Lotkin, G.; Morisset, C.; Moseley, S. H.; Richer, M.; Robinson, F. D.; Samuel, M. V.; Sparr, L. M.; Tucker, C.; Watson, A.

    2011-01-01

    RATIR (Reionization and Transients Infrared instrument) will be a hybrid optical/near IR imager that will utilize the "J-band dropout" to rapidly identify very high redshift (VHR) gamma-ray bursts (GRBs) from a sample of all observable Swift bursts. Our group at GSFC is developing the instrument in collaboration with UC Berkeley (UCB) and University of Mexico (UNAM). RATIR has both a visible and IR camera, which give it access to 8 bands spanning visible and IR wavelengths. The instrument implements a combination of filters and dichroics to provide the capability of performing photometry in 4 bands simultaneously. The GSFC group leads the design and construction of the instrument's IR camera, equipped with two HgCdTe 2k x 2k Teledyne detectors. The cryostat housing these detectors is cooled by a mechanical cryo-compressor, which allows uninterrupted operation on the telescope. The host 1.5-m telescope, located at the UNAM San Pedro Martir Observatory, Mexico, has recently undergone robotization, allowing for fully automated, continuous operation. After commissioning in the spring of 2011, RATIR will dedicate its time to obtaining prompt follow-up observations of GRBs and identifying VHR GRBs, thereby providing a valuable tool for studying the epoch of reionization.

  10. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    Science.gov (United States)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
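
    A small worked example of what the quoted IFOV figures mean on the ground: the linear scale of one pixel at a given range follows from the small-angle approximation (the 10 m range below is an arbitrary choice).

        def pixel_scale_mm(ifov_urad, range_m):
            """Linear size in mm subtended by one pixel at the given range."""
            return ifov_urad * 1e-6 * range_m * 1e3

        # Mastcam-34 (218 urad) vs. Mastcam-100 (74 urad) at 10 m
        print(pixel_scale_mm(218, 10.0))  # ~2.2 mm per pixel
        print(pixel_scale_mm(74, 10.0))   # ~0.7 mm per pixel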

  11. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Tian, Jinshou [Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-11-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control the time delay, electric focusing, image gain adjustment, switching of the sweep voltage, acquisition of environment parameters, etc. The system consists of 16 A/D converters, 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multiple outputs and a single-mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using a graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desired data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and a dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of a multi-channel laser on the Inertial Confinement Fusion Facility.

  12. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Tian, Jinshou; Liu, Zhen; Fang, Yuman; Gao, Guilong; Liang, Lingliang; Wen, Wenlong

    2015-01-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control the time delay, electric focusing, image gain adjustment, switching of the sweep voltage, acquisition of environment parameters, etc. The system consists of 16 A/D converters, 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multiple outputs and a single-mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using a graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desired data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and a dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of a multi-channel laser on the Inertial Confinement Fusion Facility.

  13. The test beamline of the European Spallation Source - Instrumentation development and wavelength frame multiplication

    DEFF Research Database (Denmark)

    Woracek, R.; Hofmann, T.; Bulat, M.

    2016-01-01

    which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor… wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components… This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.

  14. Detailed measurements and shaping of gate profiles for microchannel-plate-based X-ray framing cameras

    International Nuclear Information System (INIS)

    Landen, O.L.; Hammel, B.A.; Bell, P.M.; Abare, A.; Bradley, D.K.; Univ. of Rochester, NY

    1994-01-01

    Gated, microchannel-plate-based (MCP) framing cameras are increasingly used worldwide for x-ray imaging of subnanosecond laser-plasma phenomena. Large dynamic range (> 1,000) measurements of gain profiles for gated microchannel plates (MCP) are presented. Temporal profiles are reconstructed for any point on the microstrip transmission line from data acquired over many shots with variable delay. No evidence for significant pulse distortion by voltage reflections at the ends of the microstrip is observed. The measured profiles compare well to predictions by a time-dependent discrete dynode model down to the 1% level. The calculations do overestimate the contrast further into the temporal wings. The role of electron transit time dispersion in limiting the minimum achievable gate duration is then investigated by using variable duration flattop gating pulses. A minimum gate duration of 50 ps is achieved with flattop gating, consistent with a fractional transit time spread of ∼ 15%

  15. The test beamline of the European Spallation Source – Instrumentation development and wavelength frame multiplication

    International Nuclear Information System (INIS)

    Woracek, R.; Hofmann, T.; Bulat, M.; Sales, M.; Habicht, K.; Andersen, K.; Strobl, M.

    2016-01-01

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.

  16. The test beamline of the European Spallation Source – Instrumentation development and wavelength frame multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Woracek, R., E-mail: robin.woracek@esss.se [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Hofmann, T.; Bulat, M. [Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner Platz 1, 14109 Berlin (Germany); Sales, M. [Technical University of Denmark, Fysikvej, 2800 Kgs. Lyngby (Denmark); Habicht, K. [Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner Platz 1, 14109 Berlin (Germany); Andersen, K. [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Strobl, M. [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Technical University of Denmark, Fysikvej, 2800 Kgs. Lyngby (Denmark)

    2016-12-11

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.
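
    The time-of-flight relations behind the quoted wavelength band and resolution can be sketched as follows; the 50 m flight path and 0.3 ms effective pulse width are assumed numbers for illustration, not TBL parameters.

        H_OVER_M_N = 3956.034  # m * Angstrom / s (Planck constant / neutron mass)

        def tof_wavelength(t_s, flight_path_m):
            """Neutron wavelength in Angstrom from its time of flight over a path."""
            return H_OVER_M_N * t_s / flight_path_m

        def wavelength_resolution(pulse_width_s, t_s):
            """Relative wavelength resolution set by the effective pulse width."""
            return pulse_width_s / t_s

        # A 5 Angstrom neutron over an assumed 50 m path arrives after ~63 ms;
        # an assumed 0.3 ms effective pulse width then gives ~0.5% resolution.
        t = 5.0 * 50.0 / H_OVER_M_N
        print(t, wavelength_resolution(3e-4, t))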

  17. Implementation of 40-ps high-speed gated-microchannel-plate based x-ray framing cameras on reentrant SIM's for Nova

    International Nuclear Information System (INIS)

    Bell, P.M.; Kilkenny, J.D.; Landen, O.; Bradley, D.K.

    1994-01-01

    Gated framing cameras used in diagnosing laser-produced plasmas have been used on the Nova laser system since 1987, and many variations of these systems have been implemented. All of these cameras have ultimately been limited in response time for two reasons. The first is the electrical gating amplitude versus the gate width, which has always limited the detectable gain in the system. The second is the length-to-diameter (l/d) ratio of standard off-the-shelf microchannel plates (MCP), which sets the minimum electrical gate pulse that will give detectable gain from a given microchannel plate. The authors have implemented two different types of 40 ps framing camera configurations on the Nova laser system. They will describe the configurations of both systems as well as discuss the advantages of each.

  18. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
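
    A minimal sketch of the channel-separation step described here: the red and blue channels are unmixed with a 2x2 crosstalk matrix. The matrix coefficients and the RGB channel ordering are placeholders; the paper's actual correction method may differ.

        import numpy as np

        def separate_channels(color_img, crosstalk=np.array([[1.00, 0.15],
                                                             [0.12, 1.00]])):
            """Recover the two optical-path images from the red and blue channels
            of a color image (H x W x 3, RGB order) by inverting an assumed
            crosstalk matrix; coefficients must be calibrated per camera."""
            r = color_img[..., 0].astype(float)
            b = color_img[..., 2].astype(float)
            observed = np.stack([r, b], axis=-1)
            unmix = np.linalg.inv(crosstalk)
            views = observed @ unmix.T  # per-pixel unmixing
            return views[..., 0], views[..., 1]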

  19. Measurements of plasma termination in ICRF heated long pulse discharges with fast framing cameras in the Large Helical Device

    International Nuclear Information System (INIS)

    Shoji, Mamoru; Kasahara, Hiroshi; Tanaka, Hirohiko

    2015-01-01

    The termination process of long pulse plasma discharges in the Large Helical Device (LHD) has been observed with fast framing cameras. The observations show that the reason for the termination of the discharges has changed with increased plasma heating power, improvements of the plasma heating systems, changes of the divertor configuration, etc. For long pulse discharges in FYs 2010-2012, the main reason triggering the plasma termination was a reduction of the ICRF heating power accompanied by a rise of iron ion emission due to electric breakdown in an ICRF antenna. In the experimental campaign in FY2013, the duration of ICRF heated long pulse plasma discharges was extended to about 48 minutes with a plasma heating power of ∼1.2 MW and a line-averaged electron density of ∼1.2 × 10¹⁹ m⁻³. The termination of these discharges was triggered by the release of large amounts of carbon dust from the closed divertor regions, indicating that control of dust formation in the divertor regions is indispensable for extending the duration of long pulse discharges. (author)

  20. Seismic response and damage detection analyses of an instrumented steel moment-framed building

    Science.gov (United States)

    Rodgers, J.E.; Celebi, M.

    2006-01-01

    The seismic performance of steel moment-framed buildings has been of particular interest since brittle fractures were discovered at the beam-column connections in a number of buildings following the M 6.7 Northridge earthquake of January 17, 1994. A case study of the seismic behavior of an extensively instrumented 13-story steel moment frame building located in the greater Los Angeles area of California is described herein. Response studies using frequency domain, joint time-frequency, system identification, and simple damage detection analyses are performed using an extensive strong motion dataset dating from 1971 to the present, supported by engineering drawings and results of postearthquake inspections. These studies show that the building's response is more complex than would be expected from its highly symmetrical geometry. The response is characterized by low damping in the fundamental mode, larger accelerations in the middle and lower stories than at the roof and base, extended periods of vibration after the cessation of strong input shaking, beating in the response, elliptical particle motion, and significant torsion during strong shaking at the top of the concrete piers which extend from the basement to the second floor. The analyses conducted indicate that the response of the structure was elastic in all recorded earthquakes to date, including Northridge. Also, several simple damage detection methods employed did not indicate any structural damage or connection fractures. The combination of a large, real structure and low instrumentation density precluded the application of many recently proposed advanced damage detection methods in this case study. Overall, however, the findings of this study are consistent with the limited code-compliant postearthquake intrusive inspections conducted after the Northridge earthquake, which found no connection fractures or other structural damage. © ASCE.

  1. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    Science.gov (United States)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost, but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight application do not support the industry standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  2. Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera

    Science.gov (United States)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

    The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on axis field of view of 3 arc-minutes by 3 arc-minutes.

  3. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
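
    For reference, the agreement statistic reported in this record can be computed as below; this is a generic ICC(2,1) (two-way random, absolute agreement, single measures) sketch, not the authors' exact analysis pipeline.

        import numpy as np

        def icc_2_1(ratings):
            """ICC(2,1) for an (n subjects x k raters/sessions) matrix."""
            x = np.asarray(ratings, dtype=float)
            n, k = x.shape
            grand = x.mean()
            ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
            ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
            resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
            ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)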

  4. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
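
    The accuracy figure quoted here (deviation of the reconstructed inter-marker distance from the known bar length) can be computed with a short sketch of this kind; array names and units are assumptions.

        import numpy as np

        def wand_length_error(p1, p2, nominal_mm):
            """Mean absolute deviation (and spread) of the reconstructed distance
            between two rigid-bar markers from its nominal value.
            p1, p2: (N x 3) arrays of reconstructed positions in mm."""
            d = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float), axis=1)
            return np.mean(np.abs(d - nominal_mm)), np.std(d)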

  5. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Directory of Open Access Journals (Sweden)

    Gustavo R D Bernardina

    Full Text Available Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  6. Temporal resolution technology of a soft X-ray picosecond framing camera based on Chevron micro-channel plates gated in cascade

    Energy Technology Data Exchange (ETDEWEB)

    Yang Wenzheng [State Key Laboratory of Transient Optics and Photonics, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China)], E-mail: ywz@opt.ac.cn; Bai Yonglin; Liu Baiyu [State Key Laboratory of Transient Optics and Photonics, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Bai Xiaohong; Zhao Junping; Qin Junjun [Key Laboratory of Ultra-fast Photoelectric Diagnostics Technology, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China)

    2009-09-11

    We describe a soft X-ray picosecond framing camera (XFC) based on Chevron micro-channel plates (MCPs) gated in cascade for ultra-fast process diagnostics. The micro-strip lines are deposited on both the input and the output surfaces of the Chevron MCPs and can be gated by a negative (positive) electric pulse on the first (second) MCP. The gating is controlled by the time delay T_d between the two gating pulses. By increasing T_d, the temporal resolution and the gain of the camera are greatly improved compared with a single-gated MCP-XFC. The optimal T_d, which results in the best temporal resolution, is within the electron transit time and transit time spread of the MCP. Using 250 ps, ±2.5 kV gating pulses, the temporal resolution of the double-gated Chevron MCP camera is improved from 60 ps for the single-gated MCP-XFC to 37 ps for T_d = 350 ps. The principle is presented in detail and accompanied by a theoretical simulation and experimental results.
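
    A toy model of the cascade-gating idea, in which the effective gate is the overlap of two gain windows, the second delayed by T_d minus the inter-plate electron transit time; the Gaussian window shapes and all numbers are illustrative assumptions, not the paper's simulation.

        import numpy as np

        def effective_gate_fwhm(fwhm1_ps, fwhm2_ps, t_d_ps, transit_ps, dt_ps=1.0):
            """FWHM of the product of two Gaussian gain windows (toy model)."""
            t = np.arange(-2000.0, 2000.0, dt_ps)
            s1, s2 = fwhm1_ps / 2.355, fwhm2_ps / 2.355
            gain = np.exp(-0.5 * (t / s1) ** 2) \
                 * np.exp(-0.5 * ((t - (t_d_ps - transit_ps)) / s2) ** 2)
            above = t[gain >= gain.max() / 2.0]
            return above[-1] - above[0]

        # assumed 60 ps single-gate windows, 350 ps delay, 300 ps transit time
        print(effective_gate_fwhm(60.0, 60.0, t_d_ps=350.0, transit_ps=300.0))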

  7. The Television Framing Methods of the National Basketball Association: An Agenda-Setting Application.

    Science.gov (United States)

    Fortunato, John A.

    2001-01-01

    Identifies and analyzes the exposure and portrayal framing methods that are utilized by the National Basketball Association (NBA). Notes that key informant interviews provide insight into the exposure framing method and reveal two portrayal instruments: cameras and announcers; and three framing strategies: depicting the NBA as a team game,…

  8. A fast framing camera system for observation of acceleration and ablation of cryogenic hydrogen pellet in ASDEX Upgrade plasmas

    International Nuclear Information System (INIS)

    Kocsis, G.; Kalvin, S.; Veres, G.; Cierpka, P.; Lang, P.T.; Neuhauser, J.; Wittman, C.; ASDEX Upgrade Team

    2004-01-01

    An observation system using fast digital cameras was developed to measure a cryogenic hydrogen pellet's cloud structure, trajectory, and velocity changes during its ablation in ASDEX Upgrade plasmas. In this article the system, the applied numerical methods, and the results are presented. The three-dimensional pellet trajectory and velocity components were reconstructed from images of observations from two different directions. Pellet acceleration both in the radial and toroidal directions was detected. The pellet cloud distribution was measured with high spatio-temporal resolution. The cloud surrounding the pellet was found to be elongated along the magnetic field lines. Its typical size is 5-7 cm along the field lines and 2 cm in the perpendicular directions. A cloud extension in the poloidal direction was also observed which may be related to the drift of the detached part of the cloud

  9. Analysis of the three-dimensional trajectories of dusts observed with a stereoscopic fast framing camera in the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

    Shoji, M., E-mail: shoji@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan); Masuzaki, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan); Tanaka, Y. [Kanazawa University, Kakuma, Kanazawa 920-1192 (Japan); Pigarov, A.Yu.; Smirnov, R.D. [University of California at San Diego, La Jolla, CA 92093 (United States); Kawamura, G.; Uesugi, Y.; Yamada, H. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan)

    2015-08-15

    The three-dimensional trajectories of dusts have been observed with two stereoscopic fast framing cameras installed in upper and outer viewports in the Large Helical Device (LHD). The observations show that the dust trajectories are located in the divertor legs and in the ergodic layer around the main plasma confinement region. While most of the dusts are found to move approximately along the magnetic field lines with acceleration, some dusts have sharply curved trajectories crossing over the magnetic field lines. A dust transport simulation code was modified to investigate the dust trajectories in fully three-dimensional geometries such as LHD plasmas. It can explain the general trend of most of the observed dust trajectories by the effect of the plasma flow in the peripheral plasma. However, the behavior of the dusts with sharply curved trajectories is not consistent with the simulations.

  10. Telescope and mirrors development for the monolithic silicon carbide instrument of the osiris narrow angle camera

    Science.gov (United States)

    Calvel, Bertrand; Castel, Didier; Standarovski, Eric; Rousset, Gérard; Bougoin, Michel

    2017-11-01

    The international Rosetta mission, now planned by ESA to be launched in January 2003, will provide a unique opportunity to directly study the nucleus of comet 46P/Wirtanen and its activity in 2013. We describe here the design, the development and the performance of the telescope of the Narrow Angle Camera of the OSIRIS experiment, a silicon carbide telescope which will give high resolution images of the cometary nucleus in the visible spectrum. The development of the mirrors is detailed in particular. The SiC parts have been manufactured by BOOSTEC, polished by STIGMA OPTIQUE and ion figured by IOM under the prime contractorship of ASTRIUM. ASTRIUM was also in charge of the alignment. The final optical quality of the aligned telescope is 30 nm rms wavefront error.

  11. Management of the camera electronics programme for the World Space Observatory ultraviolet WUVS instrument

    Science.gov (United States)

    Patel, Gayatri; Clapp, Matthew; Salter, Mike; Waltham, Nick; Beardsley, Sarah

    2016-08-01

    World Space Observatory Ultraviolet (WSO-UV) is a major international collaboration led by Russia and will study the universe at ultraviolet wavelengths between 115 nm and 320 nm. The WSO Ultraviolet Spectrograph (WUVS) subsystem is led by a consortium of Russian institutes and consists of three spectrographs. RAL Space is contracted by e2v technologies Ltd to provide the CCD readout electronics for each of the three WUVS channels. The programme involves the design, manufacturing, assembly and testing of each Camera Electronics Box (CEB), its associated Interconnection Module (ICM), Electrical Ground Support Equipment (EGSE) and harness. An overview of the programme will be presented, from the initial design phase culminating in the development of an Engineering Model (EM), through qualification, whereby an Engineering Qualification Model (EQM) will undergo environmental testing to characterize the performance of the CEB against the space environment, to the delivery of the Flight Models (FMs). The paper will discuss the challenges faced in managing a large, dynamic project. These include managing significant changes in fundamental requirements mid-programme, as a result of external political issues, which forced a complete re-design from an existing CEB with extensive space heritage but containing many ITAR-controlled electronic components to a new, more efficient solution free of ITAR-controlled parts. The methodology and processes used to ensure the demanding schedule is maintained through each stage of the project will be presented, including an insight into planning, decision-making, communication, risk management, and resource management; all essential to the continued success of the programme.

  12. Frame, methods and instruments for energy planning in the new economic order of electricity economics

    International Nuclear Information System (INIS)

    Stigler, H.

    1999-01-01

    The introduction of the new economic order of the electricity economy creates new core tasks for the individual market participants and therefore new requirements for planning. As a precondition for energy planning, the Internal Market Electricity Directive and the ElWOG are examined and the tasks for the market participants are derived. Liberalization raises the risks for the enterprises, and increasing competition sets higher requirements for planning. The planning instruments no longer aim at minimum costs but have to maximize the results of the enterprise. Pricing requires a stronger alignment with marginal-cost considerations. Increasing electricity trade requires the introduction of new planning instruments. Further new tasks concern electricity transfer via other networks and especially congestion management. New chances but also new risks arise for the renewable energy sources. The market gives rise to new requirements for the planning instruments. The basics in this respect are prepared and concrete examples from practice are presented. Models of enterprises are developed which consist of a technical and a business part. Central importance attaches to the modeling of competition in the liberalized market; a model of competition between enterprises in the electricity market is developed. (author)

  13. Onboard calibration igneous targets for the Mars Science Laboratory Curiosity rover and the Chemistry Camera laser induced breakdown spectroscopy instrument

    Energy Technology Data Exchange (ETDEWEB)

    Fabre, C., E-mail: cecile.fabre@g2r.uhp-nancy.fr [G2R, Nancy Universite (France); Maurice, S.; Cousin, A. [IRAP, Toulouse (France); Wiens, R.C. [LANL, Los Alamos, NM (United States); Forni, O. [IRAP, Toulouse (France); Sautter, V. [MNHN, Paris (France); Guillaume, D. [GET, Toulouse (France)

    2011-03-15

    Accurate characterization of the Chemistry Camera (ChemCam) laser-induced breakdown spectroscopy (LIBS) on-board composition targets is of prime importance for the ChemCam instrument. The Mars Science Laboratory (MSL) science and operations teams expect ChemCam to provide the first compositional results at remote distances (1.5-7 m) during the in situ analyses of the Martian surface starting in 2012. Thus, establishing LIBS reference spectra from appropriate calibration standards must be undertaken diligently. Considering the global mineralogy of the Martian surface, and the possible landing sites, three specific compositions of igneous targets have been determined. Picritic, noritic, and shergottic glasses have been produced, along with a Macusanite natural glass. A sample of each target will fly on the MSL Curiosity rover deck, 1.56 m from the ChemCam instrument, and duplicates are available on the ground. Duplicates are considered to be identical, as the relative standard deviation (RSD) of the composition dispersion is around 8%. Electronic microprobe and laser ablation inductively coupled plasma mass spectrometry (LA ICP-MS) analyses give evidence that the chemical composition of the four silicate targets is very homogeneous at microscopic scales larger than the instrument spot size, with RSD < 5% for concentration variations > 0.1 wt.% using electronic microprobe, and < 10% for concentration variations > 0.01 wt.% using LA ICP-MS. The LIBS campaign on the igneous targets performed under flight-like Mars conditions establishes reference spectra for the entire mission. The LIBS spectra between 240 and 900 nm are extremely rich, hundreds of lines with high signal-to-noise, and a dynamical range sufficient to identify unambiguously major, minor and trace elements. For instance, a first LIBS calibration curve has been established for strontium from [Sr] = 284 ppm to [Sr] = 1480 ppm, showing the potential for the future calibrations for other major or minor

  14. Onboard calibration igneous targets for the Mars Science Laboratory Curiosity rover and the Chemistry Camera laser induced breakdown spectroscopy instrument

    International Nuclear Information System (INIS)

    Fabre, C.; Maurice, S.; Cousin, A.; Wiens, R.C.; Forni, O.; Sautter, V.; Guillaume, D.

    2011-01-01

    Accurate characterization of the Chemistry Camera (ChemCam) laser-induced breakdown spectroscopy (LIBS) on-board composition targets is of prime importance for the ChemCam instrument. The Mars Science Laboratory (MSL) science and operations teams expect ChemCam to provide the first compositional results at remote distances (1.5-7 m) during the in situ analyses of the Martian surface starting in 2012. Thus, establishing LIBS reference spectra from appropriate calibration standards must be undertaken diligently. Considering the global mineralogy of the Martian surface, and the possible landing sites, three specific compositions of igneous targets have been determined. Picritic, noritic, and shergottic glasses have been produced, along with a Macusanite natural glass. A sample of each target will fly on the MSL Curiosity rover deck, 1.56 m from the ChemCam instrument, and duplicates are available on the ground. Duplicates are considered to be identical, as the relative standard deviation (RSD) of the composition dispersion is around 8%. Electronic microprobe and laser ablation inductively coupled plasma mass spectrometry (LA ICP-MS) analyses give evidence that the chemical composition of the four silicate targets is very homogeneous at microscopic scales larger than the instrument spot size, with RSD < 5% for concentration variations > 0.1 wt.% using electronic microprobe, and < 10% for concentration variations > 0.01 wt.% using LA ICP-MS. The LIBS campaign on the igneous targets performed under flight-like Mars conditions establishes reference spectra for the entire mission. The LIBS spectra between 240 and 900 nm are extremely rich, hundreds of lines with high signal-to-noise, and a dynamical range sufficient to identify unambiguously major, minor and trace elements. For instance, a first LIBS calibration curve has been established for strontium from [Sr] = 284 ppm to [Sr] = 1480 ppm, showing the potential for the future calibrations for other major or minor elements.
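
    The strontium calibration curve mentioned above is, in essence, a univariate least-squares fit of line intensity against concentration; a generic sketch follows (function and variable names are assumptions, not the ChemCam pipeline).

        import numpy as np

        def libs_calibration(concentrations_ppm, line_intensities):
            """Fit a linear calibration curve and return a concentration predictor."""
            c = np.asarray(concentrations_ppm, float)
            y = np.asarray(line_intensities, float)
            slope, intercept = np.polyfit(c, y, 1)

            def predict_ppm(intensity):
                return (intensity - intercept) / slope

            return slope, intercept, predict_ppm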

  15. Space telescope phase B definition study. Volume 2A: Science instruments, f48/96 planetary camera

    Science.gov (United States)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

    The analysis and preliminary design of the f48/96 planetary camera for the space telescope are discussed. The camera design is for application to the axial module position of the optical telescope assembly.

  16. Systems approach to the design of the CCD sensors and camera electronics for the AIA and HMI instruments on solar dynamics observatory

    Science.gov (United States)

    Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.

    2017-11-01

    Solar Dynamics Observatory (SDO) is imaging the Sun in many wavelengths near simultaneously and with a resolution ten times higher than the average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share use of a custom-designed 16 million pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than before, motivated by our wish to simplify the design of the camera readout electronics. Here, the challenge lies in the design of circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and to high precision. The challenge is greatly exacerbated when forced to work with only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.

  17. Imacon 600 ultrafast streak camera evaluation

    International Nuclear Information System (INIS)

    Owen, T.C.; Coleman, L.W.

    1975-01-01

    The Imacon 600 has a number of designed-in disadvantages for use as an ultrafast diagnostic instrument. The unit is physically large (approximately 5' long) and uses an external power supply rack for the image intensifier. Water cooling is required for the intensifier; it is quiet but not conducive to portability. There is no interlock on the cooling water. The camera does have several switch-selectable sweep speeds, which is desirable if one is working with both slow and fast events. The camera can be run in a framing mode. (MOW)

  18. The wavelength frame multiplication chopper system for the ESS test beamline at the BER II reactor—A concept study of a fundamental ESS instrument principle

    International Nuclear Information System (INIS)

    Strobl, M.; Bulat, M.; Habicht, K.

    2013-01-01

    Contributing to the design update phase of the European Spallation Source ESS (scheduled to start operation in 2019), a test beamline is under construction at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). This beamline offers experimental test capabilities of instrument concepts viable for the ESS. The experiments envisaged at this dedicated beamline comprise testing of components as well as of novel experimental approaches and methods taking advantage of the long pulse characteristic of the ESS source. Therefore the test beamline will be equipped with a sophisticated chopper system that provides the specific time structure of the ESS and enables variable wavelength resolutions via wavelength frame multiplication (WFM), a fundamental instrument concept beneficial for a number of instruments at ESS. We describe the unique chopper system developed for these purposes, which allows constant wavelength resolution for a wide wavelength band. Furthermore we discuss the implications for the conceptual design for related instrumentation at the ESS.

  19. Framing the frame

    OpenAIRE

    Todd McElroy; John J. Seta

    2007-01-01

    We examined how the goal of a decision task influences the perceived positive or negative valence of the alternatives and thereby the likelihood and direction of framing effects. In Study 1 we manipulated the goal to increase, decrease or maintain the commodity in question and found that when the goal of the task was to increase the commodity, a framing effect consistent with those typically observed in the literature was found. When the goal was to decrease, a framing effect opposite to the ty...

  20. Instrumentation

    International Nuclear Information System (INIS)

    Prieur, G.; Nadi, M.; Hedjiedj, A.; Weber, S.

    1995-01-01

    This second chapter on instrumentation gives a brief general consideration of the history and classification of instrumentation, and two specific states of the art. The first one concerns NMR (block diagram of the instrumentation chain with details on the magnets, gradients, probes and reception unit). The second one concerns precision instrumentation (optical fiber gyrometer and scanning electron microscope) and its data processing tools (programmability, the VXI standard and its history). The chapter ends with future trends on smart sensors and Field Emission Displays. (D.L.). Refs., figs

  1. Determining the timeline of ultra-high speed images of the Cordin 550-32 camera

    CSIR Research Space (South Africa)

    Olivier, M

    2014-09-01

    Full Text Available diagnostic instrumentation. In such cases the synchronisation of the diagnostics is paramount. Here, the camera-generated info.txt file is utilised to determine the time of each frame with respect to the system trigger, enabling synchronisation...
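
    A hedged sketch of the kind of parsing implied here: reading per-frame times relative to the system trigger from the camera-generated text file. The "Frame <n>: <time> ns" pattern is a placeholder; the real Cordin file layout must be checked.

        import re

        def frame_times_ns(info_txt_path):
            """Return {frame number: time in ns relative to trigger} parsed from
            an info text file with lines like 'Frame 3: 1250.0 ns' (assumed format)."""
            pattern = re.compile(r"Frame\s+(\d+)\s*:\s*([-+]?\d+(?:\.\d+)?)\s*ns")
            times = {}
            with open(info_txt_path) as fh:
                for line in fh:
                    m = pattern.search(line)
                    if m:
                        times[int(m.group(1))] = float(m.group(2))
            return times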

  2. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    The resolution of cameras has improved drastically in response to the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs for capturing different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet for capturing higher resolution and frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  3. Instrumentation

    International Nuclear Information System (INIS)

    Decreton, M.

    2000-01-01

    SCK-CEN's research and development programme on instrumentation aims at evaluating the potentials of new instrumentation technologies under the severe constraints of a nuclear application. It focuses on the tolerance of sensors to high radiation doses, including optical fibre sensors, and on the related intelligent data processing needed to cope with the nuclear constraints. Main achievements in these domains in 1999 are summarised

  4. Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M

    2001-04-01

    SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor.

  5. Instrumentation

    International Nuclear Information System (INIS)

    Decreton, M.

    2001-01-01

    SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor

  6. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross sectional slice of the plasma. Since all the data in the digitizer memory is synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage

  7. Framing of mobility items: a source of poor agreement between preference-based health-related quality of life instruments in a population of individuals receiving assisted ventilation.

    Science.gov (United States)

    Hannan, Liam M; Whitehurst, David G T; Bryan, Stirling; Road, Jeremy D; McDonald, Christine F; Berlowitz, David J; Howard, Mark E

    2017-06-01

    To explore the influence of descriptive differences in items evaluating mobility on index scores generated from two generic preference-based health-related quality of life (HRQoL) instruments. The study examined cross-sectional data from a postal survey of individuals receiving assisted ventilation in two state/province-wide home mechanical ventilation services, one in British Columbia, Canada and the other in Victoria, Australia. The Assessment of Quality of Life 8-dimension (AQoL-8D) and the EQ-5D-5L were included in the data collection. Graphical illustrations, descriptive statistics, and measures of agreement [intraclass correlation coefficients (ICCs) and Bland-Altman plots] were examined using index scores derived from both instruments. Analyses were performed on the full sample as well as subgroups defined according to respondents' self-reported ability to walk. Of 868 individuals receiving assisted ventilation, 481 (55.4%) completed the questionnaire. Mean index scores were 0.581 (AQoL-8D) and 0.566 (EQ-5D-5L) with 'moderate' agreement demonstrated between the two instruments (ICC = 0.642). One hundred fifty-nine (33.1%) reported level 5 ('I am unable to walk about') on the EQ-5D-5L Mobility item. The walking status of respondents had a marked influence on the comparability of index scores, with a larger mean difference (0.206) and 'slight' agreement (ICC = 0.386) observed when the non-ambulant subgroup was evaluated separately. This study provides further evidence that between-measure discrepancies between preference-based HRQoL instruments are related in part to the framing of mobility-related items. Longitudinal studies are necessary to determine the responsiveness of preference-based HRQoL instruments in cohorts that include non-ambulant individuals.
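
    The Bland-Altman agreement summary used in this record reduces to a mean difference and 95% limits of agreement between the two index scores; a minimal sketch, with the instrument order as an assumption.

        import numpy as np

        def bland_altman(index_a, index_b):
            """Bias and 95% limits of agreement between paired index scores
            (e.g., AQoL-8D minus EQ-5D-5L for the same respondents)."""
            a = np.asarray(index_a, float)
            b = np.asarray(index_b, float)
            diff = a - b
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, (bias - half_width, bias + half_width)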

  8. Caliste-SO X-ray micro-camera for the STIX instrument on-board Solar Orbiter space mission

    International Nuclear Information System (INIS)

    Meuris, A.; Hurford, G.; Bednarzik, M.; Limousin, O.; Gevin, O.; Le Mer, I.; Martignac, J.; Horeau, B.; Grimm, O.; Resanovic, R.; Krucker, S.; Orleański, P.

    2012-01-01

    The Spectrometer Telescope for Imaging X-rays (STIX) is an instrument on the Solar Orbiter space mission that performs hard X-ray imaging spectroscopy of solar flares. It consists of 32 collimators with grids and 32 spectrometer units called Caliste-SO for indirect Fourier-transform imaging. Each Caliste-SO device integrates a 1 cm² CdTe pixel sensor with a low-noise low-power analog front-end ASIC and circuits for supply regulation and filtering. The ASIC named IDeF-X HD is designed by CEA/Irfu (France) whereas CdTe-based semiconductor detectors are provided by the Laboratory for Micro- and Nanotechnology, Paul Scherrer Institute (Switzerland). The design of the hybrid, based on 3D Plus technology (France), is well suited for STIX spectroscopic requirements (1 keV FWHM at 6 keV, 4 keV low-level threshold) and system constraints (4 W power and 5 kg mass). The performance of the sub-assemblies and the design of the first Caliste-SO prototype are presented.

  9. Instrumentation

    International Nuclear Information System (INIS)

    Decreton, M.

    2002-01-01

    SCK-CEN's R and D programme on instrumentation involves the development of advanced instrumentation systems for nuclear applications as well as the assessment of the performance of these instruments in a radiation environment. Particular emphasis is on the use of optical fibres as umbilical links of a remote handling unit for use during maintenance of a fusion reactor; studies on the radiation hardening of plasma diagnostic systems; investigations on new instrumentation for the future MYRRHA accelerator driven system; space applications related to radiation-hardened lenses; the development of new approaches for dose, temperature and strain measurements; the assessment of radiation-hardened sensors and motors for remote handling tasks and studies of dose measurement systems including the use of optical fibres. Progress and achievements in these areas for 2001 are described.

  10. Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M

    2002-04-01

    SCK-CEN's R and D programme on instrumentation involves the development of advanced instrumentation systems for nuclear applications as well as the assessment of the performance of these instruments in a radiation environment. Particular emphasis is on the use of optical fibres as umbilical links of a remote handling unit for use during maintenance of a fusion reactor; studies on the radiation hardening of plasma diagnostic systems; investigations on new instrumentation for the future MYRRHA accelerator driven system; space applications related to radiation-hardened lenses; the development of new approaches for dose, temperature and strain measurements; the assessment of radiation-hardened sensors and motors for remote handling tasks and studies of dose measurement systems including the use of optical fibres. Progress and achievements in these areas for 2001 are described.

  11. Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M

    2000-07-01

    SCK-CEN's research and development programme on instrumentation aims at evaluating the potentials of new instrumentation technologies under the severe constraints of a nuclear application. It focuses on the tolerance of sensors to high radiation doses, including optical fibre sensors, and on the related intelligent data processing needed to cope with the nuclear constraints. Main achievements in these domains in 1999 are summarised.

  12. Instrumentation

    International Nuclear Information System (INIS)

    Umminger, K.

    2008-01-01

    A proper measurement of the relevant single and two-phase flow parameters is the basis for the understanding of many complex thermal-hydraulic processes. Reliable instrumentation is therefore necessary for the interaction between analysis and experiment, especially in the field of nuclear safety research, where postulated accident scenarios have to be simulated in experimental facilities and predicted by complex computer code systems. The so-called conventional instrumentation for the measurement of e.g. pressures, temperatures, pressure differences and single phase flow velocities is still a solid basis for the investigation and interpretation of many phenomena and especially for the understanding of the overall system behavior. Measurement data from such instrumentation still serves in many cases as a database for thermal-hydraulic system codes. However some special instrumentation, such as online concentration measurement for boric acid in the water phase or for non-condensables in a steam atmosphere, as well as flow visualization techniques were further developed and successfully applied during the recent years. Concerning the modeling needs for advanced thermal-hydraulic codes, significant advances have been accomplished in the last few years in the local instrumentation technology for two-phase flow by the application of new sensor techniques, optical or beam methods and electronic technology. This paper will give insight into the current state of instrumentation technology for safety-related thermal-hydraulic experiments. Advantages and limitations of some measurement processes and systems will be indicated as well as trends and possibilities for further development. Aspects of instrumentation in operating reactors will also be mentioned.

  13. Instruments

    International Nuclear Information System (INIS)

    Buehrer, W.

    1996-01-01

    The present paper conveys a basic knowledge of the most commonly used experimental techniques. We discuss the principles and concepts necessary to understand what one is doing when performing an experiment on a given instrument. (author) 29 figs., 1 tab., refs

  14. Framing the frame

    Directory of Open Access Journals (Sweden)

    Todd McElroy

    2007-08-01

    Full Text Available We examined how the goal of a decision task influences the perceived positive or negative valence of the alternatives and thereby the likelihood and direction of framing effects. In Study 1 we manipulated the goal to increase, decrease or maintain the commodity in question and found that when the goal of the task was to increase the commodity, a framing effect consistent with those typically observed in the literature was found. When the goal was to decrease, a framing effect opposite to the typical findings was observed, whereas when the goal was to maintain, no framing effect was found. When we examined the decisions of the entire population, we did not observe a framing effect. In Study 2, we provided participants with a similar decision task except in this situation the goal was ambiguous, allowing us to observe participants' self-imposed goals and how they influenced choice preferences. The findings from Study 2 demonstrated individual variability in imposed goals and provided a conceptual replication of Study 1.

  15. Instrumentation

    International Nuclear Information System (INIS)

    Muehllehner, G.; Colsher, J.G.

    1982-01-01

    This chapter reviews the parameters which are important to positron-imaging instruments. It summarizes the options which various groups have explored in designing tomographs and the methods which have been developed to overcome some of the limitations inherent in the technique as well as in present instruments. The chapter is not presented as a defense of positron imaging versus single-photon or other imaging modality, neither does it contain a description of various existing instruments, but rather stresses their common properties and problems. Design parameters which are considered are resolution, sampling requirements, sensitivity, methods of eliminating scattered radiation, random coincidences and attenuation. The implementation of these parameters is considered, with special reference to sampling, choice of detector material, detector ring diameter and shielding and variations in point spread function. Quantitation problems discussed are normalization, and attenuation and random corrections. Present developments mentioned are noise reduction through time-of-flight-assisted tomography and signal to noise improvements through high intrinsic resolution. Extensive bibliography. (U.K.)

  16. Compliance Framing - Framing Compliance

    OpenAIRE

    Lutz-Ulrich Haack; Martin C. Reimann

    2012-01-01

    Corporations have to install various organizational measures to comply systematically with legal as well as internal guidelines. Compliance management systems face the challenging task of using an internal compliance-marketing approach in order to ensure not only an adequate but also an effective compliance culture. The compliance literature and findings from persuasive goal-framing theory give opposite implications for establishing a values-based versus a rule-based compliance culture, respectively...

  17. Framing Camera Improvements and hydrodynamic Experiments

    National Research Council Canada - National Science Library

    Drake, R. P

    2007-01-01

    .... We also propose to participate in hydrodynamic experiments at NRL whenever they occur, to prepare for an experiment for NIKE to study the onset of turbulence via the Kelvin Helmholtz instability...

  18. Frames and semi-frames

    International Nuclear Information System (INIS)

    Antoine, Jean-Pierre; Balazs, Peter

    2011-01-01

    Loosely speaking, a semi-frame is a generalized frame for which one of the frame bounds is absent. More precisely, given a total sequence in a Hilbert space, we speak of an upper (resp. lower) semi-frame if only the upper (resp. lower) frame bound is valid. Equivalently, for an upper semi-frame, the frame operator is bounded, but has an unbounded inverse, whereas a lower semi-frame has an unbounded frame operator, with a bounded inverse. We study mostly upper semi-frames, both in the continuous and discrete case, and give some remarks for the dual situation. In particular, we show that reconstruction is still possible in certain cases.

  19. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  20. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  1. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd glass laser acts as an ultra-fast and periodic shutter, with an opening time of a few ps. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing us to study very fast effects [fr

  2. Quantum frames

    Science.gov (United States)

    Brown, Matthew J.

    2014-02-01

    The framework of quantum frames can help unravel some of the interpretive difficulties in the foundations of quantum mechanics. In this paper, I begin by tracing the origins of this concept in Bohr's discussion of quantum theory and his theory of complementarity. Engaging with various interpreters and followers of Bohr, I argue that the correct account of quantum frames must be extended beyond literal space-time reference frames to frames defined by relations between a quantum system and the exosystem or external physical frame, of which measurement contexts are a particularly important example. This approach provides superior solutions to key EPR-type measurement and locality paradoxes.

  3. Media Framing

    DEFF Research Database (Denmark)

    Pedersen, Rasmus T.

    2017-01-01

    The concept of media framing refers to the way in which the news media organize and provide meaning to a news story by emphasizing some parts of reality and disregarding other parts. These patterns of emphasis and exclusion in news coverage create frames that can have considerable effects on news consumers’ perceptions and attitudes regarding the given issue or event. This entry briefly elaborates on the concept of media framing, presents key types of media frames, and introduces the research on media framing effects.

  4. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  5. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  6. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  7. New Sensors for Cultural Heritage Metric Survey: The ToF Cameras

    Directory of Open Access Journals (Sweden)

    Filiberto Chiabrando

    2011-12-01

    Full Text Available ToF cameras are new instruments based on CCD/CMOS sensors which measure distances instead of radiometry. The resulting point clouds show the same properties (both in terms of accuracy and resolution) as the point clouds acquired by means of traditional LiDAR devices. ToF cameras are cheap instruments (less than 10,000 €) based on real-time video distance measurements and can represent an interesting alternative to the more expensive LiDAR instruments. In addition, the limited weight and dimensions of ToF cameras allow a reduction of some practical problems such as transportation and on-site management. Most of the commercial ToF cameras use the phase-shift method to measure distances. Due to the use of only one wavelength, most of them have a limited range of application (usually about 5 or 10 m). After a brief description of the main characteristics of these instruments, this paper explains and comments on the results of the first experimental applications of ToF cameras in Cultural Heritage 3D metric survey. The possibility to acquire more than 30 frames/s and future developments of these devices in terms of the use of more than one wavelength to overcome the ambiguity problem allow us to foresee new interesting applications.
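
    As a worked illustration of the phase-shift principle and the single-wavelength range ambiguity mentioned above, distance follows d = c·Δφ/(4π·f_mod) and the unambiguous range is c/(2·f_mod); the modulation frequency below is chosen only for illustration:

        import math

        C = 299_792_458.0  # speed of light, m/s

        def tof_distance(phase_shift_rad, f_mod_hz):
            """Distance corresponding to a measured phase shift of the modulated signal."""
            return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

        def unambiguous_range(f_mod_hz):
            """Beyond this distance the phase wraps and the measurement becomes ambiguous."""
            return C / (2 * f_mod_hz)

        f_mod = 20e6  # 20 MHz modulation (illustrative)
        print(f"unambiguous range: {unambiguous_range(f_mod):.2f} m")          # ~7.5 m
        print(f"phase shift pi/2 -> {tof_distance(math.pi / 2, f_mod):.2f} m")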

  8. Mars Science Laboratory Frame Manager for Centralized Frame Tree Database and Target Pointing

    Science.gov (United States)

    Kim, Won S.; Leger, Chris; Peters, Stephen; Carsten, Joseph; Diaz-Calderon, Antonio

    2013-01-01

    The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Actual frame transforms are updated by their owner. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients including ARM and RSM (Remote Sensing Mast) update their related rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.
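
    The centralized frame-tree query can be pictured as composing homogeneous transforms along the path between two frames through their common root. A minimal sketch; the frame names and transforms below are invented for illustration and are not the actual MSL frame set:

        import numpy as np

        def trans(x, y, z):
            """Homogeneous transform for a pure translation."""
            T = np.eye(4)
            T[:3, 3] = [x, y, z]
            return T

        # parent[frame] = (parent_frame, transform taking frame coords to parent coords)
        parent = {
            "site":   (None,    np.eye(4)),
            "rover":  ("site",  trans(2.0, 1.0, 0.0)),
            "mast":   ("rover", trans(0.5, 0.0, 1.2)),
            "camera": ("mast",  trans(0.1, 0.0, 0.3)),
        }

        def to_root(frame):
            """Transform taking coordinates in `frame` to the root (site) frame."""
            T = np.eye(4)
            while parent[frame][0] is not None:
                p, T_p_f = parent[frame]
                T = T_p_f @ T
                frame = p
            return T

        def transform(from_frame, to_frame):
            """T such that x_to = T @ x_from, composed through the common root."""
            return np.linalg.inv(to_root(to_frame)) @ to_root(from_frame)

        print(transform("camera", "rover"))  # camera frame expressed in the rover frame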

  9. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence-driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R ≈ 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40,500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulence codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor.

  10. Quotation and Framing

    DEFF Research Database (Denmark)

    Petersen, Nils Holger

    2010-01-01

    In Black Angels the composer – among other well-known pieces of music – quotes the medieval dies irae sequence and the second movement of Schubert’s string quartet in D minor (D. 810). The musical and intermedial references are framed with striking modernistic sounds exploring instrumental possibilities...

  11. Framing theory

    NARCIS (Netherlands)

    de Vreese, C.H.; Lecheler, S.; Mazzoleni, G.; Barnhurst, K.G.; Ikeda, K.; Maia, R.C.M.; Wessler, H.

    2016-01-01

    Political issues can be viewed from different perspectives and they can be defined differently in the news media by emphasizing some aspects and leaving others aside. This is at the core of news framing theory. Framing originates within sociology and psychology and has become one of the most used

  12. On Framing

    DEFF Research Database (Denmark)

    Peder Pedersen, Claus

    2018-01-01

    On framing as artistic and conceptual tool in the works of Claudia Carbone. Contribution to exhibition at the Aarhus School of Architecture.

  13. Framing politics

    NARCIS (Netherlands)

    Lecheler, S.K.

    2010-01-01

    This dissertation supplies a number of research findings that add to a theory of news framing effects, and also to the understanding of the role media effects play in political communication. We show that researchers must think more about what actually constitutes a framing effect, and that a

  14. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  15. Aircraft Instrument, Fire Protection, Warning, Communication, Navigation and Cabin Atmosphere Control System (Course Outline), Aviation Mechanics 3 (Air Frame): 9067.04.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    This document presents an outline for a 135-hour course designed to familiarize the student with manipulative skills and theoretical knowledge concerning aircraft instrument systems like major flight and engine instruments; fire protection and fire fighting systems; warning systems and navigation systems; aircraft cabin control systems, such as…

  16. Competitive Framing

    OpenAIRE

    Ran Spiegler

    2014-01-01

    I present a simple framework for modeling two-firm market competition when consumer choice is "frame-dependent", and firms use costless "marketing messages" to influence the consumer's frame. This framework embeds several recent models in the "behavioral industrial organization" literature. I identify a property that consumer choice may satisfy, which extends the concept of Weighted Regularity due to Piccione and Spiegler (2012), and provide a characterization of Nash equilibria under this pr...

  17. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen

  18. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides a technical description of the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument provides capture of video images available in CCIR format. Two memory planes each with a capacity of 512 x 512 x 8 bit data enable storage of two video image frames. The stored image can be processed on-line and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs

  19. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a simplified setup as compared with the state of the art is described, permitting, apart from good localization, also energy discrimination. Behind the usual vacuum image amplifier a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localizing of the signals is achieved by a delay line, energy determination by means of a pulse height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de

  20. A new high-speed IR camera system

    Science.gov (United States)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  1. Development of a portable X-ray and gamma-ray detector instrument and imaging camera for use in radioactive and hazardous materials management

    International Nuclear Information System (INIS)

    Scyoc, J.M. van; James, R.B.; Anderson, R.J.

    1997-08-01

    The overall goal of this LDRD project was to develop instruments for use in the management of radioactive and hazardous wastes. Devices for identifying and imaging such wastes are critical to developing environmental remediation strategies. Field portable units are required to enable the on-site analysis of solids, liquids, and gas effluents. Red mercuric iodide (α-HgI₂) is a semiconductor material that can be operated as a high-energy-resolution radiation detector at ambient temperatures. This property provides the needed performance of conventional germanium- and silicon-based devices, while eliminating the need for the cryogenic cooling of such instruments. The first year of this project focused on improving the materials properties of the mercuric iodide to enable the new sensor technology; in particular the charge carrier traps limiting device performance were determined and eliminated. The second year involved the development of a field portable x-ray fluorescence analyzer for compositional analyses. The third and final year of the project focused on the development of imaging sensors to provide the capability for mapping the composition of waste masses. This project resulted in instruments useful not only for managing hazardous and radioactive wastes, but also for a variety of industrial and national security applications.

  2. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a 1-million-fps video camera with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is in progress and, hopefully, it will be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  3. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    corrections to be done at an even higher rate, more than one thousand times a second, and this is where OCam is essential. "The quality of the adaptive optics correction strongly depends on the speed of the camera and on its sensitivity," says Philippe Feautrier from the LAOG, France, who coordinated the whole project. "But these are a priori contradictory requirements, as in general the faster a camera is, the less sensitive it is." This is why cameras normally used for very high frame-rate movies require extremely powerful illumination, which is of course not an option for astronomical cameras. OCam and its CCD220 detector, developed by the British manufacturer e2v technologies, solve this dilemma, by being not only the fastest available, but also very sensitive, making a significant jump in performance for such cameras. Because of imperfect operation of any physical electronic devices, a CCD camera suffers from so-called readout noise. OCam has a readout noise ten times smaller than the detectors currently used on the VLT, making it much more sensitive and able to take pictures of the faintest of sources. "Thanks to this technology, all the new generation instruments of ESO's Very Large Telescope will be able to produce the best possible images, with an unequalled sharpness," declares Jean-Luc Gach, from the Laboratoire d'Astrophysique de Marseille, France, who led the team that built the camera. "Plans are now underway to develop the adaptive optics detectors required for ESO's planned 42-metre European Extremely Large Telescope, together with our research partners and the industry," says Hubin. Using sensitive detectors developed in the UK, with a control system developed in France, with German and Spanish participation, OCam is truly an outcome of a European collaboration that will be widely used and commercially produced. More information The three French laboratories involved are the Laboratoire d'Astrophysique de Marseille (LAM/INSU/CNRS, Université de Provence

  4. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes, so that the phototubes can be positioned as close to the scintillator as possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras.

  5. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distance between the individual photomultiplier tubes of the multiple radioisotope camera on one hand and between the tube configuration and the scintillator plate on the other. For this purpose the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG) [de

  6. Framing scales and scaling frames

    NARCIS (Netherlands)

    van Lieshout, M.; Dewulf, A.; Aarts, N.; Termeer, K.

    2009-01-01

    Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment

  7. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.
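
    The intensity-change trigger can be illustrated with a toy monitoring loop: watch a region of each full frame and switch to fast ROI readout when the change exceeds a threshold. The array sizes, threshold and trigger logic below are invented for illustration and are not the EDICAM firmware:

        import numpy as np

        rng = np.random.default_rng(0)

        def detect_event(prev_frame, frame, roi, threshold=30.0):
            """True if the mean absolute change inside the ROI exceeds the threshold."""
            (r0, r1), (c0, c1) = roi
            delta = np.abs(frame[r0:r1, c0:c1].astype(float)
                           - prev_frame[r0:r1, c0:c1].astype(float))
            return delta.mean() > threshold

        roi = ((100, 164), (200, 264))  # 64 x 64 region of interest
        prev = rng.integers(0, 50, (1024, 1280)).astype(np.uint8)
        for i in range(5):
            frame = rng.integers(0, 50, (1024, 1280)).astype(np.uint8)
            if i == 3:                  # simulate a bright transient inside the ROI
                frame[110:150, 210:250] += 100
            if detect_event(prev, frame, roi):
                print(f"frame {i}: event detected -> switch to fast ROI readout")
            prev = frame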

  8. Framing Innovation

    DEFF Research Database (Denmark)

    Haase, Louise Møller; Laursen, Linda Nhu

    2017-01-01

    Designing a remarkable product innovation is a difficult challenge that businesses today are continuously striving to tackle. This challenge is particularly present in the early phase of innovation, where the main product concept and frames of the innovation are determined. A main challenge in the early phase is the reasoning process: innovation teams are faced with open-ended, ill-defined problems, where they need to make decisions about an unknown future having only incomplete, ambiguous and contradicting insights available. We study the reasoning of experts, how they frame to make sense of all the insights and create a basis for decision making in relation to a new project. Based on case studies of five innovative products from various industries, we suggest a Product Reasoning Model for understanding reasoning and envisioning of new product innovations in the early phases of innovation.

  9. Framing Innovation

    DEFF Research Database (Denmark)

    Haase, Louise Møller; Laursen, Linda Nhu

    2017-01-01

    Designing a remarkable product innovation is a difficult challenge that businesses today are continuously striving to tackle. This challenge is particularly present in the early phase of innovation, where the main product concept and frames of the innovation are determined. A main challenge in the early phase is the reasoning process: innovation teams are faced with open-ended, ill-defined problems, where they need to make decisions about an unknown future having only incomplete, ambiguous and contradicting insights available. We study the reasoning of experts, how they frame to make sense of all the insights and create a basis for decision making in relation to a new project. Based on case studies of five innovative products from various industries, we suggest a Product Reasoning Model for understanding reasoning and envisioning of new product innovations in the early phases of innovation.

  10. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as its information storage medium. It allows us to take 10 holograms of an object, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to obtain these results easily, no moving parts are used in the set-up [fr

  11. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment and applications. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, which also improves the image quality of the large-field-of-view collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost. It will have a good market.
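
    Multiple-frame averaging suppresses zero-mean random noise by roughly a factor of sqrt(N). A minimal numpy sketch of the idea, using synthetic frames rather than the actual frame-grabber interface:

        import numpy as np

        rng = np.random.default_rng(1)
        truth = np.full((480, 640), 100.0)                       # ideal target image
        frames = truth + rng.normal(0.0, 5.0, (32, 480, 640))    # 32 noisy captures

        single_noise = (frames[0] - truth).std()
        averaged_noise = (frames.mean(axis=0) - truth).std()
        print(f"single-frame noise: {single_noise:.2f}")
        print(f"32-frame average:   {averaged_noise:.2f} (expected ~{5 / np.sqrt(32):.2f})")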

  12. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which obviates the distortion of locating signals generally caused by the varied light conductive capacities of the light conductors, in that the flow of light through each light conductor may be varied by means of a shutter. A balancing of the flow of light through each of the individual light conductors, in effect through collective light conductors, may be carried out on the basis of their light conductive capacities or properties, so as to preclude a distortion of the locating signals caused by the varied light conductive properties of the light conductors. Each light conductor has associated therewith two, relative to each other, independently adjustable shutters, of which one forms a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be obtained on the basis of the output signals of the photoelectric transducer. (auth)

  13. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera is intended to produce pictures of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computer circuits to obtain, at the outputs of the photomultipliers, an analytical function that depends on the position of the scintillations in the crystal. The scintillation crystal is flat and spatially corresponds to the production site of radiation. The photomultipliers form a pattern whose basic form consists of at least three photomultipliers. They are assigned to at least two crossing parallel series groups, where a vertically running reference axis in the crystal plane belongs to each series group. The computer circuits are each assigned to a reference axis. Each series of a series group assigned to one of the reference axes has, in the computer circuit, an adder to produce a scintillation-dependent series signal. Furthermore, the projection of the scintillation on this reference axis is calculated. A series signal is used for this which originates from a series chosen from two neighbouring photomultiplier series of this group. The scintillation must have appeared between these chosen series. These are termed basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de
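
    The position computation described here is, in essence, a weighted position estimate derived from the photomultiplier outputs. A generic weighted-centroid illustration, not the specific series-group and adder circuit of this patent:

        import numpy as np

        # Photomultiplier (x, y) positions on the crystal plane (cm) and their signals.
        pmt_xy = np.array([[-5.0, -5.0], [0.0, -5.0], [5.0, -5.0],
                           [-5.0,  0.0], [0.0,  0.0], [5.0,  0.0],
                           [-5.0,  5.0], [0.0,  5.0], [5.0,  5.0]])
        signals = np.array([0.02, 0.10, 0.03,
                            0.08, 0.55, 0.12,
                            0.01, 0.06, 0.03])   # charge collected per tube

        total = signals.sum()                    # proportional to the deposited energy
        position = (signals[:, None] * pmt_xy).sum(axis=0) / total
        print(f"estimated scintillation position: x = {position[0]:.2f} cm, y = {position[1]:.2f} cm")
        print(f"energy signal: {total:.2f}")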

  14. The moving camera in Flimmer

    DEFF Research Database (Denmark)

    Juel, Henrik

    2018-01-01

    No human actors are seen, but Flimmer still seethes with motion, both motion within the frame and motion of the frame. The subtle camera movements, perhaps at first unnoticed, play an important role in creating the poetic mood of the film, curious, playful and reflexive.

  15. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have achieved a large consensus for recreational purposes due to ongoing cost decreases, increasing image resolution and frame rate, along with plug-and-play usability. Consequently, they have been recently considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of having a split volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging of simultaneous measurements acquired in both volumes. Characterizing and optimizing the instrumental errors of such a configuration makes the assessment of the instrumental errors of both volumes mandatory. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests were focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (true distance between the two testing markers) was less than 3 mm and the error related to the working volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³) in agreement with the
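
    The accuracy figures quoted above (absolute reconstruction error and its ratio to the working-volume diagonal) can be reproduced from reconstructed marker coordinates as sketched below; the coordinates are synthetic stand-ins, not measured data:

        import numpy as np

        # Reconstructed 3D positions of the two testing markers (metres, synthetic).
        m1 = np.array([0.00000, 0.0, 1.5])
        m2 = np.array([0.25075, 0.0, 1.5])
        true_separation = 0.2500                      # known marker separation, m

        error = abs(np.linalg.norm(m1 - m2) - true_separation)
        diagonal = np.linalg.norm([4.5, 2.2, 1.5])    # working-volume diagonal, m
        print(f"reconstruction error: {error * 1000:.2f} mm")
        print(f"relative to the volume diagonal: 1:{diagonal / error:.0f}")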

  16. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time. Therefore, the use of real-time tracking is prominent in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (like a PowerPC) achieves only a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250×200-resolution grayscale video.
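
    The per-frame tracking step amounts to differencing consecutive frames, thresholding, and locating the centroid of the changed pixels, which is exactly the kind of regular pixel-parallel work that maps well onto a SIMD datapath. A toy software sketch of that step (frame sizes and threshold are arbitrary, and this is not the paper's VHDL design):

        import numpy as np

        def track_centroid(prev, curr, threshold=40):
            """Centroid (row, col) of pixels whose intensity changed by more than `threshold`."""
            moving = np.abs(curr.astype(int) - prev.astype(int)) > threshold
            if not moving.any():
                return None
            rows, cols = np.nonzero(moving)
            return rows.mean(), cols.mean()

        # Synthetic 200 x 250 frames with a bright 10 x 10 object moving to the right.
        prev = np.zeros((200, 250), np.uint8)
        curr = np.zeros((200, 250), np.uint8)
        prev[95:105, 100:110] = 255
        curr[95:105, 108:118] = 255
        print(track_centroid(prev, curr))  # centroid of the changed region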

  17. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    Allemand, R.; Bourdel, J.; Gariod, R.; Laval, M.; Levy, G.; Thomas, G.

    1975-01-01

    The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and image amplification camera are at present the two main instruments whereby acceptable space and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera, the electron amplifier tube camera using a semi-conductor target CdTe and HgI₂ detector [fr

  18. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras of the device stand on a hexapod mount that is fully capable of sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod also allows smooth operations even if one or two of the legs are stuck. In addition, it can calibrate itself using observed stars, independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed at our institute, Konkoly Observatory. Currently, our instrument is in its testing phase with an operating hexapod and a reduced number of cameras.

  19. Dynamic Artificial Potential Fields for Autonomous Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    the implementation and evaluation of Artificial Potential Fields for automatic camera placement. We first describe the recasting of the frame composition problem as the solution for two particles suspended in an Artificial Potential Field. We demonstrate the application of this technique to control both camera...
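
    The basic mechanism, a camera particle descending the gradient of a sum of attractive and repulsive potentials, can be sketched as below; the potential shapes, gains and geometry are invented for illustration and are not the authors' actual formulation:

        import numpy as np

        target = np.array([0.0, 0.0, 0.0])    # subject to keep framed (attractor)
        obstacle = np.array([1.0, 0.5, 0.0])  # geometry to avoid (repulsor)
        desired_dist = 3.0                    # preferred camera-subject distance

        def gradient(cam):
            # Attractive term: pull the camera toward the preferred shooting distance.
            to_target = cam - target
            d = np.linalg.norm(to_target)
            g_attr = 2.0 * (d - desired_dist) * to_target / d
            # Repulsive term: push the camera away from the obstacle at short range.
            to_obst = cam - obstacle
            r = np.linalg.norm(to_obst)
            g_rep = -to_obst / r**3 if r < 2.0 else 0.0
            return g_attr + g_rep

        cam = np.array([4.0, 2.0, 1.0])
        for _ in range(200):                  # simple gradient descent on the potential
            cam = cam - 0.05 * gradient(cam)
        print(cam, np.linalg.norm(cam - target))  # settles near the desired distance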

  20. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    A color line scan camera family, available with either 6000, 8000 or 10000 pixels per color channel, which utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation, is described in this paper. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.

  1. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    Science.gov (United States)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.
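
    For the fluid-flux question, the optical-flow step can be prototyped on two extracted frames with OpenCV; the file names below are placeholders for frames pulled from the raw-data archive, and the Farneback parameters are generic defaults rather than tuned values:

        import cv2
        import numpy as np

        # Two consecutive grayscale frames extracted from the HD video (placeholder names).
        prev = cv2.imread("ashes_frame_0001.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("ashes_frame_0002.png", cv2.IMREAD_GRAYSCALE)
        assert prev is not None and curr is not None, "frames not found"

        # Dense Farneback optical flow: one 2D displacement vector per pixel.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5, poly_sigma=1.2,
                                            flags=0)
        speed = np.linalg.norm(flow, axis=2)   # apparent motion in pixels/frame
        print(f"median apparent motion: {np.median(speed):.2f} px/frame")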

  2. Camera Trajectory fromWide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  3. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  4. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Full Text Available Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  5. Robotic-surgical instrument wrist pose estimation.

    Science.gov (United States)

    Fabel, Stephan; Baek, Kyungim; Berkelman, Peter

    2010-01-01

    The Compact Lightweight Surgery Robot from the University of Hawaii includes two teleoperated instruments and one endoscope manipulator which act in accord to perform assisted interventional medicine. The relative positions and orientations of the robotic instruments and endoscope must be known to the teleoperation system so that the directions of the instrument motions can be controlled to correspond closely to the directions of the motions of the master manipulators, as seen by the endoscope and displayed to the surgeon. If the manipulator bases are mounted in known locations and all manipulator joint variables are known, then the necessary coordinate transformations between the master and slave manipulators can be easily computed. The versatility and ease of use of the system can be increased, however, by allowing the endoscope or instrument manipulator bases to be moved to arbitrary positions and orientations without reinitializing each manipulator or remeasuring their relative positions. The aim of this work is to find the pose of the instrument end effectors using the video image from the endoscope camera. The P3P pose estimation algorithm is used with a Levenberg-Marquardt optimization to ensure convergence. The correct transformations between the master and slave coordinate frames can then be calculated and updated when the bases of the endoscope or instrument manipulators are moved to new, unknown, positions at any time before or during surgical procedures.
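
    A hedged sketch of the refinement step mentioned above: an initial pose (for example from a minimal P3P solver) is polished by Levenberg-Marquardt minimisation of the reprojection error. The use of SciPy, the rotation-vector parameterisation, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(rvec0, t0, pts3d, pts2d, K):
    """Refine an initial pose by Levenberg-Marquardt on reprojection error.

    rvec0, t0 : initial rotation vector and translation (e.g. from P3P)
    pts3d     : (N, 3) known 3D points on the instrument wrist
    pts2d     : (N, 2) detected image positions of those points
    K         : 3x3 camera intrinsic matrix
    """
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        cam = R @ pts3d.T + x[3:, None]         # points in camera frame
        proj = (K @ cam).T
        proj = proj[:, :2] / proj[:, 2:3]       # perspective division
        return (proj - pts2d).ravel()

    x0 = np.hstack([rvec0, t0])
    sol = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt
    return sol.x[:3], sol.x[3:]
```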

  6. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20 C, the CLASP cameras exceeded the low-noise performance requirements. The vacuum test facility used at MSFC for UV, EUV and soft X-ray science cameras is also described.

  7. On frame multiresolution analysis

    DEFF Research Database (Denmark)

    Christensen, Ole

    2003-01-01

    We use the freedom in frame multiresolution analysis to construct tight wavelet frames (even in the case where the refinable function does not generate a tight frame). In cases where a frame multiresolution does not lead to a construction of a wavelet frame we show how one can nevertheless...

  8. Nonmonotonic belief state frames and reasoning frames

    NARCIS (Netherlands)

    Engelfriet, J.; Herre, H.; Treur, J.

    1995-01-01

    In this paper five levels of specification of nonmonotonic reasoning are distinguished. The notions of semantical frame, belief state frame and reasoning frame are introduced and used as a semantical basis for the first three levels. Moreover, the semantical connections between the levels are

  9. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Curt Allen; Terence Davies; Frans Janson; Ronald Justin; Bruce Marshall; Oliver Sweningsen; Perry Bell; Roger Griffith; Karla Hagans; Richard Lerche

    2004-01-01

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations.

  10. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  11. Stroboscope Based Synchronization of Full Frame CCD Sensors.

    Science.gov (United States)

    Shen, Liang; Feng, Xiaobing; Zhang, Yuan; Shi, Min; Zhu, Dengming; Wang, Zhaoqi

    2017-04-07

    The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope based synchronization approach for the charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equipment, the proposed approach greatly reduces the cost by using inexpensive CCD cameras and one stroboscope. The results show that our method could reach a high accuracy much better than the frame-level synchronization of traditional software methods.
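
    To illustrate the alignment idea in the simplest possible way, the sketch below estimates an integer frame offset between two sequences from binary per-frame "stroboscope flash detected" flags. This correlation stand-in and all names are assumptions; the paper itself uses smear-dot alignment followed by hidden Markov model matching.

```python
import numpy as np

def estimate_frame_offset(flags_a, flags_b, max_lag=100):
    """Estimate the integer frame offset between two CCD video sequences.

    flags_a, flags_b : 0/1 arrays marking the frames in which the stroboscope
    flash was detected in each camera.  The offset is taken as the lag that
    maximises the overlap of the two flash trains.
    """
    n = min(len(flags_a), len(flags_b))
    a = np.asarray(flags_a[:n], dtype=float)
    b = np.asarray(flags_b[:n], dtype=float)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            score = float(np.dot(a[lag:], b[:n - lag]))
        else:
            score = float(np.dot(a[:n + lag], b[-lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```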

  12. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically generate...

  13. Hardware accelerator design for change detection in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in Human Computer Interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change to minimize communication and processing overhead. Among the many algorithms for change detection, one based on a clustering scheme was proposed for smart camera systems. However, such an algorithm could achieve only a low frame rate, far from real-time requirements, on the general purpose processors (like PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time, using the clustering based change detection scheme. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-IIPro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.
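
    The accelerator itself is an FPGA design; as a software-level illustration of a clustering-style change detector in the same spirit, the sketch below keeps a small set of grey-level cluster centres per pixel. The cluster count, threshold, and update rule are illustrative assumptions, not the published scheme.

```python
import numpy as np

def detect_change(frame, centers, counts, threshold=20.0):
    """Per-pixel clustering-based change detection (software sketch).

    frame   : current grey-scale frame, shape (H, W)
    centers : cluster centres per pixel, shape (K, H, W)
    counts  : cluster weights per pixel, shape (K, H, W)
    A pixel is flagged as changed when its value is farther than `threshold`
    from every stored cluster centre; otherwise the nearest centre is updated
    with a running mean.
    """
    dist = np.abs(centers - frame[None, :, :])
    nearest = np.argmin(dist, axis=0)[None]                 # (1, H, W) cluster indices
    min_dist = np.take_along_axis(dist, nearest, axis=0)[0]
    changed = min_dist > threshold

    c = np.take_along_axis(centers, nearest, axis=0)[0]
    n = np.take_along_axis(counts, nearest, axis=0)[0]
    c_new = np.where(changed, c, (c * n + frame) / (n + 1.0))
    n_new = np.where(changed, n, n + 1.0)
    np.put_along_axis(centers, nearest, c_new[None], axis=0)
    np.put_along_axis(counts, nearest, n_new[None], axis=0)
    return changed
```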

  14. An ebCMOS camera system for marine bioluminescence observation: The LuSEApher prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dominjon, A., E-mail: a.dominjon@ipnl.in2p3.fr [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Ageron, M. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Billault, M.; Brunner, J. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Calabria, P. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Chabanat, E. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Chaize, D.; Doan, Q.T.; Guerin, C.; Houles, J.; Vagneron, L. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France)

    2012-12-11

    The ebCMOS camera, called LuSEApher, is a marine bioluminescence recorder adapted to extremely low light levels. This prototype is based on the skeleton of the LUSIPHER camera system originally developed for fluorescence imaging. It has been installed at 2500 m depth off the Mediterranean shore on the site of the ANTARES neutrino telescope. The LuSEApher camera is mounted on the Instrumented Interface Module connected to the ANTARES network for environmental science purposes (European Seas Observatory Network). The LuSEApher is a self-triggered photo detection system with photon counting ability. The device is presented and its performance, including single photon reconstruction, noise, and trigger strategy, is described. The first recorded movies of bioluminescence are analyzed. To our knowledge, such events have never been obtained with this sensitivity and frame rate. We believe that this camera concept could open a new window on bioluminescence studies in the deep sea.

  15. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  16. Explosive Transient Camera (ETC) Program

    Science.gov (United States)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  17. PhC-4 new high-speed camera with mirror scanning

    International Nuclear Information System (INIS)

    Daragan, A.O.; Belov, B.G.

    1979-01-01

    The description of the optical system and the construction of the continuously operating high-speed PhC-4 photographic camera with mirror scanning is given. The optical system of the camera is based on a four-sided rotating mirror, two optical inlets and two working sectors. The PhC-4 camera provides framing rates of up to 600,000 frames per second. (author)

  18. Shared Focal Plane Investigation for Serial Frame Cameras.

    Science.gov (United States)

    1980-03-01

    [OCR fragments from the report; Table 1-1, System Leading Particulars, tabulates lens focal length (inches), range (ft), contrast, and coverage.] It can be expected that signature bands will be apparent in the imagery. Such bands are at best distracting and at worst hindrances to image interpretation.

  19. Prime tight frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Miller, Christopher; Okoudjou, Kasso A.

    2014-01-01

    We introduce a class of finite tight frames called prime tight frames and prove some of their elementary properties. In particular, we show that any finite tight frame can be written as a union of prime tight frames. We then characterize all prime harmonic tight frames and use this characterization to suggest effective analysis and synthesis computation strategies for such frames. Finally, we describe all prime frames constructed from the spectral tetris method, and, as a byproduct, we obtain a characterization of when the spectral tetris construction works for redundancies below two.

  20. Changing quantum reference frames

    OpenAIRE

    Palmer, Matthew C.; Girelli, Florian; Bartlett, Stephen D.

    2013-01-01

    We consider the process of changing reference frames in the case where the reference frames are quantum systems. We find that, as part of this process, decoherence is necessarily induced on any quantum system described relative to these frames. We explore this process with examples involving reference frames for phase and orientation. Quantifying the effect of changing quantum reference frames serves as a first step in developing a relativity principle for theories in which all objects includ...

  1. SPAD array chips with full frame readout for crystal characterization

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Peter; Blanco, Roberto; Sacco, Ilaria; Ritzert, Michael [Heidelberg University (Germany); Weyers, Sascha [Fraunhofer Institute for Microelectronic Circuits and Systems (Germany)

    2015-05-18

    We present single photon sensitive 2D camera chips containing 88x88 avalanche photo diodes which can be read out in full frame mode with up to 400,000 frames per second. The sensors have an imaging area of ~5mm x 5mm covered by square pixels of ~56µm x 56µm with a ~55% fill factor in the latest chip generation. The chips contain self-triggering logic with selectable (column) multiplicities of up to >=4 hits within an adjustable coincidence time window. The photon accumulation time window is programmable as well. First prototypes have demonstrated low dark count rates of <50kHz/mm2 (SPAD area) at 10 °C for 10% masked pixels. One chip version contains an automated readout of the photon cluster position. The readout of the detailed photon distribution for single events allows the characterization of light sharing, optical crosstalk etc., in crystals or crystal arrays as they are used in PET instrumentation. This knowledge could lead to improvements in spatial or temporal resolution.
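
    A minimal software sketch of the coincidence-trigger idea described above: a trigger fires when enough hits fall inside a sliding time window. The multiplicity, window length, and function names are placeholders, not the chip's actual parameters.

```python
import numpy as np

def self_trigger(hit_times_ns, multiplicity=4, window_ns=10.0):
    """Sliding-window coincidence trigger (illustrative sketch).

    hit_times_ns : photon arrival times (ns) registered in one chip column.
    Returns (True, window_start) if at least `multiplicity` hits fall inside
    a window of `window_ns`, otherwise (False, None).
    """
    t = np.sort(np.asarray(hit_times_ns, dtype=float))
    for i in range(len(t) - multiplicity + 1):
        if t[i + multiplicity - 1] - t[i] <= window_ns:
            return True, float(t[i])
    return False, None
```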

  2. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  3. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  4. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  5. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  6. Four-frame gated optical imager with 120-ps resolution

    International Nuclear Information System (INIS)

    Young, P.E.; Hares, J.D.; Kilkenny, J.D.; Phillion, D.W.; Campbell, E.M.

    1988-04-01

    In this paper we describe the operation and applications of a framing camera capable of four separate two-dimensional images with each frame having a 120-ps gate width. Fast gating of a single frame is accomplished by using a wafer image intensifier tube in which the cathode is capacitively coupled to an external electrode placed outside of the photocathode of the tube. This electrode is then pulsed relative to the microchannel plate by a narrow (120 ps), high-voltage pulse. Multiple frames are obtained by using multiple gated tubes which share a single bias supply and pulser with relative gate times selected by the cable lengths between the tubes and the pulser. A beamsplitter system has been constructed which produces a separate image for each tube from a single scene. Applications of the framing camera to inertial confinement fusion experiments are discussed
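
    The relative gate times described above are set by the cable lengths between the tubes and the pulser. The helper below turns an extra cable length into a delay, assuming a typical coaxial velocity factor of 0.66; the factor and the numbers are illustrative, not values from the paper.

```python
def cable_delay_ns(extra_length_m, velocity_factor=0.66):
    """Relative gate delay added by extra coaxial cable (rough estimate).

    Assumes the signal travels at velocity_factor * c in the cable.
    """
    c_m_per_ns = 0.299792458          # speed of light in m/ns
    return extra_length_m / (velocity_factor * c_m_per_ns)

# Roughly 2.4 cm of extra cable per 120 ps step between frames:
# 0.120 ns * 0.66 * 0.2998 m/ns ~= 0.024 m
```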

  7. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    International Nuclear Information System (INIS)

    Barrera, E.; Ruiz, M.; Sanz, D.; Vega, J.; Castro, R.; Juárez, E.; Salvador, R.

    2014-01-01

    Highlights:
    • The test bed allows for the validation of real-time image processing techniques.
    • Offers FPGA (FlexRIO) image processing that does not require CPU intervention.
    • Is fully compatible with the architecture of the ITER Fast Controllers.
    • Provides flexibility and easy integration in distributed experiments based on EPICS.
    Abstract: Image diagnostics are becoming standard in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame-grabber, a PXIe chassis, and offers a software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video-movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced with diagnostic cameras. The Camera Link frame-grabber (FlexRIO Solution from National Instruments) includes a field programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad-hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with system architecture similar to that of the ITER Fast Controllers, and the frame grabber provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers.

  8. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    Camera phones are one of the fastest growing consumer markets today. During the past few years total volumes have been growing fast, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality but in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper trade-offs related to optics and their effects on image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  9. Photogrammetry of the Map Instrument in a Cryogenic Vacuum Environment

    Science.gov (United States)

    Hill, M.; Packard, E.; Pazar, R.

    2000-01-01

    MAP Instrument requirements dictated that the instrument's Focal Plane Assembly (FPA) and Thermal Reflector System (TRS) maintain a high degree of structural integrity at operational temperatures; this was verified with a photogrammetry camera. This paper will discuss MAP's Instrument requirements, how those requirements were verified using photogrammetry, and the test setup used to provide the environment and camera movement needed to verify the instrument's requirements.

  10. Analysis of Camera Parameters Value in Various Object Distances Calibration

    International Nuclear Information System (INIS)

    Yusoff, Ahmad Razali; Ariff, Mohd Farid Mohd; Idris, Khairulnizam M; Majid, Zulkepli; Setan, Halim; Chong, Albert K

    2014-01-01

    In photogrammetric applications, good camera parameters are needed for mapping purposes, for example with an Unmanned Aerial Vehicle (UAV) equipped with non-metric cameras. Simple camera calibration is a common laboratory procedure for obtaining the camera parameter values. In aerial mapping, the interior camera parameters obtained from close-range camera calibration are used to correct image errors. However, the causes and effects of the calibration steps used to obtain accurate mapping need to be analyzed. Therefore, this research contributes an analysis of camera parameters obtained with a portable calibration frame of 1.5 × 1 meter dimension. Object distances of two, three, four, five, and six meters are the research focus. Results are analyzed to find the changes in image and camera parameter values. Hence, the calibration parameters of a camera are considered to differ depending on the type of calibration parameters and the object distance.
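
    For illustration only, the sketch below runs a standard close-range calibration with OpenCV; the study's 1.5 m × 1 m portable frame with coded targets is replaced here by an assumed planar chessboard, and the pattern size and square pitch are placeholders. Repeating the call with image sets taken at 2-6 m object distance would let one compare how the interior parameters vary with distance.

```python
import cv2
import numpy as np

def calibrate_from_images(image_paths, pattern_size=(9, 6), square_m=0.05):
    """Estimate interior camera parameters from planar-target images (sketch)."""
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_m
    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    # Returns the RMS reprojection error, camera matrix and distortion coefficients
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return rms, K, dist
```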

  11. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  12. Applying UV cameras for SO2 detection to distant or optically thick volcanic plumes

    Science.gov (United States)

    Kern, Christoph; Werner, Cynthia; Elias, Tamar; Sutton, A. Jeff; Lübcke, Peter

    2013-01-01

    Ultraviolet (UV) camera systems represent an exciting new technology for measuring two dimensional sulfur dioxide (SO2) distributions in volcanic plumes. The high frame rate of the cameras allows the retrieval of SO2 emission rates at time scales of 1 Hz or higher, thus allowing the investigation of high-frequency signals and making integrated and comparative studies with other high-data-rate volcano monitoring techniques possible. One drawback of the technique, however, is the limited spectral information recorded by the imaging systems. Here, a framework for simulating the sensitivity of UV cameras to various SO2 distributions is introduced. Both the wavelength-dependent transmittance of the optical imaging system and the radiative transfer in the atmosphere are modeled. The framework is then applied to study the behavior of different optical setups and used to simulate the response of these instruments to volcanic plumes containing varying SO2 and aerosol abundances located at various distances from the sensor. Results show that UV radiative transfer in and around distant and/or optically thick plumes typically leads to a lower sensitivity to SO2 than expected when assuming a standard Beer–Lambert absorption model. Furthermore, camera response is often non-linear in SO2 and dependent on distance to the plume and plume aerosol optical thickness and single scatter albedo. The model results are compared with camera measurements made at Kilauea Volcano (Hawaii) and a method for integrating moderate resolution differential optical absorption spectroscopy data with UV imagery to retrieve improved SO2 column densities is discussed.
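
    The standard Beer-Lambert retrieval that the paper shows to break down for distant or optically thick plumes can be written in a few lines. The sketch below is a minimal version under stated assumptions: two filter channels, clear-sky background images, and a single calibration factor k (for example from gas cells or DOAS); the variable names are illustrative.

```python
import numpy as np

def apparent_column_density(on_band, off_band, bg_on, bg_off, k):
    """Convert UV camera image pairs to an apparent SO2 column density (sketch).

    on_band, off_band : plume images through the SO2-absorbing and reference filters
    bg_on, bg_off     : corresponding clear-sky background images
    k                 : calibration factor mapping apparent absorbance to column density
    """
    tau = -np.log(on_band / bg_on) + np.log(off_band / bg_off)   # apparent absorbance
    return k * tau
```

    The paper's point is that this simple model becomes non-linear in SO2 and dependent on plume distance and aerosol properties, which is why a radiative-transfer correction is needed.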

  13. The contribution to the modal analysis using an infrared camera

    Directory of Open Access Journals (Sweden)

    Dekys Vladimír

    2018-01-01

    Full Text Available The paper deals with modal analysis using an infrared camera. The test objects were excited by the modal exciter with narrowband noise and the response was registered as a frame sequence by the high speed infrared camera FLIR SC7500. The resonant frequencies and the modal shapes were determined from the infrared spectrum recordings. Lock-in technology has also been used. The experimental results were compared with calculated natural frequencies and modal shapes.

  14. Making students' frames explicit

    DEFF Research Database (Denmark)

    Nielsen, Louise Møller; Hansen, Poul Henrik Kyvsgaard

    2016-01-01

    Framing is a vital part of the design and innovation process. Frames are cognitive shortcuts (i.e. metaphors) that enable designers to connect insights about e.g. market opportunities and user needs with a set of solution principles and to test if this connection makes sense. Until now, framing...

  15. High-Speed Videography Instrumentation And Procedures

    Science.gov (United States)

    Miller, C. E.

    1982-02-01

    High-speed videography has been an electronic analog of low-speed film cameras, but having the advantages of instant-replay and simplicity of operation. Recent advances have pushed frame-rates into the realm of the rotating prism camera. Some characteristics of videography systems are discussed in conjunction with applications in sports analysis, and with sports equipment testing.

  16. A wide field X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.; Turner, M.J.L.; Willingale, R.

    1980-01-01

    A wide field of view X-ray camera based on the Dicke or Coded Mask principle is described. It is shown that this type of instrument is more sensitive than a pin-hole camera, or than a scanning survey of a given region of sky for all wide field conditions. The design of a practical camera is discussed and the sensitivity and performance of the chosen design are evaluated by means of computer simulations. The Wiener Filter and Maximum Entropy methods of deconvolution are described and these methods are compared with each other and cross-correlation using data from the computer simulations. It is shown that the analytic expressions for sensitivity used by other workers are confirmed by the simulations, and that ghost images caused by incomplete coding can be substantially eliminated by the use of the Wiener Filter and the Maximum Entropy Method, with some penalty in computer time for the latter. The cyclic mask configuration is compared with the simple mask camera. It is shown that when the diffuse X-ray background dominates, the simple system is more sensitive and has the better angular resolution. When sources dominate the simple system is less sensitive. It is concluded that the simple coded mask camera is the best instrument for wide field imaging of the X-ray sky. (orig.)
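
    For illustration, the simplest of the decoding schemes compared above (plain cross-correlation with a balanced mask) can be sketched as follows; an idealised geometry is assumed in which each sky direction shifts the mask pattern by a whole number of detector pixels, and the naive double loop is only suitable for small arrays.

```python
import numpy as np

def decode_shadowgram(detector_image, mask):
    """Reconstruct a sky image from a coded-mask shadowgram by cross-correlation.

    mask : 0/1 array of the coded aperture (1 = open).  A balanced decoding
    array (open = +1, opaque scaled so the pattern sums to zero) is correlated
    with the detector image; Wiener filtering and the Maximum Entropy Method,
    also discussed in the paper, would replace this step.
    """
    m = np.asarray(mask, dtype=float)
    open_fraction = m.mean()
    balanced = np.where(m > 0, 1.0, -open_fraction / (1.0 - open_fraction))
    rows, cols = detector_image.shape
    sky = np.zeros((rows, cols))
    for dy in range(rows):
        for dx in range(cols):
            shifted = np.roll(np.roll(balanced, dy, axis=0), dx, axis=1)
            sky[dy, dx] = np.sum(detector_image * shifted)
    return sky
```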

  17. CARMENES instrument overview

    Science.gov (United States)

    Quirrenbach, A.; Amado, P. J.; Caballero, J. A.; Mundt, R.; Reiners, A.; Ribas, I.; Seifert, W.; Abril, M.; Aceituno, J.; Alonso-Floriano, F. J.; Ammler-von Eiff, M.; Antona Jiménez, R.; Anwand-Heerwart, H.; Azzaro, M.; Bauer, F.; Barrado, D.; Becerril, S.; Béjar, V. J. S.; Benítez, D.; Berdiñas, Z. M.; Cárdenas, M. C.; Casal, E.; Claret, A.; Colomé, J.; Cortés-Contreras, M.; Czesla, S.; Doellinger, M.; Dreizler, S.; Feiz, C.; Fernández, M.; Galadí, D.; Gálvez-Ortiz, M. C.; García-Piquer, A.; García-Vargas, M. L.; Garrido, R.; Gesa, L.; Gómez Galera, V.; González Álvarez, E.; González Hernández, J. I.; Grözinger, U.; Guàrdia, J.; Guenther, E. W.; de Guindos, E.; Gutiérrez-Soto, J.; Hagen, H.-J.; Hatzes, A. P.; Hauschildt, P. H.; Helmling, J.; Henning, T.; Hermann, D.; Hernández Castaño, L.; Herrero, E.; Hidalgo, D.; Holgado, G.; Huber, A.; Huber, K. F.; Jeffers, S.; Joergens, V.; de Juan, E.; Kehr, M.; Klein, R.; Kürster, M.; Lamert, A.; Lalitha, S.; Laun, W.; Lemke, U.; Lenzen, R.; López del Fresno, Mauro; López Martí, B.; López-Santiago, J.; Mall, U.; Mandel, H.; Martín, E. L.; Martín-Ruiz, S.; Martínez-Rodríguez, H.; Marvin, C. J.; Mathar, R. J.; Mirabet, E.; Montes, D.; Morales Muñoz, R.; Moya, A.; Naranjo, V.; Ofir, A.; Oreiro, R.; Pallé, E.; Panduro, J.; Passegger, V.-M.; Pérez-Calpena, A.; Pérez Medialdea, D.; Perger, M.; Pluto, M.; Ramón, A.; Rebolo, R.; Redondo, P.; Reffert, S.; Reinhardt, S.; Rhode, P.; Rix, H.-W.; Rodler, F.; Rodríguez, E.; Rodríguez-López, C.; Rodríguez-Pérez, E.; Rohloff, R.-R.; Rosich, A.; Sánchez-Blanco, E.; Sánchez Carrasco, M. A.; Sanz-Forcada, J.; Sarmiento, L. F.; Schäfer, S.; Schiller, J.; Schmidt, C.; Schmitt, J. H. M. M.; Solano, E.; Stahl, O.; Storz, C.; Stürmer, J.; Suárez, J. C.; Ulbrich, R. G.; Veredas, G.; Wagner, K.; Winkler, J.; Zapatero Osorio, M. R.; Zechmeister, M.; Abellán de Paco, F. J.; Anglada-Escudé, G.; del Burgo, C.; Klutsch, A.; Lizon, J. L.; López-Morales, M.; Morales, J. C.; Perryman, M. A. C.; Tulloch, S. M.; Xu, W.

    2014-07-01

    This paper gives an overview of the CARMENES instrument and of the survey that will be carried out with it during the first years of operation. CARMENES (Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Echelle Spectrographs) is a next-generation radial-velocity instrument under construction for the 3.5m telescope at the Calar Alto Observatory by a consortium of eleven Spanish and German institutions. The scientific goal of the project is conducting a 600-night exoplanet survey targeting ~ 300 M dwarfs with the completed instrument. The CARMENES instrument consists of two separate echelle spectrographs covering the wavelength range from 0.55 to 1.7 μm at a spectral resolution of R = 82,000, fed by fibers from the Cassegrain focus of the telescope. The spectrographs are housed in vacuum tanks providing the temperature-stabilized environments necessary to enable a 1 m/s radial velocity precision employing a simultaneous calibration with an emission-line lamp or with a Fabry-Perot etalon. For mid-M to late-M spectral types, the wavelength range around 1.0 μm (Y band) is the most important wavelength region for radial velocity work. Therefore, the efficiency of CARMENES has been optimized in this range. The CARMENES instrument consists of two spectrographs, one equipped with a 4k x 4k pixel CCD for the range 0.55 - 1.05 μm, and one with two 2k x 2k pixel HgCdTe detectors for the range from 0.95 - 1.7μm. Each spectrograph will be coupled to the 3.5m telescope with two optical fibers, one for the target, and one for calibration light. The front end contains a dichroic beam splitter and an atmospheric dispersion corrector, to feed the light into the fibers leading to the spectrographs. Guiding is performed with a separate camera; on-axis as well as off-axis guiding modes are implemented. Fibers with octagonal cross-section are employed to ensure good stability of the output in the presence of residual guiding errors. The

  18. Multivariate wavelet frames

    CERN Document Server

    Skopina, Maria; Protasov, Vladimir

    2016-01-01

    This book presents a systematic study of multivariate wavelet frames with matrix dilation, in particular, orthogonal and bi-orthogonal bases, which are a special case of frames. Further, it provides algorithmic methods for the construction of dual and tight wavelet frames with a desirable approximation order, namely compactly supported wavelet frames, which are commonly required by engineers. It particularly focuses on methods of constructing them. Wavelet bases and frames are actively used in numerous applications such as audio and graphic signal processing, compression and transmission of information. They are especially useful in image recovery from incomplete observed data due to the redundancy of frame systems. The construction of multivariate wavelet frames, especially bases, with desirable properties remains a challenging problem as although a general scheme of construction is well known, its practical implementation in the multidimensional setting is difficult. Another important feature of wavelet is ...

  19. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  20. Stroboscope Based Synchronization of Full Frame CCD Sensors

    OpenAIRE

    Shen, Liang; Feng, Xiaobing; Zhang, Yuan; Shi, Min; Zhu, Dengming; Wang, Zhaoqi

    2017-01-01

    The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope based synchronization approach for the charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equi...

  1. What about getting physiological information into dynamic gamma camera studies

    International Nuclear Information System (INIS)

    Kiuru, A.; Nickles, R. J.; Holden, J. E.; Polcyn, R. E.

    1976-01-01

    A general technique has been developed for the multiplexing of time dependent analog signals into the individual frames of a gamma camera dynamic function study. A pulse train, frequency-modulated by the physiological signal, is capacitively coupled to the preamplifier servicing any one of the outer phototubes of the camera head. These negative tail pulses imitate photoevents occurring at a point outside of the camera field of view, chosen to occupy a data cell in an unused corner of the computer-stored square image. By defining a region of interest around this cell, the resulting time-activity curve displays the physiological variable in temporal synchrony with the radiotracer distribution. (author)
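
    A hedged sketch of the read-out side of this scheme: if the external pulser injects pulses at a frequency f = f0 + k·s(t), where s(t) is the physiological signal, then the count rate in the corner region of interest tracks s(t) frame by frame. The FM mapping, its parameters, and the function names are assumptions for illustration.

```python
import numpy as np

def demultiplex_physiology(frames, roi, f0, k, frame_dt):
    """Recover an FM-encoded physiological signal from dynamic study frames (sketch).

    frames   : iterable of 2D count images (one per frame)
    roi      : (row_slice, col_slice) selecting the unused corner data cell(s)
    f0, k    : assumed FM mapping f = f0 + k * s(t) of the external pulser
    frame_dt : frame duration in seconds
    """
    counts = np.array([frame[roi].sum() for frame in frames], dtype=float)
    rate = counts / frame_dt        # injected pulse frequency seen in each frame
    return (rate - f0) / k          # invert the FM mapping to get s(t)
```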

  2. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  3. Changing climate, changing frames

    International Nuclear Information System (INIS)

    Vink, Martinus J.; Boezeman, Daan; Dewulf, Art; Termeer, Catrien J.A.M.

    2013-01-01

    Highlights:
    ► We show development of flood policy frames in context of climate change attention.
    ► Rising attention on climate change influences traditional flood policy framing.
    ► The new framing employs global-scale scientific climate change knowledge.
    ► With declining attention, framing disregards climate change, using local knowledge.
    ► We conclude that frames function as sensemaking devices selectively using knowledge.
    Abstract: Water management and particularly flood defence have a long history of collective action in low-lying countries like the Netherlands. The uncertain but potentially severe impacts of the recent climate change issue (e.g. sea level rise, extreme river discharges, salinisation) amplify the wicked and controversial character of flood safety policy issues. Policy proposals in this area generally involve drastic infrastructural works and long-term investments. They face the difficult challenge of framing problems and solutions in a publicly acceptable manner in ever changing circumstances. In this paper, we analyse and compare (1) how three key policy proposals publicly frame the flood safety issue, (2) the knowledge referred to in the framing and (3) how these frames are rhetorically connected or disconnected as statements in a long-term conversation. We find that (1) framings of policy proposals differ in the way they depict the importance of climate change, the relevant timeframe and the appropriate governance mode; (2) knowledge is selectively mobilised to underpin the different frames and (3) the frames about these proposals position themselves against the background of the previous proposals through rhetorical connections and disconnections. Finally, we discuss how this analysis hints at the importance of processes of powering and puzzling that lead to particular framings towards the public at different historical junctures.

  4. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce images superior to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  5. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  6. OBSERVATIONS OF BINARY STARS WITH THE DIFFERENTIAL SPECKLE SURVEY INSTRUMENT. I. INSTRUMENT DESCRIPTION AND FIRST RESULTS

    International Nuclear Information System (INIS)

    Horch, Elliott P.; Veillette, Daniel R.; Shah, Sagar C.; O'Rielly, Grant V.; Baena Galle, Roberto; Van Altena, William F.

    2009-01-01

    First results of a new speckle imaging system, the Differential Speckle Survey Instrument, are reported. The instrument is designed to take speckle data in two filters simultaneously with two independent CCD imagers. This feature results in three advantages over other speckle cameras: (1) twice as many frames can be obtained in the same observation time which can increase the signal-to-noise ratio for astrometric measurements, (2) component colors can be derived from a single observation, and (3) the two colors give substantial leverage over atmospheric dispersion, allowing for subdiffraction-limited separations to be measured reliably. Fifty-four observations are reported from the first use of the instrument at the Wisconsin-Indiana-Yale-NOAO 3.5 m Telescope (the WIYN Observatory is a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatories) in 2008 September, including seven components resolved for the first time. These observations are used to judge the basic capabilities of the instrument.

  7. Final Report for LDRD Project 02-FS-009 Gigapixel Surveillance Camera

    Energy Technology Data Exchange (ETDEWEB)

    Marrs, R E; Bennett, C L

    2010-04-20

    The threats of terrorism and proliferation of weapons of mass destruction add urgency to the development of new techniques for surveillance and intelligence collection. For example, the United States faces a serious and growing threat from adversaries who locate key facilities underground, hide them within other facilities, or otherwise conceal their location and function. Reconnaissance photographs are one of the most important tools for uncovering the capabilities of adversaries. However, current imaging technology provides only infrequent static images of a large area, or occasional video of a small area. We are attempting to add a new dimension to reconnaissance by introducing a capability for large area video surveillance. This capability would enable tracking of all vehicle movements within a very large area. The goal of our project is the development of a gigapixel video surveillance camera for high altitude aircraft or balloon platforms. From very high altitude platforms (20-40 km altitude) it would be possible to track every moving vehicle within an area of roughly 100 km x 100 km, about the size of the San Francisco Bay region, with a gigapixel camera. Reliable tracking of vehicles requires a ground sampling distance (GSD) of 0.5 to 1 m and a framing rate of approximately two frames per second (fps). For a 100 km x 100 km area the corresponding pixel count is 10 gigapixels for a 1-m GSD and 40 gigapixels for a 0.5-m GSD. This is an order of magnitude beyond the 1 gigapixel camera envisioned in our LDRD proposal. We have determined that an instrument of this capacity is feasible.
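
    The pixel counts quoted above follow from simple arithmetic; the snippet below reproduces the calculation (the function name is illustrative).

```python
def pixel_count(area_side_km=100.0, gsd_m=1.0):
    """Back-of-envelope pixel count for a square scene at a given ground sampling distance."""
    side_px = area_side_km * 1000.0 / gsd_m
    return side_px ** 2

print(pixel_count(100.0, 1.0) / 1e9)   # -> 10.0  gigapixels at 1-m GSD
print(pixel_count(100.0, 0.5) / 1e9)   # -> 40.0  gigapixels at 0.5-m GSD
```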

  8. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations of the HMA system specifically for biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system of an HMA system for biomedical applications.

  9. Long wavelength infrared camera (LWIRC): a 10 micron camera for the Keck Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Wishnow, E.H.; Danchi, W.C.; Tuthill, P.; Wurtz, R.; Jernigan, J.G.; Arens, J.F.

    1998-05-01

    The Long Wavelength Infrared Camera (LWIRC) is a facility instrument for the Keck Observatory designed to operate at the f/25 forward Cassegrain focus of the Keck I telescope. The camera operates over the wavelength band 7-13 µm using ZnSe transmissive optics. A set of filters, a circular variable filter (CVF), and a mid-infrared polarizer are available, as are three plate scales: 0.05", 0.10", 0.21" per pixel. The camera focal plane array and optics are cooled using liquid helium. The system has been refurbished with a 128 x 128 pixel Si:As detector array. The electronics readout system used to clock the array is compatible with both the hardware and software of the other Keck infrared instruments NIRC and LWS. A new pre-amplifier/A-D converter has been designed and constructed which greatly decreases the system susceptibility to noise.

  10. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and greater than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-alpha wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to operate several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
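
    The essence of a QE measurement against a calibrated photodiode is a ratio of detected electrons to incident photons. The sketch below shows that ratio under stated assumptions: the photodiode reading has already been converted to incident power, and a single area_ratio factor stands in for the real beam and aperture geometry corrections; all names are illustrative, not the MSFC procedure.

```python
def quantum_efficiency(signal_dn, gain_e_per_dn, exposure_s,
                       incident_power_w, wavelength_nm, area_ratio=1.0):
    """Estimate detector QE from a camera exposure and a calibrated photodiode reading.

    signal_dn        : mean camera signal over the illuminated region (DN)
    gain_e_per_dn    : measured camera gain (electrons per DN)
    incident_power_w : beam power on the detector inferred from the photodiode
    area_ratio       : stand-in for the illuminated-area / geometry correction
    """
    h, c = 6.62607015e-34, 2.99792458e8            # Planck constant, speed of light (SI)
    photon_energy_j = h * c / (wavelength_nm * 1e-9)
    detected_electrons = signal_dn * gain_e_per_dn
    incident_photons = incident_power_w * exposure_s / photon_energy_j
    return detected_electrons / (incident_photons * area_ratio)
```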

  11. Frame on frames: an annotated bibliography

    International Nuclear Information System (INIS)

    Wright, T.; Tsao, H.J.

    1983-01-01

    The success or failure of any sample survey of a finite population is largely dependent upon the condition and adequacy of the list or frame from which the probability sample is selected. Much of the published survey sampling related work has focused on the measurement of sampling errors and, more recently, on nonsampling errors to a lesser extent. Recent studies on data quality for various types of data collection systems have revealed that the extent of the nonsampling errors far exceeds that of the sampling errors in many cases. While much of this nonsampling error, which is difficult to measure, can be attributed to poor frames, relatively little effort or theoretical work has focused on this contribution to total error. The objective of this paper is to present an annotated bibliography on frames with the hope that it will bring together, for experimenters, a number of suggestions for action when sampling from imperfect frames and that more attention will be given to this area of survey methods research

  12. Construction Cluster Volume I [Wood Structural Framing].

    Science.gov (United States)

    Pennsylvania State Dept. of Justice, Harrisburg. Bureau of Correction.

    The document is the first of a series, to be integrated with a G.E.D. program, containing instructional materials at the basic skills level for the construction cluster. It focuses on wood structural framing and contains 20 units: (1) occupational information; (2) blueprint reading; (3) using leveling instruments and laying out building lines; (4)…

  13. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. On the basis of this evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for a CCTV camera system (lens, light, and pan/tilt controllers) were designed on the remote control concept. Two types of radiation tolerant camera were fabricated, for use in underwater or normal environments. (author)

  14. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10^6 - 10^8 rad was developed. To develop the camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light and pan/tilt control) was designed on the principle of remote control. Two types of radiation-tolerant camera were fabricated, one intended for use in underwater environments and one for normal environments. (author)

  15. A television/still camera with common optical system for reactor inspection

    International Nuclear Information System (INIS)

    Hughes, G.; McBane, P.

    1976-01-01

    One of the problems of reactor inspection is to obtain permanent high-quality records. Video recordings provide a record of poor quality but known content. Still cameras can be used, but the frame content is not predictable. Efforts have been made to combine T.V. viewing with a still camera so that the camera can be aligned, but a simple combination does not provide the same frame size. The necessity to preset the still camera controls severely restricts the flexibility of operation. A camera has therefore been designed which allows a search operation using the T.V. system. When an anomaly is found, the still camera controls can be remotely set, an exact record obtained and the search operation continued without removal from the reactor. An application of this camera in the environment of the blanket gas region above the sodium region in PFR at 150 °C is described.

  16. Modern frame structure buildings

    Directory of Open Access Journals (Sweden)

    В. М. Першаков

    2013-07-01

    The article deals with the design, construction and implementation of reinforced concrete frame structures with spans of 18 and 21 m for agricultural production buildings, hall-type premises of public buildings and buildings of agricultural aviation. The structures are prefabricated frame buildings and have such advantages as a large interior space and lower cost compared with other facilities serving the same purpose.

  17. Multimodal news framing effects

    NARCIS (Netherlands)

    Powell, T.E.

    2017-01-01

    Visuals in news media play a vital role in framing citizens’ political preferences. Yet, compared to the written word, visual images are undervalued in political communication research. Using framing theory, this thesis redresses the balance by studying the combined, or multimodal, effects of visual

  18. The Frame Game

    Science.gov (United States)

    Edwards, Michael Todd; Cox, Dana C.

    2011-01-01

    In this article, the authors explore framing, a non-multiplicative technique commonly employed by students as they construct similar shapes. When students frame, they add (or subtract) a "border" of fixed width about a geometric object. Although the approach does not yield similar shapes in general, the mathematical underpinnings of…

  19. Traditional timber frames

    NARCIS (Netherlands)

    Jorissen, A.J.M.; Hamer, den J.; Leijten, A.J.M.; Salenikovich, A.

    2014-01-01

    Due to new possibilities, traditional timber framing has become increasingly popular since the beginning of the 21st century. Although traditional timber framing has been used for centuries, the expected mechanical behaviour is not dealt with in great detail in building codes, guidelines or text

  20. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers.

  1. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  2. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes, and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the cost of the positron camera and improves its performance.

  3. Optimal primitive reference frames

    International Nuclear Information System (INIS)

    Jennings, David

    2011-01-01

    We consider the smallest possible directional reference frames allowed and determine the best one can ever do in preserving quantum information in various scenarios. We find that for the preservation of a single spin state, two orthogonal spins are optimal primitive reference frames; and in a product state, they do approximately 22% as well as an infinite-sized classical frame. By adding a small amount of entanglement to the reference frame, this can be raised to 2(2/3)^5 ≈ 26%. Under the different criterion of entanglement preservation, a very similar optimal reference frame is found; however, this time it is for spins aligned at an optimal angle of 87 deg. In this case 24% of the negativity is preserved. The classical limit is considered numerically and indicates that, under the criterion of entanglement preservation, 90 deg. is selected out nonmonotonically, with a peak optimal angle of 96.5 deg. for L=3 spins.
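    For reference, the entanglement-assisted figure follows from simple arithmetic (assuming the expression garbled in this record was indeed 2(2/3)^5):

    $$ 2\left(\tfrac{2}{3}\right)^{5} = \frac{2 \cdot 32}{243} = \frac{64}{243} \approx 0.263 \approx 26\%. $$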

  4. Dragging of inertial frames.

    Science.gov (United States)

    Ciufolini, Ignazio

    2007-09-06

    The origin of inertia has intrigued scientists and philosophers for centuries. Inertial frames of reference permeate our daily life. The inertial and centrifugal forces, such as the pull and push that we feel when our vehicle accelerates, brakes and turns, arise because of changes in velocity relative to uniformly moving inertial frames. A classical interpretation ascribed these forces to acceleration relative to some absolute frame independent of the cosmological matter, whereas an opposite view related them to acceleration relative to all the masses and 'fixed stars' in the Universe. An echo and partial realization of the latter idea can be found in Einstein's general theory of relativity, which predicts that a spinning mass will 'drag' inertial frames along with it. Here I review the recent measurements of frame dragging using satellites orbiting Earth.

  5. Analysis of gait using a treadmill and a Time-of-flight camera

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2009-01-01

    We present a system that analyzes human gait using a treadmill and a Time-of-flight camera. The camera provides spatial data with local intensity measures of the scene, and data are collected over several gait cycles. These data are then used to model and analyze the gait. For each frame...

  6. Inspecting rapidly moving surfaces for small defects using CNN cameras

    Science.gov (United States)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm in real time. Up to now, complex eddy current systems are used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz on line cameras, far beyond what available line-camera systems offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN based system outperforms conventional image processing systems by an order of magnitude.
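    The sampling requirement implied by the numbers in this abstract can be checked with a one-line estimate (an illustrative calculation, not taken from the paper): resolving 100 μm features on a surface moving at 10 m/s requires an acquisition rate along the direction of motion of at least

    $$ f_{\min} = \frac{v}{\Delta x} = \frac{10\ \mathrm{m/s}}{100\ \mu\mathrm{m}} = 10^{5}\ \mathrm{lines/s}, $$

    i.e. an equivalent line rate of at least 100 kHz before any oversampling, which the 360-880 kHz equivalent rates of the CNN demonstrator comfortably exceed, while the camera-to-computer data transport of conventional systems remains the bottleneck.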

  7. Nanosecond framing photography for laser-produced interstreaming plasmas

    International Nuclear Information System (INIS)

    McLean, E.A.; Ripin, B.H.; Stamper, J.A.; Manka, C.K.; Peyser, T.A.

    1988-01-01

    Using a fast-gated (120 psec-5 nsec) microchannel-plate optical camera (gated optical imager), framing photographs have been taken of the rapidly streaming laser plasma (∼5 x 10^7 cm/sec) passing through a vacuum or a background gas, with and without a magnetic field. Observations of Large-Larmor-Radius Interchange Instabilities are presented.

  8. Framing effects over time: comparing affective and cognitive news frames

    NARCIS (Netherlands)

    Lecheler, S.; Matthes, J.

    2012-01-01

    A growing number of scholars examine the duration of framing effects. However, duration is likely to differ from frame to frame, depending on how strong a frame is. This strength is likely to be enhanced by adding emotional components to a frame. By means of an experimental survey design (n = 111),

  9. Framing Gangnam Style

    Directory of Open Access Journals (Sweden)

    Hyunsun Catherine Yoon

    2017-08-01

    This paper examines the way in which news about Gangnam Style was framed in the Korean press. First released on 15th July 2012, it became the first video to pass two billion views on YouTube. 400 news articles published between July 2012 and March 2013 in two South Korean newspapers, Chosun Ilbo and Hankyoreh, were analyzed using the frame analysis method in five categories: industry/economy, globalization, cultural interest, criticism, and competition. The right-left opinion cleavage is important because news frames interact with official discourses, audience frames and prior knowledge, which consequently mediate effects on public opinion, policy debates, social movements and individual interpretations. Whilst the existing literature on Gangnam Style took a rather holistic approach, this study aimed to fill the lacuna by considering this phenomenon as a dynamic process, segmenting it into different stages: recognition, spread, peak and continuation. Both newspapers acknowledged that Gangnam Style was an epochal event, but their perspectives and news frames were different; the globalization frame was most frequently used in Chosun Ilbo, whereas the cultural interest frame was most often used in Hankyoreh. Although more critical approaches were found in Hankyoreh, reflecting the right-left opinion cleavage, both papers lacked critical appraisal and analysis of Gangnam Style's reception in the broader context of the new Korean Wave.

  10. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual...

  11. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people six years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  12. Single-frame 3D human pose recovery from multiple views

    NARCIS (Netherlands)

    Hofmann, M.; Gavrila, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body pose from multi-camera single-frame views. Pose recovery starts with a shape detection stage where candidate poses are generated based on hierarchical exemplar matching in the individual camera views. The hierarchy used in

  13. Framing in criminal investigation

    Science.gov (United States)

    2016-01-01

    Failures in criminal investigation may lead to wrongful convictions. Insight into the criminal investigation process is needed to understand how these investigative failures may arise and how measures can contribute to their prevention. Some of the main findings of an empirical study of the criminal investigation process in four cases of major investigations are presented here. The criminal investigation process is analyzed as a process of framing, using Goffman's framing (Goffman, 1975) and interaction theories (Goffman, 1990). It shows that, in addition to framing, other substantive and social factors affect the criminal investigation. PMID:29046594

  14. Stroboscope Based Synchronization of Full Frame CCD Sensors

    Directory of Open Access Journals (Sweden)

    Liang Shen

    2017-04-01

    The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope-based synchronization approach for charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equipment, the proposed approach greatly reduces the cost by using inexpensive CCD cameras and one stroboscope. The results show that our method reaches an accuracy much better than the frame-level synchronization of traditional software methods.

  15. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second part uses a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
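    To make the two-part objective concrete, one plausible written form (a hedged reconstruction for illustration; the exact formulation is given in the paper) is

    $$ \min_{Z}\ \|X - XZ\|_F^{2} + \alpha\,\mathrm{tr}\!\left(Z^{\top} L Z\right) + \lambda \sum_{i} \min\!\left(\|z^{i}\|_{2},\ \theta\right), $$

    where X collects frame features from all cameras, L is a graph Laplacian encoding the multi-camera structure (the embedding term), z^i is the i-th row of the selection matrix Z, and the capped l21-norm term promotes row sparsity while limiting the influence of outlier rows; frames whose rows retain large norm are chosen as representatives.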

  16. Camera Traps Can Be Heard and Seen by Animals

    Science.gov (United States)

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species. PMID:25354356

  17. Camera traps can be heard and seen by animals.

    Directory of Open Access Journals (Sweden)

    Paul D Meek

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  18. Optical flow estimation on image sequences with differently exposed frames

    Science.gov (United States)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
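    One way to make the "total cost functional" idea concrete (a hedged sketch; the functional actually used by the authors may differ in detail) is a multi-frame variational energy whose data term is switched off wherever a frame is saturated:

    $$ E(\mathbf{u}) = \sum_{k} \int_{\Omega} w_k(\mathbf{x})\, \psi\!\big( I_k(\mathbf{x} + \mathbf{u}(\mathbf{x})) - I_0(\mathbf{x}) \big)\, d\mathbf{x} \; + \; \lambda \int_{\Omega} \|\nabla \mathbf{u}\|\, d\mathbf{x}, $$

    where I_0 is the reference frame, I_k are the differently exposed frames after photometric alignment to I_0, psi is a robust penalty, and w_k(x) = 0 at pixels where I_k is saturated, so that at least one frame contributes an active data term almost everywhere and the regularizer fills in the rest.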

  19. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides a new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the base line between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3d video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted from video into 3d objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry seems to be a flexible and prompt method of 3d modelling and provides promising features for mobile mapping systems.

  20. Integration of USB and firewire cameras in machine vision applications

    Science.gov (United States)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, like image quality, maintainable frame rates, image size and resolution, supported operating system, and ease of software integration. This paper will briefly describe a couple of the consumer digital standards, and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  1. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level or weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of the MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. incandescent lamp), and the irradiance has to be measured in W/m2 instead of Lux = Lumen/m2. Third, the contrast values of the targets have to be calibrated anew for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera.
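    The switch from photometric to radiometric units mentioned here is the standard one: illuminance in lux weights the source spectrum by the human photopic response, which is essentially zero beyond about 780 nm, so it carries no information about SWIR irradiance. The two quantities are related by

    $$ E_v = 683\ \tfrac{\mathrm{lm}}{\mathrm{W}} \int V(\lambda)\, E_{e,\lambda}(\lambda)\, d\lambda, \qquad E_e = \int E_{e,\lambda}(\lambda)\, d\lambda, $$

    where V(λ) is the CIE photopic luminosity function; SWIR "daylight" and "twilight" levels therefore have to be defined radiometrically in W/m2 from representative source spectra rather than converted from lux.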

  2. Global Vertical Reference Frame

    Czech Academy of Sciences Publication Activity Database

    Burša, Milan; Kenyon, S.; Kouba, J.; Šíma, Zdislav; Vatrt, V.; Vojtíšková, M.

    2004-01-01

    Vol. 33 (2004), pp. 404-407. ISSN 1436-3445. Institutional research plan: CEZ:AV0Z1003909. Keywords: geopotential W0 * vertical systems * global vertical frame. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  3. On transforms between Gabor frames and wavelet frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Goh, Say Song

    2013-01-01

    We describe a procedure that enables us to construct dual pairs of wavelet frames from certain dual pairs of Gabor frames. Applying the construction to Gabor frames generated by appropriate exponential B-splines gives wavelet frames generated by functions whose Fourier transforms are compactly supported splines with geometrically distributed knot sequences. There is also a reverse transform, which yields pairs of dual Gabor frames when applied to certain wavelet frames.
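    For readers outside frame theory, the objects discussed in this and the neighbouring records are sequences {f_k} in a Hilbert space satisfying the frame inequality

    $$ A\|f\|^{2} \le \sum_{k} |\langle f, f_k \rangle|^{2} \le B\|f\|^{2} \quad \text{for all } f \in \mathcal{H}, \qquad 0 < A \le B < \infty; $$

    a Gabor system is generated by modulations and translations of a window g, {e^{2\pi i m b x} g(x - na)}_{m,n in Z}, a wavelet system by dilations and translations of a function psi, {a^{j/2} psi(a^{j} x - k b)}_{j,k in Z}, and two frames {f_k}, {g_k} are dual if f = \sum_k \langle f, g_k \rangle f_k for every f.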

  4. Frames for undergraduates

    CERN Document Server

    Han, Deguang; Larson, David; Weber, Eric

    2007-01-01

    Frames for Undergraduates is an undergraduate-level introduction to the theory of frames in a Hilbert space. This book can serve as a text for a special-topics course in frame theory, but it could also be used to teach a second semester of linear algebra, using frames as an application of the theoretical concepts. It can also provide a complete and helpful resource for students doing undergraduate research projects using frames. The early chapters contain the topics from linear algebra that students need to know in order to read the rest of the book. The later chapters are devoted to advanced topics, which allow students with more experience to study more intricate types of frames. Toward that end, a Student Presentation section gives detailed proofs of fairly technical results with the intention that a student could work out these proofs independently and prepare a presentation to a class or research group. The authors have also presented some stories in the Anecdotes section about how this material has moti...

  5. Construction of a frameless camera-based stereotactic neuronavigator.

    Science.gov (United States)

    Cornejo, A; Algorri, M E

    2004-01-01

    We built an infrared vision system to be used as the real time 3D motion sensor in a prototype low cost, high precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. We present our choice of technology including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high bandwidth computer communication with the cameras and real time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high precision, real time requirements of neuronavigation systems we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications.
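    The "triangulation equations from stereo camera pairs" mentioned above reduce, in the standard linear (DLT) formulation, to a small least-squares problem per infrared marker. The sketch below is a generic implementation of that textbook step, not the authors' code; the projection matrices are assumed known from camera calibration.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3D point.

        P1, P2 : 3x4 camera projection matrices obtained from calibration
        x1, x2 : (u, v) pixel coordinates of the same IR marker in each camera
        Returns the 3D point in the calibration coordinate frame.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The solution is the right singular vector with the smallest singular value.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]   # de-homogenize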

  6. Use of a color CMOS camera as a colorimeter

    Science.gov (United States)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.

  7. ARNICA, the Arcetri near-infrared camera: Astronomical performance assessment.

    Science.gov (United States)

    Hunt, L. K.; Lisi, F.; Testi, L.; Baffa, C.; Borelli, S.; Maiolino, R.; Moriondo, G.; Stanga, R. M.

    1996-01-01

    The Arcetri near-infrared camera ARNICA was built as a users' instrument for the Infrared Telescope at Gornergrat (TIRGO), and is based on a 256x256 NICMOS 3 detector. In this paper, we discuss ARNICA's optical and astronomical performance at the TIRGO and at the William Herschel Telescope on La Palma. Optical performance is evaluated in terms of plate scale, distortion, point spread function, and ghosting. Astronomical performance is characterized by camera efficiency, sensitivity, and spatial uniformity of the photometry.

  8. Vertically Integrated Edgeless Photon Imaging Camera

    Energy Technology Data Exchange (ETDEWEB)

    Fahim, Farah [Fermilab; Deptuch, Grzegorz [Fermilab; Shenai, Alpana [Fermilab; Maj, Piotr [AGH-UST, Cracow; Kmon, Piotr [AGH-UST, Cracow; Grybos, Pawel [AGH-UST, Cracow; Szczygiel, Robert [AGH-UST, Cracow; Siddons, D. Peter [Brookhaven; Rumaiz, Abdul [Brookhaven; Kuczewski, Anthony [Brookhaven; Mead, Joseph [Brookhaven; Bradford, Rebecca [Argonne; Weizeorick, John [Argonne

    2017-01-01

    The Vertically Integrated Photon Imaging Chip - Large (VIPIC-L) is a large area, small pixel (65 μm), 3D integrated, photon counting ASIC with zero-suppressed or full-frame dead-time-less data readout. It features a data throughput of 14.4 Gbps per chip with a full-frame readout speed of 56 kframes/s in the imaging mode. VIPIC-L contains a 192 x 192 pixel array; the total size of the chip is 1.248 cm x 1.248 cm with only a 5 μm periphery, and it contains about 120M transistors. A 1.3M pixel camera module will be developed by arranging a 6 x 6 array of 3D VIPIC-L's bonded to a large area silicon sensor on the analog side and to a readout board on the digital side. The readout board hosts a bank of FPGA's, one per VIPIC-L, to allow processing of up to 0.7 Tbps of raw data produced by the camera.
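    The module pixel count quoted above follows directly from the tiling (a simple consistency check, not additional data from the paper):

    $$ 6 \times 6 \times 192 \times 192 = 36 \times 36\,864 = 1\,327\,104 \approx 1.3\ \text{Mpixel}, $$

    and at the stated full-frame rate each chip reads out 36,864 pixels x 56,000 frames/s, i.e. about 2.1 x 10^9 pixel samples per second, consistent in order of magnitude with the quoted 14.4 Gbps per-chip throughput.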

  9. Demo : an embedded vision system for high frame rate visual servoing

    NARCIS (Netherlands)

    Ye, Z.; He, Y.; Pieters, R.S.; Mesman, B.; Corporaal, H.; Jonker, P.P.

    2011-01-01

    The frame rate of commercial off-the-shelf industrial cameras is breaking the threshold of 1000 frames-per-second, the sample rate required in high performance motion control systems. On the one hand, it enables computer vision as a cost-effective feedback source; on the other hand, it imposes

  10. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  11. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.
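    Since the tracker is homography-based, its core geometric operation is transferring ground-plane points between camera views with a 3x3 homography. The snippet below shows only that generic operation; the matrix values and names are illustrative and are not taken from the thesis.

    import numpy as np

    def apply_homography(H, points):
        """Map an Nx2 array of ground-plane points through a 3x3 homography H."""
        pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coordinates
        mapped = pts_h @ H.T
        return mapped[:, :2] / mapped[:, 2:3]                   # back to inhomogeneous

    # Example: transfer a target's foot position from camera A to camera B,
    # assuming H_ab was estimated from ground-plane correspondences.
    H_ab = np.array([[0.90, 0.05, 12.0],
                     [-0.02, 1.10, -4.0],
                     [1e-4, 2e-4, 1.0]])
    foot_in_a = np.array([[320.0, 410.0]])
    print(apply_homography(H_ab, foot_in_a))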

  12. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference, with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduce the costs of the positron camera and improve its performance.

  13. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  14. Weaving Hilbert space fusion frames

    OpenAIRE

    Neyshaburi, Fahimeh Arabyani; Arefijamaal, Ali Akbar

    2018-01-01

    A new notion in frame theory, so-called weaving frames, has been recently introduced to deal with some problems in signal processing and wireless sensor networks. Also, fusion frames are an important extension of frames, used in many areas, especially wireless sensor networks. In this paper, we survey the notion of weaving Hilbert space fusion frames. This concept has potential applications in wireless sensor networks which require distributed processing using different fusion frames...

  15. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  16. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  17. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    Report documentation page residue; recoverable keywords: polarimetric camera, remote sensing, space systems (93 pages, unclassified). Text fragments indicate data collections at Hermann Hall, Monterey, CA, and on 01 December 2016 at 1226 PST on the rooftop of the Marriot Hotel (Figure 37).

  18. IEEE 1394 CAMERA IMAGING SYSTEM FOR BROOKHAVEN'S BOOSTER APPLICATION FACILITY BEAM DIAGNOSTICS

    International Nuclear Information System (INIS)

    BROWN, K.A.; FRAK, B.; GASSNER, D.; HOFF, L.; OLSEN, R.H.; SATOGATA, T.; TEPIKIAN, S.

    2002-01-01

    Brookhaven's Booster Applications Facility (BAF) will deliver resonant extracted heavy ion beams from the AGS Booster to short-exposure fixed-target experiments located at the end of the BAF beam line. The facility is designed to deliver a wide range of heavy ion species over a range of intensities from 10^3 to over 10^8 ions/pulse, and over a range of energies from 0.1 to 3.0 GeV/nucleon. With these constraints we have designed instrumentation packages which can deliver the maximum amount of dynamic range at a reasonable cost. Through the use of high quality optics systems and neutral density light filters we will achieve 4 to 5 orders of magnitude in light collection. By using digital IEEE1394 camera systems we are able to eliminate the frame-grabber stage in processing and directly transfer data at maximum rates of 400 Mb/sec. In this note we give a detailed description of the system design and discuss the parameters used to develop the system specifications. We will also discuss the IEEE1394 camera software interface and the high-level user interface.

  19. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    Science.gov (United States)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic panchromatic (PAN) sensor and four multispectral camera heads for R, G, B and NIR. For the first time a 391 Megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. Along with CMOS technology comes a range of technical benefits. The dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights will be presented and compared with other CCD-based aerial sensors.

  20. The AOTF-Based NO2 Camera

    Science.gov (United States)

    Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.

    2017-12-01

    In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems,…). Air quality models generally rely on a limited number of monitoring stations, which do not capture the whole pattern and do not allow full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aiming at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with a high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in volcanic and industrial sulfur emissions monitoring) as it relies on spectral images taken at wavelengths where the molecule's absorption cross section is different. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of the formation of NO2 in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
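    The retrieval principle behind such an absorption-based camera can be written, in its simplest two-wavelength Beer-Lambert form, as (a schematic expression for orientation only; the actual AOTF instrument resolves the NO2 spectral structure rather than using just two wavelengths)

    $$ \mathrm{SCD}_{\mathrm{NO_2}} \approx \frac{1}{\sigma(\lambda_{\mathrm{on}}) - \sigma(\lambda_{\mathrm{off}})}\, \ln\!\frac{I_0(\lambda_{\mathrm{on}})\, I(\lambda_{\mathrm{off}})}{I(\lambda_{\mathrm{on}})\, I_0(\lambda_{\mathrm{off}})}, $$

    where I and I_0 are the measured and background (plume-free) radiances and sigma is the NO2 absorption cross section, evaluated per pixel to yield a 2-D slant column density map.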

  1. Operator representations of frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Hasannasab, Marzieh

    2017-01-01

    The purpose of this paper is to consider representations of frames {f_k}_{k∈I} in a Hilbert space ℋ of the form {f_k}_{k∈I} = {T^k f_0}_{k∈I} for a linear operator T; here the index set I is either ℤ or ℕ0. While a representation of this form is available under weak conditions on the frame, the analysis of the properties of the operator T requires more work. For example it is a delicate issue to obtain a representation with a bounded operator, and the availability of such a representation not only depends on the frame considered as a set, but also on the chosen indexing. Using results from operator theory we show that by embedding the Hilbert space ℋ into a larger Hilbert space, we can always represent a frame via iterations of a bounded operator, composed with the orthogonal projection onto ℋ. The paper closes with a discussion of an open problem concerning representations of Gabor frames via iterations of a bounded operator.

  2. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
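    The "l1-optimized camera path" of the title refers to smoothing the estimated camera trajectory by minimizing its temporal total variation while keeping it close to the original path. A generic form of such an objective (not necessarily the exact one used in the paper) is

    $$ \min_{\{p_t\}} \ \sum_{t} \| p_{t+1} - p_t \|_{1} \; + \; \lambda \sum_{t} \| p_t - c_t \|_{2}^{2}, $$

    where c_t is the camera path accumulated from the inter-frame homographies and p_t is the smoothed path; each output frame is then rendered by warping the input frame with the correction transform that maps c_t to p_t, with crop constraints added to limit holes at the frame borders.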

  3. Framing Vision: An Examination of Framing, Sensegiving, and Sensemaking during a Change Initiative

    Science.gov (United States)

    Hamilton, William

    2016-01-01

    The purpose of this short article is to review the findings from an instrumental case study that examines how a college president used what this article refers to as "frame alignment processes" to mobilize internal and external support for a college initiative--one that achieved success under the current president. Specifically, I…

  4. Indoor calibration for stereoscopic camera STC: a new method

    Science.gov (United States)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of planetary surfaces from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo couples: for this, a stereo validation setup providing an indoor reproduction of the flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand new concept to minimize mass and volume and to allow push-frame imaging. This model made it necessary to define a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other hand, it allows replication of different viewing angles for the considered targets. Neglecting the curvature of Mercury for the sake of simplicity, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  5. Framing (implicitly) matters

    DEFF Research Database (Denmark)

    Anderson, Joel; Antalikova, Radka

    2014-01-01

    Denmark is currently experiencing the highest immigration rate in its modern history. Population surveys indicate that negative public attitudes toward immigrants actually stem from attitudes toward their (perceived) Islamic affiliation. We used a framing paradigm to investigate the explicit and implicit attitudes of Christian and Atheist Danes toward targets framed as Muslims or as immigrants. The results showed that explicit and implicit attitudes were more negative when the target was framed as a Muslim, rather than as an immigrant. Interestingly, implicit attitudes were qualified by the participants' religion. Specifically, analyses revealed that Christians demonstrated more negative implicit attitudes toward immigrants than Muslims. Conversely, Atheists demonstrated more negative implicit attitudes toward Muslims than immigrants. These results suggest a complex relationship between religion...

  6. ``Frames of Reference'' revisited

    Science.gov (United States)

    Steyn-Ross, Alistair; Ivey, Donald G.

    1992-12-01

    The PSSC teaching film, ``Frames of Reference,'' was made in 1960, and was one of the first audio-visual attempts at showing how your physical ``point of view,'' or frame of reference, necessarily alters both your perceptions and your observations of motion. The gentle humor and original demonstrations made a lasting impact on many audiences, and with its recent re-release as part of the AAPT Cinema Classics videodisc it is timely that we should review both the message and the methods of the film. An annotated script and photographs from the film are presented, followed by extension material on rotating frames which teachers may find appropriate for use in their classrooms: constructions, demonstrations, an example, and theory.

  7. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    International Nuclear Information System (INIS)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-01-01

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  8. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    Energy Technology Data Exchange (ETDEWEB)

    Chai, Kil-Byoung; Bellan, Paul M. [Applied Physics, Caltech, 1200 E. California Boulevard, Pasadena, California 91125 (United States)

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  9. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Some time ago, Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing non-browning face-plate glass for ITV cameras used in nuclear power plants. Now, in response to the increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  10. What's in a Frame?

    DEFF Research Database (Denmark)

    Holmgreen, Lise-Lotte

    Maintaining a good image and reputation in the eyes of stakeholders is vital to the organisation. Thus, in its corporate communication and discourse the organisation will seek to present or frame itself as favourably as possible while observing regulations stipulating accuracy and precision...... an organisation, and hence in shaping the image projected to the public. Framing is here understood as the selection of ‘some aspects of perceived reality … [making] them more salient in the communication text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation...

  11. Thinking inside the frame

    DEFF Research Database (Denmark)

    Knudsen, Sanne

    2017-01-01

    The humanities, the natural and social sciences all represent advanced and systematic knowledge production—and they all receive public funding for doing so. However, although the field of public understanding of science has been well established for decades, similar research attention has not been directed at the humanities. The purpose of this study is to argue the case for further research of public understanding of the humanities and to take a first step in that direction by presenting a study of the framing of the humanities in Danish print news media. Different framings of the humanities...

  12. Framing financial culture

    DEFF Research Database (Denmark)

    Just, Sine Nørholm; Mouton, Nicolaas T.O.

    2014-01-01

    between competing frames leads to the conclusion that this political "blame game" is related to struggles over how to define the scandal, how to conceptualize its causes, and policy recommendations. Banks may have lost the battle of "Liborgate," but the war over the meaning of financial culture is far from over. Originality/value – The paper is theoretically and methodologically original in its combination of the theories of framing and stasis, and it provides analytical insights into how sense is made of financial culture in the wake of the financial crisis.

  13. Timber frame walls

    DEFF Research Database (Denmark)

    Hansen, Ernst Jan de Place; Brandt, Erik

    2010-01-01

    A ventilated cavity is usually considered good practice for removing moisture behind the cladding of timber framed walls. Timber frame walls with no cavity are a logical alternative as they are slimmer and less expensive to produce and besides the risk of a two-sided fire behind the cladding....... It was found that the specific damages made to the vapour barrier as part of the test did not have any provable effect on the moisture content. In general elements with an intact vapour barrier did not show a critical moisture content at the wind barrier after four years of exposure....

  14. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  15. Frame scaling function sets and frame wavelet sets in Rd

    International Nuclear Information System (INIS)

    Liu Zhanwei; Hu Guoen; Wu Guochang

    2009-01-01

    In this paper, we classify frame wavelet sets and frame scaling function sets in higher dimensions. Firstly, we obtain a necessary condition for a set to be the frame wavelet sets. Then, we present a necessary and sufficient condition for a set to be a frame scaling function set. We give a property of frame scaling function sets, too. Some corresponding examples are given to prove our theory in each section.

  16. Design and Construction of an X-ray Lightning Camera

    Science.gov (United States)

    Schaal, M.; Dwyer, J. R.; Rassoul, H. K.; Uman, M. A.; Jordan, D. M.; Hill, J. D.

    2010-12-01

    A pinhole-type camera was designed and built for the purpose of producing high-speed images of the x-ray emissions from rocket-and-wire-triggered lightning. The camera consists of 30 7.62-cm diameter NaI(Tl) scintillation detectors, each sampling at 10 million frames per second. The steel structure of the camera is encased in 1.27-cm thick lead, which blocks x-rays that are less than 400 keV, except through a 7.62-cm diameter “pinhole” aperture located at the front of the camera. The lead and steel structure is covered in 0.16-cm thick aluminum to block RF noise, water and light. All together, the camera weighs about 550-kg and is approximately 1.2-m x 0.6-m x 0.6-m. The image plane, which is adjustable, was placed 32-cm behind the pinhole aperture, giving a field of view of about ±38° in both the vertical and horizontal directions. The elevation of the camera is adjustable between 0 and 50° from horizontal and the camera may be pointed in any azimuthal direction. In its current configuration, the camera’s angular resolution is about 14°. During the summer of 2010, the x-ray camera was located 44-m from the rocket-launch tower at the UF/Florida Tech International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, FL and several rocket-triggered lightning flashes were observed. In this presentation, I will discuss the design, construction and operation of this x-ray camera.
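    A quick consistency check of the quoted geometry, treating the aperture as an ideal pinhole (the image-plane width below is inferred, not stated in the abstract):
    \[
    \theta_{\mathrm{res}} \approx \arctan\!\left(\frac{d_{\mathrm{det}}}{L}\right)
      = \arctan\!\left(\frac{7.62\ \mathrm{cm}}{32\ \mathrm{cm}}\right) \approx 13^{\circ}\text{--}14^{\circ},
    \qquad
    \frac{w}{2} \approx L\tan 38^{\circ} \approx 25\ \mathrm{cm},
    \]
    i.e. a single detector subtends roughly the stated 14° angular resolution, and the stated ±38° field of view corresponds to an image plane roughly 50 cm across.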

  17. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands. The camera operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  18. Frames and extension problems I

    DEFF Research Database (Denmark)

    Christensen, Ole

    2014-01-01

    In this article we present a short survey of frame theory in Hilbert spaces. We discuss Gabor frames and wavelet frames and set the stage for a discussion of various extension principles; this will be presented in the article Frames and extension problems II (joint with H.O. Kim and R.Y. Kim)....

  19. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  20. Sparse Matrices in Frame Theory

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Krahmer, Felix; Kutyniok, Gitta

    2014-01-01

    Frame theory is closely intertwined with signal processing through a canon of methodologies for the analysis of signals using (redundant) linear measurements. The canonical dual frame associated with a frame provides a means for reconstruction by a least squares approach, but other dual frames...... yield alternative reconstruction procedures. The novel paradigm of sparsity has recently entered the area of frame theory in various ways. Of those different sparsity perspectives, we will focus on the situations where frames and (not necessarily canonical) dual frames can be written as sparse matrices...

  1. The Cage; Framing Dad

    DEFF Research Database (Denmark)

    Kau, Edvin

    2012-01-01

    Unfolding his story very gradually and arousing the viewer’s curiosity, Sitaru invites the audience to investigate the parents’ and the boy’s mutual positions in their small flat, as well as the various layers of their conversations, through such means as framing, editing style, and the use...

  2. Quantum frames of reference

    International Nuclear Information System (INIS)

    Kaufherr, T.

    1981-01-01

    The idea that only relative variables have physical meaning came to be known as Mach's principle. Carrying over this idea to quantum theory has led to the consideration of finite mass, macroscopic reference frames, relative to which all physical quantities are measured. During the process of measurement, a finite mass observer receives a kickback, and this reaction of the measuring device is not negligible in quantum theory because of the quantization of the action. Hence, the observer himself has to be included in the system that is being considered. Using this as the starting point, a number of thought experiments involving finite mass observers are discussed which have quantum uncertainties in their time or in their position. These thought experiments serve to elucidate in a qualitative way some of the difficulties involved, as well as pointing out a direction to take in seeking solutions to them. When the discussion is extended to include more than one observer, the question of the covariance of the theory immediately arises. Because none of the frames of reference should be preferred, the theory should be covariant. This demand expresses an equivalence principle which here is extended to include reference frames which are in quantum uncertainties relative to each other. Formulating the problem in terms of canonical variables, the ensuing free Hamiltonian contains vector and scalar potentials which represent the kick that the reference frame receives during measurement. These are essentially gravitational type potentials, resulting, as it were, from the extension of the equivalence principle into the quantum domain

  3. Framing the Oscars live

    DEFF Research Database (Denmark)

    Haastrup, Helle Kannik

    2016-01-01

    How is the global media event of the Oscars localised through the talk show on Danish television? How are both the mediated film star and the special brand of Hollywood celebrity culture addressed by the cultural intermediaries in the Danish framing? These are the questions I propose to answer...

  4. Framing ‘fracking’

    NARCIS (Netherlands)

    Williams, Laurence; Macnaghten, Philip; Davies, Richard; Curtis, Sarah

    2017-01-01

    The prospect of fracking in the United Kingdom has been accompanied by significant public unease. We outline how the policy debate is being framed by UK institutional actors, finding evidence of a dominant discourse in which the policy approach is defined through a deficit model of public

  5. Framing Canadian federalism

    National Research Council Canada - National Science Library

    Saywell, John; Anastakis, Dimitry; Bryden, Penny E

    2009-01-01

    ... the pervasive effects that federalism has on Canadian politics, economics, culture, and history, and provide a detailed framework in which to understand contemporary federalism. Written in honour of John T. Saywell's half-century of accomplished and influential scholarly work and teaching, Framing Canadian Federalism is a timely and fitting t...

  6. Global Vertical Reference Frame

    Czech Academy of Sciences Publication Activity Database

    Burša, Milan; Kenyon, S.; Kouba, J.; Šíma, Zdislav; Vatrt, V.; Vojtíšková, M.

    -, č. 5 (2009), s. 53-63 ISSN 1801-8483 R&D Projects: GA ČR GA205/08/0328 Institutional research plan: CEZ:AV0Z10030501 Keywords : sea surface topography * satellite altimetry * vertical frames Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  7. Institutional Justification in Frames

    DEFF Research Database (Denmark)

    Baden, Christian; Schultz, Friederike

    consensus. It extends research on framing in mass communication by applying institutional theory and Boltanski and Thévenot’s (2006) theory on justification in order to explain how the success and failure of proposed interpretations depend on the mobilization of accepted social institutions to justify

  8. High-frame-rate digital radiographic videography

    Science.gov (United States)

    King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott

    1994-10-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM, demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100 microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  9. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or micro-invasive surgery.
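    A minimal sketch of the frequency-locking idea described above: the controller nudges a camera's supply voltage in proportion to the error between its measured line period and the reference (Master) line period. The gain, voltage limits and the assumed "higher voltage runs faster" relation are illustrative assumptions, not the actual FPGA control core.

        # Illustrative proportional controller for locking a self-timed camera's
        # line period to a reference (e.g. the Master camera's line period).
        # All constants and the voltage/period relation are assumptions.

        def adjust_supply_voltage(v_now, period_meas, period_ref,
                                  gain=0.5, v_min=1.6, v_max=2.0):
            """Return a new supply voltage that drives period_meas towards period_ref."""
            rel_error = (period_meas - period_ref) / period_ref  # positive -> running too slow
            v_new = v_now + gain * rel_error                     # assume higher voltage -> faster
            return min(max(v_new, v_min), v_max)                 # clamp to a safe supply range

        # Example: a Slave measured a line period 1% longer than the Master's.
        print(adjust_supply_voltage(1.80, period_meas=1.01e-5, period_ref=1.00e-5))  # -> 1.805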

  10. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
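    The paper's intra-camera step recovers tilt angle, focal length and camera height from pedestrian detections; the sketch below illustrates only the simplest special case (zero tilt, flat ground, horizon at the principal point, known average pedestrian height), where the camera height follows directly from the foot and head image rows of each detection. Names and numbers are hypothetical, not the authors' algorithm.

        import numpy as np

        def estimate_camera_height(foot_rows, head_rows, horizon_row, person_height=1.7):
            """Estimate camera height (m) from pedestrian detections.

            Simplifying assumptions (illustration only): flat ground, zero camera
            tilt, the horizon row equals the principal-point row, and all detected
            pedestrians have roughly the given height. Rows are measured downwards
            from the top of the image.
            """
            foot = np.asarray(foot_rows, float) - horizon_row  # offset below horizon
            head = np.asarray(head_rows, float) - horizon_row
            heights = person_height * foot / (foot - head)     # per-detection estimate
            return float(np.median(heights))                   # robust aggregate

        # Hypothetical detections in a 1080-row image with the horizon at row 400.
        print(estimate_camera_height(foot_rows=[900, 700, 620],
                                     head_rows=[650, 550, 510],
                                     horizon_row=400))          # -> about 3.4 m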

  11. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Bell, P; Griffith, R; Hagans, K; Lerche, R; Allen, C; Davies, T; Janson, F; Justin, R; Marshall, B; Sweningsen, O

    2004-01-01

    The National Ignition Facility (NIF) is under construction at the Lawrence Livermore National Laboratory (LLNL) for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras, a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses (optical comb generators) that are suitable for temporal calibrations. These optical comb generators (Figure 1) are used with the LLNL optical streak cameras. They are small, portable light sources that produce a series of temporally short, uniformly spaced, optical pulses. Comb generators have been produced with 0.1, 0.5, 1, 3, 6, and 10-GHz pulse trains of 780-nm wavelength light with individual pulse durations of ∼25-ps FWHM. Signal output is via a fiber-optic connector. Signal is transported from comb generator to streak camera through multi-mode, graded-index optical fibers. At the NIF, ultra-fast streak-cameras are used by the Laser Fusion Program experimentalists to record fast transient optical signals. Their temporal resolution is unmatched by any other transient recorder. Their ability to spatially discriminate an image along the input slit allows them to function as a one-dimensional image recorder, time-resolved spectrometer, or multichannel transient recorder. Depending on the choice of photocathode, they can be made sensitive to photon energies from 1.1 eV to 30 keV and beyond. Comb generators perform two important functions for LLNL streak-camera users. First, comb generators are used as a precision time-mark generator for calibrating streak camera sweep rates. Accuracy is achieved by averaging many streak camera images of comb generator signals. Time-base calibrations with portable comb generators are easily done in both the calibration laboratory and in situ. Second, comb signals are applied
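    The sweep-rate calibration described above amounts to fitting the known, uniform time spacing of the comb pulses against the measured positions of the time marks on the streak record; a least-squares line gives the sweep rate, and averaging many records reduces the error. The sketch below assumes an ideally linear sweep and uses hypothetical numbers.

        import numpy as np

        def sweep_rate_from_comb(mark_positions_px, comb_period_s):
            """Fit positions of comb time marks (pixels) against their known times.

            Returns (sweep_rate_px_per_s, residual_rms_px). Assumes a linear sweep;
            a real calibration would average many images and may fit higher-order terms.
            """
            t = np.arange(len(mark_positions_px)) * comb_period_s  # known mark times
            rate, offset = np.polyfit(t, mark_positions_px, 1)     # linear fit
            resid = np.asarray(mark_positions_px) - (rate * t + offset)
            return rate, float(np.sqrt(np.mean(resid ** 2)))

        # Hypothetical 1-GHz comb (1 ns spacing) producing marks ~85 px apart.
        marks = [102.0, 187.2, 271.9, 357.1, 442.0]
        rate, rms = sweep_rate_from_comb(marks, comb_period_s=1e-9)
        print(f"sweep rate = {rate:.3e} px/s, fit RMS = {rms:.2f} px")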

  12. PC-AT to gamma camera interface ANUGAMI-S

    International Nuclear Information System (INIS)

    Bhattacharya, Sadhana; Gopalakrishnan, K.R.

    1997-01-01

    The PC-AT to gamma camera interface is an image acquisition system used in nuclear medicine centres and hospitals. The interface hardware and acquisition software have been designed and developed to meet most routine clinical applications using a gamma camera. The state-of-the-art design of the interface provides quality improvement in addition to image acquisition by applying on-line uniformity correction, which is essential for gamma camera applications in nuclear medicine. The improvement in image quality has been achieved by image acquisition in a positionally varying, sliding energy window. It supports all acquisition modes, viz. static, dynamic and gated, with and without uniformity correction. The user interface provides acquisition in various user-selectable frame sizes, orientations and colour palettes. A complete emulation of the camera console has been provided, along with a persistence scope and acquisition parameter display. It is a universal system which provides a modern, cost-effective and easily maintainable solution for interfacing any gamma camera to a PC or for upgrading analog gamma cameras. (author). 4 refs., 3 figs
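    On-line uniformity correction of a gamma camera is essentially a flat-field operation: each pixel of the acquired frame is scaled by the inverse of a normalised uniformity (flood) map recorded beforehand. The sketch below is a generic flat-field correction, not the ANUGAMI-S firmware; names and data are illustrative.

        import numpy as np

        def uniformity_correct(frame, flood_map, eps=1e-6):
            """Apply flat-field (uniformity) correction to a gamma-camera frame.

            flood_map is an image of a uniform source acquired beforehand; dividing
            by its normalised response flattens the detector's spatial sensitivity.
            """
            flood = np.asarray(flood_map, dtype=float)
            gain = flood.mean() / np.clip(flood, eps, None)  # per-pixel correction factor
            return np.asarray(frame, dtype=float) * gain

        # Tiny synthetic example: a detector whose right half is 10% less sensitive.
        flood = np.array([[100., 100., 90., 90.]] * 4)
        frame = np.array([[50., 50., 45., 45.]] * 4)         # true activity is uniform
        print(uniformity_correct(frame, flood))               # ~47.5 everywhere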

  13. Analyzing Gait Using a Time-of-Flight Camera

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2009-01-01

    An algorithm is created, which performs human gait analysis using spatial data and amplitude images from a Time-of-flight camera. For each frame in a sequence the camera supplies Cartesian coordinates in space for every pixel. By using an articulated model the subject pose is estimated in the depth...... map in each frame. The pose estimation is based on likelihood, contrast in the amplitude image, smoothness and a shape prior used to solve a Markov random field. Based on the pose estimates, and the prior that movement is locally smooth, a sequential model is created, and a gait analysis is done...... on this model. The output data are: Speed, Cadence (steps per minute), Step length, Stride length (stride being two consecutive steps also known as a gait cycle), and Range of motion (angles of joints). The created system produces good output data of the described output parameters and requires no user...
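    Given per-frame joint positions from the pose estimates, the listed gait parameters reduce to simple geometry over time: steps can be detected at the extrema of the inter-ankle distance, and speed, cadence and step length follow from the frame rate. The sketch below assumes idealised, already-extracted ankle trajectories (a hypothetical data structure, not the paper's MRF-based pipeline).

        import numpy as np

        def gait_parameters(left_ankle, right_ankle, fps):
            """Compute basic gait parameters from 3D ankle trajectories.

            left_ankle, right_ankle: arrays of shape (n_frames, 3) in metres.
            Steps are taken at local maxima of the horizontal inter-ankle distance,
            which is a common simplification, not the paper's method.
            """
            la, ra = np.asarray(left_ankle, float), np.asarray(right_ankle, float)
            sep = np.linalg.norm((la - ra)[:, :2], axis=1)       # horizontal separation
            peaks = [i for i in range(1, len(sep) - 1)
                     if sep[i] >= sep[i - 1] and sep[i] > sep[i + 1]]
            duration = (len(la) - 1) / fps
            mid = (la + ra) / 2.0                                # body reference point
            distance = np.sum(np.linalg.norm(np.diff(mid[:, :2], axis=0), axis=1))
            return {
                "speed_m_s": distance / duration,
                "cadence_steps_min": 60.0 * len(peaks) / duration,
                "step_length_m": distance / max(len(peaks), 1),
            }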

  14. Some relationship between G-frames and frames

    Directory of Open Access Journals (Sweden)

    Mehdi Rashidi-Kouchi

    2015-06-01

    In this paper we prove that every g-Riesz basis for a Hilbert space $H$ with respect to $K$ is, under an additional condition, a Riesz basis for the Hilbert $B(K)$-module $B(H,K)$. This is an extension of [A. Askarizadeh, M. A. Dehghan, G-frames as special frames, Turk. J. Math., 35 (2011), 1-11]. We also derive similar results for g-orthonormal and orthogonal bases. Some relationships between dual frames, dual g-frames, exact frames and exact g-frames are presented as well.
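    For reference, the two notions being related are, stated here in the standard Hilbert-space form (the paper works in the Hilbert C*-module setting): a sequence $\{f_k\}$ in a Hilbert space $H$ is a frame if there exist constants $0 < A \le B$ with
    \[
    A\,\|f\|^2 \;\le\; \sum_{k}\bigl|\langle f, f_k\rangle\bigr|^2 \;\le\; B\,\|f\|^2
    \qquad \text{for all } f\in H ,
    \]
    while a family of bounded operators $\Lambda_k\colon H \to K_k$ is a g-frame if
    \[
    A\,\|f\|^2 \;\le\; \sum_{k}\|\Lambda_k f\|^2 \;\le\; B\,\|f\|^2
    \qquad \text{for all } f\in H .
    \]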

  15. Wavelet frames and their duals

    DEFF Research Database (Denmark)

    Lemvig, Jakob

    2008-01-01

    frames with good time localization and other attractive properties. Furthermore, the dual wavelet frames are constructed in such a way that we are guaranteed that both frames will have the same desirable features. The construction procedure works for any real, expansive dilation. A quasi-affine system....... The signals are then represented by linear combinations of the building blocks with coefficients found by an associated frame, called a dual frame. A wavelet frame is a frame where the building blocks are stretched (dilated) and translated versions of a single function; such a frame is said to have wavelet...... structure. The dilation of the wavelet building blocks in higher dimension is done via a square matrix which is usually taken to be integer valued. In this thesis we step away from the "usual" integer, expansive dilation and consider more general, expansive dilations. In most applications of wavelet frames...
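    The wavelet structure referred to above, for an expansive dilation matrix $A$ and integer translations, together with the reconstruction provided by a dual frame, can be written (in the standard form) as
    \[
    \psi_{j,k}(x) \;=\; |\det A|^{j/2}\,\psi(A^{j}x - k), \qquad j\in\mathbb{Z},\; k\in\mathbb{Z}^d ,
    \]
    and, whenever $\{\psi_{j,k}\}$ is a frame with dual frame $\{\tilde{\psi}_{j,k}\}$,
    \[
    f \;=\; \sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z}^d}\,\langle f,\tilde{\psi}_{j,k}\rangle\,\psi_{j,k}
    \qquad \text{for all } f\in L^2(\mathbb{R}^d).
    \]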

  16. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  17. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  18. Work and Inertial Frames

    Science.gov (United States)

    Kaufman, Richard

    2017-12-01

    A fairly recent paper resolves a large discrepancy in the internal energy utilized to fire a cannon as calculated by two inertial observers. Earth and its small reaction velocity must be considered in the system so that the change in kinetic energy is calculated correctly. This paper uses a car in a similar scenario, but considers the work done by forces acting over distances. An analysis of the system must include all energy interactions, including the work done on the car and especially the (negative) work done on Earth in a moving reference frame. This shows the importance of considering the force on Earth and the distance Earth travels. For calculation of work in inertial reference frames, the center of mass perspective is shown to be useful. We also consider the energy requirements to efficiently accelerate a mass among interacting masses.
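    A compressed version of the kind of bookkeeping the paper describes, for a vehicle of mass $m$ reaching speed $v$ on an Earth of mass $M$ (sketch only): momentum conservation gives Earth's recoil $V=-mv/M$, and the total kinetic-energy change, i.e. the energy the engine must supply, is the same in every inertial frame once the work done on Earth is included.
    \[
    \Delta K \;=\; \tfrac12 m v^{2} \;+\; \tfrac12 M V^{2}
             \;=\; \tfrac12 m v^{2}\Bigl(1+\tfrac{m}{M}\Bigr)
    \]
    in the ground frame, while in a frame where car and Earth initially move at speed $u$,
    \[
    \Delta K' \;=\; \tfrac12 m\bigl[(u+v)^2-u^2\bigr] + \tfrac12 M\bigl[(u+V)^2-u^2\bigr]
              \;=\; \tfrac12 m v^{2} + \tfrac12 M V^{2} + \underbrace{u\,(m v + M V)}_{=\,0}
              \;=\; \Delta K .
    \]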

  19. FishFrame

    DEFF Research Database (Denmark)

    Degel, Henrik; Jansen, Teunis

    2006-01-01

    . Development and test of software modules can be done once and reused by all. The biggest challenge in this is not technical – it is in organisation, coordination and trust. This challenge has been addressed by FishFrame, a web-based data warehouse application. The “bottom-up” approach with maximum involvement...... of end users from as many labs and user groups as possible has been rather slow but quite successful in building international trust and cooperation around the system. These are mandatory prerequisites when our primary goal is not the programming project itself, but the creation of a tool that adds real...... value to users and in the end improves the way we work with our data. FishFrame version 4.2 is presented and the lessons learned from the process are discussed....

  20. Framing Light Rail Projects

    DEFF Research Database (Denmark)

    Olesen, Mette

    2014-01-01

    In Europe, there has been a strong political will to implement light rail. This article contributes to the knowledge concerning policies around light rail by analysing how local actors frame light rail projects and which rationalities and arguments are present in this decision-making process....... The article draws on the socio-technical approach to mobilities studies in order to reassemble the decision-making process in three European cases: Bergen, Angers, and Bern. This article provides insights into the political, discursive and material production of light rail mobilities in a European context....... It identifies the planning rationales behind the systems and the policies that have been supportive of this light rail vision. Finally, the article identifies the practical challenges and potentials that have been connected to the different local frames of light rail mobility which can be used in future...

  1. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  2. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  3. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in geometrical or coordinate arrangement defining a detector matrix to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further treating components. The latter components include coordinate and energy channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle

  4. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different than the area of the crystals on the tube in other rows for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other

  5. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used...... repeatedly to convey the feeling of a man and a woman falling in love. This raises the question of why producers and directors choose certain stylistic features to narrate certain categories of content. Through the analysis of several short film and TV clips, this article explores whether...... or not there are perceptual aspects related to specific stylistic features that enable them to be used for delimited narrational purposes. The article further attempts to reopen this particular stylistic debate by exploring the embodied aspects of visual perception in relation to specific stylistic features...

  6. Framing a Bank

    DEFF Research Database (Denmark)

    Holmgreen, Lise-Lotte

    2012-01-01

    Danish bank, Danske Bank, during the 2008 financial crisis and hence in shaping its image projected to the public. Through the study of a number of semantic frames adopted by the Danish print press and those adopted by the Bank, this article will argue for the constructions of the press putting...... considerable strain on the Bank and its image, leading it to reconsider its previous strategy of denial of responsibility...

  7. Optical loop framing

    International Nuclear Information System (INIS)

    Kalibjian, R.; Chong, Y.P.; Prono, D.S.; Cavagnolo, H.R.

    1984-06-01

    The ATA provides an electron beam pulse of 70-ns duration at a 1-Hz rate. Our present optical diagnostics technique involves the imaging of the visible light generated by the beam incident onto the plane of a thin sheet of material. It has already been demonstrated that the light generated has a sufficiently fast temporal response in performing beam diagnostics. Notwithstanding possible beam emittance degradation due to scattering in the thin sheet, the observation of beam spatial profiles with relatively high efficiencies has provided data complementary to that obtained from beam wall current monitors and from various x-ray probes and other electrical probes. The optical image sensor consists of a gated, intensified television system. The gate pulse of the image intensifier can be appropriately delayed to give frames that are time-positioned from the head to the tail of the beam with a minimum gate time of 5-ns. The spatial correlation of the time frames from pulse to pulse is very good for a stable electron beam; however, when instabilities do occur, it is difficult to properly assess the spatial composition of the head and the tail of the beam on a pulse-to-pulse basis. Multiple gating within a pulse duration becomes desirable but cannot be performed because the recycle time (20-ms) of the TV system is much longer than the beam pulse. For this reason we have developed an optical-loop framing technique that will allow the recording of two frames within one pulse duration with our present gated/intensified TV system

  8. Density of Gabor Frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Heil, C.; Deng, B.

    1997-01-01

    A Gabor system is a set of time-frequency shifts$S(g,\\Lambda) = \\{e^{2\\pi i b x} g(x-a)\\}_{(a,b) \\in \\Lambda}$of a function $g \\in L^2({\\bold R}^d)$.We prove that if a finite union of Gabor systems$\\bigcup_{k=1}^r S(g_k,\\Lambda_k)$, with arbitrary sequences $\\Lambda_k$,forms a frame for $L^2({\\bo...

  9. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  10. Pseudo-set framing.

    Science.gov (United States)

    Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I

    2017-10-01

    Pseudo-set framing, arbitrarily grouping items or tasks together as part of an apparent "set", motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will, and will not, act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Wide-field time-correlated single photon counting (TCSPC) microscopy with time resolution below the frame exposure time

    Energy Technology Data Exchange (ETDEWEB)

    Hirvonen, Liisa M. [Department of Physics, King' s College London, Strand, London WC2R 2LS (United Kingdom); Petrášek, Zdeněk [Max Planck Institute of Biochemistry, Department of Cellular and Molecular Biophysics, Am Klopferspitz 18, D-82152 Martinsried (Germany); Suhling, Klaus, E-mail: klaus.suhling@kcl.ac.uk [Department of Physics, King' s College London, Strand, London WC2R 2LS (United Kingdom)

    2015-07-01

    Fast frame rate CMOS cameras in combination with photon counting intensifiers can be used for fluorescence imaging with single photon sensitivity at kHz frame rates. We show here how the phosphor decay of the image intensifier can be exploited for accurate timing of photon arrival well below the camera exposure time. This is achieved by taking ratios of the intensity of the photon events in two subsequent frames, and effectively allows wide-field TCSPC. This technique was used for measuring decays of ruthenium compound Ru(dpp) with lifetimes as low as 1 μs with 18.5 μs frame exposure time, including in living HeLa cells, using around 0.1 μW excitation power. We speculate that by using an image intensifier with a faster phosphor decay to match a higher camera frame rate, photon arrival time measurements on the nanosecond time scale could well be possible.
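    The ratio trick described above can be made concrete under a simplified model: if the intensifier phosphor decays exponentially with time constant tau and essentially all residual light from an event is collected in the following frame, the fraction of light spilling into frame 2 fixes the arrival time within frame 1. The sketch below uses this simplified model and assumed numbers; it is not the authors' analysis code.

        import math

        def photon_arrival_time(i_frame1, i_frame2, exposure, tau):
            """Estimate a photon's arrival time within the first frame's exposure.

            Simplified model: exponential phosphor decay with constant tau, and all
            residual light collected in the second frame. i_frame1, i_frame2 are the
            integrated intensities of the same event in two consecutive frames.
            Returns the arrival time measured from the start of the first frame.
            """
            r = i_frame2 / (i_frame1 + i_frame2)   # fraction of light spilling over
            return exposure + tau * math.log(r)

        # Hypothetical event: 30% of the light appears in the second frame,
        # 18.5 us exposure, 10 us phosphor decay constant (assumed value).
        print(photon_arrival_time(0.7, 0.3, exposure=18.5e-6, tau=10e-6))  # ~6.5 us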

  12. Taking it all in : special camera films in 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2006-07-15

    Details of a 360-degree digital camera designed by Immersive Media Telemmersion were presented. The camera has been employed extensively in the United States for homeland security and intelligence-gathering purposes. In Canada, the cameras are now being used by the oil and gas industry. The camera has 11 lenses pointing in all directions and generates high resolution movies that can be analyzed frame-by-frame from every angle. Global positioning satellite data can be gathered during filming so that operators can pinpoint any location. The 11 video streams use more than 100 million pixels per second. After filming, the system displays synchronized, high-resolution video streams, capturing a full motion spherical world complete with directional sound. It can be viewed on a computer monitor, video screen, or head-mounted display. Pembina Pipeline Corporation recently used the Telemmersion system to plot a proposed pipeline route between Alberta's Athabasca region and Edmonton. It was estimated that more than $50,000 was saved by using the camera. The resulting video has been viewed by Pembina's engineering, environmental and geotechnical groups who were able to accurately note the route's river crossings. The cameras were also used to estimate timber salvage. Footage was then given to the operations group, to help staff familiarize themselves with the terrain, the proposed route's right-of-way, and the number of water crossings and access points. Oil and gas operators have also used the equipment on a recently acquired block of land to select well sites. 4 figs.

  13. Attribute Framing and Goal Framing Effects in Health Decisions.

    Science.gov (United States)

    Krishnamurthy, Parthasarathy; Carter, Patrick; Blair, Edward

    2001-07-01

    Levin, Schneider, and Gaeth (LSG, 1998) have distinguished among three types of framing (risky choice, attribute, and goal framing) to reconcile conflicting findings in the literature. In the research reported here, we focus on attribute and goal framing. LSG propose that positive frames should be more effective than negative frames in the context of attribute framing, and negative frames should be more effective than positive frames in the context of goal framing. We test this framework by manipulating frame valence (positive vs negative) and frame type (attribute vs goal) in a unified context with common procedures. We also argue that the nature of effects in a goal-framing context may depend on the extent to which the research topic has "intrinsic self-relevance" to the population. In the context of medical decision making, we operationalize low intrinsic self-relevance by using student subjects and high intrinsic self-relevance by using patients. As expected, we find complete support for the LSG framework under low intrinsic self-relevance and modified support for the LSG framework under high intrinsic self-relevance. Overall, our research appears to confirm and extend the LSG framework. Copyright 2001 Academic Press.

  14. Exploring CEO’s Leadership Frames and E-Commerce Adoption among Bruneian SMEs

    Directory of Open Access Journals (Sweden)

    Afzaal H. Seyal

    2012-04-01

    The study examines the leadership styles of 250 CEOs in the adoption of electronic commerce (EC) among Bruneian SMEs. The study uses Bolman and Deal's instrument to measure the leadership frames and found that the majority (70%) of the leaders practice all four frames and are considered effective leaders. The paired human and symbolic frames of leadership remain dominant. In addition, the structural, human resource and symbolic frames are ranked highest among the multiple (three) frames used. However, the paired leadership frames (human and symbolic) were found to be a significant predictor of EC adoption among Bruneian SMEs. Based on the analysis and conclusions, some recommendations are made for the relevant authorities.

  15. Universal crystal cooling device for precession cameras, rotation cameras and diffractometers

    International Nuclear Information System (INIS)

    Hajdu, J.; McLaughlin, P.J.; Helliwell, J.R.; Sheldon, J.; Thompson, A.W.

    1985-01-01

    A versatile crystal cooling device is described for macromolecular crystallographic applications in the 290 to 80 K temperature range. It utilizes a fluctuation-free cold-nitrogen-gas supply, an insulated Mylar crystal cooling chamber and a universal ball joint, which connects the cooling chamber to the goniometer head and the crystal. The ball joint is a novel feature over all previous designs. As a result, the device can be used on various rotation cameras, precession cameras and diffractometers. The lubrication of the interconnecting parts with graphite allows the cooling chamber to remain stationary while the crystal and goniometer rotate. The construction allows for 360° rotation of the crystal around the goniometer axis and permits any settings on the arcs and slides of the goniometer head (even if working at 80 K). There are no blind regions associated with the frame holding the chamber. Alternatively, the interconnecting ball joint can be tightened and fixed. This results in a set up similar to the construction described by Bartunik and Schubert where the cooling chamber rotates with the crystal. The flexibility of the systems allows for the use of the device on most cameras or diffractometers. This device has been installed at the protein crystallographic stations of the Synchrotron Radiation Source at Daresbury Laboratory and in the Laboratory of Molecular Biophysics, Oxford. Several data sets have been collected with processing statistics typical of data collected without a cooling chamber. Tests using the full white beam of the synchrotron also look promising. (orig./BHO)

  16. Conformal frame dependence of inflation

    International Nuclear Information System (INIS)

    Domènech, Guillem; Sasaki, Misao

    2015-01-01

    Physical equivalence between different conformal frames in scalar-tensor theory of gravity is a known fact. However, assuming that matter minimally couples to the metric of a particular frame, which we call the matter Jordan frame, the matter point of view of the universe may vary from frame to frame. Thus, there is a clear distinction between the gravitational sector (curvature and scalar field) and the matter sector. In this paper, focusing on a simple power-law inflation model in the Einstein frame, two examples are considered: a super-inflationary Jordan frame and a bouncing-universe Jordan frame. Then we consider a spectator curvaton minimally coupled to a Jordan frame, and compute its contribution to the curvature perturbation power spectrum. In these specific examples, we find a blue tilt at short scales for the super-inflationary case, and a blue tilt at large scales for the bouncing case

  17. MedlinePlus FAQ: Framing

    Science.gov (United States)

    ... URL of this page: https://medlineplus.gov/faq/framing.html I'd like to link to MedlinePlus, ... M. encyclopedia. Our license agreements do not permit framing of their content from our site. For more ...

  18. Conformal frame dependence of inflation

    Energy Technology Data Exchange (ETDEWEB)

    Domènech, Guillem; Sasaki, Misao, E-mail: guillem.domenech@yukawa.kyoto-u.ac.jp, E-mail: misao@yukawa.kyoto-u.ac.jp [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan)

    2015-04-01

    Physical equivalence between different conformal frames in scalar-tensor theory of gravity is a known fact. However, assuming that matter minimally couples to the metric of a particular frame, which we call the matter Jordan frame, the matter point of view of the universe may vary from frame to frame. Thus, there is a clear distinction between the gravitational sector (curvature and scalar field) and the matter sector. In this paper, focusing on a simple power-law inflation model in the Einstein frame, two examples are considered: a super-inflationary Jordan frame and a bouncing-universe Jordan frame. Then we consider a spectator curvaton minimally coupled to a Jordan frame, and compute its contribution to the curvature perturbation power spectrum. In these specific examples, we find a blue tilt at short scales for the super-inflationary case, and a blue tilt at large scales for the bouncing case.

  19. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    The non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the world-wide level of industrial development requires considerably higher standards of quality of manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Thanks to their properties, being easily obtained and providing very good discrimination of the materials they penetrate, thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. Real-time investigations involve tube-type cameras, CCD cameras and, more recently, CID cameras that capture the image from an appropriate scintillator via a mirror. The analog signal of the camera is then converted into a digital signal by the signal-processing technology included in the camera. The image acquisition card or frame grabber of a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electrical motors that move the object table horizontally and vertically and rotate it. Based on this system, numerous static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects are done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  20. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…

  1. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  2. Frames in super Hilbert modules

    Directory of Open Access Journals (Sweden)

    Mehdi Rashidi-Kouchi

    2018-01-01

    In this paper, we define super Hilbert modules and investigate frames in this space. Super Hilbert modules are generalizations of super Hilbert spaces in the Hilbert C*-module setting. Also, we define frames in a super Hilbert module and characterize them by using the concept of g-frames in a Hilbert C*-module. Finally, disjoint frames in Hilbert C*-modules are introduced and investigated.

  3. Measurement of the timing behaviour of off-the-shelf cameras

    Science.gov (United States)

    Schatz, Volker

    2017-04-01

    This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity of order 10⁻³ to 10⁻² during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
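    One simple way to reduce such an overlap measurement, assuming a step-like illumination that switches on at the programmed delay and stays on: the normalised pixel sum equals 1 while the light covers the whole exposure, falls linearly across the exposure window, and reaches 0 once the light starts after the exposure has ended. Fitting the falling ramp then yields both quantities. The sketch below uses this assumed model and synthetic data, not the paper's procedure.

        import numpy as np

        def trigger_delay_and_exposure(delays, normalized_signal, lo=0.2, hi=0.8):
            """Infer trigger delay and exposure time from an overlap measurement.

            Fits a line to the falling ramp of the normalised signal; its
            intersections with 1 and 0 give the exposure start and end
            relative to the trigger (assumed step-like illumination).
            """
            d = np.asarray(delays, float)
            s = np.asarray(normalized_signal, float)
            ramp = (s > lo) & (s < hi)                    # points on the falling edge
            slope, intercept = np.polyfit(d[ramp], s[ramp], 1)
            start = (1.0 - intercept) / slope             # signal == 1 -> exposure start
            end = (0.0 - intercept) / slope               # signal == 0 -> exposure end
            return start, end - start                     # (trigger delay, exposure time)

        # Synthetic example: 3 us trigger delay, 10 us exposure, delays in microseconds.
        delays = np.linspace(0, 20, 81)
        signal = np.clip((13.0 - delays) / 10.0, 0.0, 1.0)
        print(trigger_delay_and_exposure(delays, signal))  # ~(3.0, 10.0)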

  4. New avenues for framing research

    NARCIS (Netherlands)

    de Vreese, C.H.

    2012-01-01

    In this article, the author reviews the studies in this special issue of the American Behavioral Scientist. It is a strong collection of articles reporting findings from an integrated project that looks at frame building, frames, and effects of frames. The project is part of an exciting large-scale

  5. VIOLENT FRAMES IN ACTION

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; McGrath, Liam R.; Whitney, Paul D.

    2011-11-17

    We present a computational approach to radical rhetoric that leverages the co-expression of rhetoric and action features in discourse to identify violent intent. The approach combines text mining and machine learning techniques with insights from Frame Analysis and theories that explain the emergence of violence in terms of moral disengagement, the violation of sacred values and social isolation in order to build computational models that identify messages from terrorist sources and estimate their proximity to an attack. We discuss a specific application of this approach to a body of documents from and about radical and terrorist groups in the Middle East and present the results achieved.

  6. Continuous Shearlet Tight Frames

    KAUST Repository

    Grohs, Philipp

    2010-10-22

    Based on the shearlet transform we present a general construction of continuous tight frames for L2(ℝ2) from any sufficiently smooth function with anisotropic moments. This includes for example compactly supported systems, piecewise polynomial systems, or both. From our earlier results in Grohs (Technical report, KAUST, 2009) it follows that these systems enjoy the same desirable approximation properties for directional data as the previous bandlimited and very specific constructions due to Kutyniok and Labate (Trans. Am. Math. Soc. 361:2719-2754, 2009). We also show that the representation formulas we derive are in a sense optimal for the shearlet transform. © 2010 Springer Science+Business Media, LLC.
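    For orientation, the classical continuous shearlet system underlying such constructions combines a parabolic scaling matrix with a shear matrix (standard form, stated here for reference):
    \[
    \psi_{a,s,t}(x) \;=\; a^{-3/4}\,\psi\!\bigl(A_a^{-1}S_s^{-1}(x-t)\bigr),
    \qquad
    A_a=\begin{pmatrix} a & 0\\ 0 & \sqrt{a}\end{pmatrix},\quad
    S_s=\begin{pmatrix} 1 & s\\ 0 & 1\end{pmatrix},
    \]
    \[
    \mathcal{SH}_\psi f(a,s,t) \;=\; \langle f, \psi_{a,s,t}\rangle,
    \qquad a>0,\; s\in\mathbb{R},\; t\in\mathbb{R}^2 .
    \]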

  7. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  8. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components. Temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288). These allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of the temporal noise of photo- and videocameras. It is based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated shot and dark temporal noises of cameras consistently in real time. The modified ASNT method is used. Estimation was performed for the cameras: consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time of registering and processing the frames used for temporal noise estimation was measured. Using a standard computer, frames were registered and processed within a fraction of a second to several seconds. Also the
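
    The two-frame idea behind such noise estimation can be illustrated with a short sketch (a generic difference-of-frames estimate under the usual assumption of a static scene and stationary illumination, not the authors' segmentation algorithm): the temporal noise follows from the difference of two frames, and the shot-noise component grows with the square root of the signal, consistent with the Poisson behaviour noted above.

        import numpy as np

        def temporal_noise_from_two_frames(frame_a, frame_b):
            """Estimate temporal noise from two frames of the same static scene.

            The difference of two independent frames carries twice the temporal
            variance, hence the division by sqrt(2)."""
            a = frame_a.astype(float)
            b = frame_b.astype(float)
            noise_rms = (a - b).std() / np.sqrt(2.0)
            mean_signal = 0.5 * (a.mean() + b.mean())
            return mean_signal, noise_rms

        # Synthetic illustration: Poisson (shot) noise plus a small Gaussian dark noise
        rng = np.random.default_rng(0)
        scene = np.full((480, 640), 400.0)        # "true" signal in electrons
        f1 = rng.poisson(scene) + rng.normal(0, 2, scene.shape)
        f2 = rng.poisson(scene) + rng.normal(0, 2, scene.shape)
        signal, noise = temporal_noise_from_two_frames(f1, f2)
        print(f"mean signal ~ {signal:.0f} e-, temporal noise ~ {noise:.1f} e- "
              f"(expected ~ {np.sqrt(400 + 4):.1f} e-)")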

  9. Radioisotope instruments

    CERN Document Server

    Cameron, J F; Silverleaf, D J

    1971-01-01

    International Series of Monographs in Nuclear Energy, Volume 107: Radioisotope Instruments, Part 1 focuses on the design and applications of instruments based on the radiation released by radioactive substances. The book first offers information on the physical basis of radioisotope instruments; technical and economic advantages of radioisotope instruments; and radiation hazard. The manuscript then discusses commercial radioisotope instruments, including radiation sources and detectors, computing and control units, and measuring heads. The text describes the applications of radioisotop

  10. Instrument Remote Control via the Astronomical Instrument Markup Language

    Science.gov (United States)

    Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard

    1998-01-01

    The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human readable manner, has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (API) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control applies to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.
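
    As a rough illustration of describing an instrument in XML and handling it from platform-independent code, the sketch below parses a small, purely hypothetical instrument description with the Python standard library; the element and attribute names are invented for illustration and are not the actual AIML schema.

        import xml.etree.ElementTree as ET

        # Hypothetical description in the spirit of an Instrument Markup Language.
        DESCRIPTION = """
        <instrument name="example-camera">
          <command name="setExposure">
            <parameter name="milliseconds" type="int" min="1" max="10000"/>
          </command>
          <telemetry name="detectorTemperature" unit="K"/>
        </instrument>
        """

        def summarize(xml_text):
            """Print the commands and telemetry points declared in the description."""
            root = ET.fromstring(xml_text.strip())
            print(f"Instrument: {root.get('name')}")
            for cmd in root.findall("command"):
                params = ", ".join(f"{p.get('name')} ({p.get('type')})"
                                   for p in cmd.findall("parameter"))
                print(f"  command {cmd.get('name')}: {params}")
            for tm in root.findall("telemetry"):
                print(f"  telemetry {tm.get('name')} [{tm.get('unit')}]")

        summarize(DESCRIPTION)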

  11. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    Energy Technology Data Exchange (ETDEWEB)

    Roos, E. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nebeker, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-08

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera, and the test results revealed that the camera exceeded the performance of presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government export regulations. These changes were directed by Export Control regulations on the export of high technology items that can be used to develop military weapons. The IR camera was on the list of items subject to export controls. The ISTC and Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  12. on Goal Framing

    Directory of Open Access Journals (Sweden)

    Eulàlia P. Abril

    2014-01-01

    Full Text Available In response to the vast and sometimes conceptually inconsistent literature on valence framing, Levin and colleagues (1998) developed a valence framing typology that organises the different findings in terms of risky choice framing, attribute framing, and goal framing. This study contributes to the goal framing literature by (a) applying it in the context of a social issue, extreme child poverty; and (b) examining the affective mechanisms through which goal framing is persuasive. Experimental results (N = 197) showed that exposure to the loss-framed message produced greater support for public policies seeking to eradicate child poverty than the gain-framed message. The results also revealed that negative affect mediates support for such public policies. These findings suggest that, in the context of social support for the poor, the persuasiveness of loss framing is facilitated when participants experience negative affect.

  13. Balinese Frame of Reference

    Directory of Open Access Journals (Sweden)

    I Nyoman Aryawibawa

    2016-04-01

    Full Text Available Abstract: Balinese Frame of Reference. Wassmann and Dasen (1998) did a study on the acquisition of Balinese frames of reference. They pointed out that, in addition to the dominant use of the absolute system, the use of the relative system was also observed. This article aims at verifying Wassmann and Dasen's study. Employing monolingual Balinese speakers and using linguistic and non-linguistic tasks, Aryawibawa (2010, 2012, 2015) showed that Balinese subjects used an absolute system dominantly in responding to the two tasks, e.g. The man is north/south/east/west of the car. Unlike Wassmann and Dasen's results, no relative system was used by the subjects in solving the tasks. Instead of the relative system, an intrinsic system was also observed in this study, even though it was infrequent. The article concludes that the absolute system is dominantly employed by Balinese speakers in describing spatial relations in Balinese. The use of the system seems to affect their cognitive functions.

  14. Cognitive framing in action.

    Science.gov (United States)

    Huhn, John M; Potts, Cory Adam; Rosenbaum, David A

    2016-06-01

    Cognitive framing effects have been widely reported in higher-level decision-making and have been ascribed to rules of thumb for quick thinking. No such demonstrations have been reported for physical action, as far as we know, but they would be expected if cognition for physical action is fundamentally similar to cognition for higher-level decision-making. To test for such effects, we asked participants to reach for a horizontally-oriented pipe to move it from one height to another while turning the pipe 180° to bring one end (the "business end") to a target on the left or right. From a physical perspective, participants could have always rotated the pipe in the same angular direction no matter which end was the business end; a given participant could have always turned the pipe clockwise or counter-clockwise. Instead, our participants turned the business end counter-clockwise for left targets and clockwise for right targets. Thus, the way the identical physical task was framed altered the way it was performed. This finding is consistent with the hypothesis that cognition for physical action is fundamentally similar to cognition for higher-level decision-making. A tantalizing possibility is that higher-level decision heuristics have roots in the control of physical action, a hypothesis that accords with embodied views of cognition. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Mapping in inertial frames

    International Nuclear Information System (INIS)

    Arunasalam, V.

    1989-05-01

    World space mapping in inertial frames is used to examine the Lorentz covariance of symmetry operations. It is found that the Galilean invariant concepts of simultaneity (S), parity (P), and time reversal symmetry (T) are not Lorentz covariant concepts for inertial observers. That is, just as the concept of simultaneity has no significance independent of the Lorentz inertial frame, likewise so are the concepts of parity and time reversal. However, the world parity (W) [i.e., the space-time reversal symmetry (P-T)] is a truly Lorentz covariant concept. Indeed, it is shown that only those mapping matrices M that commute with the Lorentz transformation matrix L (i.e., [M,L] = 0) are the ones that correspond to manifestly Lorentz covariant operations. This result is in accordance with the spirit of the world space Mach's principle. Since the Lorentz transformation is an orthogonal transformation while the Galilean transformation is not an orthogonal transformation, the formal relativistic space-time mapping theory used here does not have a corresponding non-relativistic counterpart. 12 refs
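
    The covariance criterion referred to in this record can be stated compactly (a sketch in standard 4x4 matrix notation acting on world-space coordinates (ct, x, y, z), not a reproduction of the paper's derivation):

        % A mapping matrix M corresponds to a manifestly Lorentz-covariant operation iff
        [M, L] \equiv ML - LM = 0 \quad \text{for every Lorentz transformation } L .

        % Parity, time reversal and their product ("world parity") on (ct, x, y, z):
        P = \mathrm{diag}(1,-1,-1,-1), \qquad
        T = \mathrm{diag}(-1,1,1,1), \qquad
        W = PT = -\mathbb{1}_4 .

        % W commutes with every L, whereas P and T separately fail to commute with boosts,
        % so only the space-time reversal W is a Lorentz-covariant symmetry operation.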

  16. System for whole body imaging and count profiling with a scintillation camera

    International Nuclear Information System (INIS)

    Kaplan, E.; Cooke, M.B.D.

    1976-01-01

    The present invention relates to a method of and apparatus for the radionuclide imaging of the whole body of a patient using an unmodified scintillation camera which permits a patient to be continuously moved under or over the stationary camera face along one axis at a time, parallel passes being made to increase the dimension of the other axis. The system includes a unique electrical circuit which makes it possible to digitally generate new matrix coordinates by summing the coordinates of a first fixed reference frame and the coordinates of a second moving reference frame. 19 claims, 7 figures
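
    The coordinate-summing idea of the circuit described above can be mimicked digitally in a few lines (a schematic Python sketch, not the patented analog implementation): the event coordinates reported in the stationary camera frame are added to the instantaneous offset of the moving patient table to obtain whole-body matrix coordinates.

        def whole_body_coordinates(event_xy, table_offset_xy):
            """Combine camera-frame event coordinates with the moving-table offset.

            event_xy        -- (x, y) of a scintillation event in the camera's fixed frame
            table_offset_xy -- (x, y) displacement of the patient table at event time
            """
            ex, ey = event_xy
            ox, oy = table_offset_xy
            return ex + ox, ey + oy

        # Example: an event at (12, 40) while the table has advanced 250 units along y
        print(whole_body_coordinates((12, 40), (0, 250)))   # -> (12, 290)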

  17. Multi-view 3D human pose estimation combining single-frame recovery, temporal integration and model adaptation

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrilla, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single frame pose recovery, temporal integration and model adaptation. Single frame pose recovery consists of a hypothesis

  18. X-Ray Powder Diffraction with Guinier - Haegg Focusing Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Allan

    1970-12-15

    The Guinier - Haegg focusing camera is discussed with reference to its use as an instrument for rapid phase analysis. An actual camera and the alignment procedure employed in its setting up are described. The results obtained with the instrument are compared with those obtained with Debye - Scherrer cameras and powder diffractometers. Exposure times of 15 - 30 minutes with compounds of simple structure are roughly one-sixth of those required for Debye - Scherrer patterns. Coupled with the lower background resulting from the use of a monochromatic X-ray beam, the shorter exposure time gives a ten-fold increase in sensitivity for the detection of minor phases as compared with the Debye - Scherrer camera. Attention is paid to the precautions taken to obtain reliable Bragg angles from Guinier - Haegg film measurements, with particular reference to calibration procedures. The evaluation of unit cell parameters from Guinier - Haegg data is discussed together with the application of tests for the presence of angle-dependent systematic errors. It is concluded that with proper calibration procedures and least squares treatment of the data, accuracies of the order of 0.005% are attainable. A compilation of diffraction data for a number of compounds examined in the Active Central Laboratory at Studsvik is presented to exemplify the scope of this type of powder camera.

  19. X-Ray Powder Diffraction with Guinier - Haegg Focusing Cameras

    International Nuclear Information System (INIS)

    Brown, Allan

    1970-12-01

    The Guinier - Haegg focusing camera is discussed with reference to its use as an instrument for rapid phase analysis. An actual camera and the alignment procedure employed in its setting up are described. The results obtained with the instrument are compared with those obtained with Debye - Scherrer cameras and powder diffractometers. Exposure times of 15 - 30 minutes with compounds of simple structure are roughly one-sixth of those required for Debye - Scherrer patterns. Coupled with the lower background resulting from the use of a monochromatic X-ray beam, the shorter exposure time gives a ten-fold increase in sensitivity for the detection of minor phases as compared with the Debye - Scherrer camera. Attention is paid to the precautions taken to obtain reliable Bragg angles from Guinier - Haegg film measurements, with particular reference to calibration procedures. The evaluation of unit cell parameters from Guinier - Haegg data is discussed together with the application of tests for the presence of angle-dependent systematic errors. It is concluded that with proper calibration procedures and least squares treatment of the data, accuracies of the order of 0.005% are attainable. A compilation of diffraction data for a number of compounds examined in the Active Central Laboratory at Studsvik is presented to exemplify the scope of this type of powder camera

  20. Identifying issue frames in text.

    Directory of Open Access Journals (Sweden)

    Eyal Sagi

    Full Text Available Framing, the effect of context on cognitive processes, is a prominent topic of research in psychology and public opinion research. Research on framing has traditionally relied on controlled experiments and manually annotated document collections. In this paper we present a method that allows for quantifying the relative strengths of competing linguistic frames based on corpus analysis. This method requires little human intervention and can therefore be efficiently applied to large bodies of text. We demonstrate its effectiveness by tracking changes in the framing of terror over time and comparing the framing of abortion by Democrats and Republicans in the U.S.
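
    In the spirit of the corpus-based method described above, the relative strength of competing frames can be approximated by counting frame-specific vocabulary; the sketch below is a generic keyword-counting illustration with invented word lists, not the authors' algorithm.

        import re
        from collections import Counter

        # Hypothetical keyword lists standing in for two competing frames.
        FRAMES = {
            "security": {"threat", "attack", "defend", "protect"},
            "liberty":  {"rights", "freedom", "privacy", "liberty"},
        }

        def frame_strengths(documents, frames=FRAMES):
            """Relative frequency of each frame's vocabulary across the documents."""
            counts, total = Counter(), 0
            for doc in documents:
                tokens = re.findall(r"[a-z']+", doc.lower())
                total += len(tokens)
                for name, vocab in frames.items():
                    counts[name] += sum(1 for t in tokens if t in vocab)
            return {name: counts[name] / max(total, 1) for name in frames}

        docs = [
            "The new law will protect citizens from attack.",
            "Critics argue the law erodes privacy and basic rights.",
        ]
        print(frame_strengths(docs))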

  1. Making Molecular Movies: 10,000,000,000,000 Frames per Second

    International Nuclear Information System (INIS)

    Gaffney, Kelly

    2006-01-01

    Movies have transformed our perception of the world. With slow motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten billionth of a foot with a frame rate of 10 trillion frames per second. SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.

  2. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. As a result of this development, the underwater camera has 2 lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, 6 or so spent-fuel IDs are identified at a distance of 1 or 1.5 m at a time, and a 0.3 mmφ pin-hole is recognized at a distance of 1.5 m with 20 times zoom-up. Noise caused by radiation of less than 15 Gy/h does not affect the images. (author)

  3. Message framing and perinatal decisions.

    Science.gov (United States)

    Haward, Marlyse F; Murphy, Ryan O; Lorenz, John M

    2008-07-01

    The purpose of this study was to explore the effect of information framing on parental decisions about resuscitation of extremely premature infants. Secondary outcomes focused on elucidating the impact of other variables on treatment choices and determining whether those effects would take precedence over any framing effects. This confidential survey study was administered to adult volunteers via the Internet. The surveys depicted a hypothetical vignette of a threatened delivery at gestational age of 23 weeks, with prognostic outcome information framed as either survival with lack of disability (positive frame) or chance of dying and likelihood of disability among survivors (negative frame). Participants were randomly assigned to receive either the positively or negatively framed vignette. They were then asked to choose whether they would prefer resuscitation or comfort care. After completing the survey vignette, participants were directed to a questionnaire designed to test the secondary hypothesis and to explore possible factors associated with treatment decisions. A total of 146 subjects received prognostic information framed as survival data and 146 subjects received prognostic information framed as mortality data. Overall, 24% of the sample population chose comfort care and 76% chose resuscitation. A strong trend was detected toward a framing effect on treatment preference; respondents for whom prognosis was framed as survival data were more likely to elect resuscitation. This framing effect was significant in a multivariate analysis controlling for religiousness, parental status, and beliefs regarding the sanctity of life. Of these covariates, only religiousness modified susceptibility to framing; participants who were not highly religious were significantly more likely to be influenced to opt for resuscitation by the positive frame than were participants who were highly religious. Framing bias may compromise efforts to approach prenatal counseling in a

  4. Underwater television camera for monitoring inner side of pressure vessel

    International Nuclear Information System (INIS)

    Takayama, Kazuhiko.

    1997-01-01

    An underwater television support device, equipped with a rotatable and vertically movable underwater television camera, and an underwater television camera controlling device, which monitors the images of the inside of the reactor core photographed by the underwater television camera and controls the position of the underwater television camera and the underwater light, are disposed on an upper lattice plate of a reactor pressure vessel. Both are electrically connected by way of a cable so that the inside of the reactor core can be observed rapidly by the underwater television camera. Reproducibility is extremely satisfactory because the camera position and image information are efficiently concentrated during inspection and observation. As a result, the number of steps for periodical inspection can be reduced, shortening the days required for the periodical inspection. Since there is no need to withdraw fuel assemblies over a wide reactor core region, and the device can be used with the fuel assemblies left as they are in the reactor, it is suitable for inspection of detectors for nuclear instrumentation. (N.H.)

  5. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid state detector formed of high purity germanium. The central arrangement of the camera operates to effect the carrying out of a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low level information evaluation is provided, serving to enhance the signal processing efficiency of the camera
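
    Trapezoidal shaping by gated integration of the general kind mentioned above is nowadays often implemented digitally as the difference of two delayed moving sums; the sketch below shows that textbook digital form and does not claim to reproduce the patented analog circuit.

        import numpy as np

        def trapezoidal_filter(x, rise=20, flat=10):
            """Shape a step-like pulse into a trapezoid.

            The output is a moving sum of length `rise` minus the same moving sum
            delayed by `rise + flat` samples; a unit step becomes a trapezoid with
            the given rise time and flat top."""
            x = np.asarray(x, dtype=float)
            msum = np.convolve(x, np.ones(rise), mode="full")[: len(x)]
            delayed = np.zeros_like(msum)
            delayed[rise + flat:] = msum[: len(msum) - rise - flat]
            return msum - delayed

        # Example: a noisy step standing in for an integrated detector pulse
        rng = np.random.default_rng(1)
        pulse = np.concatenate([np.zeros(100), np.ones(300)]) + rng.normal(0, 0.05, 400)
        shaped = trapezoidal_filter(pulse)
        print(f"flat-top amplitude ~ {shaped.max():.1f} (expected ~ 20 for a unit step)")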

  6. Online Tracking of Outdoor Lighting Variations for Augmented Reality with Moving Cameras

    OpenAIRE

    Liu , Yanli; Granier , Xavier

    2012-01-01

    International audience; In augmented reality, one of the key tasks in achieving a convincing visual appearance consistency between virtual objects and video scenes is to have coherent illumination along the whole sequence. As outdoor illumination is largely dependent on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key ide...

  7. ACCURACY ASSESSMENT OF GO PRO HERO 3 (BLACK) CAMERA IN UNDERWATER ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    P. Helmholz,

    2016-06-01

    Full Text Available Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of underwater photogrammetry, especially with respect to the fact that the change of the medium to below water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included the handling of the camera in a controlled manner, where the camera was only dunked into the water tank using 7MP and 12MP resolution, and a rough handling, where the camera was shaken as well as being removed from the waterproof case, using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7MP test series, the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12MP test series the maximum rms value is 0.653 mm.

  8. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    Science.gov (United States)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of underwater photogrammetry, especially with respect to the fact that the change of the medium to below water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included the handling of the camera in a controlled manner, where the camera was only dunked into the water tank using 7MP and 12MP resolution, and a rough handling, where the camera was shaken as well as being removed from the waterproof case, using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7MP test series, the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12MP test series the maximum rms value is 0.653 mm.

  9. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  10. Framing the Manager

    DEFF Research Database (Denmark)

    Holmgreen, Lise-Lotte

    2013-01-01

    Genres are ways for organisations of discursively interacting with the surrounding world, with the aim of achieving specific disciplinary goals (Bhatia 2004). As such, the management job ad has the objective of finding the right candidate for the management job advertised (Norlyk 2006......). In this process, framing (Evans & Green 2006; Fillmore 1982; Kövecses 2006; Lakoff 1987, 1996) plays a salient role in conceptualising the profile and qualities of the preferred candidate, drawing on established cultural models of what constitutes the perfect leader. Thus, in a Danish setting we may talk of two...... in this realisation, the fact that one of the two models, the ‘goal-oriented motivator’ model, seems to be monopolising the genre raises a number of issues that need to be addressed: How is this model realised conceptually and linguistically? Why does this model continue to be the Danish business world’s preferred...

  11. Shield support frame. Schildausbaugestell

    Energy Technology Data Exchange (ETDEWEB)

    Plaga, K.

    1981-09-17

    A powered shield support frame for coal seams is described, comprising two bottom sliding shoes, a large-area gob shield and a large-area roof assembly, all movably joined together. The sliding shoes and the gob shield are joined by a lemniscate guide. Two hydraulic props are arranged at the face side at one third of the length of the sliding shoes and at the goaf side at one third of the length of the roof assembly. A nearly horizontal pushing prop unit joins the bottom wall sliding shoes to the goaf-side lemniscate guide. This assembly can be applied to seams with a thickness down to 45 cm. (OGR).

  12. Voz sobre frame relay

    OpenAIRE

    D´Elia, Gabriel Anibal

    2000-01-01

    This thesis deals with voice over frame relay (VOFR), from the digitisation of voice to its transmission over such a network, and also compares it with other transport options such as VoIP. Given the characteristics of the frame relay protocol and its availability, it was chosen as the most appropriate medium for the integrated transmission of voice and data over a single network. The work begins with a brief explanation of voice, its digitisation and its current form of transmission over a di...

  13. Riesz frames and approximation of the frame coefficients

    DEFF Research Database (Denmark)

    Casazza, P.; Christensen, Ole

    1998-01-01

    A frame is a family {f_i}_{i=1}^∞ of elements in a Hilbert space H with the property that every element in H can be written as an (infinite) linear combination of the frame elements. Frame theory describes how one can choose the corresponding coefficients, which are called frame coefficients. From the mathematical point of view this is gratifying, but for applications it is a problem that the calculation requires inversion of an operator on H. The projection method is introduced in order to avoid this problem. The basic idea is to consider finite subfamilies {f_i}_{i=1}^n of the frame and the orthogonal projection P_n onto their span. For f in H, P_n f has a representation as a linear combination of f_i, i = 1, 2, ..., n, and the corresponding coefficients can be calculated using finite dimensional methods. We find conditions implying that those coefficients converge to the correct frame coefficients as n→∞, in which case we have...
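
    In standard frame-theory notation, the exact expansion and the finite-dimensional approximation produced by the projection method read as follows (background formulas only, not the specific results of the paper):

        % Frame operator S f = \sum_i \langle f, f_i \rangle f_i ; exact expansion
        f = \sum_{i=1}^{\infty} \langle f, S^{-1} f_i \rangle \, f_i ,
        \qquad c_i = \langle f, S^{-1} f_i \rangle \ \text{(frame coefficients)} .

        % Projection method: P_n is the orthogonal projection onto span\{f_1,\dots,f_n\},
        P_n f = \sum_{i=1}^{n} c_i^{(n)} f_i ,
        \qquad c_i^{(n)} \to c_i \ (n \to \infty) \ \text{under suitable conditions, e.g. for Riesz frames.}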

  14. Behaviour of Strengthened RC Frames with Eccentric Steel Braced Frames

    Directory of Open Access Journals (Sweden)

    Kamanli Mehmet

    2017-01-01

    Full Text Available After the devastating earthquakes of recent years, strengthening of reinforced concrete buildings has become an important research topic. Reinforced concrete buildings can be strengthened by steel braced frames. These steel braced frames may be concentric or eccentric, as indicated in the Turkish Earthquake Code 2007. In this study, pushover analyses of one 1/3-scale reinforced concrete frame and four 1/3-scale reinforced concrete frames strengthened with internal eccentric steel braced frames were conducted with the SAP2000 program. According to the results of the analyses, the load-displacement curves of the specimens were compared and evaluated. Adding eccentric steel braces to the bare frame decreased the story drift and significantly increased the strength, stiffness and energy dissipation capacity. With this strengthening method, the lateral load carrying capacity, stiffness and dissipated energy of the structure can be increased.

  15. Behaviour of Strengthened RC Frames with Eccentric Steel Braced Frames

    Science.gov (United States)

    Kamanli, Mehmet; Unal, Alptug

    2017-10-01

    After the devastating earthquakes of recent years, strengthening of reinforced concrete buildings has become an important research topic. Reinforced concrete buildings can be strengthened by steel braced frames. These steel braced frames may be concentric or eccentric, as indicated in the Turkish Earthquake Code 2007. In this study, pushover analyses of one 1/3-scale reinforced concrete frame and four 1/3-scale reinforced concrete frames strengthened with internal eccentric steel braced frames were conducted with the SAP2000 program. According to the results of the analyses, the load-displacement curves of the specimens were compared and evaluated. Adding eccentric steel braces to the bare frame decreased the story drift and significantly increased the strength, stiffness and energy dissipation capacity. With this strengthening method, the lateral load carrying capacity, stiffness and dissipated energy of the structure can be increased.

  16. Real-Time Acquisition of High Quality Face Sequences from an Active Pan-Tilt-Zoom Camera

    DEFF Research Database (Denmark)

    Haque, Mohammad A.; Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Traditional still camera-based facial image acquisition systems in surveillance applications produce low quality face images. This is mainly due to the distance between the camera and subjects of interest; furthermore, people in such videos usually move around and change their head poses and facial expressions. This paper presents a pan-tilt-zoom (PTZ) camera-based real-time high-quality face image acquisition system, which utilizes the pan-tilt-zoom parameters of a camera to focus on a human face in a scene and employs a face quality assessment method to log the best quality faces from the captured frames. The system consists of four modules: face detection, camera control, face tracking, and face quality assessment before logging. Experimental results show that the proposed system can effectively log the high quality faces from the active camera in real time (an average of 61.74 ms was spent per frame) with an accuracy of 85.27% compared to human annotated data.

  17. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

    Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position sensitive detector yields the coordinates of the gamma ray interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of the algorithms for coordinate computation, estimation of the point of emission, generation of the image and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of the basic camera design problems. This report describes a PC based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for the computation of coordinates and spatial distortion removal are allowed, in addition to the simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data is processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
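
    The coordinate-computation step that such a simulation must model is, in its simplest textbook form, the classical Anger (centroid) logic; the sketch below illustrates that estimate from PMT signals and is not taken from the SIMCAM package itself.

        import numpy as np

        def anger_position(pmt_xy, pmt_signals):
            """Estimate the (x, y) interaction position as the signal-weighted centroid.

            pmt_xy      -- array of shape (N, 2) with the PMT centre coordinates
            pmt_signals -- array of shape (N,) with the signal measured by each PMT
            """
            xy = np.asarray(pmt_xy, dtype=float)
            s = np.asarray(pmt_signals, dtype=float)
            position = (xy * s[:, None]).sum(axis=0) / s.sum()
            return position, s.sum()              # estimated position and energy signal

        # Example: seven PMTs on a hexagonal patch, event close to the central tube
        pmts = np.array([[0, 0], [3, 0], [-3, 0], [1.5, 2.6],
                         [-1.5, 2.6], [1.5, -2.6], [-1.5, -2.6]])
        signals = np.array([100, 35, 30, 33, 28, 34, 29])
        position, energy = anger_position(pmts, signals)
        print(f"estimated position {position}, energy signal {energy}")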

  18. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

    Full Text Available We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects is of utmost importance in a crowded scene. Our technique assumes a partially overlapped multi-camera setup where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (the target) in new frames. A color pattern update scheme has been considered to further optimize the efficiency of the object tracking when the object pattern changes due to object motion in the fields of view of the cameras. An evaluation of our approach is presented with results on the PETS2007 dataset.
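
    The epipolar-geometry step used to relate overlapping views can be sketched as follows (a generic computation, with a hypothetical fundamental matrix, of the epipolar line in the second camera for a target seen in the first camera; the block-search and colour-update stages of the paper are not reproduced here).

        import numpy as np

        def epipolar_line(fundamental, point_xy):
            """Epipolar line a*x + b*y + c = 0 in camera 2 for a point seen in camera 1."""
            x = np.array([point_xy[0], point_xy[1], 1.0])
            line = fundamental @ x
            return line / np.linalg.norm(line[:2])     # normalise so (a, b) is a unit normal

        def candidate_positions(line, x_range, step=8):
            """Sample candidate block centres along the epipolar line for a block search."""
            a, b, c = line
            xs = np.arange(*x_range, step)
            ys = -(a * xs + c) / b                      # assumes the line is not vertical
            return np.stack([xs, ys], axis=1)

        # Hypothetical fundamental matrix between two partially overlapping views
        F = np.array([[ 0.0,  -1e-5,  4e-3],
                      [ 1e-5,  0.0,  -8e-3],
                      [-4e-3,  8e-3,  1.0 ]])
        line = epipolar_line(F, (320, 240))
        print(candidate_positions(line, (0, 640))[:3])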

  19. Radiation damage of the PCO Pixelfly VGA CCD camera of the BES system on KSTAR tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Náfrádi, Gábor, E-mail: nafradi@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Kovácsik, Ákos, E-mail: kovacsik.akos@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Pór, Gábor, E-mail: por@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Lampert, Máté, E-mail: lampert.mate@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Un Nam, Yong, E-mail: yunam@nfri.re.kr [NFRI, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon 305-806 (Korea, Republic of); Zoletnik, Sándor, E-mail: zoletnik.sandor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary)

    2015-01-11

    A PCO Pixelfly VGA CCD camera, which is a part of the Beam Emission Spectroscopy (BES) diagnostic system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device used for spatial calibrations, suffered serious radiation damage: white pixel defects were generated in it. The main goal of this work was to identify the origin of the radiation damage and to give solutions to avoid it. A Monte Carlo N-Particle eXtended (MCNPX) model was built using the Monte Carlo Modeling Interface Program (MCAM) and calculations were carried out to predict the neutron and gamma-ray fields at the camera position. Besides the MCNPX calculations, pure gamma-ray irradiations of the CCD camera were carried out in the Training Reactor of BME. Before, during and after the irradiations numerous frames were taken with the camera with 5 s long exposure times. The evaluation of these frames showed that with the applied high gamma-ray dose (1.7 Gy) and dose rate levels (up to 2 Gy/h) the number of white pixels did not increase. We found that the origin of the white pixel generation was the neutron-induced thermal hopping of electrons, which means that in the future only neutron shielding is necessary around the CCD camera. Another solution could be to replace the CCD camera with a more radiation tolerant one, for example with a suitable CMOS camera, or to apply both solutions simultaneously.

  20. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities and operates on the four head position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization, subsequent processing of the energy signal in a multichannel analyzer, transmission of the data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  1. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness of video mosaic construction under a diverse range of conditions, including indoor and outdoor environments, varying illumination and the presence of in-scene motion, on varying computational platforms.
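
    A minimal version of the frame-to-mosaic registration underlying such a system can be sketched with OpenCV; this generic ORB-plus-homography pipeline is stated as an assumption for illustration and omits the paper's loop closing, redundancy management and blending.

        import cv2
        import numpy as np

        def register_frame(prev_gray, new_gray):
            """Estimate the homography mapping the new frame onto the previous one."""
            orb = cv2.ORB_create(1000)
            k1, d1 = orb.detectAndCompute(prev_gray, None)
            k2, d2 = orb.detectAndCompute(new_gray, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
            src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            return H

        def add_to_mosaic(mosaic, new_frame, H_new_to_mosaic):
            """Warp the new frame into mosaic coordinates and paste it where the mosaic is empty."""
            h, w = mosaic.shape[:2]
            warped = cv2.warpPerspective(new_frame, H_new_to_mosaic, (w, h))
            mask = (mosaic.sum(axis=2) == 0) & (warped.sum(axis=2) > 0)
            mosaic[mask] = warped[mask]
            return mosaic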

  2. Development of nuclear imaging instrument and software

    International Nuclear Information System (INIS)

    Kim, Jang Hee; Chung Jae Myung; Nam, Sang Won; Chang, Hyung Uk

    1999-03-01

    In medical diagnosis, nuclear medicine instruments using radioactive isotopes are commonly utilized. In foreign countries, the medical application and development of the most advanced nuclear medicine instruments, such as single photon emission computed tomography (SPECT) and positron emission tomography (PET), have been extensively carried out. However, in Korea, such highly expensive instruments have all been imported, at the cost of foreign currency. Since 1997, much effort has been made towards the development of radionuclide medical instruments and the drive for domestic production, in order to establish our own technologies and to balance the international payments, under the support of the Ministry of Science and Technology. At the present time, 180 nuclear imaging instruments are in operation and 60 of them are analog cameras. In an analog camera, a vector X-Y monitor is needed for image display. Since the analog camera signal cannot be processed in digital form, it is difficult to transfer and store the image data. The image displayed on the monitor must be stored in the form of Polaroid or X-ray film. To overcome these disadvantages, we developed a computer interface system so that the performance of the analog camera becomes comparable with that of a digital camera. The final objective of the research is to reconstruct, using the interface system developed in this research, the image data transmitted to the personal computer in the form of a generalized data file

  3. Instrumental interaction

    OpenAIRE

    Luciani , Annie

    2007-01-01

    International audience; The expression instrumental interaction has been introduced by Claude Cadoz to identify a human-object interaction during which a human manipulates a physical object - an instrument - in order to perform a manual task. Classical examples of instrumental interaction are all the professional manual tasks: playing violin, cutting fabrics by hand, moulding a paste, etc. Instrumental interaction differs from other types of interaction (called symbolic or iconic interactio...

  4. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician, Johannes Kepler used a small tent camera obscura to trace the scenery.

  5. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have been dramatically diffused. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, providing also some details about the recent past (from the Digital Still Camera up to today) and forthcoming key issues.

  6. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  7. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
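
    The rotation invariance of visual angles that the approach above exploits is easy to state in code (a generic sketch; the residual function and the bootstrap-jackknife machinery of the study are not reproduced).

        import numpy as np

        def visual_angle(ray_a, ray_b):
            """Angle (radians) between two projection rays through the camera centre."""
            a = ray_a / np.linalg.norm(ray_a)
            b = ray_b / np.linalg.norm(ray_b)
            return np.arccos(np.clip(a @ b, -1.0, 1.0))

        def rotate_about_z(rays, angle):
            """Apply a pure camera rotation about the optical (z) axis."""
            c, s = np.cos(angle), np.sin(angle)
            R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            return rays @ R.T

        # Two image points back-projected to rays (pinhole camera, focal length 1)
        rays = np.array([[0.1, 0.2, 1.0], [-0.3, 0.1, 1.0]])
        before = visual_angle(*rays)
        after = visual_angle(*rotate_about_z(rays, 0.7))
        print(f"visual angle before rotation {before:.6f} rad, after {after:.6f} rad")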

  8. The Atacama Cosmology Telescope: Instrument

    Science.gov (United States)

    Thornton, Robert J.; Atacama Cosmology Telescope Team

    2010-01-01

    The 6-meter Atacama Cosmology Telescope (ACT) is making detailed maps of the Cosmic Microwave Background at Cerro Toco in northern Chile. In this talk, I focus on the design and operation of the telescope and its commissioning instrument, the Millimeter Bolometer Array Camera. The camera contains three independent sets of optics that operate at 148 GHz, 217 GHz, and 277 GHz with arcminute resolution, each of which couples to a 1024-element array of Transition Edge Sensor (TES) bolometers. I will report on the camera performance, including the beam patterns, optical efficiencies, and detector sensitivities. Under development for ACT is a new polarimeter based on feedhorn-coupled TES devices that have improved sensitivity and are planned to operate at 0.1 K.

  9. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    Directory of Open Access Journals (Sweden)

    Nicholas T. Bott

    2017-06-01

    Full Text Available Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as
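
    The two agreement statistics reported above are straightforward to compute; the sketch below, with made-up scores, uses NumPy for the Pearson correlation and a direct two-rater implementation of Cohen's kappa.

        import numpy as np

        def pearson_r(x, y):
            """Pearson correlation between two score vectors."""
            return np.corrcoef(x, y)[0, 1]

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa for two raters assigning the same set of categories."""
            a, b = np.asarray(rater_a), np.asarray(rater_b)
            categories = np.union1d(a, b)
            p_observed = np.mean(a == b)
            p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
            return (p_observed - p_expected) / (1.0 - p_expected)

        # Made-up example: novelty preference scores and categorical calls from two methods
        eye_tracker = np.array([0.62, 0.71, 0.55, 0.80, 0.66])
        web_camera  = np.array([0.60, 0.74, 0.57, 0.78, 0.63])
        print(f"Pearson r = {pearson_r(eye_tracker, web_camera):.2f}")

        rater_1 = np.array(["novel", "familiar", "novel", "novel", "familiar"])
        rater_2 = np.array(["novel", "familiar", "novel", "familiar", "familiar"])
        print(f"Cohen's kappa = {cohens_kappa(rater_1, rater_2):.2f}")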

  10. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dong Seop Kim

    2018-03-01

    Full Text Available Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
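
    A toy version of a convolutional shadow detector of the general kind described above can be written in a few lines of PyTorch; the layer layout below is invented for illustration and is far smaller than the network evaluated in the paper.

        import torch
        import torch.nn as nn

        class TinyShadowNet(nn.Module):
            """Small patch classifier: shadow vs. non-shadow (illustrative architecture only)."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
                    nn.Linear(64, 2),              # logits: [non-shadow, shadow]
                )

            def forward(self, x):                  # x: (N, 3, 32, 32) RGB patches
                return self.classifier(self.features(x))

        model = TinyShadowNet()
        logits = model(torch.randn(4, 3, 32, 32))
        print(logits.shape)                        # torch.Size([4, 2])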

  11. Six problems in frame theory

    DEFF Research Database (Denmark)

    Christensen, Ole

    2014-01-01

    We discuss various problems in frame theory that have been open for some years. A short discussion of frame theory is also provided, but it only contains the information that is necessary in order to understand the open problems and their role.

  12. Pairs of dual periodic frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Goh, Say Song

    2012-01-01

    The time–frequency analysis of a signal is often performed via a series expansion arising from well-localized building blocks. Typically, the building blocks are based on frames having either Gabor or wavelet structure. In order to calculate the coefficients in the series expansion, a dual frame...... is needed. The purpose of the present paper is to provide constructions of dual pairs of frames in the setting of the Hilbert space of periodic functions L2(0,2π). The frames constructed are given explicitly as trigonometric polynomials, which allows for an efficient calculation of the coefficients...
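
    The role of the dual frame mentioned above is captured by the standard reconstruction formula in L2(0,2π), stated here in generic frame-theory notation rather than in the specific parametrisation of the paper.

        % If {g_k} and {\tilde g_k} are a pair of dual frames for L^2(0, 2\pi),
        % then every f in L^2(0, 2\pi) satisfies
        f = \sum_{k} \langle f, \tilde g_k \rangle \, g_k
          = \sum_{k} \langle f, g_k \rangle \, \tilde g_k ,
        \qquad \text{with unconditional convergence in } L^2(0, 2\pi) .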

  13. Frames, agency and institutional change

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Jensen, Per Langaa; Gottlieb, Stefan Christoffer

    2017-01-01

    This study examines change and the sources influencing the formulation and diffusion of policies in construction. The change examined is the introduction of a benchmarking policy initiative in the Danish construction industry. Using institutional theory with emphasis on the concepts of frames...... and framings, we show how strategically motivated actors are able to frame policy problems in ways that disclose the mixture of motives, interests and institutional mechanisms at play in change processes. In doing so, we contribute to the literature on the role of agency in institutional change and the framing...

  14. Optimizing Low Light Level Imaging Techniques and Sensor Design Parameters using CCD Digital Cameras for Potential NASA Earth Science Research aboard a Small Satellite or ISS

    Data.gov (United States)

    National Aeronautics and Space Administration — For this project, the potential of using state-of-the-art aerial digital framing cameras that have time delayed integration (TDI) to acquire useful low light level...

  15. column frame for design of reinforced concrete sway frames

    African Journals Online (AJOL)

    adminstrator

    design of slender reinforced concrete columns in sway frames according ... concrete, Ac = gross cross-sectional area of the columns. Step 3: Effective Buckling Length Factors. The effective buckling length factors of columns in a sway frame shall be computed by ... shall have adequate resistance to failure in a sway mode ...

  16. Power to the frame: bringing sociology back to frame analysis

    NARCIS (Netherlands)

    Vliegenthart, R.; van Zoonen, L.

    2011-01-01

    This article critically reviews current frame and framing research in media and communication studies. It is first argued that most authors fail to distinguish between ‘frame’ and ‘framing’ and therewith produce a conceptual confusion and imprecision that is not conducive to the field. Second, it is

  17. Value Framing: A Prelude to Software Problem Framing

    NARCIS (Netherlands)

    Wieringa, Roelf J.; Gordijn, Jaap; van Eck, Pascal; Cox, K.; Hall, J.G.; Rapanotti, L.

    2004-01-01

    Software problem framing is a way to find specifications for software. Software problem frames can be used to structure the environment of a software system (the machine) and specify desired software properties in such a way that we can show that software with these properties will help achieve the

  18. New characterizations of fusion frames (frames of subspaces)

    Indian Academy of Sciences (India)

    Theory (College Park, MD, 2003) Contemp. Math. 345, Amer. Math. Soc. (RI: Provi- dence) (2004) 87–113. [4] Casazza P G and Kutyniok G, Robustness of Fusion Frames under Erasures of sub- spaces and of Local Frame Vectors, Radon transforms, geometry, and wavelets (LA: New Orleans) (2006) Contemp. Math., Amer.

  19. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  20. Body frames and frame singularities for three-atom systems

    International Nuclear Information System (INIS)

    Littlejohn, R.G.; Mitchell, K.A.; Aquilanti, V.; Cavalli, S.

    1998-01-01

    The subject of body frames and their singularities for three-particle systems is important not only for large-amplitude rovibrational coupling in molecular spectroscopy, but also for reactive scattering calculations. This paper presents a geometrical analysis of the meaning of body frame conventions and their singularities in three-particle systems. Special attention is devoted to the principal axis frame, a certain version of the Eckart frame, and the topological inevitability of frame singularities. The emphasis is on a geometrical picture, which is intended as a preliminary study for the more difficult case of four-particle systems, where one must work in higher-dimensional spaces. The analysis makes extensive use of kinematic rotations. copyright 1998 The American Physical Society

  1. Riesz Frames and Approximation of the Frame Coefficients

    DEFF Research Database (Denmark)

    Christensen, Ole

    1996-01-01

    A frame is a family of elements in a Hilbert space with the property that every element in the Hilbert space can be written as an (infinite) linear combination of the frame elements. Frame theory describes how one can choose the corresponding coefficients, which are called frame coefficients. From the mathematical point of view this is gratifying, but for applications it is a problem that the calculation requires inversion of an operator on the Hilbert space. The projection method is introduced in order to avoid this problem. The basic idea is to consider finite subfamilies of the frame and the orthogonal projection onto its span. For f in H, P_n f has a representation as a linear combination of f_i, i = 1, 2, ..., n, and the corresponding coefficients can be calculated using finite-dimensional methods. We find conditions implying that those coefficients converge to the correct frame coefficients as n goes ...
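
    In standard frame notation (a generic sketch of the projection method described above, not the paper's precise statement), the frame operator S and the frame coefficients satisfy

        S f = \sum_{i=1}^{\infty} \langle f, f_i \rangle f_i, \qquad f = \sum_{i=1}^{\infty} \langle f, S^{-1} f_i \rangle f_i,

    and the projection method replaces f by P_n f = \sum_{i=1}^{n} c_i^{(n)} f_i, where the coefficients c_i^{(n)} are obtained from the finite Gram matrix of \{f_1, \dots, f_n\}, in the hope that c_i^{(n)} \to \langle f, S^{-1} f_i \rangle as n \to \infty.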

  2. Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam

    International Nuclear Information System (INIS)

    Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A.E.; Engelhardt, M.

    2005-01-01

    When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The basic set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10^7 cm^-2 s^-1, which makes it possible to observe sequences in a reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturers, it must be less due to the inherent noise level from the intensifier. The obtained result should be seen as the starting point for meeting the different requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operation principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of piston, valves and camshaft with a micro-channel plate intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points

  3. Gamma camera based FDG PET in oncology

    International Nuclear Information System (INIS)

    Park, C. H.

    2002-01-01

    Positron Emission Tomography (PET) was introduced as a research tool in the 1970s, and it took about 20 years before PET became a useful clinical imaging modality. In the USA, insurance coverage for PET procedures in the 1990s was the turning point, I believe, for this progress. Initially PET was used in neurology, but recently more than 80% of PET procedures are in oncological applications. I firmly believe that, in the 21st century, one cannot manage cancer patients properly without PET, and that PET is a very important medical imaging modality in basic and clinical sciences. PET is grouped into 2 categories: conventional (c) and gamma-camera-based (CB) PET. CB PET is more readily available to many medical centers at low cost to patients, utilizing dual-head gamma cameras and commercially available FDG. In fact there are more CB PET systems in operation than cPET systems in the USA. CB PET is inferior to cPET in its performance, but clinical studies in oncology are feasible without expensive infrastructure such as staffing, rooms and equipment. At Ajou University Hospital, CB PET was installed in late 1997 for the first time in Korea as well as in Asia, and the system has been used successfully and effectively in oncological applications. Ours was the fourth PET operation in Korea, and I believe this may have been instrumental in getting other institutions interested in clinical PET. The following is a brief description of our clinical experience of FDG CB PET in oncology

  4. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. Dense matching was provided by Pix4Dmapper, with 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras, with a size exceeding 5 μm, even though they are described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  5. Framing the frame: How task goals determine the likelihood and direction of framing effects

    OpenAIRE

    Todd McElroy; John J. Seta

    2007-01-01

    We examined how the goal of a decision task influences the perceived positive, negative valence of the alternatives and thereby the likelihood and direction of framing effects. In Study 1 we manipulated the goal to increase, decrease or maintain the commodity in question and found that when the goal of the task was to increase the commodity, a framing effect consistent with those typically observed in the literature was found. When the goal was to decrease, a framing effect opposite to the ty...

  6. Typical effects of laser dazzling CCD camera

    Science.gov (United States)

    Zhang, Zhen; Zhang, Jianmin; Shao, Bibo; Cheng, Deyan; Ye, Xisheng; Feng, Guobin

    2015-05-01

    In this article, an overview of the laser dazzling effect on buried-channel CCD cameras is given. CCDs are sorted into staring and scanning types. The former includes the frame-transfer and interline-transfer types; the latter includes linear and time-delay-integration types. All CCDs must perform four primary tasks in generating an image, called charge generation, charge collection, charge transfer and charge measurement. In a camera, lenses are needed to deliver the optical signal to the CCD sensor, and techniques for suppressing stray light are used in them. Electronic circuits are needed to process the output signal of the CCD, in which many electronic techniques are used. The dazzling effects are the combined result of light distribution distortion and charge distribution distortion, which derive from the lens and the sensor, respectively. Strictly speaking, in the lens the light distribution is not distorted. In general, lenses are so well designed and fabricated that their stray light can be neglected, but a laser is intense enough to make its stray light obvious. In CCD image sensors, a laser can induce a very large generation of electrons. Charge transfer inefficiency and charge blooming will distort the charge distribution. Commonly, the largest signal output from the CCD sensor is restricted by the capacity of the CCD collection well and cannot exceed the dynamic range within which the subsequent electronic circuits maintain normal operation. So the signal is not distorted in the post-processing circuits, but some techniques in the circuits can make some dazzling effects appear as different phenomena in the final image.

  7. Pseudo real-time coded aperture imaging system with intensified vidicon cameras

    International Nuclear Information System (INIS)

    Han, K.S.; Berzins, G.J.

    1977-01-01

    A coded image displayed on a TV monitor was used to directly reconstruct a decoded image. Both the coded and the decoded images were viewed with intensified vidicon cameras. The coded aperture was a 15-element nonredundant pinhole array. The coding and decoding were accomplished simultaneously during the scanning of a single 16-msec TV frame
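
    The correlation decoding that underlies coded aperture imaging can be sketched as follows (a toy numpy example with a small random mask, not the 15-element nonredundant array or the optical TV-based decoder used in the paper):

        # Coded-aperture imaging in a nutshell: the detector records the object
        # convolved with the pinhole-array pattern; correlating the coded image
        # with the same pattern concentrates each source back into a peak.
        import numpy as np
        from scipy.signal import convolve2d, correlate2d

        rng = np.random.default_rng(2)
        aperture = (rng.random((7, 7)) < 0.3).astype(float)  # open pinholes = 1

        obj = np.zeros((15, 15))
        obj[4, 6] = 1.0                                       # a single point source

        coded = convolve2d(obj, aperture, mode="same")        # detector image
        decoded = correlate2d(coded, aperture, mode="same")   # correlation decoding

        print(np.unravel_index(decoded.argmax(), decoded.shape))  # peak at (4, 6)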

  8. I Am Not a Camera: On Visual Politics and Method. A Response to Roy Germano

    NARCIS (Netherlands)

    Yanow, D.

    2014-01-01

    No observational method is "point and shoot." Even bracketing interpretive methodologies and their attendant philosophies, a researcher-including an experimentalist-always frames observation in terms of the topic of interest. I cannot ever be "just a camera lens," not as researcher and not as

  9. Establishing imaging sensor specifications for digital still cameras

    Science.gov (United States)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor or the newer Foveon buried-photodiode sensor. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  10. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W7-X stellarator, which consist of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for both continuous and triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixels at 444 fps, which amounts to 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements to reach a small and compact size and robust operation in a harmful environment; an image processing and control unit (IPCU) module that handles all user-predefined events and runs image processing algorithms to generate trigger signals; and, finally, a 10 Gigabit Ethernet compatible image readout card that functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described
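
    The quoted data volume can be checked with a short back-of-the-envelope calculation (assuming the 12-bit samples are packed at 1.5 bytes per pixel; this sketch is not part of the original record):

        # Back-of-the-envelope check of the EDICAM data volume quoted above.
        pixels_per_frame = 1280 * 1024          # full resolution
        bytes_per_pixel  = 12 / 8               # 12-bit samples, packed
        fps              = 444                  # frames per second at full resolution
        seconds          = 30 * 60              # half an hour

        total_bytes = pixels_per_frame * bytes_per_pixel * fps * seconds
        print(f"{total_bytes / 1024**4:.2f} TiB")   # ~1.43 TiB, matching the abstract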

  11. [The framing effect: medical implications].

    Science.gov (United States)

    Mazzocco, Ketti; Cherubini, Paolo; Rumiati, Rino

    2005-01-01

    Over the last 20 years, many studies explored how the way information is presented modifies choices. This sort of effect, referred to as "framing effects", typically consists of the inversion of choices when presenting structurally identical decision problems in different ways. It is a common assumption that physicians are unaffected (or less affected) by the surface description of a decision problem, because they are formally trained in medical decision making. However, several studies showed that framing effects occur even in the medical field. The complexity and variability of these effects are remarkable, making it necessary to distinguish among different framing effects, depending on whether the effect is obtained by modifying adjectives (attribute framing), goals of a behavior (goal framing), or the probability of an outcome (risky choice framing). A further reason for the high variability of the framing effects seems to be the domain of the decision problem, with different effects occurring in prevention decisions, disease-detection decisions, and treatment decisions. The present work reviews the studies on framing effects, in order to summarize them and clarify their possible role in medical decision making.

  12. FRAME CATAGORIZATION OF CONVERSATIONAL INTIMACY

    OpenAIRE

    Lyubov Kit

    2017-01-01

    The article deals with the notion of intimacy. The frame of intimacy is studied on the basis of the linguistic parameters, analysis of text extracts and universal knowledge about intimacy. Frame analysis helped to establish the catagorization of types and nominators of intimate speech genres, their construction in static and dynamic aspects.

  14. A CCD camera probe for a superconducting cyclotron

    International Nuclear Information System (INIS)

    Marti, F.; Blue, R.; Kuchar, J.; Nolen, J.A.; Sherrill, B.; Yurkon, J.

    1991-01-01

    The traditional internal beam probes in cyclotrons have consisted of a differential element, a wire or thin strip, and a main probe with several fingers to determine the vertical distribution of the beam. The resolution of these probes is limited, especially in the vertical direction. The authors have developed a probe for their K1200 superconducting cyclotron based on a CCD TV camera that works in a 6 T magnetic field. The camera looks at the beam spot on a scintillating screen. The TV image is processed by a frame grabber that digitizes and displays the image in pseudocolor in real time. This probe has much better resolution than traditional probes. They can see beams with total currents as low as 0.1 pA, with position resolution of about 0.05 mm

  15. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and determine equipment used and the benefits realized. Basic closed circuit television camera (CCTV) systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposal cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs

  16. High spatial resolution infrared camera as ISS external experiment

    Science.gov (United States)

    Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan

    A high-spatial-resolution infrared camera flown as an ISS external experiment for monitoring global climate changes uses ISS internal and external resources (e.g. data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small-satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage a mass memory is required. Access to current attitude data is highly desired in order to produce geo-referenced maps, if possible by on-board processing.

  17. Instrumentation Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Provides instrumentation support for flight tests of prototype weapons systems using a vast array of airborne sensors, transducers, signal conditioning and encoding...

  18. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; hide

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  19. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  20. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 x 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 x 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures.
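
    For context, the baseline that the paper improves upon (a 3 x 3 matrix fitted by least squares) can be sketched as follows, using synthetic patch data; the proposed spherical sampling with perceptual error metrics is not reproduced here:

        # Baseline colour characterization: fit a 3x3 matrix M so that
        # camera RGB responses map to CIE XYZ in the least-squares sense.
        # The data below are random stand-ins for measured patch responses.
        import numpy as np

        rng = np.random.default_rng(0)
        rgb = rng.random((24, 3))                     # 24 patches, camera RGB
        M_true = np.array([[0.41, 0.36, 0.18],
                           [0.21, 0.72, 0.07],
                           [0.02, 0.12, 0.95]])
        xyz = rgb @ M_true.T + 0.01 * rng.standard_normal((24, 3))

        # Least-squares solution of  rgb @ M.T ~= xyz
        M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
        M = M.T
        print(np.round(M, 2))                         # recovered 3x3 matrix

        # Characterize a new pixel:
        xyz_est = M @ np.array([0.2, 0.5, 0.3])
        print(xyz_est)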

  1. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, G., E-mail: giuliana.rizzo@pi.infn.it [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Batignani, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Benkechkache, M.A. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); University Constantine 1, Department of Electronics in the Science and Technology Faculty, I-25017, Constantine (Algeria); Bettarini, S.; Casarosa, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Comotti, D. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Dalla Betta, G.-F. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); TIFPA INFN, I-38123 Trento (Italy); Fabris, L. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); Forti, F. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Grassi, M.; Lodola, L.; Malcovati, P. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Manghisoni, M. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); and others

    2016-07-11

    The INFN PixFEL project is developing the fundamental building blocks for a large area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvement in performance beyond the state of the art in imaging instrumentation will be explored by adopting advanced technologies like active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine pitch active edge thick sensor is being optimized to cope with very high intensity photon flux, up to 10^4 photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with a low power 10 bit analog to digital conversion up to 5 MHz, has been realized in a 110 μm pitch with a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation either in burst mode, as at the European X-FEL, or in continuous mode with the high frame rates anticipated for future FEL facilities.

  2. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    International Nuclear Information System (INIS)

    Rizzo, G.; Batignani, G.; Benkechkache, M.A.; Bettarini, S.; Casarosa, G.; Comotti, D.; Dalla Betta, G.-F.; Fabris, L.; Forti, F.; Grassi, M.; Lodola, L.; Malcovati, P.; Manghisoni, M.

    2016-01-01

    The INFN PixFEL project is developing the fundamental building blocks for a large area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvement in performance beyond the state of the art in imaging instrumentation will be explored by adopting advanced technologies like active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine pitch active edge thick sensor is being optimized to cope with very high intensity photon flux, up to 10^4 photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with a low power 10 bit analog to digital conversion up to 5 MHz, has been realized in a 110 μm pitch with a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation either in burst mode, as at the European X-FEL, or in continuous mode with the high frame rates anticipated for future FEL facilities.

  3. Preflight Calibration Test Results for Optical Navigation Camera Telescope (ONC-T) Onboard the Hayabusa2 Spacecraft

    Science.gov (United States)

    Kameda, S.; Suzuki, H.; Takamatsu, T.; Cho, Y.; Yasuda, T.; Yamada, M.; Sawada, H.; Honda, R.; Morota, T.; Honda, C.; Sato, M.; Okumura, Y.; Shibasaki, K.; Ikezawa, S.; Sugita, S.

    2017-07-01

    The optical navigation camera telescope (ONC-T) is a telescopic framing camera with seven colors onboard the Hayabusa2 spacecraft, launched on December 3, 2014. The main objectives of this instrument are to optically navigate the spacecraft to asteroid Ryugu and to conduct multi-band mapping of the asteroid. We conducted performance tests of the instrument before its installation on the spacecraft. We evaluated the dark current and bias level and obtained data on the dependence of the dark current on the temperature of the charge-coupled device (CCD). The bias level depends strongly on the temperature of the electronics package but only weakly on the CCD temperature. The dark-reference data, which are obtained simultaneously with observation data, can be used for estimation of the dark current and bias level. A long front hood is used for ONC-T to reduce the stray light at the expense of flatness in the peripheral area of the field of view (FOV). The central area of the FOV has a flat sensitivity, and the limb darkening has been measured with an integrating sphere. The ONC-T has a wheel with seven bandpass filters and a panchromatic glass window. We measured the spectral sensitivity using an integrating sphere and obtained the sensitivity of all the pixels. We also measured the point-spread function using a star simulator. Measurement results indicate that the full width at half maximum is less than two pixels for all the bandpass filters and in the temperature range expected in the mission phase, except for short periods of time during touchdowns.

  4. Frames and outer frames for Hilbert C^*-modules

    OpenAIRE

    Arambašić, Ljiljana; Bakić, Damir

    2015-01-01

    The goal of the present paper is to extend the theory of frames for countably generated Hilbert $C^*$-modules over arbitrary $C^*$-algebras. In investigating the non-unital case we introduce the concept of outer frame as a sequence in the multiplier module $M(X)$ that has the standard frame property when applied to elements of the ambient module $X$. Given a Hilbert $\\A$-module $X$, we prove that there is a bijective correspondence of the set of all adjointable surjections from the generalize...

  5. Super-resolution processing for pulsed neutron imaging system using a high-speed camera

    International Nuclear Information System (INIS)

    Ishizuka, Ken; Kai, Tetsuya; Shinohara, Takenao; Segawa, Mariko; Mochiki, Koichi

    2015-01-01

    Super-resolution and center-of-gravity processing improve the resolution of neutron-transmitted images. These processing methods calculate the center-of-gravity pixel, or sub-pixel position, of each neutron event converted into light by a scintillator. The conventional neutron-transmitted image is acquired using a high-speed camera by integrating many frames when a transmitted image cannot be obtained from a single frame. This approach succeeds in acquiring the transmitted image and in calculating a spectrum by integrating frames of the same energy. However, because a high frame rate is required for neutron resonance absorption imaging, the number of pixels of the transmitted image decreases, and the resolution decreases to the limit of the camera performance. Therefore, we attempt to improve the resolution by integrating the frames after applying super-resolution or center-of-gravity processing. The processed results indicate that center-of-gravity processing can be effective in pulsed-neutron imaging with a high-speed camera. In addition, the results show that super-resolution processing is effective indirectly. A project to develop a real-time image data processing system has begun, and this system will be used at J-PARC in JAEA. (author)
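
    A minimal sketch of the center-of-gravity idea (a generic intensity-weighted centroid, not the authors' exact implementation) is given below:

        # Centre-of-gravity (centroid) of a bright spot in a single frame,
        # giving a sub-pixel event position for super-resolution accumulation.
        import numpy as np

        def centre_of_gravity(frame):
            """Intensity-weighted centroid (row, col) of a 2-D frame."""
            frame = np.asarray(frame, dtype=float)
            total = frame.sum()
            rows, cols = np.indices(frame.shape)
            return (rows * frame).sum() / total, (cols * frame).sum() / total

        # A toy 5x5 frame with one scintillation spot slightly off-centre.
        frame = np.array([[0, 0, 0, 0, 0],
                          [0, 1, 2, 1, 0],
                          [0, 2, 8, 3, 0],
                          [0, 1, 3, 2, 0],
                          [0, 0, 0, 0, 0]])
        print(centre_of_gravity(frame))   # ~ (2.09, 2.09): a sub-pixel position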

  6. Development of a portable x-ray tv camera set

    International Nuclear Information System (INIS)

    Panityotai, J.

    1990-01-01

    A portable X-ray TV camera set was developed using a 24 V battery as the power supply unit. The development aims at a non-film X-radiographic technique with low radiation exposure. The machine is able to capture one X-radiographic frame at a time with a resolution of 256 x 256 pixels at 64 gray levels. The investigation shows a horizontal resolution of 0.6 lines per millimeter and a vertical resolution of 0.7 lines per millimeter

  7. Infrared Camera Diagnostic for Heat Flux Measurements on NSTX

    International Nuclear Information System (INIS)

    D. Mastrovito; R. Maingi; H.W. Kugel; A.L. Roquemore

    2003-01-01

    An infrared imaging system has been installed on NSTX (National Spherical Torus Experiment) at the Princeton Plasma Physics Laboratory to measure the surface temperatures on the lower divertor and center stack. The imaging system is based on an Indigo Alpha 160 x 128 microbolometer camera with 12 bits/pixel operating in the 7-13 μm range with a 30 Hz frame rate and a dynamic temperature range of 0-700 °C. From these data and knowledge of graphite thermal properties, the heat flux is derived with a classic one-dimensional conduction model. Preliminary results of heat flux scaling are reported
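
    One common one-dimensional conduction approach for deriving heat flux from an IR surface-temperature history is the semi-infinite Cook-Felderman discretization, sketched below with illustrative material properties and a made-up temperature ramp (not necessarily the exact model used on NSTX):

        # Surface heat flux from a sampled surface-temperature history,
        # assuming 1-D conduction into a semi-infinite solid (Cook-Felderman).
        import numpy as np

        def cook_felderman(t, T_surf, rho, cp, k):
            """Heat flux q(t_n) [W/m^2] from surface temperature samples."""
            coeff = 2.0 * np.sqrt(rho * cp * k / np.pi)
            q = np.zeros_like(T_surf, dtype=float)
            for n in range(1, len(t)):
                dT = np.diff(T_surf[:n + 1])
                denom = np.sqrt(t[n] - t[:n]) + np.sqrt(t[n] - t[1:n + 1])
                q[n] = coeff * np.sum(dT / denom)
            return q

        # Graphite-like properties and a simulated 0.5 s ramp sampled at 30 Hz.
        rho, cp, k = 1700.0, 710.0, 100.0        # kg/m^3, J/(kg K), W/(m K)
        t = np.arange(0, 0.5, 1.0 / 30.0)        # camera frame times
        T_surf = 300.0 + 400.0 * t               # surface temperature, kelvin
        q = cook_felderman(t, T_surf, rho, cp, k)
        print(q[-1] / 1e6, "MW/m^2")             # of order a few MW/m^2 here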

  8. The Sydney University PAPA camera

    Science.gov (United States)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.

  9. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, the development of various measurement techniques in the nanosecond and picosecond range has led to increased reliance on streak cameras. This paper presents the main electronic and optoelectronic performance of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high-speed electronic cinematography is illustrated by a few typical applications [fr

  10. On-Orbit Camera Misalignment Estimation Framework and Its Application to Earth Observation Satellite

    Directory of Open Access Journals (Sweden)

    Seungwoo Lee

    2015-03-01

    Full Text Available Despite the efforts for precise alignment of imaging sensors and attitude sensors before launch, the accuracy of pre-launch alignment is limited. The misalignment between attitude frame and camera frame is especially important as it is related to the localization error of the spacecraft, which is one of the essential factors of satellite image quality. In this paper, a framework for camera misalignment estimation is presented with its application to a high-resolution earth-observation satellite—Deimos-2. The framework intends to provide a solution for estimation and correction of the camera misalignment of a spacecraft, covering image acquisition planning to mathematical solution of camera misalignment. Considerations for effective image acquisition planning to obtain reliable results are discussed, followed by a detailed description on a practical method for extracting many GCPs automatically using reference ortho-photos. Patterns of localization errors that commonly occur due to the camera misalignment are also investigated. A mathematical model for camera misalignment estimation is described comprehensively. The results of simulation experiments showing the validity and accuracy of the misalignment estimation model are provided. The proposed framework was applied to Deimos-2. The real-world data and results from Deimos-2 are presented.

  11. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented

  12. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-01-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect the vacuum vessel internal structures in both the visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diam fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35-mm Nikon F3 still camera, or (5) a 16-mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented

  13. Development of an Algorithm for Heart Rate Measurement Using a Mobile Phone Camera

    Directory of Open Access Journals (Sweden)

    D. A. Laure

    2014-01-01

    Full Text Available Nowadays there exist many different ways to measure a person’s heart rate. One of them uses the built-in camera of a mobile phone. This method is easy to use and does not require any additional skills or special devices for heart rate measurement; it requires only a mobile phone with a built-in camera and a flash. The main idea of the method is to detect changes in finger skin color that occur due to blood pulsation. The measurement process is simple: the user covers the camera lens with a finger, and the application on the mobile phone starts capturing and analyzing frames from the camera. Heart rate can be calculated by analyzing the average red-component values of frames taken by the mobile phone camera that contain images of an area of the skin. In this paper the authors review the existing algorithms for heart rate measurement with the help of a mobile phone camera and propose their own algorithm, which is more efficient than the reviewed algorithms.
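
    A minimal sketch of the general idea is given below (the per-frame red-channel means are simulated here; a real implementation would read frames from the phone camera and typically add filtering):

        # Estimate heart rate from the mean red-channel value of successive frames.
        # On a phone these means would come from frames of a finger pressed
        # against the camera lens with the flash on.
        import numpy as np

        fps = 30.0                            # camera frame rate
        t = np.arange(0, 15, 1.0 / fps)       # 15 seconds of frames
        heart_hz = 1.2                        # simulated pulse: 72 beats per minute
        red_mean = 150 + 2.0 * np.sin(2 * np.pi * heart_hz * t) \
                       + 0.3 * np.random.randn(t.size)

        # Dominant frequency of the detrended signal via FFT, restricted to a
        # plausible heart-rate band (0.7-3.5 Hz, i.e. 42-210 bpm).
        signal = red_mean - red_mean.mean()
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
        spectrum = np.abs(np.fft.rfft(signal))
        band = (freqs >= 0.7) & (freqs <= 3.5)
        bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
        print(f"estimated heart rate: {bpm:.0f} bpm")   # ~72 bpm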

  14. Framed School--Frame Factors, Frames and the Dynamics of Social Interaction in School

    Science.gov (United States)

    Persson, Anders

    2015-01-01

    This paper aims to show how the Goffman frame perspective can be used in an analysis of school and education and how it can be combined, in such analysis, with the frame factor perspective. The latter emphasizes factors that are determined outside the teaching process, while the former stresses how actors organize their experiences and define…

  15. Analysis of dark current images of a CMOS camera during gamma irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Náfrádi, Gábor, E-mail: nafradi@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Czifrus, Szabolcs, E-mail: czifrus@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Kocsis, Gábor, E-mail: kocsis.gabor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Pór, Gábor, E-mail: por@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Szepesi, Tamás, E-mail: szepesi.tamas@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Zoletnik, Sándor, E-mail: zoletnik.sandor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary)

    2013-12-15

    Highlights: • Radiation tolerance of the fast framing CMOS camera EDICAM is examined. • We estimate the expected gamma dose and spectrum of EDICAM with MCNP. • We irradiate EDICAM with 23.5 Gy in 70 min in a fission reactor. • Dose-rate-normalised average brightness of frames grows linearly with the dose. • Dose-normalised average brightness of frames follows the dose rate time evolution. -- Abstract: We report on the behaviour of the dark current images of the Event Detection Intelligent Camera (EDICAM) when placed in an irradiation field of gamma rays. EDICAM is an intelligent fast framing CMOS camera operating in the visible spectral range, which is designed for the video diagnostic system of the Wendelstein 7-X (W7-X) stellarator. Monte Carlo calculations were carried out in order to estimate the expected gamma spectrum and dose for an entire year of operation in W7-X. EDICAM was irradiated in a pure gamma field in the Training Reactor of BME with a dose of approximately 23.5 Gy in 1.16 h. During the irradiation, numerous frame series were taken with the camera with exposure times of 20 μs, 50 μs, 100 μs, 1 ms, 10 ms and 100 ms. EDICAM withstood the irradiation, but suffered some dynamic range degradation. The behaviour of the dark current images during irradiation is described in detail. We found that the average brightness of dark current images depends on the total ionising dose that the camera is exposed to and the dose rate, as well as on the applied exposure times.

  16. serialising languages: satellite-framed, verb-framed or neither

    African Journals Online (AJOL)

    George Saad

    Figure 2: Verb-framed construction type (Slobin 2000: 109). ... An anonymous reviewer asks why we have replaced Talmy's conflation term “Ground” with ... an S-language may predispose speakers to pay more linguistic attention to.

  17. Remote removal of an obstruction from FFTF [Fast Flux Test Facility] in-service inspection camera track

    International Nuclear Information System (INIS)

    Gibbons, P.W.

    1990-11-01

    Remote techniques and special equipment were used to clear the path of a closed-circuit television camera system that travels on a monorail track around the reactor vessel support arm structure. A tangle of wire-wrapped instrumentation tubing had been inadvertently inserted through a dislocated guide-tube expansion joint and into the camera track area. An externally driven auger device, mounted on the track ahead of the camera to view the procedure, was used to retrieve the tubing. 6 figs

  18. Serialising languages: Satellite-framed, verb-framed or neither ...

    African Journals Online (AJOL)

    The diversity in the coding of the core schema of motion, i.e., Path, has led to a traditional typology of languages into verb-framed and satellite-framed languages. In the former Path is encoded in verbs and in the latter it is encoded in non-verb elements that function as sisters to co-event expressing verbs such as manner ...

  19. Framing of health information messages.

    Science.gov (United States)

    Akl, Elie A; Oxman, Andrew D; Herrin, Jeph; Vist, Gunn E; Terrenato, Irene; Sperati, Francesca; Costiniuk, Cecilia; Blank, Diana; Schünemann, Holger

    2011-12-07

    The same information about the evidence on health effects can be framed either in positive words or in negative words. Some research suggests that positive versus negative framing can lead to different decisions, a phenomenon described as the framing effect. Attribute framing is the positive versus negative description of a specific attribute of a single item or a state, for example, "the chance of survival with cancer is 2/3" versus "the chance of mortality with cancer is 1/3". Goal framing is the description of the consequences of performing or not performing an act as a gain versus a loss, for example, "if you undergo a screening test for cancer, your survival will be prolonged" versus "if you don't undergo screening test for cancer, your survival will be shortened". To evaluate the effects of attribute (positive versus negative) framing and of goal (gain versus loss) framing of the same health information, on understanding, perception of effectiveness, persuasiveness, and behavior of health professionals, policy makers, and consumers. We searched the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, issue 3 2007), MEDLINE (Ovid) (1966 to October 2007), EMBASE (Ovid) (1980 to October 2007), PsycINFO (Ovid) (1887 to October 2007). There were no language restrictions. We reviewed the reference lists of related systematic reviews, included studies and of excluded but closely related studies. We also contacted experts in the field. We included randomized controlled trials, quasi-randomised controlled trials, and cross-over studies with health professionals, policy makers, and consumers evaluating one of the two types of framing. Two review authors extracted data in duplicate and independently. We graded the quality of evidence for each outcome using the GRADE approach. We standardized the outcome effects using standardized mean difference (SMD). We stratified the analysis by the type of framing (attribute, goal) and conducted pre

  20. STRAY DOG DETECTION IN WIRED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    C. Prashanth

    2013-08-01

    Full Text Available Existing surveillance systems impose a high level of security on humans but lack attention to animals. Stray dogs could be used as an alternative to humans to carry explosive material. It is therefore imperative to ensure the detection of stray dogs for necessary corrective action. In this paper, a novel composite approach to detect the presence of stray dogs is proposed. The captured frame from the surveillance camera is initially pre-processed using a Gaussian filter to remove noise. The foreground object of interest is extracted using the ViBe algorithm. The Histogram of Oriented Gradients (HOG) algorithm is used as the shape descriptor, which derives the shape and size information of the extracted foreground object. Finally, stray dogs are distinguished from humans using a polynomial Support Vector Machine (SVM) of order 3. The proposed composite approach is simulated in MATLAB and OpenCV. Further, it is validated with real-time video feeds taken from an existing surveillance system. From the results obtained, it is found that a classification accuracy of about 96% is achieved. This encourages the utilization of the proposed composite algorithm in real time surveillance systems.
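
    A rough sketch of the shape-descriptor-plus-classifier stage is given below (generic OpenCV and scikit-learn calls on placeholder data; the paper's own pipeline is implemented in MATLAB/OpenCV and includes ViBe foreground extraction, which is not reproduced here):

        # HOG features + polynomial SVM (order 3) to separate dogs from humans.
        # Foreground patches would come from the background-subtraction step;
        # here random arrays stand in for real training patches.
        import cv2
        import numpy as np
        from sklearn.svm import SVC

        hog = cv2.HOGDescriptor()                      # default 64x128 window

        def hog_features(patch_gray_128x64):
            return hog.compute(patch_gray_128x64).ravel()

        # Placeholder training data: 20 "dog" and 20 "human" grayscale patches.
        rng = np.random.default_rng(1)
        patches = rng.integers(0, 256, size=(40, 128, 64), dtype=np.uint8)
        labels = np.array([0] * 20 + [1] * 20)         # 0 = dog, 1 = human

        X = np.array([hog_features(p) for p in patches])
        clf = SVC(kernel="poly", degree=3).fit(X, labels)

        test_patch = rng.integers(0, 256, size=(128, 64), dtype=np.uint8)
        print("predicted class:", clf.predict([hog_features(test_patch)])[0])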

  1. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  2. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    Stout, K.J.

    1980-01-01

    In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  3. Instrumentation development

    International Nuclear Information System (INIS)

    Ubbes, W.F.; Yow, J.L. Jr.

    1988-01-01

    Instrumentation is developed for the Civilian Radioactive Waste Management Program to meet several different (and sometimes conflicting) objectives. This paper addresses instrumentation development for data needs that are related either directly or indirectly to a repository site, but does not touch on instrumentation for work with waste forms or other materials. Consequently, this implies a relatively large scale for the measurements, and an in situ setting for instrument performance. In this context, instruments are needed for site characterization to define phenomena, develop models, and obtain parameter values, and for later design and performance confirmation testing in the constructed repository. The former set of applications is more immediate, and is driven by the needs of program design and performance assessment activities. A host of general technical and nontechnical issues have arisen to challenge instrumentation development. Instruments can be classed into geomechanical, geohydrologic, or other specialty categories, but these issues cut across artificial classifications. These issues are outlined. Despite this imposing list of issues, several case histories are cited to evaluate progress in the area

  4. Frames and counter-frames giving meaning to dementia: a framing analysis of media content.

    Science.gov (United States)

    Van Gorp, Baldwin; Vercruysse, Tom

    2012-04-01

    Media tend to reinforce the stigmatization of dementia as one of the most dreaded diseases in western society, which may have repercussions on the quality of life of those with the illness. The persons with dementia, but also those around them become imbued with the idea that life comes to an end as soon as the diagnosis is pronounced. The aim of this paper is to understand the dominant images related to dementia by means of an inductive framing analysis. The sample is composed of newspaper articles from six Belgian newspapers (2008-2010) and a convenience sample of popular images of the condition in movies, documentaries, literature and health care communications. The results demonstrate that the most dominant frame postulates that a human being is composed of two distinct parts: a material body and an immaterial mind. If this frame is used, the person with dementia ends up with no identity, which is in opposition to the Western ideals of personal self-fulfilment and individualism. For each dominant frame an alternative counter-frame is defined. It is concluded that the relative absence of counter-frames confirms the negative image of dementia. The inventory might be a help for caregivers and other professionals who want to evaluate their communication strategy. It is discussed that a more resolute use of counter-frames in communication about dementia might mitigate the stigma that surrounds dementia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Putting Safety in the Frame

    Directory of Open Access Journals (Sweden)

    Valerie Jean O’Keeffe

    2015-06-01

    Full Text Available Current patient safety policy focuses nursing on patient care goals, often overriding nurses’ safety. Without understanding how nurses construct work health and safety (WHS, patient and nurse safety cannot be reconciled. Using ethnography, we examine social contexts of safety, studying 72 nurses across five Australian hospitals making decisions during patient encounters. In enacting safe practice, nurses used “frames” built from their contextual experiences to guide their behavior. Frames are produced by nurses, and they structure how nurses make sense of their work. Using thematic analysis, we identify four frames that inform nurses’ decisions about WHS: (a communicating builds knowledge, (b experiencing situations guides decisions, (c adapting procedures streamlines work, and (d team working promotes safe working. Nurses’ frames question current policy and practice by challenging how nurses’ safety is positioned relative to patient safety. Recognizing these frames can assist the design and implementation of effective WHS management.

  6. Demonstration of the CDMA-mode CAOS smart camera.

    Science.gov (United States)

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor of 200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor-provided image data is used to acquire a focused-zone, more robust, un-attenuated true target image using the time-modulated CDMA-mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA-mode using up to 4096-bit-length Walsh-design CAOS pixel codes with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm per side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright-light, spectrally diverse targets.
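
    A minimal sketch of the CDMA encode/decode idea described above, using idealized bipolar Walsh-Hadamard codes. This is an illustration of the principle only, not the authors' hardware pipeline; the code length, pixel count and variable names are arbitrary assumptions.

      # Sketch: CDMA-style encoding/decoding with Walsh-Hadamard codes (illustrative only).
      import numpy as np
      from scipy.linalg import hadamard

      n_bits = 64                      # code length (the paper reports up to 4096 bits)
      n_pixels = 16                    # number of "CAOS pixels" encoded simultaneously
      H = hadamard(n_bits)             # +/-1 Walsh-Hadamard matrix, mutually orthogonal rows
      codes = H[1:n_pixels + 1]        # skip the all-ones row to avoid a pure DC code

      rng = np.random.default_rng(0)
      irradiance = rng.uniform(0.0, 1.0, n_pixels)     # unknown per-pixel light levels

      # The point detector sees the sum of all time-modulated pixel contributions.
      detector_signal = codes.T @ irradiance           # shape: (n_bits,)

      # Decoding: correlate the detector time series with each pixel's code.
      recovered = codes @ detector_signal / n_bits
      print(np.allclose(recovered, irradiance))        # True in the noise-free case

    Because distinct Hadamard rows are orthogonal, the correlation step isolates each pixel's irradiance exactly in the ideal case; in hardware the DMD produces unipolar patterns, so a differential measurement is typically needed.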

  7. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Sven Fleck

    2006-12-01

    Full Text Available Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results are transmitted, which are on a higher level of abstraction. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. Also a georeferenced live visualization embedded in Google Earth is presented.

  8. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Fleck Sven

    2007-01-01

    Full Text Available Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results are transmitted, which are on a higher level of abstraction. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. Also a georeferenced live visualization embedded in Google Earth is presented.

  9. Recording of radiation-induced optical density changes in doped agarose gels with a CCD camera

    International Nuclear Information System (INIS)

    Tarte, B.J.; Jardine, P.A.; Van Doorn, T.

    1996-01-01

    Full text: Spatially resolved dose measurement with iron-doped agarose gels is continuing to be investigated for applications in radiotherapy dosimetry. It has previously been proposed to use optical methods, rather than MRI, for dose measurement with such gels and this has been investigated using a spectrophotometer (Appleby A and Leghrouz A, Med Phys, 18:309-312, 1991). We have previously studied the use of a pencil beam laser for such optical density measurement of gels and are currently investigating charge-coupled device (CCD) camera imaging for the same purpose, but with the advantages of higher data acquisition rates and potentially greater spatial resolution. The gels used in these studies were poured, irradiated and optically analysed in Perspex casts providing gel sections 1 cm thick and up to 20 cm x 30 cm in dimension. The gels were also infused with a metal indicator dye (xylenol orange) to render the radiation-induced oxidation of the iron in the gel sensitive to optical radiation, specifically in the green spectral region. Data acquisition with the CCD camera involved illumination of the irradiated gel section with a diffuse white light source, with the light from the plane of the gel section focussed to the CCD array with a manual zoom lens. The light was also filtered with a green colour glass filter to maximise the contrast between unirradiated and irradiated gels. The CCD camera (EG&G Reticon MC4013) featured a 1024 x 1024 pixel array and was interfaced to a PC via a frame grabber acquisition board with 8-bit resolution. The performance of the gel dosimeter was appraised in mapping of physical and dynamic wedged 6 MV X-ray fields. The results from the CCD camera detection system were compared with both ionisation chamber data and laser-based optical density measurements of the gels. Cross-beam profiles were extracted from each measurement system at a particular depth (e.g. 2.3 cm for the physical wedge field) for direct comparison.

  10. Instrumental analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Jae; Seo, Seong Gyu

    1995-03-15

    This textbook deals with instrumental analysis and consists of nine chapters. It covers an introduction to analytical chemistry, the process of analysis and the types and forms of analysis; electrochemistry, including basic theory, potentiometry and conductometry; electromagnetic radiation and optical components, with an introduction and applications; ultraviolet and visible spectrophotometry; and atomic absorption spectrophotometry, together with flame emission spectrometry and plasma emission spectrometry. The remaining chapters treat infrared spectrophotometry, X-ray spectrophotometry and mass spectrometry, chromatography, and other instrumental methods such as radiochemistry.

  11. Instrumental analysis

    International Nuclear Information System (INIS)

    Kim, Seung Jae; Seo, Seong Gyu

    1995-03-01

    This textbook deals with instrumental analysis and consists of nine chapters. It covers an introduction to analytical chemistry, the process of analysis and the types and forms of analysis; electrochemistry, including basic theory, potentiometry and conductometry; electromagnetic radiation and optical components, with an introduction and applications; ultraviolet and visible spectrophotometry; and atomic absorption spectrophotometry, together with flame emission spectrometry and plasma emission spectrometry. The remaining chapters treat infrared spectrophotometry, X-ray spectrophotometry and mass spectrometry, chromatography, and other instrumental methods such as radiochemistry.

  12. LOFT instrumentation

    International Nuclear Information System (INIS)

    Bixby, W.W.

    1979-01-01

    A description of instrumentation used in the Loss-of-Fluid Test (LOFT) large break Loss-of-Coolant Experiments is presented. Emphasis is placed on hydraulic and thermal measurements in the primary system piping and components, reactor vessel, and pressure suppression system. In addition, instrumentation which is being considered for measurement of phenomena during future small break testing is discussed. (orig.)

  13. Event Detection Intelligent Camera: Demonstration of flexible, real-time data taking and processing

    Energy Technology Data Exchange (ETDEWEB)

    Szabolics, Tamás, E-mail: szabolics.tamas@wigner.mta.hu; Cseh, Gábor; Kocsis, Gábor; Szepesi, Tamás; Zoletnik, Sándor

    2015-10-15

    Highlights: • We present a description of EDICAM's operating principles. • Firmware test results. • Software test results. • Further developments. - Abstract: An innovative fast camera (EDICAM – Event Detection Intelligent CAMera) was developed by MTA Wigner RCP in the last few years. This new concept was designed for intelligent event-driven processing able to detect predefined events and track objects in the plasma. The camera provides a moderate frame rate of 400 Hz at full frame resolution (1280 × 1024), and readout of smaller regions of interest can be done in the 1–140 kHz range even during exposure of the full image. One of the most important advantages of this hardware is a 10 Gbit/s optical link which ensures very fast communication and data transfer between the PC and the camera, enabling two levels of processing: primitive algorithms in the camera hardware and high-level processing in the PC. This camera hardware has proven able to monitor the plasma in several fusion devices, for example ASDEX Upgrade, KSTAR and COMPASS, with the first version of the firmware. A new firmware and software package is under development. It allows predefined events to be detected in real time, so the camera can change its own operation or give warnings, e.g. to the safety system of the experiment. The EDICAM system can handle a huge amount of data (up to TBs) at a high data rate (950 MB/s) and will be used as the central element of the 10-camera overview video diagnostic system of the Wendelstein 7-X (W7-X) stellarator. This paper presents key elements of the newly developed built-in intelligence, stressing the revolutionary new features and the results of the tests of the different software elements.
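
    A quick plausibility check of the quoted throughput figures. The bytes-per-pixel value below is an assumption on our part (the actual pixel packing is not stated in the abstract), so the result is only meant to show the numbers are of the same order.

      # Rough check: full-frame pixel rate versus the quoted ~950 MB/s data rate.
      pixels_per_frame = 1280 * 1024
      frame_rate_hz = 400
      bytes_per_pixel = 2                 # assumption: ~16-bit storage per pixel
      rate_mb_s = pixels_per_frame * frame_rate_hz * bytes_per_pixel / 1e6
      print(round(rate_mb_s))             # ~1049 MB/s, consistent in order of magnitude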

  14. Instrumentation optimization for positron emission mammography

    International Nuclear Information System (INIS)

    Moses, William W.; Qi, Jinyi

    2003-01-01

    The past several years have seen designs for PET cameras optimized to image the breast, commonly known as Positron Emission Mammography or PEM cameras. The guiding principle behind PEM instrumentation is that a camera whose field of view is restricted to a single breast has higher performance and lower cost than a conventional PET camera. The most common geometry is a pair of parallel planes of detector modules, although geometries that encircle the breast have also been proposed. The ability of the detector modules to measure the depth of interaction (DOI) is also a relevant feature. This paper finds that while both the additional solid angle coverage afforded by encircling the breast and the decreased blurring afforded by the DOI measurement improve performance, the ability to measure DOI is more important than the ability to encircle the breast

  15. Navigation accuracy comparing non-covered frame and use of plastic sterile drapes to cover the reference frame in 3D acquisition.

    Science.gov (United States)

    Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa

    2017-09-01

    Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, thus presenting an increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution with these techniques. Our objective was to quantify the expected accuracy error associated with 2MM and 4MM thickness Sterile-Z Patient Drape® using Medtronic O-Arm® Surgical Imaging with the StealthStation® S7® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2MM drape and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained and seven 3D acquisitions for each of the drape configurations were measured. Data were analyzed by a two-factor analysis of variance (ANOVA) and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape for each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM drape at the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to the no-drape testing, regardless of camera distance. Medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far

  16. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    Science.gov (United States)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  17. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  18. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  19. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  20. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bit per channel, with an exposure time that can likewise be varied from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 Mbyte), 8 seconds for an image of 4096 by 4096 pixels (48 Mbyte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, the eyelike accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 x 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  1. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration, to eliminate the influence of the focusing effect on uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and used to blend images so that panoramas reflect the scene luminance more faithfully; this compensates for the limitation of stitching approaches that achieve realistic-looking results only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that together cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
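
    A minimal sketch of the kind of per-camera radiometric correction described above (dark subtraction, vignetting/flat-field removal, response linearisation). Function and variable names are illustrative assumptions, not the authors' code.

      # Sketch: correct one sensor's raw image before photometric blending/stitching.
      import numpy as np

      def correct_image(raw, dark, flat, response_lut):
          """Dark-subtract, undo vignetting with a flat field, then linearise.

          raw, dark, flat : 2-D arrays from the same sensor
          response_lut    : 1-D lookup table mapping digital counts to relative luminance
          """
          flat_norm = (flat - dark) / np.mean(flat - dark)          # vignetting pattern, mean 1
          corrected = (raw.astype(float) - dark) / np.clip(flat_norm, 1e-6, None)
          counts = np.clip(corrected, 0, len(response_lut) - 1).astype(int)
          return response_lut[counts]                               # approximate scene luminance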

  2. Improved positron emission tomography camera

    International Nuclear Information System (INIS)

    Mullani, N.A.

    1986-01-01

    A positron emission tomography camera having a plurality of rings of detectors positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom, and a plurality of scintillation crystals positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring may be offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. (author)

  3. Vehicular camera pedestrian detection research

    Science.gov (United States)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway traffic and transportation have become far more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China, so protecting people's safety and property while facilitating travel has become a top priority. Real-time, accurate information about pedestrians and the driving environment can be obtained through a vehicular camera and used to detect and track the moving targets ahead; this approach is popular in intelligent vehicle safety driving, autonomous navigation and traffic system research. Based on pedestrian video obtained by a vehicular camera, this paper studies pedestrian detection, trajectory tracking and the associated algorithms.

  4. Reynolds Stress Closure for Inertial Frames and Rotating Frames

    Science.gov (United States)

    Petty, Charles; Benard, Andre

    2017-11-01

    In a rotating frame-of-reference, the Coriolis acceleration and the mean vorticity field have a profound impact on the redistribution of kinetic energy among the three components of the fluctuating velocity. Consequently, the normalized Reynolds (NR) stress is not objective. Furthermore, because the Reynolds stress is defined as an ensemble average of a product of fluctuating velocity vector fields, its eigenvalues must be non-negative for all turbulent flows. These fundamental properties (realizability and non-objectivity) of the NR-stress cannot be compromised in computational fluid dynamic (CFD) simulations of turbulent flows in either inertial frames or in rotating frames. The recently developed universal realizable anisotropic prestress (URAPS) closure for the NR-stress depends explicitly on the local mean velocity gradient and the Coriolis operator. The URAPS-closure is a significant paradigm shift from turbulent closure models that assume that dyadic-valued operators associated with turbulent fluctuations are objective.
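
    As a reminder of the quantities involved (these are standard definitions, not specific to the URAPS closure), the normalized Reynolds stress and its realizability constraint can be written as

      \[
        \mathsf{R}_{ij} \;=\; \frac{\langle u_i' u_j' \rangle}{\langle u_k' u_k' \rangle},
        \qquad \operatorname{tr}\mathsf{R} = 1,
        \qquad \lambda_i(\mathsf{R}) \ge 0, \quad i = 1,2,3,
      \]

    so any admissible closure must return a symmetric tensor with unit trace and non-negative eigenvalues, in inertial and rotating frames alike.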

  5. Evaluation of tomographic ISOCAM Park II gamma camera parameters using Monte Carlo method

    International Nuclear Information System (INIS)

    Oramas Polo, Ivón

    2015-01-01

    In this paper the evaluation of tomographic ISOCAM Park II gamma camera parameters was performed using the Monte Carlo code SIMIND. The parameters uniformity, resolution and contrast were evaluated by Jaszczak phantom simulation. In addition the qualitative assessment of the center of rotation was performed. The results of the simulation are compared and evaluated against the specifications of the manufacturer of the gamma camera and taking into account the National Protocol for Quality Control of Nuclear Medicine Instruments of the Cuban Medical Equipment Control Center. A computational Jaszczak phantom model with three different distributions of activity was obtained. They can be used to perform studies with gamma cameras. (author)

  6. Implementation of an image acquisition and processing system based on FlexRIO, CameraLink and areaDetector

    Energy Technology Data Exchange (ETDEWEB)

    Esquembri, S.; Ruiz, M. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Barrera, E., E-mail: eduardo.barrera@upm.es [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Sanz, D.; Bustos, A. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Castro, R.; Vega, J. [National Fusion Laboratory, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • The system presented acquires and processes images from any CameraLink-compliant camera. • The frame grabber implemented with FlexRIO technology has image time-stamping and preprocessing capabilities. • The system is integrated into EPICS using areaDetector for flexible configuration of the image acquisition and processing chain. • It is fully compatible with the architecture of the ITER Fast Controllers. - Abstract: Image processing systems are commonly used in current physics experiments, such as nuclear fusion experiments. These experiments usually require multiple cameras with different resolutions, framerates and, frequently, different software drivers. The integration of heterogeneous types of cameras without a unified hardware and software interface increases the complexity of the acquisition system. This paper presents the implementation of a distributed image acquisition and processing system for CameraLink cameras. This system implements a camera frame grabber using Field Programmable Gate Arrays (FPGAs), a reconfigurable hardware platform that allows for image acquisition and real-time preprocessing. The frame grabber is integrated into the Experimental Physics and Industrial Control System (EPICS) using the areaDetector EPICS software module, which offers a common interface shared among tens of cameras to configure the image acquisition and process these images in a distributed control system. The use of areaDetector also allows the image processing to be parallelized and concatenated using multiple computers, areaDetector plugins, and the areaDetector standard data type, NDArrays. The architecture developed is fully compatible with ITER Fast Controllers, and the entire system has been validated using a camera hardware simulator that streams videos from fusion experiment databases.

  7. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
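
    As a worked example of the blur-versus-diffraction trade-off that such a transfer-function analysis optimizes, a commonly quoted rule of thumb (an assumption here, not the formula used by Scott or the authors) gives the pinhole diameter for a chosen focal length:

      # Rule-of-thumb pinhole diameter: d ≈ 1.9 * sqrt(lambda * f)  (illustrative values).
      import math

      wavelength_m = 550e-9          # green light, an assumed design wavelength
      focal_length_m = 0.10          # 100 mm pinhole-to-film distance, an assumed value
      d = 1.9 * math.sqrt(wavelength_m * focal_length_m)
      print(f"pinhole diameter ≈ {d * 1e3:.2f} mm")   # ≈ 0.45 mm for these assumptions

    Smaller pinholes reduce geometric blur but increase diffraction blur; the graphic technique in the paper locates this balance via the system transfer function rather than a fixed constant.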

  8. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which needs to achieve low-light illumination of the human retina, high resolution at the retina and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  9. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A priori, images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between
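
    A hedged sketch of the "camera pose from ground control points" step mentioned above, using OpenCV's solvePnP on illustrative data. The paper refines many such single-frame poses in a bundle adjustment; the numbers below are made up and not geometrically meaningful.

      # Sketch: single-frame camera pose from 3-D ground control points and their detections.
      import numpy as np
      import cv2

      # 3-D ground control points from the environment point cloud (vehicle frame, metres)
      object_points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0],
                                [0.0, 1.0, 0.0], [0.5, 0.5, 1.0], [0.2, 0.8, 0.5]],
                               dtype=np.float64)
      # Corresponding detections in one video frame of a vehicle camera (pixels)
      image_points = np.array([[320.0, 240.0], [420.0, 238.0], [425.0, 330.0],
                               [322.0, 335.0], [371.0, 200.0], [350.0, 260.0]],
                              dtype=np.float64)
      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])            # intrinsics from a prior calibration (assumed)
      dist = np.zeros(5)                         # assume lens distortion already removed

      ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
      print(ok, rvec.ravel(), tvec.ravel())      # camera orientation and position estimate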

  10. Family Of Calibrated Stereometric Cameras For Direct Intraoral Use

    Science.gov (United States)

    Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon

    1983-07-01

    In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.

  11. Strategic options towards an affordable high-performance infrared camera

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640 x 512 pixel InGaAs uncooled system offering high sensitivity, low noise, high frame rates (500 frames per second (FPS)) at full resolution, and low power consumption. Such a camera supports market adoption not only by demonstrating the high-performance IR imaging value add demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for adoption in consumer-facing industries such as automotive, medical, and security imaging. Among the strategic options presented are new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.

  12. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  13. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  14. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  15. Safeguards instrumentation: past, present, future

    International Nuclear Information System (INIS)

    Higinbotham, W.A.

    1982-01-01

    Instruments are essential for accounting, for surveillance and for protection of nuclear materials. The development and application of such instrumentation is reviewed, with special attention to international safeguards applications. Active and passive nondestructive assay techniques are some 25 years of age. The important advances have been in learning how to use them effectively for specific applications, accompanied by major advances in radiation detectors, electronics, and, more recently, in mini-computers. The progress in seals has been disappointingly slow. Surveillance cameras have been widely used for many applications other than safeguards. The revolution in TV technology will have important implications. More sophisticated containment/surveillance equipment is being developed but has yet to be exploited. On the basis of this history, some expectations for instrumentation in the near future are presented

  16. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with 10 e-/pixel/second dark current, 25 e- read noise, a gain of 2.0 +/- 0.5 and 1.0 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera; dark current, read noise, camera gain and residual non-linearity.
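
    One common way to estimate the gain figure quoted above is the photon-transfer (mean-variance) method; the sketch below is a generic illustration under that assumption, not the CLASP team's actual test procedure, and the function name is illustrative.

      # Sketch: gain in e-/DN from a pair of flat fields and a pair of bias frames.
      import numpy as np

      def ptc_gain(flat_a, flat_b, bias_a, bias_b):
          """Photon-transfer gain estimate from two flats at equal exposure."""
          a = flat_a.astype(float) - bias_a.astype(float)
          b = flat_b.astype(float) - bias_b.astype(float)
          signal = 0.5 * (a.mean() + b.mean())      # mean signal in DN
          # Differencing the pair removes fixed-pattern noise; the variance of the
          # difference is twice the per-frame shot + read noise variance.
          var = np.var(a - b) / 2.0
          return signal / var                        # e-/DN in the shot-noise limited regime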

  17. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.
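
    A minimal sketch of how single-photon time stamps relate to range and to the per-pixel time-stamp histograms mentioned above. This is generic time-of-flight arithmetic, not the vendor's firmware; the variable names and gate value are illustrative.

      # Sketch: convert round-trip photon time stamps within a range gate to range.
      import numpy as np

      C = 299_792_458.0                       # speed of light, m/s

      def timestamps_to_range(t_seconds):
          """Round-trip time of flight to one-way range."""
          return 0.5 * C * np.asarray(t_seconds)

      gate = 2e-6                              # a 2 us range gate spans roughly 300 m
      print(timestamps_to_range(gate))         # ≈ 299.8 m

      # Histogram of time stamps accumulated over many frames (illustrative random data)
      stamps = np.random.default_rng(1).uniform(0.0, gate, size=10_000)
      hist, edges = np.histogram(stamps, bins=64, range=(0.0, gate))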

  18. Instrumental Capital

    Directory of Open Access Journals (Sweden)

    Gabriel Valerio

    2007-07-01

    Full Text Available Throughout the history of humankind, since our first ancestors, tools have represented a means to reach objectives which might otherwise have seemed impossible. In the so-called New Economy, where tangible assets appear to be losing their role as the core element of value creation in favour of knowledge, tools have remained at man's side in his daily work. In this article, the author's objective is to describe, in a simple manner, the importance of managing the organization's set of tools and instruments (Instrumental Capital). The characteristic conditions of this New Economy, the way Knowledge Management deals with these new conditions, and the sub-processes that support the management of Instrumental Capital are described.

  19. Frames of exponentials: lower frame bounds for finite subfamilies, and approximation of the inverse frame operator

    DEFF Research Database (Denmark)

    Christensen, Ole; Lindner, Alexander M

    2001-01-01

    We give lower frame bounds for finite subfamilies of a frame of exponentials $\{e^{i\lambda_k(\cdot)}\}_{k\in\mathbb{Z}}$ in $L^2(-\pi,\pi)$. We also present a method for approximation of the inverse frame operator corresponding to $\{e^{i\lambda_k(\cdot)}\}_{k\in\mathbb{Z}}$, where knowledge of the frame bounds for...
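
    For context (this is the standard definition, not a result of the paper): a family $\{e^{i\lambda_k(\cdot)}\}_{k\in\mathbb{Z}}$ is a frame for $L^2(-\pi,\pi)$ with frame bounds $0 < A \le B$ if

      \[
        A\,\|f\|^2 \;\le\; \sum_{k\in\mathbb{Z}} \bigl|\langle f,\, e^{i\lambda_k(\cdot)} \rangle\bigr|^2 \;\le\; B\,\|f\|^2
        \qquad \text{for all } f \in L^2(-\pi,\pi),
      \]

    and the record above concerns lower bounds $A$ that remain valid for finite subfamilies.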

  20. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  1. Innovative instrumentation

    International Nuclear Information System (INIS)

    Anon.

    1983-01-01

    At this year's particle physics conference at Brighton, a parallel session was given over to instrumentation and detector development. While this work is vital to the health of research and its continued progress, its share of prime international conference time is limited. Instrumentation can be innovative three times — first when a new idea is outlined, secondly when it is shown to be feasible, and finally when it becomes productive in a real experiment, amassing useful data rather than operational experience. Hyams' examples showed that it can take a long time for a new idea to filter through these successive stages, if it ever makes it at all

  2. Innovative instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1983-11-15

    At this year's particle physics conference at Brighton, a parallel session was given over to instrumentation and detector development. While this work is vital to the health of research and its continued progress, its share of prime international conference time is limited. Instrumentation can be innovative three times — first when a new idea is outlined, secondly when it is shown to be feasible, and finally when it becomes productive in a real experiment, amassing useful data rather than operational experience. Hyams' examples showed that it can take a long time for a new idea to filter through these successive stages, if it ever makes it at all.

  3. Instrumental aspects

    Directory of Open Access Journals (Sweden)

    Qureshi Navid

    2017-01-01

    Full Text Available Every neutron scattering experiment requires the choice of a suitable neutron diffractometer (or spectrometer in the case of inelastic scattering) with its optimal configuration in order to accomplish the experimental tasks in the most successful way. Most generally, the compromise between the incident neutron flux and the instrumental resolution has to be considered, which depends on a number of optical devices positioned in the neutron beam path. In this chapter the basic instrumental principles of neutron diffraction will be explained. Examples of different types of experiments and their respective expected results will be shown. Furthermore, the production and use of polarized neutrons will be stressed.

  4. 3D shape measurement for moving scenes using an interlaced scanning colour camera

    International Nuclear Information System (INIS)

    Cao, Senpeng; Cao, Yiping; Lu, Mingteng; Zhang, Qican

    2014-01-01

    A Fourier transform deinterlacing algorithm (FTDA) is proposed to eliminate the blurring and dislocation of the fringe patterns on a moving object captured by an interlaced scanning colour camera in phase measuring profilometry (PMP). Each greyscale fringe frame from the three colour channels of every colour fringe pattern is divided into even and odd field fringes, each of which is processed by FTDA. The six deinterlaced fringe frames from one colour fringe pattern form two sets of three-step phase-shifted greyscale fringes, from which two 3D shapes corresponding to two different moments are reconstructed by PMP within a frame period. Theoretically, the deinterlaced fringe is identical to the exact frame fringe at the same moment. Simulations and experiments show the method's feasibility and validity. The method doubles the time resolution, maintains the precision of traditional phase measuring profilometry, and has potential applications in 3D shape measurement of moving and online objects. (paper)
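
    For context, the standard three-step phase-measuring profilometry relation assumed here (phase shifts of $2\pi/3$ between the three fringe sets; the paper's exact algorithm may differ in detail) recovers the wrapped phase from the three intensities $I_1, I_2, I_3$ as

      \[
        \varphi \;=\; \arctan\!\left(\frac{\sqrt{3}\,(I_1 - I_3)}{2 I_2 - I_1 - I_3}\right),
      \]

    after which the 3D shape follows from the phase-to-height mapping of the PMP setup.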

  5. Another frame, another game? : Explaining framing effects in economic games

    NARCIS (Netherlands)

    Gerlach, Philipp; Jaeger, B.; Hopfensitz, A.; Lori, E.

    2016-01-01

    Small changes in the framing of games (i.e., the way in which the game situation is described to participants) can have large effects on players' choices. For example, referring to a prisoner's dilemma game as the "Community Game" as opposed to the "Wall Street Game" can double the cooperation rate

  6. C-connected frame congruences

    Directory of Open Access Journals (Sweden)

    Dharmanand Baboolal

    2017-01-01

    Full Text Available We discuss the congruences $\theta$ that are connected as elements of the (totally disconnected) congruence frame $CF L$, and show that they are in a one-to-one correspondence with the completely prime elements of $L$, giving an explicit formula. Then we investigate those frames $L$ with enough connected congruences to cover the whole of $CF L$. They are, among others, shown to be $T_D$-spatial; characteristics for some special cases (Boolean, linear, scattered and Noetherian) are presented.

  7. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    Full Text Available This work describes the assembling of a stereo pinhole camera for capturing stereo-pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it could be handcrafted with practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and currently. Aspects of optics and geometry involved in the building of the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed by using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
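
    The relative-depth estimate mentioned above follows, for an idealized parallel stereo pair (an assumption on our part; the article's own derivation may differ), from the classical triangulation relation

      \[
        Z \;=\; \frac{f\,B}{d},
      \]

    where $f$ is the pinhole-to-film distance, $B$ the baseline between the two pinholes and $d$ the disparity of a feature between the left and right images.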

  8. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
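
    A minimal sketch of one common single-camera calibration workflow with a planar chessboard target, using OpenCV. The file names and pattern size are illustrative assumptions; the paper's Matlab implementation may differ.

      # Sketch: estimate intrinsic and extrinsic parameters from chessboard images.
      import glob
      import numpy as np
      import cv2

      pattern = (9, 6)                                  # inner corners of the chessboard
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # 1 unit per square

      obj_points, img_points = [], []
      for fname in glob.glob("calib_*.png"):            # hypothetical image file names
          gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              obj_points.append(objp)
              img_points.append(corners)

      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
          obj_points, img_points, gray.shape[::-1], None, None)
      print("reprojection RMS:", rms)
      print("intrinsics:\n", K)                         # focal lengths and principal point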

  9. Ultrafast streak and framing technique for the observation of laser driven shock waves in transparent solid targets

    International Nuclear Information System (INIS)

    Van Kessel, C.G.M.; Sachsenmaier, P.; Sigel, R.

    1975-01-01

    Shock waves driven by laser ablation in plane transparent plexiglass and solid hydrogen targets have been observed with streak and framing techniques using a high speed image converter camera, and a dye laser as a light source. The framing pictures have been made by mode locking the dye laser and using a wide streak slit. In both materials a growing hemispherical shock wave is observed with the maximum velocity at the onset of laser radiation. (author)

  10. Surgical Instrument

    NARCIS (Netherlands)

    Dankelman, J.; Horeman, T.

    2009-01-01

    The present invention relates to a surgical instrument for minimally invasive surgery, comprising a handle, a shaft and an actuating part, characterised by a gastight cover surrounding the shaft, wherein the cover is provided with a coupler that has a feed-through opening with a lockable seal.

  11. Weather Instruments.

    Science.gov (United States)

    Brantley, L. Reed, Sr.; Demanche, Edna L.; Klemm, E. Barbara; Kyselka, Will; Phillips, Edwin A.; Pottenger, Francis M.; Yamamoto, Karen N.; Young, Donald B.

    This booklet presents some activities to measure various weather phenomena. Directions for constructing a weather station are included. Instruments including rain gauges, thermometers, wind vanes, wind speed devices, humidity devices, barometers, atmospheric observations, a dustfall jar, sticky-tape can, detection of gases in the air, and pH of…

  12. Developments in analytical instrumentation

    Science.gov (United States)

    Petrie, G.

    The situation regarding photogrammetric instrumentation has changed quite dramatically over the last 2 or 3 years with the withdrawal of most analogue stereo-plotting machines from the market place and their replacement by analytically based instrumentation. While there have been few new developments in the field of comparators, there has been an explosive development in the area of small, relatively inexpensive analytical stereo-plotters based on the use of microcomputers. In particular, a number of new instruments have been introduced by manufacturers who mostly have not been associated previously with photogrammetry. Several innovative concepts have been introduced in these small but capable instruments, many of which are aimed at specialised applications, e.g. in close-range photogrammetry (using small-format cameras); for thematic mapping (by organisations engaged in environmental monitoring or resources exploitation); for map revision, etc. Another innovative and possibly significant development has been the production of conversion kits to convert suitable analogue stereo-plotting machines such as the Topocart, PG-2 and B-8 into fully fledged analytical plotters. The larger and more sophisticated analytical stereo-plotters are mostly being produced by the traditional mainstream photogrammetric systems suppliers with several new instruments and developments being introduced at the top end of the market. These include the use of enlarged photo stages to handle images up to 25 × 50 cm format; the complete integration of graphics workstations into the analytical plotter design; the introduction of graphics superimposition and stereo-superimposition; the addition of correlators for the automatic measurement of height, etc. The software associated with this new analytical instrumentation is now undergoing extensive re-development with the need to supply photogrammetric data as input to the more sophisticated G.I.S. systems now being installed by clients, instead

  13. Digital quality control of the camera computer interface

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.

    1983-01-01

    A brief description is given of how the gamma camera-computer interface works and what kind of errors can occur. Quality control tests of the interface are then described which include 1) tests of static performance e.g. uniformity, linearity, 2) tests of dynamic performance e.g. basic timing, interface count-rate, system count-rate, 3) tests of special functions e.g. gated acquisition, 4) tests of the gamma camera head, and 5) tests of the computer software. The tests described are mainly acceptance and routine tests. Many of the tests discussed are those recommended by an IAEA Advisory Group for inclusion in the IAEA control schedules for nuclear medicine instrumentation. (U.K.)

  14. Status of the Dark Energy Survey Camera (DECam) Project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; Abbott, Timothy M.C.; Angstadt, Robert; Annis, Jim; Antonik, Michelle, L.; Bailey, Jim; Ballester, Otger.; Bernstein, Joseph P.; Bernstein, Rebbeca; Bonati, Marco; Bremer, Gale; /Fermilab /Cerro-Tololo InterAmerican Obs. /ANL /Texas A-M /Michigan U. /Illinois U., Urbana /Ohio State U. /University Coll. London /LBNL /SLAC /IFAE

    2012-06-29

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  15. Star camera aspect system suitable for use in balloon experiments

    International Nuclear Information System (INIS)

    Hunter, S.D.; Baker, R.G.

    1985-01-01

    A balloon-borne experiment containing a star camera aspect system was designed, built, and flown. This system was designed to provide offset corrections to the magnetometer and inclinometer readings used to control an azimuth and elevation pointed experiment. The camera is controlled by a microprocessor, including commandable exposure and noise rejection threshold, as well as formatting the data for telemetry to the ground. As a background program, the microprocessor runs the aspect program to analyze a fraction of the pictures taken so that aspect information and offset corrections are available to the experiment in near real time. The analysis consists of pattern recognition of the star field with a star catalog in ROM memory and a least squares calculation. The performance of this system in ground based tests is described. It is part of the NASA/GSFC High Energy Gamma-Ray Balloon Instrument (2)

  16. Picosecond X-ray streak camera dynamic range measurement

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.; Gontier, D.; Raimbourg, J.; Rubbelynck, C.; Trosseille, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Fronty, J.-P.; Goulmy, C. [Photonis SAS, Avenue Roger Roncier, BP 520, 19106 Brive Cedex (France)

    2016-09-15

    Streak cameras are widely used to record the spatio-temporal evolution of laser-induced plasma. A prototype of picosecond X-ray streak camera has been developed and tested by Commissariat à l’Énergie Atomique et aux Énergies Alternatives to answer the Laser MegaJoule specific needs. The dynamic range of this instrument is measured with picosecond X-ray pulses generated by the interaction of a laser beam and a copper target. The required value of 100 is reached only in the configurations combining the slowest sweeping speed and optimization of the streak tube electron throughput by an appropriate choice of high voltages applied to its electrodes.

  17. Status of the Dark Energy Survey Camera (DECam) project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; McLean, Ian S.; Ramsay, Suzanne K.; Abbott, Timothy M. C.; Angstadt, Robert; Takami, Hideki; Annis, Jim; Antonik, Michelle L.; Bailey, Jim; Ballester, Otger; Bernstein, Joseph P.; Bernstein, Rebecca A.; Bonati, Marco; Bremer, Gale; Briones, Jorge; Brooks, David; Buckley-Geer, Elizabeth J.; Campa, Juila; Cardiel-Sas, Laia; Castander, Francisco; Castilla, Javier; Cease, Herman; Chappa, Steve; Chi, Edward C.; da Costa, Luis; DePoy, Darren L.; Derylo, Gregory; de Vincente, Juan; Diehl, H. Thomas; Doel, Peter; Estrada, Juan; Eiting, Jacob; Elliott, Anne E.; Finley, David A.; Flores, Rolando; Frieman, Josh; Gaztanaga, Enrique; Gerdes, David; Gladders, Mike; Guarino, V.; Gutierrez, G.; Grudzinski, Jim; Hanlon, Bill; Hao, Jiangang; Holland, Steve; Honscheid, Klaus; Huffman, Dave; Jackson, Cheryl; Jonas, Michelle; Karliner, Inga; Kau, Daekwang; Kent, Steve; Kozlovsky, Mark; Krempetz, Kurt; Krider, John; Kubik, Donna; Kuehn, Kyler; Kuhlmann, Steve E.; Kuk, Kevin; Lahav, Ofer; Langellier, Nick; Lathrop, Andrew; Lewis, Peter M.; Lin, Huan; Lorenzon, Wolfgang; Martinez, Gustavo; McKay, Timothy; Merritt, Wyatt; Meyer, Mark; Miquel, Ramon; Morgan, Jim; Moore, Peter; Moore, Todd; Neilsen, Eric; Nord, Brian; Ogando, Ricardo; Olson, Jamieson; Patton, Kenneth; Peoples, John; Plazas, Andres; Qian, Tao; Roe, Natalie; Roodman, Aaron; Rossetto, B.; Sanchez, E.; Soares-Santos, Marcelle; Scarpine, Vic; Schalk, Terry; Schindler, Rafe; Schmidt, Ricardo; Schmitt, Richard; Schubnell, Mike; Schultz, Kenneth; Selen, M.; Serrano, Santiago; Shaw, Terri; Simaitis, Vaidas; Slaughter, Jean; Smith, R. Christopher; Spinka, Hal; Stefanik, Andy; Stuermer, Walter; Sypniewski, Adam; Talaga, R.; Tarle, Greg; Thaler, Jon; Tucker, Doug; Walker, Alistair R.; Weaverdyck, Curtis; Wester, William; Woods, Robert J.; Worswick, Sue; Zhao, Allen

    2012-09-24

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  18. Sparsity and spectral properties of dual frames

    DEFF Research Database (Denmark)

    Krahmer, Felix; Kutyniok, Gitta; Lemvig, Jakob

    2013-01-01

    We study sparsity and spectral properties of dual frames of a given finite frame. We show that any finite frame has a dual with no more than $n^2$ non-vanishing entries, where $n$ denotes the ambient dimension, and that for most frames no sparser dual is possible. Moreover, we derive an expressio...
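
    For reference, the dual-frame relation behind this sparsity result can be stated as follows (a standard formulation; the notation here is an assumption rather than a quotation from the record):

      % (f_k)_{k=1}^{N} a frame for an n-dimensional space; (g_k) is a dual frame if
      \[
        x \;=\; \sum_{k=1}^{N} \langle x, g_k \rangle\, f_k \qquad \text{for every vector } x .
      \]
      % The result above concerns duals whose synthesis matrix has at most n^2 non-vanishing entries.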

  19. Some equalities and inequalities for fusion frames

    OpenAIRE

    Guo, Qianping; Leng, Jinsong; Li, Houbiao

    2016-01-01

    Fusion frames have some properties similar to those of frames in Hilbert spaces, but not all of their properties are similar. Some authors have established some equalities and inequalities for conventional frames. In this paper, we give some equalities and inequalities for fusion frames. Our results generalize and improve the remarkable results which have been obtained by Balan, Casazza and G?vruta etc.

  20. 49 CFR 393.201 - Frames.

    Science.gov (United States)

    2010-10-01

    Title 49 (Transportation), Parts and Accessories Necessary for Safe Operation — Frames, Cab and Body Components, Wheels, Steering, and Suspension Systems. § 393.201 Frames. (a) The frame or chassis of each commercial motor vehicle shall not be cracked, loose, sagging or...

  1. Key Frame Extraction in the Summary Space.

    Science.gov (United States)

    Li, Xuelong; Zhao, Bin; Lu, Xiaoqiang; Xuelong Li; Bin Zhao; Xiaoqiang Lu; Lu, Xiaoqiang; Li, Xuelong; Zhao, Bin

    2018-06-01

    Key frame extraction is an efficient way to create a video summary, which helps users obtain a quick comprehension of the video content. Generally, the key frames should be representative of the video content and, at the same time, diverse, to reduce redundancy. Based on the assumption that the video data lie near a subspace of a high-dimensional space, a new approach, named key frame extraction in the summary space, is proposed in this paper. The proposed approach aims to find the representative frames of the video and filter out similar frames from the representative frame set. First, the video data are mapped to a high-dimensional space, named the summary space. Then, a new representation is learned for each frame by analyzing the intrinsic structure of the summary space. Specifically, the learned representation reflects the representativeness of the frame and is used to select representative frames. Next, a perceptual hash algorithm is employed to measure the similarity of representative frames. As a result, the key frame set is obtained after filtering out similar frames from the representative frame set. Finally, the video summary is constructed by arranging the key frames in temporal order. Additionally, a ground truth, created by filtering out similar frames from human-created summaries, is used to evaluate the quality of the video summary. Compared with several traditional approaches, the experimental results on 80 videos from two datasets indicate the superior performance of our approach.
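
    The final filtering step (perceptual hashing of the representative frames) can be illustrated with a short sketch. The average-hash variant, the hash size and the Hamming-distance threshold below are assumptions for illustration; the record does not specify which perceptual hash was used.

      import numpy as np

      def average_hash(frame, hash_size=8):
          # Block-mean downsample a greyscale frame and threshold at its mean.
          h, w = frame.shape
          f = frame[:h - h % hash_size, :w - w % hash_size].astype(float)
          small = f.reshape(hash_size, f.shape[0] // hash_size,
                            hash_size, f.shape[1] // hash_size).mean(axis=(1, 3))
          return (small > small.mean()).ravel()

      def select_key_frames(representative_frames, max_hamming=5):
          # Keep a frame only if its hash differs enough from every frame already kept.
          kept, hashes = [], []
          for idx, frame in enumerate(representative_frames):
              h = average_hash(frame)
              if all(np.count_nonzero(h != k) > max_hamming for k in hashes):
                  kept.append(idx)
                  hashes.append(h)
          return kept   # indices of key frames, already in temporal order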

  2. First Light with a 67-Million-Pixel WFI Camera

    Science.gov (United States)

    1999-01-01

    The newest astronomical instrument at the La Silla observatory is a super-camera with no less than sixty-seven million image elements. It represents the outcome of a joint project between the European Southern Observatory (ESO) , the Max-Planck-Institut für Astronomie (MPI-A) in Heidelberg (Germany) and the Osservatorio Astronomico di Capodimonte (OAC) near Naples (Italy), and was installed at the 2.2-m MPG/ESO telescope in December 1998. Following careful adjustment and testing, it has now produced the first spectacular test images. With a field size larger than the Full Moon, the new digital Wide Field Imager is able to obtain detailed views of extended celestial objects to very faint magnitudes. It is the first of a new generation of survey facilities at ESO with which a variety of large-scale searches will soon be made over extended regions of the southern sky. These programmes will lead to the discovery of particularly interesting and unusual (rare) celestial objects that may then be studied with large telescopes like the VLT at Paranal. This will in turn allow astronomers to penetrate deeper and deeper into the many secrets of the Universe. More light + larger fields = more information! The larger a telescope is, the more light - and hence information about the Universe and its constituents - it can collect. This simple truth represents the main reason for building ESO's Very Large Telescope (VLT) at the Paranal Observatory. However, the information-gathering power of astronomical equipment can also be increased by using a larger detector with more image elements (pixels) , thus permitting the simultaneous recording of images of larger sky fields (or more details in the same field). It is for similar reasons that many professional photographers prefer larger-format cameras and/or wide-angle lenses to the more conventional ones. The Wide Field Imager at the 2.2-m telescope Because of technological limitations, the sizes of detectors most commonly in use in

  3. Framing and misperception in public good experiments

    DEFF Research Database (Denmark)

    Fosgaard, Toke Reinholt; Hansen, Lars Gårn; Wengström, Erik Roland

    2017-01-01

    Earlier studies have found that framing has substantial impact on the degree of cooperation observed in public good experiments. We show that the way the public good game is framed affects misperceptions about the incentives of the game. Moreover, we show that such framing-induced differences in misperceptions are linked to the framing effect on subjects' cooperation behavior. When we do not control for the different levels of misperceptions between frames, we observe a significant framing effect on subjects’ cooperation preferences. However, this framing effect becomes insignificant once we remove...

  4. Collimator changer for scintillation camera

    International Nuclear Information System (INIS)

    Jupa, E.C.; Meeder, R.L.; Richter, E.K.

    1976-01-01

    A collimator changing assembly mounted on the support structure of a scintillation camera is described. A vertical support column is positioned proximate the detector support column, with a plurality of support arms mounted thereon in a rotatable, cantilevered manner at separate vertical positions. Each support arm is adapted to carry one of the plurality of collimators, which are interchangeably mountable on the underside of the detector, and to transport the collimator between a store position remote from the detector and a change position underneath said detector.

  5. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions. It is a device that can be reprogrammed according to user needs. A wireless network can be used for remote monitoring, making it possible to build a robot whose movement can be monitored against a blueprint and whose chosen path can be tracked. These data are sent over the wireless network. For vision, the robot uses a high-resolution camera, which makes it easier for the operator to control the robot and observe the surrounding circumstances.

  6. Media framing and social movements

    NARCIS (Netherlands)

    Vliegenthart, R.; Snow, D.A.; Della Porta, D.; Klandermans, B.; McAdam, D.

    2013-01-01

    In their study of media content, mass communication scholars commonly rely on Entman's (1993: 52) definition of framing: "[selecting] some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal

  7. On framed simple Lie groups

    OpenAIRE

    MINAMI, Haruo

    2016-01-01

    For a compact simple Lie group $G$, we show that the element $[G, \mathcal{L}] \in \pi^S_*(S^0)$ represented by the pair $(G, \mathcal{L})$ is zero, where $\mathcal{L}$ denotes the left invariant framing of $G$. The proof relies on the method of E. Ossa [Topology, 21 (1982), 315–323].

  8. Handedness differences in information framing.

    Science.gov (United States)

    Jasper, John D; Fournier, Candice; Christman, Stephen D

    2014-02-01

    Previous research has shown that strength of handedness predicts differences in sensory illusions, Stroop interference, episodic memory, and beliefs about body image. Recent evidence also suggests handedness differences in the susceptibility to common decision biases such as anchoring and sunk cost. The present paper extends this line of work to attribute framing effects. Sixty-three undergraduates were asked to advise a friend concerning the use of a safe allergy medication during pregnancy. A third of the participants received negatively-framed information concerning the fetal risk of the drug (1-3% chance of having a malformed child); another third received positively-framed information (97-99% chance of having a normal child); and the final third received no counseling information and served as the control. Results indicated that, as predicted, inconsistent (mixed)-handers were more responsive than consistent (strong)-handers to information changes and more readily updated their beliefs. Although not significant, the data also suggested that only inconsistent-handers were affected by information framing. Theoretical implications as well as ongoing work in holistic versus analytic processing, contextual sensitivity, and brain asymmetry will be discussed. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Meta framing and polyphonic structures

    DEFF Research Database (Denmark)

    Pedersen, Karsten

    2017-01-01

    in various ways in BT’s 2012 coverage of a doping case involving Riis. In this article I investigate the way in which BT meta frames itself and its own actions in order to show and underline the seriousness with which BT treats sports journalism. The study is part of a recurring Danish project harvesting...

  10. Frame Rate and Human Vision

    Science.gov (United States)

    Watson, Andrew B.

    2012-01-01

    To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.

  11. Reference frame for Product Configuration

    DEFF Research Database (Denmark)

    Ladeby, Klaes Rohde; Oddsson, Gudmundur Valur

    2011-01-01

    a reference frame for configuration that permits 1) a more precise understanding of a configuration system, 2) an understanding of how the configuration system relates to other systems, and 3) a definition of the basic concepts in configuration. The total configuration system, together with the definition...

  12. Plasma physics in noninertial frames

    International Nuclear Information System (INIS)

    Thyagaraja, A.; McClements, K. G.

    2009-01-01

    Equations describing the nonrelativistic motion of a charged particle in an arbitrary noninertial reference frame are derived from the relativistically invariant form of the particle action. It is shown that the equations of motion can be written in the same form in inertial and noninertial frames, with the effective electric and magnetic fields in the latter modified by inertial effects associated with centrifugal and Coriolis accelerations. These modifications depend on the particle charge-to-mass ratio, and also the vorticity, specific kinetic energy, and compressibility of the frame flow. The Newton-Lorentz, Vlasov, and Fokker-Planck equations in such a frame are derived. Reduced models such as gyrokinetic, drift-kinetic, and fluid equations are then derivable from these equations in the appropriate limits, using standard averaging procedures. The results are applied to tokamak plasmas rotating about the machine symmetry axis with a nonrelativistic but otherwise arbitrary toroidal flow velocity. Astrophysical applications of the analysis are also possible since the power of the action principle is such that it can be used to describe relativistic flows in curved spacetime.
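
    As a concrete special case of the inertial modifications described above, consider a frame in rigid rotation at constant angular velocity (the record treats an arbitrary noninertial frame flow; the rigid-rotation formulas below are the standard textbook reduction, stated here only for orientation):

      \[
        m\,\frac{d\mathbf{v}'}{dt}
          \;=\; q\left(\mathbf{E}' + \mathbf{v}'\times\mathbf{B}'\right)
          \;-\; 2m\,\boldsymbol{\Omega}\times\mathbf{v}'
          \;-\; m\,\boldsymbol{\Omega}\times\left(\boldsymbol{\Omega}\times\mathbf{r}\right),
      \]
      % which keeps the Newton-Lorentz form with the effective fields
      \[
        \mathbf{B}_{\mathrm{eff}} \;=\; \mathbf{B}' + \frac{2m}{q}\,\boldsymbol{\Omega},
        \qquad
        \mathbf{E}_{\mathrm{eff}} \;=\; \mathbf{E}' - \frac{m}{q}\,\boldsymbol{\Omega}\times\left(\boldsymbol{\Omega}\times\mathbf{r}\right),
      \]
      % so the inertial corrections scale with the mass-to-charge ratio, as stated above.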

  13. Framing the future of fracking

    NARCIS (Netherlands)

    Metze, Tamara

    2017-01-01

    Hydraulic fracturing is a technology developed to improve and increase the production of natural gas. In many countries, including the Netherlands, it has caused environmental controversies. In these controversies, 'futurity framing' may open up debates for alternative paradigms such as

  14. Frames and generalized shift-invariant systems

    DEFF Research Database (Denmark)

    Christensen, Ole

    2004-01-01

    With motivation from the theory of Hilbert-Schmidt operators we review recent topics concerning frames in L^2(R) and their duals. Frames are generalizations of orthonormal bases in Hilbert spaces. As for an orthonormal basis, a frame allows each element in the underlying Hilbert space to be written as an unconditionally convergent infinite linear combination of the frame elements; however, in contrast to the situation for a basis, the coefficients might not be unique. We present the basic facts from frame theory and the motivation for the fact that most recent research concentrates on tight frames or dual frame pairs rather than general frames and their canonical dual. The corresponding results for Gabor frames and wavelet frames are discussed in detail.
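
    The defining inequality and the canonical reconstruction formula referred to here are, in standard notation (assumed, not quoted from the record):

      % (f_k) is a frame for a Hilbert space H if there exist bounds 0 < A <= B with
      \[
        A\,\|f\|^2 \;\le\; \sum_{k}\bigl|\langle f, f_k\rangle\bigr|^2 \;\le\; B\,\|f\|^2
        \qquad \text{for all } f \in H ,
      \]
      % and every f then admits the (generally non-unique) unconditionally convergent expansion
      \[
        f \;=\; \sum_{k}\bigl\langle f, S^{-1}f_k\bigr\rangle\, f_k ,
        \qquad
        Sf := \sum_{k}\langle f, f_k\rangle\, f_k ,
      \]
      % where S is the frame operator and (S^{-1}f_k) the canonical dual frame.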

  15. On the structures of Grassmannian frames

    OpenAIRE

    Haas IV, John I.; Casazza, Peter G.

    2017-01-01

    A common criterion in the design of finite Hilbert space frames is minimal coherence, as this leads to error reduction in various signal processing applications. Frames that achieve minimal coherence relative to all unit-norm frames are called Grassmannian frames, a class which includes the well-known equiangular tight frames. However, the notion of "coherence minimization" varies according to the constraints of the ambient optimization problem, so there are other types of "minimally coherent...
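
    The coherence being minimized, and the bound that equiangular tight frames attain, can be written as follows (standard definitions; the notation is an assumption for illustration):

      % Coherence of a unit-norm frame (f_k)_{k=1}^{N} in an n-dimensional space,
      % together with the Welch lower bound:
      \[
        \mu \;=\; \max_{i \neq j}\bigl|\langle f_i, f_j\rangle\bigr|
        \;\ge\; \sqrt{\frac{N - n}{n\,(N - 1)}} .
      \]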

  16. Counting neutrons with a commercial S-CMOS camera

    Directory of Open Access Journals (Sweden)

    Patrick Van Esch

    2018-01-01

    Full Text Available It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, and off line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We will report on this ongoing work which is a result from the collaboration between ESS Bilbao and the ILL. The main progress to be reported is situated on the level of the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, it is necessary that all the processing happens on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it in a reasonable “neutron impact” data flow like from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather important, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with and results will be shown for different setups improving upon the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be at the same time efficient in their recognition of neutron signals, in their rejection of noise signals (internal and external to the camera but also have to be simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we will give an overview of the part of the road that has
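
    The on-line recognition step amounts to picking out small bright blobs against the camera noise. The sketch below is a minimal off-line analogue of that idea (global noise estimate, fixed threshold, connected-component labelling); the actual FPGA implementation and its thresholds are not described in the record, so all parameters here are assumptions.

      import numpy as np
      from scipy import ndimage

      def count_flashes(frame, dark, n_sigma=5.0, min_pixels=2):
          # Subtract a per-pixel dark/background estimate and threshold in noise sigmas.
          signal = frame.astype(float) - dark
          sigma = signal.std()                     # crude global noise estimate
          mask = signal > n_sigma * sigma
          labels, n = ndimage.label(mask)          # connected bright blobs
          if n == 0:
              return 0, []
          sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          keep = np.flatnonzero(sizes >= min_pixels) + 1
          if len(keep) == 0:
              return 0, []
          centroids = ndimage.center_of_mass(signal, labels, keep.tolist())
          return len(keep), centroids              # candidate neutron impacts and positions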

  17. Counting neutrons with a commercial S-CMOS camera

    Science.gov (United States)

    Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega

    2018-01-01

    It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, and off line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We will report on this ongoing work which is a result from the collaboration between ESS Bilbao and the ILL. The main progress to be reported is situated on the level of the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, it is necessary that all the processing happens on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it in a reasonable "neutron impact" data flow like from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather important, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with and results will be shown for different setups improving upon the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be at the same time efficient in their recognition of neutron signals, in their rejection of noise signals (internal and external to the camera) but also have to be simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we will give an overview of the part of the road that has already been walked.

  18. Performance and quality control of nuclear medicine instrumentation

    International Nuclear Information System (INIS)

    Paras, P.

    1981-01-01

    The status and the recent developments of nuclear medicine instrumentation performance, with an emphasis on gamma-camera performance, are discussed as the basis for quality control. New phantoms and techniques for the measurement of gamma-camera performance parameters are introduced and their usefulness for quality control is discussed. Tests and procedures for dose calibrator quality control are included. Also, the principles of quality control, tests, equipment and procedures for each type of instrument are reviewed, and minimum requirements for an effective quality assurance programme for nuclear medicine instrumentation are suggested. (author)

  19. Nuclear instrumentation

    International Nuclear Information System (INIS)

    Weill, Jacky; Fabre, Rene.

    1981-01-01

    This article sums up the Research and Development effort at present being carried out in the five following fields of applications: Health physics and Radioprospection, Control of nuclear reactors, Plant control (preparation and reprocessing of the fuel, testing of nuclear substances, etc.), Research laboratory instrumentation, Detectors. It also situates French industrial activity by means of an estimate of the French market, production, and trade flows with other countries [fr

  20. Divided Instruments

    Science.gov (United States)

    Chapman, A.; Murdin, P.

    2000-11-01

    Although the division of the zodiac into 360° probably derives from Egypt or Assyria around 2000 BC, there is no surviving evidence of Mesopotamian cultures embodying this division into a mathematical instrument. Almost certainly, however, it was from Babylonia that the Greek geometers learned of the 360° circle, and by c. 80 BC they had incorporated it into that remarkably elaborate device gener...

  1. Instrumentation development

    International Nuclear Information System (INIS)

    Anon.

    1976-01-01

    Areas being investigated for instrumentation improvement during low-level pollution monitoring include laser opto-acoustic spectroscopy, x-ray fluorescence spectroscopy, optical fluorescence spectroscopy, liquid crystal gas detectors, advanced forms of atomic absorption spectroscopy, electro-analytical chemistry, and mass spectroscopy. Emphasis is also directed toward development of physical methods, as opposed to conventional chemical analysis techniques for monitoring these trace amounts of pollution related to energy development and utilization

  2. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the bidimensional image sensor to record the digital hologram, plays a key role on the performance of this imaging technique; the larger the size of the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to be explored. The DSLR cameras are a wide, commercial, and available option that in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in the DSLR cameras, with their RGB pixel distribution, the sampling of information is different to the sampling in monochrome cameras usually employed in DH. This fact has implications in their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem reported by different authors of object replication. Simulations of DH using monochromatic and DSLR cameras are presented and a theoretical deduction for the replication problem using the Fourier theory is also shown. Experimental results of DH implementation using a DSLR camera show the replication problem.
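
    The replication problem has a simple Fourier explanation: the samples of a single colour plane of a Bayer sensor lie on a sparser lattice, and sampling on a sparser lattice replicates the object spectrum. A one-dimensional toy illustration of that statement is sketched below (the signal, grid size and factor-of-two mask are arbitrary choices for the demonstration, not the configuration used in the record):

      import numpy as np

      N = 256
      x = np.exp(-0.5 * ((np.arange(N) - 80) / 6.0) ** 2)   # a smooth "object" term
      mask = np.zeros(N)
      mask[::2] = 1.0                                        # keep every 2nd sample (one colour plane)

      X = np.fft.fft(x)
      X_masked = np.fft.fft(x * mask)
      # Spectrum of the masked signal = half the original spectrum plus a copy
      # shifted by N/2 samples: the "replicated object" seen in the reconstruction.
      print(np.allclose(X_masked, 0.5 * (X + np.roll(X, N // 2))))   # True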

  3. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks is getting essential for video surveillance. The tasks of tracking human over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potentials for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  4. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting becomes a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach based on a rough analysis of the imaging systems is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
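
    A minimal stand-in for the camera-to-reference mapping is a least-squares affine colour map fitted on the tile measurements. The record describes a neural-network mapping; the linear version below is only a simplified sketch of the same idea, and the variable names and 0-255 range are assumptions.

      import numpy as np

      def fit_color_map(tile_rgb_camera, tile_rgb_reference):
          # Least-squares affine map: reference ~= [camera, 1] @ M, fitted on colour-tile data.
          A = np.column_stack([tile_rgb_camera, np.ones(len(tile_rgb_camera))])   # (N, 4)
          M, *_ = np.linalg.lstsq(A, tile_rgb_reference, rcond=None)              # (4, 3)
          return M

      def apply_color_map(image_rgb, M):
          # Apply the fitted map to every pixel of an image from the same camera.
          flat = image_rgb.reshape(-1, 3).astype(float)
          mapped = np.column_stack([flat, np.ones(len(flat))]) @ M
          return np.clip(mapped, 0, 255).reshape(image_rgb.shape)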

  5. Instrumentation maintenance

    International Nuclear Information System (INIS)

    Mack, D.A.

    1976-09-01

    It is essential to any research activity that accurate and efficient measurements be made for the experimental parameters under consideration for each individual experiment or test. Satisfactory measurements in turn depend upon having the necessary instruments and the capability of ensuring that they are performing within their intended specifications. This latter requirement can only be achieved by providing an adequate maintenance facility, staffed with personnel competent to understand the problems associated with instrument adjustment and repair. The Instrument Repair Shop at the Lawrence Berkeley Laboratory is designed to achieve this end. The organization, staffing and operation of this system is discussed. Maintenance policy should be based on studies of (1) preventive vs. catastrophic maintenance, (2) records indicating when equipment should be replaced rather than repaired and (3) priorities established to indicate the order in which equipment should be repaired. Upon establishing a workable maintenance policy, the staff should be instructed so that they may provide appropriate scheduled preventive maintenance, calibration and corrective procedures, and emergency repairs. The education, training and experience of the maintenance staff is discussed along with the organization for an efficient operation. The layout of the various repair shops is described in the light of laboratory space and financial constraints

  6. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  7. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    Science.gov (United States)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

    This work employs the complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and rolling shutter, can be utilized to perform the image scan without the difficulty of translating the receiver above the target, as traditional LLS imaging systems do. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling-shutter operation of the CMOS camera. The fan-beam lasers were turned on/off to illuminate the narrow zones on the target in good correspondence to the exposure lines during the rolling procedure of the camera's electronic shutter. The frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and nonscanning images shows that contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
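
    The ROI-scanning mode can be pictured as the loop sketched below, in which each of the five sub-areas is illuminated and read out in turn and the strips are reassembled into one frame. The camera and laser driver objects and their methods are hypothetical placeholders (the record does not name the SDK used), so this is only a schematic of the acquisition sequence.

      import numpy as np

      def acquire_scanned_frame(camera, laser, height, width, n_strips=5):
          # `camera` and `laser` are hypothetical driver objects; set_roi, grab and
          # select_line stand in for whatever vendor SDK calls are actually used.
          strip_h = height // n_strips
          frame = np.zeros((strip_h * n_strips, width), dtype=np.uint16)
          for k in range(n_strips):
              laser.select_line(k)                                   # illuminate only the k-th zone
              camera.set_roi(x=0, y=k * strip_h, width=width, height=strip_h)
              frame[k * strip_h:(k + 1) * strip_h, :] = camera.grab()
          return frame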

  8. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    Full Text Available This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files in CompactFlash cards. The second-order transformation was used to align the color and NIR images to achieve subpixel alignment in four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
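
    The band-to-band registration step is a least-squares second-order polynomial warp. A minimal sketch is shown below; it assumes matched control points between the colour and NIR images are already available, and the function names are placeholders rather than the software actually used.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def poly2_terms(x, y):
          # Second-order polynomial basis: 1, x, y, x^2, xy, y^2
          return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

      def fit_inverse_warp(color_pts, nir_pts):
          # Least-squares fit of the mapping colour-image (x, y) -> NIR-image (x, y),
          # estimated from matched control points between the two bands.
          A = poly2_terms(color_pts[:, 0], color_pts[:, 1])
          coef_x, *_ = np.linalg.lstsq(A, nir_pts[:, 0], rcond=None)
          coef_y, *_ = np.linalg.lstsq(A, nir_pts[:, 1], rcond=None)
          return coef_x, coef_y

      def align_nir_to_color(nir, shape, coef_x, coef_y):
          # Resample the NIR band onto the colour-image grid using the fitted warp.
          yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
          A = poly2_terms(xx.ravel().astype(float), yy.ravel().astype(float))
          sample_x = (A @ coef_x).reshape(shape)
          sample_y = (A @ coef_y).reshape(shape)
          return map_coordinates(nir, [sample_y, sample_x], order=1, mode="nearest")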

  9. A passive terahertz video camera based on lumped element kinetic inductance detectors

    International Nuclear Information System (INIS)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian; Wood, Ken; Grainger, William; Mauskopf, Philip; Spencer, Locke

    2016-01-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  10. A passive terahertz video camera based on lumped element kinetic inductance detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Wood, Ken [QMC Instruments Ltd., School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Grainger, William [Rutherford Appleton Laboratory, STFC, Swindon SN2 1SZ (United Kingdom); Mauskopf, Philip [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); School of Earth Science and Space Exploration, Arizona State University, Tempe, Arizona 85281 (United States); Spencer, Locke [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada)

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  11. Face antispoofing based on frame difference and multilevel representation

    Science.gov (United States)

    Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad

    2017-07-01

    Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made by fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., print photo or replay attacks) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, where several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of individual frames. We also used a multilevel representation that divides the frame difference into multiple multiblocks. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) have then been applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: CASIA Face Antispoofing database, Replay-Attack database, and MSU Mobile Face Spoofing database. The proposed approach outperforms the other state-of-the-art methods in different media and quality metrics.
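
    One block of the pipeline described above (frame difference, multiblock division, a texture descriptor per block, then a classifier) can be sketched as follows. The sketch uses uniform LBP from scikit-image and an SVM from scikit-learn; the block grid, LBP parameters and classifier settings are assumptions, and the Fisher-score feature ranking is omitted for brevity.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import SVC

      def frame_diff_features(prev_frame, frame, grid=(3, 3), P=8, R=1.0):
          # LBP histograms over a multiblock grid of the absolute frame difference.
          diff = np.abs(frame.astype(float) - prev_frame.astype(float))
          h, w = diff.shape
          feats = []
          for i in range(grid[0]):
              for j in range(grid[1]):
                  block = diff[i * h // grid[0]:(i + 1) * h // grid[0],
                               j * w // grid[1]:(j + 1) * w // grid[1]]
                  lbp = local_binary_pattern(block, P, R, method="uniform")
                  hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
                  feats.append(hist)
          return np.concatenate(feats)

      # Hypothetical training data: consecutive-frame pairs with labels 0 = fake, 1 = real.
      # X = np.array([frame_diff_features(p, f) for p, f in frame_pairs])
      # clf = SVC(kernel="rbf").fit(X, labels)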

  12. Artificial frame filling using adaptive neural fuzzy inference system for particle image velocimetry dataset

    Science.gov (United States)

    Akdemir, Bayram; Doǧan, Sercan; Aksoy, Muharrem H.; Canli, Eyüp; Özgören, Muammer

    2015-03-01

    Liquid behaviors are important in many areas, especially in mechanical engineering. A high-speed camera is one way to observe and study liquid behavior. The camera traces dust or colored markers travelling in the liquid and takes as many pictures per second as possible. Every image carries a large amount of data because of its resolution. For fast liquid velocities it is not easy to evaluate the images or to produce a fluent frame sequence from them. Artificial intelligence is widely used in science to solve nonlinear problems, and the adaptive neuro-fuzzy inference system (ANFIS) is a common technique in the literature. Any particle velocity in a liquid has two velocity components and their derivatives. An adaptive neuro-fuzzy inference system has been used to create an artificial frame between the previous and the following frame, offline. It uses velocities and vorticities to create a crossing-point vector between the previous and the following points. In this study, ANFIS has been used to fill virtual frames in among the real frames in order to improve image continuity, which makes the images much more understandable at chaotic or high-vorticity points. After the ANFIS step, the image dataset doubles in size, with frames alternating between virtual and real. The obtained success is evaluated using R2 testing and the mean squared error. R2 testing gives a statistical measure of similarity; values of 0.82, 0.81, 0.85 and 0.8 were obtained for the velocities and their derivatives, respectively.

  13. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    International Nuclear Information System (INIS)

    Benitez, D; Gaydecki, P; Quek, S; Torres, V

    2007-01-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory comprises a 2D array of 33 x 33 solid state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master-controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research

  14. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    Science.gov (United States)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2007-07-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory comprises a 2D array of 33 x 33 solid state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master-controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  15. Capstan to be used with a camera for rapid cycling bubble chambers

    CERN Document Server

    CERN PhotoLab

    1978-01-01

    To achieve the high speed film transport required for high camera rate (15 and 25 Hz, for LEBC and RCBC respectively) a new drive mechanism was developed, which moved the frames (up to about 110 mm x 90 mm) by rotating a capstan stepwise through 60 deg, to bring the next face into position for photography (see also photo 7801001). Details are given for instance in J.L. Benichou et al. Nucl. Instrum. Methods 190 (1981) 487

  16. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed times, so a modified Early-Late loop is used for frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without having a high influence on system performance. The verification of the proposed algorithm's functionality has been performed in a development environment using universal software radio peripheral (USRP) hardware.
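
    The core measurement is a sliding-window variance of the received samples, read out at an early and a late offset so that their difference can steer the frame-position estimate. The sketch below illustrates that early-late idea in a generic form; it is not claimed to reproduce the exact metric or parameters of the paper.

      import numpy as np

      def sliding_variance(r, win):
          # Variance of the received complex samples over a sliding window of length `win`.
          k = np.ones(win) / win
          mean_power = np.convolve(np.abs(r) ** 2, k, mode="valid")
          mean_r = np.convolve(r, k, mode="valid")
          return mean_power - np.abs(mean_r) ** 2

      def early_late_error(var, pos, delta):
          # Early-late discriminator on the variance metric: the sign of the difference
          # indicates which way to move the current frame-position estimate `pos`.
          return var[pos - delta] - var[pos + delta]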

  17. Predicting the Strength of Online News Frames

    Directory of Open Access Journals (Sweden)

    Hrvoje Jakopović

    2017-10-01

    Full Text Available Framing theory is one of the most significant approaches to understanding media and their potential impact on publics. Leaving aside that fact, the author finds that publicity effects seem to be dispersed and difficult to catch for public relations. This article employs a specific research design, which could be applied to public relations practice, namely with a view to observing correlations between specific media frames and individual frames. The approach is based on the typology of news frames. The author attributes negative, positive and neutral determinants to the types of frames in his empirical research. Online news regarding three transport organizations and the accompanying user comments (identified as negative, positive and neutral are analysed by means of the method of content and sentiment analysis. The author recognizes user comments and reviews as individual frames that take part in the creation of online image. Furthermore, he identifies the types of media frames as well as individual frames manifested as image, and undertakes correlation research in order to establish their prediction potential. The results expose the most frequently used types of media frames concerning the transport domain. The media are keen to report through the attribution of responsibility frame, and after that, through the economic frame and the conflict frame, but, on the other hand, they tend to neglect the human interest frame and the morality frame. The results show that specific types of news frames enable better prediction of user reactions. The economic frame and the human interest frame therefore represent the most predictable types of frame.

  18. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    Science.gov (United States)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D to 3D video conversion (2D-to-3D) has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its balance of labor cost and 3D effect. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and reduce the depth propagation errors caused by occlusion. The potential key-frames are localized in terms of clustered color variation and motion intensity. The distance of the key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom-in/out effects, a bi-directional depth propagation scheme is adopted where a non-key frame is interpolated from two adjacent key frames. The experimental results show that the proposed scheme has better performance than existing 2D-to-3D schemes with a fixed key-frame interval.
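
    The bi-directional propagation can be pictured, in its simplest form, as a distance-weighted blend of the depth maps propagated forward from the previous key frame and backward from the next one. The sketch below shows only that temporal weighting; the shifted bilateral filtering and the adaptive key-frame selection of the scheme are not reproduced, and the variable names are assumptions.

      import numpy as np

      def blend_depth(depth_from_prev_key, depth_from_next_key, t, t_prev, t_next):
          # Weight each propagated depth map by its temporal distance to frame t:
          # the closer key frame contributes more to the interpolated depth.
          w_next = (t - t_prev) / float(t_next - t_prev)
          return (1.0 - w_next) * depth_from_prev_key + w_next * depth_from_next_key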

  19. Sovereignty Frames and Sovereignty Claims

    OpenAIRE

    Walker, Neil

    2013-01-01

    This essay argues that much of the contemporary confusion and controversy over the meaning and continuing utility of the concept of sovereignty stems from a failure to distinguish between sovereignty as a deep framing device for making sense of the modern legal and political word on the one hand, and the particular claims which are made on behalf of particular institutions, agencies, rules or other entities to possess sovereign authority on the other. The essay begins by providing a basic acc...

  20. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion, and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking and we show how these three concepts can be used to analyse other camera tracking applications.