WorldWideScience

Sample records for instrument framing camera

  1. Solid-state framing camera with multiple time frames

    Energy Technology Data Exchange (ETDEWEB)

Baker, K. L.; Stewart, R. E.; Steele, P. T.; Vernon, S. P.; Hsing, W. W.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)]

    2013-10-07

A high-speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation of 5 ps, but this separation can be varied from hundreds of femtoseconds up to nanoseconds, and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  2. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  3. Triggered streak and framing rotating-mirror cameras

    International Nuclear Information System (INIS)

    Huston, A.E.; Tabrar, A.

    1975-01-01

A pulse motor has been developed which enables a mirror to be rotated to speeds in excess of 20,000 rpm within 10⁻⁴ s. High-speed cameras of both streak and framing type have been assembled which incorporate this mirror drive, giving streak writing speeds up to 2,000 m s⁻¹ and framing speeds up to 500,000 frames s⁻¹, in each case with the capability of triggering the camera from the event under investigation. (author)

  4. 100-ps framing-camera tube

    International Nuclear Information System (INIS)

    Kalibjian, R.

    1978-01-01

The optoelectronic framing-camera tube described is capable of recording two-dimensional image frames with high spatial resolution in the <100-ps range. Framing is performed by streaking a two-dimensional electron image across narrow slits. The resulting dissected electron line images from the slits are restored into framed images by a restorer deflector operating synchronously with the dissector deflector. The number of framed images on the tube's viewing screen equals the number of dissecting slits in the tube. Performance has been demonstrated in a prototype tube by recording 135-ps-duration framed images of 2.5-mm patterns at the cathode. The limitation in the framing speed is in the external drivers for the deflectors and not in the tube design characteristics. Faster frame speeds in the <100-ps range can be obtained by use of faster deflection drivers.

  5. Development and Performance of Bechtel Nevada's Nine-Frame Camera System

    International Nuclear Information System (INIS)

    S. A. Baker; M. J. Griffith; J. L. Tybo

    2002-01-01

Bechtel Nevada, Los Alamos Operations, has developed a high-speed, nine-frame camera system that records a sequence from a changing or dynamic scene. The system incorporates an electrostatic image tube with custom gating and deflection electrodes. The framing tube is shuttered with high-speed gating electronics, yielding frame rates of up to 5 MHz. Dynamic scenes are lens-coupled to the camera, which contains a single photocathode gated on and off to control each exposure time. Deflection plates and drive electronics move the frames to different locations on the framing tube output. A single charge-coupled device (CCD) camera then records the phosphor image of all nine frames. This paper discusses setup techniques to optimize system performance. It examines two alternate philosophies for system configuration and respective performance results. We also present performance metrics for system evaluation, experimental results, and applications to four-frame cameras.

  6. A novel simultaneous streak and framing camera without principle errors

    Science.gov (United States)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

A novel simultaneous streak and framing camera with continuous access has been developed; the complete record it provides is important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for the framing record, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%–0.277% for the streak record. Test data have verified the performance of the camera quantitatively. This camera, which simultaneously acquires frames and a streak that are parallax-free and share an identical time base, is characterized by a plane optical system at oblique incidence (as distinct from a spatial system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.

  7. Cheetah: A high frame rate, high resolution SWIR image camera

    Science.gov (United States)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

A high resolution, high frame rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 × 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range (0.9–1.7 μm) and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  8. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    Science.gov (United States)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  9. Development of an all-optical framing camera and its application on the Z-pinch.

    Science.gov (United States)

    Song, Yan; Peng, Bodong; Wang, Hong-Xing; Song, Guzhou; Li, Binkang; Yue, Zhiqin; Li, Yang; Sun, Tieping; Xu, Qing; Ma, Jiming; Sheng, Liang; Han, Changcai; Duan, Baojun; Yao, Zhiming; Yan, Weipeng

    2017-12-11

An all-optical framing camera has been developed which measures the spatial profile of photon flux by utilizing a laser beam to probe the refractive index change in an indium phosphide semiconductor. This framing camera acquires two frames with a time resolution of about 1.5 ns and an inter-frame separation time of about 13 ns by angularly multiplexing the probe beam onto the semiconductor. The spatial resolution of this camera has been estimated to be about 140 μm, and its spectral response has been theoretically investigated over the 5 eV–100 keV range. The camera has been applied to investigating the imploding dynamics of a molybdenum planar wire array Z-pinch on the 1-MA "QiangGuang-1" facility. This framing camera can provide an alternative scheme for high energy density physics experiments.

  10. Framing-camera tube developed for sub-100-ps range

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

A new framing-camera tube, developed by Electronics Engineering, is capable of recording two-dimensional image frames with high spatial resolution in the sub-100-ps range. Framing is performed by streaking a two-dimensional electron image across narrow slits; the resulting electron-line images from the slits are restored into a framed image by a restorer deflector operating synchronously with the dissector deflector. We have demonstrated its performance in a prototype tube by recording 125-ps-duration framed images of 2.5-mm patterns. The limitation in the framing speed is in the external electronic drivers for the deflectors and not in the tube design characteristics. Shorter frame durations (below 100 ps) can be obtained by use of faster deflection drivers.

  11. 100ps UV/x-ray framing camera

    International Nuclear Information System (INIS)

    Eagles, R.T.; Freeman, N.J.; Allison, J.M.; Sibbett, W.; Sleat, W.E.; Walker, D.R.

    1988-01-01

The requirement for a sensitive two-dimensional imaging diagnostic with picosecond time resolution, particularly in the study of laser-produced plasmas, has previously been discussed. A temporal sequence of framed images would provide useful supplementary information to that provided by time-resolved streak images across a spectral region of interest from visible to x-ray. To fulfill this requirement the Picoframe camera system has been developed. Results pertaining to the operation of a camera having S20 photocathode sensitivity are reviewed and the characteristics of an UV/x-ray sensitive version of the Picoframe system are presented.

  12. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  13. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States)]; Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)]

    2016-07-01

The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  14. X-ray framing cameras for > 5 keV imaging

    International Nuclear Information System (INIS)

    Landen, O.L.; Bell, P.M.; Costa, R.; Kalantar, D.H.; Bradley, D.K.

    1995-01-01

Recent and proposed improvements in spatial resolution, temporal resolution, contrast, and detection efficiency for x-ray framing cameras are discussed in light of present and future laser-plasma diagnostic needs. In particular, improved image contrast above hard x-ray background levels is demonstrated by using high aspect ratio tapered pinholes.

  15. Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains

    Energy Technology Data Exchange (ETDEWEB)

    Lumpkin, A. H. [Fermilab; Edstrom Jr., D. [Fermilab; Ruan, J. [Fermilab

    2016-10-09

We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.

  16. REFLECTANCE CALIBRATION SCHEME FOR AIRBORNE FRAME CAMERA IMAGES

    Directory of Open Access Journals (Sweden)

    U. Beisl

    2012-07-01

The image quality of photogrammetric images is influenced by various effects originating outside the camera. One is light scattered from the atmosphere, which lowers contrast in the images and creates a colour shift towards the blue. Another is the changing illumination during the day, which results in changing image brightness within an image block. In addition, the so-called bidirectional reflectance distribution function (BRDF) effects of the ground give rise to a view- and sun-angle-dependent brightness gradient within the image itself. To correct for the first two effects, an atmospheric correction with reflectance calibration is chosen. These effects have been corrected successfully for ADS linescan sensor data by using a parametrization of the atmospheric quantities. Following Kaufman et al., the actual atmospheric condition is estimated from the brightness of a dark pixel taken from the image. The BRDF effects are corrected using a semi-empirical model of the brightness gradient. Both methods are now extended to frame cameras. Linescan sensors have a viewing geometry that depends only on the cross-track view zenith angle. The difference for frame cameras is the extra dimension of the view azimuth that must be included in the modelling. Since both the atmospheric correction and the BRDF correction require a model inversion with the help of image data, a different image sampling strategy is necessary which includes the azimuth angle dependence. For the atmospheric correction, a sixth variable is added to the existing five (visibility, view zenith angle, sun zenith angle, ground altitude, and flight altitude), thus multiplying the number of modelling input combinations for the offline inversion; the parametrization has to reflect the view azimuth angle dependence. The BRDF model already contains the view azimuth dependence and is combined with a new sampling strategy.
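As a rough illustration of the dark-pixel idea credited to Kaufman et al. above: the darkest pixel in a band approximates the additive atmospheric path radiance, so subtracting it is the simplest form of atmospheric correction. The sketch below is a minimal NumPy version of that idea only; the paper's full parametrized inversion and BRDF model are not reproduced here, and the toy radiance values are invented for illustration.

```python
import numpy as np

# Dark-object subtraction: treat the darkest pixel in a band as an
# estimate of the additive haze (path radiance) and remove it.
def dark_object_subtraction(band):
    """Subtract the darkest pixel's value from every pixel in the band."""
    return band - band.min()

# Toy single-band radiance image (illustrative values only):
band = np.array([[50, 120], [80, 200]], dtype=np.int32)
print(dark_object_subtraction(band))
```

A production correction would estimate the haze per band and convert radiance to reflectance; this sketch only shows why a dark pixel raises image contrast.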

  17. Overview of the ARGOS X-ray framing camera for Laser MegaJoule

    Energy Technology Data Exchange (ETDEWEB)

Trosseille, C., E-mail: clement.trosseille@cea.fr; Aubert, D.; Auger, L.; Bazzoli, S.; Brunel, P.; Burillo, M.; Chollet, C.; Jasmin, S.; Maruenda, P.; Moreau, I.; Oudot, G.; Raimbourg, J.; Soullié, G.; Stemmler, P.; Zuber, C. [CEA, DAM, DIF, F-91297 Arpajon (France)]; Beck, T. [CEA, DEN, CADARACHE, F-13108 St Paul lez Durance (France)]; Gazave, J. [CEA, DAM, CESTA, F-33116 Le Barp (France)]

    2014-11-15

    Commissariat à l’Énergie Atomique et aux Énergies Alternatives has developed the ARGOS X-ray framing camera to perform two-dimensional, high-timing resolution imaging of an imploding target on the French high-power laser facility Laser MegaJoule. The main features of this camera are: a microchannel plate gated X-ray detector, a spring-loaded CCD camera that maintains proximity focus in any orientation, and electronics packages that provide remotely-selectable high-voltages to modify the exposure-time of the camera. These components are integrated into an “air-box” that protects them from the harsh environmental conditions. A miniaturized X-ray generator is also part of the device for in situ self-testing purposes.

  18. High-speed two-frame gated camera for parameters measurement of Dragon-Ⅰ LIA

    International Nuclear Information System (INIS)

    Jiang Xiaoguo; Wang Yuan; Zhang Kaizhi; Shi Jinshui; Deng Jianjun; Li Jin

    2012-01-01

A time-resolved measurement system capable of very high speed is necessary for electron beam parameter diagnosis on the Dragon-Ⅰ linear induction accelerator (LIA). A two-frame gated camera system has been developed and put into operation. The camera system adopts the optical principle of splitting the imaging light beam into two parts in the imaging space of a lens with a long focal length. It includes a lens-coupled gated image intensifier, a CCD camera, and a high speed shutter trigger device based on a large-scale field programmable gate array. The minimum exposure time for each image is about 3 ns, and the interval time between the two images can be adjusted in steps of about 0.5 ns. The exposure time and the interval time can be adjusted independently and can reach about 1 s. The camera system features good linearity, good response uniformity, an equivalent background illumination (EBI) as low as about 5 electrons per pixel per second, a large adjustment range of sensitivity, and excellent flexibility and adaptability in applications. The camera system can capture two frame images at a time with an image size of 1024 × 1024. It meets the measurement requirements of the Dragon-Ⅰ LIA. (authors)

  19. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging.

    Science.gov (United States)

    Benedetti, L R; Holder, J P; Perkins, M; Brown, C G; Anderson, C S; Allen, F V; Petre, R B; Hargrove, D; Glenn, S M; Simanovskaia, N; Bradley, D K; Bell, P

    2016-02-01

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. We have developed a device that can be added to the framing camera head to prevent these artifacts.

  20. Development of a dual MCP framing camera for high energy x-rays

    Energy Technology Data Exchange (ETDEWEB)

Izumi, N., E-mail: izumi2@llnl.gov; Hall, G. N.; Carpenter, A. C.; Allen, F. V.; Cruz, J. G.; Felker, B.; Hargrove, D.; Holder, J.; Lumbard, A.; Montesanti, R.; Palmer, N. E.; Piston, K.; Stone, G.; Thao, M.; Vern, R.; Zacharias, R.; Landen, O. L.; Tommasini, R.; Bradley, D. K.; Bell, P. M. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)]; and others

    2014-11-15

Recently developed diagnostic techniques at LLNL require recording backlit images of extremely dense imploded plasmas using hard x-rays, and demand that the detector be sensitive to photons with energies higher than 50 keV [R. Tommasini et al., Phys. Plasmas 18, 056309 (2011); G. N. Hall et al., “AXIS: An instrument for imaging Compton radiographs using ARC on the NIF,” Rev. Sci. Instrum. (these proceedings)]. To increase the sensitivity in the high energy region, we propose to use a combination of two MCPs. The first MCP is operated in a low gain regime and works as a thick photocathode, and the second MCP works as a high gain electron multiplier. We tested this dual MCP configuration and succeeded in obtaining a detective quantum efficiency of 4.5% for 59 keV x-rays, 3 times larger than with a single plate of the thickness typically used in NIF framing cameras.

  1. Gain attenuation of gated framing camera

    International Nuclear Information System (INIS)

    Xiao Shali; Liu Shenye; Cao Zhurong; Li Hang; Zhang Haiying; Yuan Zheng; Wang Liwei

    2009-01-01

A theoretical model of the framing camera's gain attenuation is analyzed. The exponential attenuation curve of the gain along the pulse propagation time is simulated. An experiment to measure the gain attenuation coefficient, based on the gain attenuation theory, is designed. The experimental result shows that the gain follows an exponential attenuation rule with a quotient of 0.0249 nm⁻¹; the attenuation coefficient of the pulse is 0.00356 mm⁻¹. The loss of the pulse propagating along the MCP stripline is the leading cause of gain attenuation. However, over a single stripline the gain does not follow the exponential attenuation rule completely; instead, there is a gain increase at the bottom of the stripline, caused by reflection of the pulse. The reflectance is about 24.2%. By combining experiment and theory, the design of the stripline MCP can be improved to mitigate the gain attenuation. (authors)
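The exponential attenuation rule described above can be sketched numerically. The attenuation coefficient below is the abstract's quoted 0.00356 mm⁻¹ figure; the initial gain and the 40 mm stripline length are illustrative assumptions, not values from the paper.

```python
import math

# Exponential gain-attenuation model along the MCP stripline:
# the gating pulse loses amplitude as it propagates, so gain falls off
# as G(x) = G0 * exp(-alpha * x).
ALPHA_MM = 0.00356   # pulse attenuation coefficient from the abstract, per mm

def gain(distance_mm, g0=1000.0, alpha=ALPHA_MM):
    """Gain after the gating pulse has propagated distance_mm (G0 assumed)."""
    return g0 * math.exp(-alpha * distance_mm)

# Relative gain loss over an assumed 40 mm stripline:
loss = 1.0 - gain(40.0) / gain(0.0)
print(f"relative gain loss over 40 mm: {loss:.1%}")
```

The reflection-induced gain rise at the stripline bottom reported in the abstract is, of course, not captured by this purely exponential model.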

  2. Distant Measurement of Plethysmographic Signal in Various Lighting Conditions Using Configurable Frame-Rate Camera

    Directory of Open Access Journals (Sweden)

    Przybyło Jaromir

    2016-12-01

Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is considered for application in several areas, including telemedicine, sports, and assisted living, its dependence on lighting conditions and camera performance has still not been investigated thoroughly. In this paper we report on research into various image acquisition aspects, including the lighting spectrum, frame rate, and compression. In the experimental part, we recorded five video sequences in various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable frame rate camera and a pulse oximeter as the reference. For video sequence-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch’s technique. The results showed that the lighting conditions and selected video camera settings, including compression and the sampling frequency, influence the heart rate detection accuracy. The average heart rate error varies from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
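The PSD-based pulse detection described above can be sketched in a few lines. This is not the authors' implementation: the synthetic brightness signal, the 30 fps frame rate, the 72 bpm pulse, and the segment length are all illustrative assumptions; Welch's method is implemented here directly with NumPy as averaged Hann-windowed periodograms over 50%-overlapping segments.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Welch PSD estimate: average windowed periodograms of overlapping segments."""
    step = nperseg // 2                       # 50% overlap
    win = np.hanning(nperseg)
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s * win)) ** 2 for s in segs], axis=0)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

fs = 30.0                                     # assumed camera frame rate, fps
t = np.arange(0, 30, 1.0 / fs)                # 30 s of "video"
signal = np.sin(2 * np.pi * 1.2 * t)          # 1.2 Hz pulse wave (72 bpm)
signal += 0.3 * np.random.default_rng(0).standard_normal(t.size)  # sensor noise

freqs, psd = welch_psd(signal, fs)
band = (freqs >= 0.7) & (freqs <= 4.0)        # plausible heart-rate band
bpm = 60.0 * freqs[band][np.argmax(psd[band])]
print(f"estimated heart rate: {bpm:.1f} bpm")
```

Restricting the peak search to a physiological band is what makes this robust to low-frequency illumination drift, which is exactly the lighting sensitivity the abstract investigates.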

  3. Ceres Photometry and Albedo from Dawn Framing Camera Images

    Science.gov (United States)

    Schröder, S. E.; Mottola, S.; Keller, H. U.; Li, J.-Y.; Matz, K.-D.; Otto, K.; Roatsch, T.; Stephan, K.; Raymond, C. A.; Russell, C. T.

    2015-10-01

    The Dawn spacecraft is in orbit around dwarf planet Ceres. The onboard Framing Camera (FC) [1] is mapping the surface through a clear filter and 7 narrow-band filters at various observational geometries. Generally, Ceres' appearance in these images is affected by shadows and shading, effects which become stronger for larger solar phase angles, obscuring the intrinsic reflective properties of the surface. By means of photometric modeling we attempt to remove these effects and reconstruct the surface albedo over the full visible wavelength range. Knowledge of the albedo distribution will contribute to our understanding of the physical nature and composition of the surface.

  4. The Television Framing Methods of the National Basketball Association: An Agenda-Setting Application.

    Science.gov (United States)

    Fortunato, John A.

    2001-01-01

    Identifies and analyzes the exposure and portrayal framing methods that are utilized by the National Basketball Association (NBA). Notes that key informant interviews provide insight into the exposure framing method and reveal two portrayal instruments: cameras and announcers; and three framing strategies: depicting the NBA as a team game,…

POINT CLOUD DERIVED FROM VIDEO FRAMES: ACCURACY ASSESSMENT IN RELATION TO TERRESTRIAL LASER SCANNING AND DIGITAL CAMERA DATA

    Directory of Open Access Journals (Sweden)

    P. Delis

    2017-02-01

The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using an SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired using a high resolution camera, the NIKON D800. The research showed that the highest accuracies are obtained for point clouds generated from video frames on which high-pass filtration and histogram equalization had been performed. The studies have shown that to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85% or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
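The histogram equalization preprocessing step mentioned above can be sketched in plain NumPy for an 8-bit grayscale frame. The synthetic low-contrast frame is an illustrative assumption; the paper's own frame-selection equation and SGM pipeline are not reproduced here.

```python
import numpy as np

def equalize_histogram(frame):
    """Spread an 8-bit grayscale frame's intensities via its cumulative histogram."""
    hist = np.bincount(frame.ravel(), minlength=256)     # per-level pixel counts
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)           # old level -> new level
    return lut[frame]

# A low-contrast synthetic frame with intensities confined to [100, 150]:
rng = np.random.default_rng(0)
frame = rng.integers(100, 151, size=(120, 160), dtype=np.uint8)
eq = equalize_histogram(frame)
print(frame.min(), frame.max(), "->", eq.min(), eq.max())
```

After equalization the frame's dynamic range spans nearly the full 0–255 scale, which is what helps the dense matcher find correspondences in poorly lit video frames.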

  6. Imacon 600 ultrafast streak camera evaluation

    International Nuclear Information System (INIS)

    Owen, T.C.; Coleman, L.W.

    1975-01-01

The Imacon 600 has a number of designed-in disadvantages for use as an ultrafast diagnostic instrument. The unit is physically large (approximately 5' long) and uses an external power supply rack for the image intensifier. Water cooling is required for the intensifier; it is quiet but not conducive to portability, and there is no interlock on the cooling water. The camera does have several switch-selectable sweep speeds, which is desirable if one is working with both slow and fast events. The camera can be run in a framing mode. (MOW)

  7. Implementation of 40-ps high-speed gated-microchannel-plate based x-ray framing cameras on reentrant SIM's for Nova

    International Nuclear Information System (INIS)

    Bell, P.M.; Kilkenny, J.D.; Landen, O.; Bradley, D.K.

    1994-01-01

Gated framing cameras used in diagnosing laser-produced plasmas have been used on the Nova laser system since 1987, and many variations of these systems have been implemented. All of these cameras have ultimately been limited in response time for two reasons. The first is the electrical gating amplitude versus the gate width, which has always limited the detectable gain in the system. The second is the length-to-diameter (l/d) ratio of standard off-the-shelf microchannel plates (MCP), which sets the minimum electrical gate pulse that will give detectable gain from a given microchannel plate. The authors have implemented two different types of 40 ps framing camera configurations on the Nova laser system. They describe the configurations of both systems and discuss the advantages of each.

  8. Development of a visible framing camera diagnostic for the study of current initiation in z-pinch plasmas

    International Nuclear Information System (INIS)

    Muron, D.J.; Hurst, M.J.; Derzon, M.S.

    1996-01-01

The authors assembled and tested a visible framing camera system to take 5 ns FWHM images of the early-time emission from a z-pinch plasma. This diagnostic was used in conjunction with a visible streak camera, allowing early-time emission measurements to diagnose current initiation. Individual frames from gated image intensifiers were proximity-coupled to charge injection device (CID) cameras and read out at video rate and 8-bit resolution. A mirror was used to view the pinch from a 90-degree angle. The authors observed the destruction of the mirror surface, due to the high surface heating, and the subsequent reduction in signal reflected from the mirror. This initial test of the equipment highlighted problems with the measurement. Images were obtained that showed early-time ejecta and a nonuniform emission from the target; the nonuniformity is believed to be due to either a spatially varying current density or heating of the foam. The results and suggestions for improvement are discussed in the text.

  9. Six-frame picosecond radiation camera based on hydrated electron photoabsorption phenomena

    International Nuclear Information System (INIS)

    Coutts, G.W.; Olk, L.B.; Gates, H.A.; St Leger-Barter, G.

    1977-01-01

    To obtain picosecond photographs of nanosecond radiation sources, a six-frame ultra-high speed radiation camera based on hydrated electron absorption phenomena has been developed. A time-dependent opacity pattern is formed in an acidic aqueous cell by a pulsed radiation source. Six time-resolved picosecond images of this changing opacity pattern are transferred to photographic film with the use of a mode-locked dye laser and six electronically gated microchannel plate image intensifiers. Because the lifetime of the hydrated electron absorption centers can be reduced to picoseconds, the opacity patterns represent time-space pulse profile images

  10. New Sensors for Cultural Heritage Metric Survey: The ToF Cameras

    Directory of Open Access Journals (Sweden)

    Filiberto Chiabrando

    2011-12-01

    Full Text Available ToF cameras are new instruments based on CCD/CMOS sensors which measure distances instead of radiometry. The resulting point clouds show the same properties (both in terms of accuracy and resolution) as the point clouds acquired by means of traditional LiDAR devices. ToF cameras are cheap instruments (less than 10,000 €) based on real-time video distance measurements and can represent an interesting alternative to the more expensive LiDAR instruments. In addition, the limited weight and dimensions of ToF cameras reduce some practical problems such as transportation and on-site management. Most commercial ToF cameras use the phase-shift method to measure distances. Because only one modulation wavelength is used, most of them have a limited range of application (usually about 5 or 10 m). After a brief description of the main characteristics of these instruments, this paper explains and comments on the results of the first experimental applications of ToF cameras in Cultural Heritage 3D metric survey. The ability to acquire more than 30 frames/s, and future developments of these devices in terms of using more than one wavelength to overcome the ambiguity problem, suggest new interesting applications.
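    The phase-shift ranging principle, and the range ambiguity it causes, can be sketched in a few lines. The 20 MHz modulation frequency below is an illustrative assumption (the abstract does not give one); it yields an unambiguous range of about 7.5 m, consistent with the 5 or 10 m limit quoted above.

```python
import math

# Sketch of the phase-shift ranging principle used by most ToF cameras.
# The modulation frequency is an assumption chosen for illustration only.
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum distance before the measured phase wraps around (ambiguity range)."""
    return C / (2.0 * f_mod_hz)

def distance_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    """Distance encoded by a measured phase shift, modulo the ambiguity range."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

print(unambiguous_range(20e6))  # ~7.5 m: why single-frequency ToF is limited to short ranges
```

    A second, lower modulation frequency extends the unambiguous range at the cost of precision, which is why multi-wavelength operation is mentioned as a way to overcome the ambiguity problem.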

  11. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically under the current demand for high-quality digital images; a digital still camera, for example, offers several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera, so high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it arises from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
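    One simple way to combine such a pair of streams can be sketched as follows. This is an illustrative nearest-keyframe detail-injection scheme under the assumption that the high-resolution keyframe is an integer multiple of the fast frame's size; it is not the paper's actual enhancement algorithm, which the abstract does not detail.

```python
import numpy as np

# Illustrative fusion of a low-resolution, high-frame-rate frame with the
# nearest high-resolution keyframe: upsample the fast frame, then add back
# the keyframe's high-frequency detail (keyframe minus its low-pass version).
def fuse(fast_frame: np.ndarray, keyframe: np.ndarray) -> np.ndarray:
    scale = keyframe.shape[0] // fast_frame.shape[0]
    up = np.kron(fast_frame, np.ones((scale, scale)))  # nearest-neighbour upsample
    low = np.kron(
        keyframe.reshape(fast_frame.shape[0], scale,
                         fast_frame.shape[1], scale).mean(axis=(1, 3)),
        np.ones((scale, scale)))                       # keyframe's low-pass version
    return up + (keyframe - low)                       # inject keyframe detail
```

    When the scene is static between keyframes, the fused frame reproduces the keyframe exactly; moving regions keep the fast frame's timing at the cost of local detail.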

  12. Modeling of neutron induced backgrounds in x-ray framing cameras

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C.; Izumi, N.; Bell, P.; Bradley, D.; Conder, A.; Eckart, M.; Khater, H.; Koch, J.; Moody, J.; Stone, G. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2010-10-15

    Fast neutrons from inertial confinement fusion implosions pose a severe background to conventional microchannel plate (MCP)-based x-ray framing cameras for deuterium-tritium yields >10^13. Nuclear reactions of neutrons in photosensitive elements (charge coupled device or film) cause some of the image noise. In addition, inelastic neutron collisions in the detector and nearby components create a large gamma pulse. The background from the resulting secondary charged particles is twofold: (1) production of light through the Cherenkov effect in optical components and by excitation of the MCP phosphor and (2) direct excitation of the photosensitive elements. We give theoretical estimates of the various contributions to the overall noise and present mitigation strategies for operating in high yield environments.

  13. The Infrared Camera for RATIR, a Rapid Response GRB Followup Instrument

    Science.gov (United States)

    Rapchun, David A.; Alardin, W.; Bigelow, B. C.; Bloom, J.; Butler, N.; Farah, A.; Fox, O. D.; Gehrels, N.; Gonzalez, J.; Klein, C.; Kutyrev, A. S.; Lotkin, G.; Morisset, C.; Moseley, S. H.; Richer, M.; Robinson, F. D.; Samuel, M. V.; Sparr, L. M.; Tucker, C.; Watson, A.

    2011-01-01

    RATIR (Reionization and Transients Infrared instrument) will be a hybrid optical/near IR imager that will utilize the "J-band dropout" to rapidly identify very high redshift (VHR) gamma-ray bursts (GRBs) from a sample of all observable Swift bursts. Our group at GSFC is developing the instrument in collaboration with UC Berkeley (UCB) and University of Mexico (UNAM). RATIR has both a visible and IR camera, which give it access to 8 bands spanning visible and IR wavelengths. The instrument implements a combination of filters and dichroics to provide the capability of performing photometry in 4 bands simultaneously. The GSFC group leads the design and construction of the instrument's IR camera, equipped with two HgCdTe 2k x 2k Teledyne detectors. The cryostat housing these detectors is cooled by a mechanical cryo-compressor, which allows uninterrupted operation on the telescope. The host 1.5-m telescope, located at the UNAM San Pedro Martir Observatory, Mexico, has recently undergone robotization, allowing for fully automated, continuous operation. After commissioning in the spring of 2011, RATIR will dedicate its time to obtaining prompt follow-up observations of GRBs and identifying VHR GRBs, thereby providing a valuable tool for studying the epoch of reionization.

  14. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
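    The channel-separation step described above can be sketched as a per-pixel linear unmixing. The 2×2 crosstalk matrix below is an illustrative assumption, not the calibration reported in the paper.

```python
import numpy as np

# Hedged sketch: separate the red/blue sub-images of a color camera frame and
# undo channel crosstalk with a linear mixing model, measured = M @ true,
# where M is a 2x2 crosstalk matrix obtained by calibration (illustrative here).
def separate_channels(rgb: np.ndarray, crosstalk: np.ndarray):
    """rgb: HxWx3 image; crosstalk maps true (R, B) to measured (R, B)."""
    measured = np.stack([rgb[..., 0], rgb[..., 2]], axis=-1)  # R and B planes
    inv = np.linalg.inv(crosstalk)
    corrected = measured @ inv.T  # per-pixel linear unmixing
    return corrected[..., 0], corrected[..., 1]
```

    The two corrected planes then play the roles of the left and right views in the regular stereo-DIC processing chain.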

  15. Determining the timeline of ultra-high speed images of the Cordin 550-32 camera

    CSIR Research Space (South Africa)

    Olivier, M

    2014-09-01

    Full Text Available diagnostic instrumentation. In such cases the synchronisation of the diagnostics is paramount. Here, the info.txt file generated by the camera is utilised to determine the time of each frame with respect to the system trigger, enabling synchronisation...

  16. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
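    A minimal full-search block-matching step of the kind used for the motion estimation above might look like the following sketch. The block size, search radius, and sum-of-absolute-differences (SAD) criterion are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Full-search block matching: find the displacement of one block of the
# current frame relative to a reference frame by minimizing the SAD cost.
def block_match(ref: np.ndarray, cur: np.ndarray, y: int, x: int,
                block: int = 8, radius: int = 4):
    """Return (dy, dx) minimizing sum of absolute differences for one block."""
    patch = cur[y:y + block, x:x + block]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[yy:yy + block, xx:xx + block] - patch).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

    Restricting the search to a small window around each block is what gives block matching its speed advantage over exhaustive per-pixel motion estimation.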

  17. Imaging Asteroid 4 Vesta Using the Framing Camera

    Science.gov (United States)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Next to its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The CCD detector with 1024 by 1024 pixels provides the stability for a multiyear mission and meets the high requirements of photometric accuracy over the wavelength band from 400 to 1000 nm covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface with medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. Vesta is a planet-like differentiated body, but its surface

  18. A study of fish behaviour in the extension of a demersal trawl using a multi-compartment separator frame and SIT camera system

    DEFF Research Database (Denmark)

    Krag, Ludvig Ahm; Madsen, Niels; Karlsen, Junita

    2009-01-01

    A rigid separator frame with three vertically stacked codends was used to study fish behaviour in the extension piece of a demersal trawl. A video camera recorded fish as they encountered the separator frame. Ten hauls were conducted in a mixed species fishery in the northern North Sea. Fish...

  19. A new high-speed IR camera system

    Science.gov (United States)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  20. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  1. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and that currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996, using the same sensor as the previous camera. Its frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is underway and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  2. Flight Test Results From the Ultra High Resolution, Electro-Optical Framing Camera Containing a 9216 by 9216 Pixel, Wafer Scale, Focal Plane Array

    National Research Council Canada - National Science Library

    Mathews, Bruce; Zwicker, Theodore

    1999-01-01

    The details of the fabrication and results of laboratory testing of the Ultra High Resolution Framing Camera containing onchip forward image motion compensation were presented to the SPIE at Airborne...

  3. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Tian, Jinshou; Liu, Zhen; Fang, Yuman; Gao, Guilong; Liang, Lingliang; Wen, Wenlong

    2015-01-01

    An intelligent control system for an X ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, switch of sweep voltage, acquiring environment parameters etc. The system consists of 16 A/D converters and 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multi-outputs and a single mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desirable data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and it shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of multi-channel laser on the Inertial Confinement Fusion Facility

  4. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Tian, Jinshou [Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-11-01

    An intelligent control system for an X ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, switch of sweep voltage, acquiring environment parameters etc. The system consists of 16 A/D converters and 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multi-outputs and a single mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desirable data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and it shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of multi-channel laser on the Inertial Confinement Fusion Facility.

  5. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time; the use of real-time tracking is therefore prominent in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250×200-resolution video in gray scale.

  6. Synchronization of streak and framing camera measurements of an intense relativistic electron beam propagating through gas

    International Nuclear Information System (INIS)

    Weidman, D.J.; Murphy, D.P.; Myers, M.C.; Meger, R.A.

    1994-01-01

    The expansion of the radius of a 5 MeV, 20 kA, 40 ns electron beam from SuperIBEX during propagation through gas is being measured. The beam is generated, conditioned, equilibrated, and then passed through a thin foil that produces Cherenkov light, which is recorded by a streak camera. At a second location, the beam hits another Cherenkov emitter, which is viewed by a framing camera. Measurements at these two locations can provide a time-resolved measure of the beam expansion. The two measurements, however, must be synchronized with each other, because the beam radius is not constant throughout the pulse due to variations in beam current and energy. To correlate the timing of the two diagnostics, several shots have been taken with both diagnostics viewing Cherenkov light from the same foil. Experimental measurements of the Cherenkov light from one foil viewed by both diagnostics will be presented to demonstrate the feasibility of correlating the diagnostics with each other. Streak camera data showing the optical fiducial, as well as the final correlation of the two diagnostics, will also be presented. Preliminary beam radius measurements from Cherenkov light measured at two locations will be shown.

  7. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could leave an object in partial or full view in one camera when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  8. Students' Framing of Laboratory Exercises Using Infrared Cameras

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  9. The moving camera in Flimmer

    DEFF Research Database (Denmark)

    Juel, Henrik

    2018-01-01

    No human actors are seen, but Flimmer still seethes with motion, both motion within the frame and motion of the frame. The subtle camera movements, perhaps at first unnoticed, play an important role in creating the poetic mood of the film, curious, playful and reflexive.

  10. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in many kinds of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, so it should have a good market.

  11. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    Science.gov (United States)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
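    The quoted instantaneous fields of view follow from the small-angle relation IFOV ≈ pixel pitch / focal length. The 7.4 µm pixel pitch below is an assumption for illustration (the abstract does not state it), but it reproduces both quoted figures.

```python
# Cross-checking the quoted Mastcam IFOVs from focal length and an assumed
# 7.4 µm detector pixel pitch, via the small-angle approximation.
def ifov_microrad(pixel_pitch_m: float, focal_length_m: float) -> float:
    """Instantaneous field of view in microradians."""
    return pixel_pitch_m / focal_length_m * 1e6

m34 = ifov_microrad(7.4e-6, 0.034)   # ~218 µrad, matching the M-34 figure
m100 = ifov_microrad(7.4e-6, 0.100)  # ~74 µrad, matching the M-100 figure
```

    Multiplying the IFOV by the image width and height in pixels recovers the quoted fields of view to within rounding.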

  12. Accurate current synchronization trigger mode for multi-framing gated camera on YANG accelerator

    International Nuclear Information System (INIS)

    Jiang Xiaoguo; Huang Xianbin; Li Chenggang; Yang Libing; Wang Yuan; Zhang Kaizhi; Ye Yi

    2007-01-01

    The current-synchronization trigger mode is important for Z-pinch experiments carried out on the YANG accelerator, as this technique solves the problem of low synchronization precision. The inherent delay between the load current waveform and the experimental phenomenon can be exploited to obtain the synchronization trigger time, achieving ns-level timing precision. A photoelectric isolator and optical fiber are used in the synchronization trigger system to eliminate electromagnetic interference, enabling many accurate measurements on the YANG accelerator. Applied to the multi-framing gated camera synchronization trigger system, this trigger mode has proved effective: the evolution of the Z-pinch imploding plasma has been recorded with 3 ns exposure time and 10 ns interframe time. (authors)

  13. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    International Nuclear Information System (INIS)

    Barrera, E.; Ruiz, M.; Sanz, D.; Vega, J.; Castro, R.; Juárez, E.; Salvador, R.

    2014-01-01

    Highlights:
    • The test bed allows for the validation of real-time image processing techniques.
    • Offers FPGA (FlexRIO) image processing that does not require CPU intervention.
    • Is fully compatible with the architecture of the ITER Fast Controllers.
    • Provides flexibility and easy integration in distributed experiments based on EPICS.

    Abstract: Image diagnostics are becoming standard ones in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame grabber, and a PXIe chassis, and offers a software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced with diagnostic cameras. The Camera Link frame grabber (FlexRIO solution from National Instruments) includes a field programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad-hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with system architecture similar to that of the ITER Fast Controllers, and it provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers.

  14. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized, stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°–19° pitch angle from the ground, and at 5–6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, so as to cover the region from northeastern, northwestern, and southern views. Images from the two cameras of the same stereo setup can be paired to obtain a 3D reconstruction by triangulation, and the 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from the surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
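
The triangulation step mentioned in the handbook abstract can be illustrated with a minimal linear (DLT) two-view triangulation sketch. This is a generic textbook construction, not the STEREOCAM processing chain; the projection matrices and pixel coordinates below are made-up toy values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the matched point in each image.
    Returns the 3D point in the common world frame.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose and a 1 m baseline along x (hypothetical numbers).
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.5, 0.2, 10.0])
x1 = P1 @ np.append(X_true, 1)
x2 = P2 @ np.append(X_true, 1)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

With exact, noise-free matches the DLT recovers the point exactly; with real image noise a nonlinear refinement step is usually added.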

  15. A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  16. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Full Text Available Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  17. PhC-4 new high-speed camera with mirror scanning

    International Nuclear Information System (INIS)

    Daragan, A.O.; Belov, B.G.

    1979-01-01

    The optical system and construction of the continuously operating high-speed PhC-4 photographic camera with mirror scanning are described. The optical system of the camera is based on a four-sided rotating mirror, two optical inlets and two working sectors. The PhC-4 camera provides framing rates of up to 600,000 frames per second. (author)

  18. Hardware accelerator design for change detection in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions, selecting frames with significant changes, to minimize communication and processing overhead. Among the many algorithms for change detection, a clustering-based scheme was proposed for smart camera systems. However, such an algorithm achieves only a low frame rate, far from real-time requirements, on the general-purpose processors (such as the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time using the clustering-based change detection scheme. The system was designed and simulated in VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.
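
As a rough illustration of what a clustering-based change detection scheme does per pixel, here is a simplified software sketch. The cluster count, threshold, and update rule are illustrative assumptions, not the algorithm implemented in the cited hardware accelerator.

```python
import numpy as np

def detect_changes(frame, centroids, weights, thresh=20.0, lr=0.05):
    """One step of a simplified per-pixel clustering background model.

    centroids : HxWxK intensity cluster centres per pixel (float).
    weights   : HxWxK cluster weights.
    Returns a boolean change mask and updates the model in place.
    Loosely inspired by clustering-based change detection; the constants
    and update rule here are illustrative, not those of the cited paper.
    """
    d = np.abs(centroids - frame[..., None])           # distance to each cluster
    k = np.argmin(d, axis=-1)                          # nearest cluster index
    dmin = np.take_along_axis(d, k[..., None], -1)[..., 0]
    changed = dmin > thresh                            # no cluster explains pixel
    # Move the matched centroid toward the new value for background pixels.
    idx = np.indices(frame.shape)
    sel = (idx[0], idx[1], k)
    centroids[sel] = np.where(changed, centroids[sel],
                              (1 - lr) * centroids[sel] + lr * frame)
    weights[sel] += ~changed
    return changed

# Toy 4x4 scene: static background at intensity 100, one pixel jumps to 200.
frame0 = np.full((4, 4), 100.0)
centroids = np.repeat(frame0[..., None], 2, axis=-1)
weights = np.ones_like(centroids)
frame1 = frame0.copy(); frame1[2, 3] = 200.0
mask = detect_changes(frame1, centroids, weights)
print(mask.sum(), mask[2, 3])  # 1 True
```

The appeal for hardware is that each pixel's update is a small, fixed amount of arithmetic, independent of all other pixels, so the whole frame can be processed in parallel pipelines.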

  19. An ebCMOS camera system for marine bioluminescence observation: The LuSEApher prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dominjon, A., E-mail: a.dominjon@ipnl.in2p3.fr [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Ageron, M. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Billault, M.; Brunner, J. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Calabria, P. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Chabanat, E. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Chaize, D.; Doan, Q.T.; Guerin, C.; Houles, J.; Vagneron, L. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France)

    2012-12-11

    The ebCMOS camera, called LuSEApher, is a marine bioluminescence recorder adapted to extremely low light levels. This prototype is based on the skeleton of the LUSIPHER camera system originally developed for fluorescence imaging. It has been installed at 2500 m depth off the Mediterranean shore on the site of the ANTARES neutrino telescope. The LuSEApher camera is mounted on the Instrumented Interface Module connected to the ANTARES network for environmental science purposes (European Seas Observatory Network). The LuSEApher is a self-triggered photodetection system with photon counting ability. The device is described and its performance, including single-photon reconstruction, noise characteristics and trigger strategy, is presented. The first recorded movies of bioluminescence are analyzed. To our knowledge, such events have never before been recorded with this sensitivity and frame rate. We believe that this camera concept could open a new window on bioluminescence studies in the deep sea.

  20. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles, with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. The study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89), whereas the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). The reliability of a single-camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), the participants (e.g. loose-fitting clothing) and the camera system (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
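
For readers unfamiliar with the reliability statistic reported above, an ICC can be computed from a subjects-by-raters score matrix. The sketch below implements the standard two-way random-effects, absolute-agreement, single-measures form, ICC(2,1); it is a generic textbook formula, and whether this exact variant was used in the study is an assumption.

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    Y : n_subjects x k_raters matrix of scores. Generic Shrout-Fleiss
    style formula; the exact ICC variant of the cited study is assumed.
    """
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                          # mean squares
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters gives ICC = 1.
Y = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])
print(round(icc_2_1(Y), 3))  # 1.0
```

A systematic offset between raters lowers ICC(2,1) because the absolute-agreement form penalizes rater bias, which is why it is a common choice for inter-rater comparisons like the one above.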

  1. The test beamline of the European Spallation Source - Instrumentation development and wavelength frame multiplication

    DEFF Research Database (Denmark)

    Woracek, R.; Hofmann, T.; Bulat, M.

    2016-01-01

    which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor...... wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL, which offers unique possibilities for testing methods and components....... This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.

  2. Systems approach to the design of the CCD sensors and camera electronics for the AIA and HMI instruments on solar dynamics observatory

    Science.gov (United States)

    Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.

    2017-11-01

    Solar Dynamics Observatory (SDO) is imaging the Sun in many wavelengths near simultaneously and with a resolution ten times higher than the average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share use of a custom-designed 16 million pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than before, motivated by our wish to simplify the design of the camera readout electronics. Here, the challenge lies in the design of circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and to high precision. The challenge is greatly exacerbated when forced to work with only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.

  3. Temporal resolution technology of a soft X-ray picosecond framing camera based on Chevron micro-channel plates gated in cascade

    Energy Technology Data Exchange (ETDEWEB)

    Yang Wenzheng [State Key Laboratory of Transient Optics and Photonics, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China)], E-mail: ywz@opt.ac.cn; Bai Yonglin; Liu Baiyu [State Key Laboratory of Transient Optics and Photonics, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Bai Xiaohong; Zhao Junping; Qin Junjun [Key Laboratory of Ultra-fast Photoelectric Diagnostics Technology, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China)

    2009-09-11

    We describe a soft X-ray picosecond framing camera (XFC) based on Chevron micro-channel plates (MCPs) gated in cascade for ultra-fast process diagnostics. The micro-strip lines are deposited on both the input and the output surfaces of the Chevron MCPs and can be gated by a negative (positive) electric pulse on the first (second) MCP. The gating is controlled by the time delay T_d between the two gating pulses. By increasing T_d, the temporal resolution and the gain of the camera are greatly improved compared with a single-gated MCP-XFC. The optimal T_d, which results in the best temporal resolution, is within the electron transit time and transit time spread of the MCP. Using 250 ps, ±2.5 kV gating pulses, the temporal resolution of the double-gated Chevron MCP camera is improved from 60 ps for the single-gated MCP-XFC to 37 ps for T_d = 350 ps. The principle is presented in detail, accompanied by a theoretical simulation and experimental results.

  4. Mars Science Laboratory Frame Manager for Centralized Frame Tree Database and Target Pointing

    Science.gov (United States)

    Kim, Won S.; Leger, Chris; Peters, Stephen; Carsten, Joseph; Diaz-Calderon, Antonio

    2013-01-01

    The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Actual frame transforms are updated by their owner. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients including ARM and RSM (Remote Sensing Mast) update their related rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.
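
The core idea of a centralized frame tree, composing coordinate transforms between any two frames by walking both chains to a common root, can be sketched as follows. The class, frame names, and transforms are hypothetical and only illustrate the concept; they are not the actual FM flight software interface.

```python
import numpy as np

class FrameTree:
    """Minimal sketch of a centralized frame-tree database.

    Each frame stores a 4x4 homogeneous transform to its parent; the
    transform between any two frames is composed by walking both chains
    to the root. Illustrative only, not the MSL FM module's API.
    """
    def __init__(self):
        self.parent = {"root": None}
        self.to_parent = {"root": np.eye(4)}

    def add(self, name, parent, transform):
        self.parent[name] = parent
        self.to_parent[name] = transform

    def to_root(self, name):
        T = np.eye(4)
        while name is not None:
            T = self.to_parent[name] @ T   # compose up the chain
            name = self.parent[name]
        return T

    def query(self, src, dst):
        """Transform taking coordinates expressed in `src` into `dst`."""
        return np.linalg.inv(self.to_root(dst)) @ self.to_root(src)

def translation(x, y, z):
    T = np.eye(4); T[:3, 3] = [x, y, z]; return T

tree = FrameTree()
tree.add("site_1", "root", translation(100, 0, 0))   # hypothetical site frame
tree.add("rover", "site_1", translation(5, 2, 0))
tree.add("rsm_camera", "rover", translation(0, 0, 1.5))
p_cam = np.array([0, 0, 0, 1])                       # camera origin
# Camera origin expressed in the site frame: (5, 2, 1.5).
print(tree.query("rsm_camera", "site_1") @ p_cam)
```

Because every client queries the same database, a command can reference any frame and the bookkeeping of intermediate transforms is done once, centrally, which is exactly the error-prone manual step the abstract says the frame tree eliminates.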

  5. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual-channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20 °C, the CLASP cameras exceeded the low-noise performance requirements for UV, EUV and soft X-ray science cameras at MSFC.

  6. High-resolution Ceres Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images

    Science.gov (United States)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2017-06-01

    The Dawn spacecraft Framing Camera (FC) acquired over 31,300 clear filter images of Ceres with a resolution of about 35 m/pxl during the eleven cycles in the Low Altitude Mapping Orbit (LAMO) phase between December 16 2015 and August 8 2016. We ortho-rectified the images from the first four cycles and produced a global, high-resolution, uncontrolled photomosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 62 tiles mapped at a scale of 1:250,000. The nomenclature used in this atlas was proposed by the Dawn team and was approved by the International Astronomical Union (IAU). The full atlas is available to the public through the Dawn Geographical Information System (GIS) web page [http://dawngis.dlr.de/atlas] and will become available through the NASA Planetary Data System (PDS) (http://pdssbn.astro.umd.edu/).

  7. OBSERVATIONS OF BINARY STARS WITH THE DIFFERENTIAL SPECKLE SURVEY INSTRUMENT. I. INSTRUMENT DESCRIPTION AND FIRST RESULTS

    International Nuclear Information System (INIS)

    Horch, Elliott P.; Veillette, Daniel R.; Shah, Sagar C.; O'Rielly, Grant V.; Baena Galle, Roberto; Van Altena, William F.

    2009-01-01

    First results of a new speckle imaging system, the Differential Speckle Survey Instrument, are reported. The instrument is designed to take speckle data in two filters simultaneously with two independent CCD imagers. This feature results in three advantages over other speckle cameras: (1) twice as many frames can be obtained in the same observation time, which can increase the signal-to-noise ratio for astrometric measurements; (2) component colors can be derived from a single observation; and (3) the two colors give substantial leverage over atmospheric dispersion, allowing subdiffraction-limited separations to be measured reliably. Fifty-four observations are reported from the first use of the instrument at the Wisconsin-Indiana-Yale-NOAO (WIYN) 3.5 m Telescope (the WIYN Observatory is a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatories) in 2008 September, including seven components resolved for the first time. These observations are used to judge the basic capabilities of the instrument.

  8. X-ray streak and framing camera techniques

    International Nuclear Information System (INIS)

    Coleman, L.W.; Attwood, D.T.

    1975-01-01

    This paper reviews recent developments and applications of ultrafast diagnostic techniques for x-ray measurements. These techniques, based on image converter devices, already offer significant time-resolution capabilities, and techniques capable of time resolution in the sub-nanosecond regime are being considered. Mechanical cameras are excluded from consideration, as are devices using phosphors or fluors as x-ray converters

  9. 3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System

    Science.gov (United States)

    Kang, J.; Lee, I.

    2016-06-01

    Sophisticated indoor design and growing development in urban architecture are making indoor spaces more complex, and these spaces are often directly connected to public transportation such as subway and train stations. These phenomena shift many outdoor activities into indoor spaces. Constant technological development also has a significant impact on services such as location awareness in indoor spaces. It is therefore necessary to develop a low-cost system for creating 3D models of indoor spaces to support such indoor-model-based services. In this paper, we introduce a rotating stereo frame camera system with two cameras and use it to generate an indoor 3D model. First, we selected a test site and acquired images eight times during one day at different positions and heights of the system. The measurements were complemented by object control points obtained from a total station. Because the data were obtained from different positions and heights, it was possible to form various combinations of the data and to choose several suitable combinations as input. Next, we generated a 3D model of the test site using commercial software with the chosen input data, and finally we evaluated the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system for acquiring indoor spatial data and generating 3D models from the images it acquires. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  10. Stroboscope Based Synchronization of Full Frame CCD Sensors.

    Science.gov (United States)

    Shen, Liang; Feng, Xiaobing; Zhang, Yuan; Shi, Min; Zhu, Dengming; Wang, Zhaoqi

    2017-04-07

    The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope-based synchronization approach for charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equipment, the proposed approach greatly reduces cost by using inexpensive CCD cameras and one stroboscope. The results show that our method reaches an accuracy much higher than the frame-level synchronization of traditional software methods.
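
To give a feel for the alignment step, the sketch below recovers an integer frame offset between two cameras by maximizing the cross-correlation of per-frame stroboscope flash indicators. This is a deliberate simplification: the paper aligns on smear dots and then matches sequences with a hidden Markov model, neither of which is reproduced here.

```python
import numpy as np

def frame_offset(flash_a, flash_b):
    """Integer frame offset between two cameras, from per-frame flash
    indicators (1.0 = stroboscope flash visible in that frame).

    Returns the shift maximizing the overlap correlation; positive means
    camera B lags camera A. A simplified stand-in for the paper's
    smear-dot alignment plus HMM matching.
    """
    n = len(flash_a)
    best_shift, best_score = 0, -1.0
    for shift in range(-n + 1, n):
        if shift >= 0:
            a, b = flash_a[:n - shift], flash_b[shift:]
        else:
            a, b = flash_a[-shift:], flash_b[:n + shift]
        score = float(np.dot(a, b))
        if score > best_score:
            best_score, best_shift = score, shift
    return best_shift

# Synthetic flash trains: camera B lags camera A by 3 frames.
rng = np.random.default_rng(0)
flash_a = (rng.random(200) < 0.1).astype(float)
flash_b = np.roll(flash_a, 3)
print(frame_offset(flash_a, flash_b))  # 3
```

Sub-frame offsets, which the smear dots make observable, cannot be recovered by this integer search; that is where the paper's finer-grained matching comes in.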

  11. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  12. IMAGE ACQUISITION CONSTRAINTS FOR PANORAMIC FRAME CAMERA IMAGING

    Directory of Open Access Journals (Sweden)

    H. Kauhanen

    2012-07-01

    Full Text Available The paper describes an approach to quantify the amount of projective error produced by an offset of projection centres in a panoramic imaging workflow. We have limited this research to panoramic workflows in which several sub-images taken with a planar image sensor are stitched together into a large panoramic image mosaic. The aim is to simulate how large the offset can be before it introduces significant error into the dataset. The method uses geometrical analysis to calculate the error in various cases. Constraints on shooting distance, focal length and the depth of the area of interest are taken into account. Considering these constraints, it is possible to safely use even a poorly calibrated panoramic camera rig with a noticeable offset in the projection centre locations. The aim is to create datasets suited for photogrammetric reconstruction. Similar constraints can also be used for finding recommended areas on the image planes for automatic feature matching, and thus improve the stitching of sub-images into full panoramic mosaics. The results are mainly intended for long-focal-length cameras, where the offset of the projection centres of the sub-images can seem significant but, on the other hand, the shooting distance is also long. We show that in such situations the offset of the projection centres introduces only negligible error when stitching a metric panorama. Even though the main use of the results is with cameras of long focal length, they are applicable to all focal lengths.
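
The kind of geometrical analysis described can be approximated with a small-angle parallax estimate: an offset d of the projection centre produces an angular parallax of roughly d(1/Z_near − 1/Z_far) across the depth range of interest. The function below converts that into a pixel error; the formula and the example numbers are my own simplification, not the paper's exact derivation.

```python
def parallax_error_px(offset_m, z_near_m, z_far_m, focal_mm, pixel_um):
    """Approximate worst-case stitching parallax, in pixels, caused by an
    offset of the projection centre when sub-images are stitched as if
    taken from a single centre. Small-angle approximation; a simplified
    model, not the paper's exact analysis.
    """
    angle = offset_m * (1.0 / z_near_m - 1.0 / z_far_m)   # radians
    return angle * (focal_mm * 1e-3) / (pixel_um * 1e-6)

# 5 cm offset, scene between 95 m and 105 m, 300 mm lens, 5 um pixels:
print(round(parallax_error_px(0.05, 95.0, 105.0, 300.0, 5.0), 2))  # 3.01
```

The formula makes the paper's qualitative conclusion easy to see: at long shooting distances the 1/Z terms shrink quadratically, so even a noticeable centre offset yields a sub-pixel or few-pixel error.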

  13. Stroboscope Based Synchronization of Full Frame CCD Sensors

    Directory of Open Access Journals (Sweden)

    Liang Shen

    2017-04-01

    Full Text Available The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope based synchronization approach for the charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equipment, the proposed approach greatly reduces the cost by using inexpensive CCD cameras and one stroboscope. The results show that our method could reach a high accuracy much better than the frame-level synchronization of traditional software methods.

  14. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, so occlusions and sharp camera turns can cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  15. Detailed measurements and shaping of gate profiles for microchannel-plate-based X-ray framing cameras

    International Nuclear Information System (INIS)

    Landen, O.L.; Hammel, B.A.; Bell, P.M.; Abare, A.; Bradley, D.K.; Univ. of Rochester, NY

    1994-01-01

    Gated, microchannel-plate-based (MCP) framing cameras are increasingly used worldwide for x-ray imaging of subnanosecond laser-plasma phenomena. Large-dynamic-range (> 1,000) measurements of gain profiles for gated MCPs are presented. Temporal profiles are reconstructed for any point on the microstrip transmission line from data acquired over many shots with variable delay. No evidence of significant pulse distortion by voltage reflections at the ends of the microstrip is observed. The measured profiles compare well with predictions of a time-dependent discrete-dynode model down to the 1% level, although the calculations overestimate the contrast further into the temporal wings. The role of electron transit time dispersion in limiting the minimum achievable gate duration is then investigated using variable-duration flattop gating pulses. A minimum gate duration of 50 ps is achieved with flattop gating, consistent with a fractional transit time spread of ∼ 15%.

  16. Four-frame gated optical imager with 120-ps resolution

    International Nuclear Information System (INIS)

    Young, P.E.; Hares, J.D.; Kilkenny, J.D.; Phillion, D.W.; Campbell, E.M.

    1988-04-01

    In this paper we describe the operation and applications of a framing camera capable of recording four separate two-dimensional images, each frame having a 120-ps gate width. Fast gating of a single frame is accomplished by using a wafer image intensifier tube in which the cathode is capacitively coupled to an external electrode placed outside the photocathode of the tube. This electrode is then pulsed relative to the microchannel plate by a narrow (120 ps), high-voltage pulse. Multiple frames are obtained by using multiple gated tubes which share a single bias supply and pulser, with the relative gate times selected by the cable lengths between the tubes and the pulser. A beamsplitter system has been constructed which produces a separate image for each tube from a single scene. Applications of the framing camera to inertial confinement fusion experiments are discussed.
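
Selecting relative gate times by cable length works because the gating pulse propagates through coax at a fixed fraction of the speed of light. A back-of-the-envelope sketch (the velocity factor of 0.66 is a typical value for solid-polyethylene coax assumed here, not a figure from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def cable_length_for_delay(delay_s, velocity_factor=0.66):
    """Coax length giving the requested one-way delay of the gating pulse.

    velocity_factor is the cable's signal speed as a fraction of c
    (0.66 assumed; the actual cables used in the paper are not specified).
    """
    return delay_s * velocity_factor * C

# Extra cable length for one 120 ps inter-frame step:
print(round(cable_length_for_delay(120e-12), 4), "m")  # 0.0237 m
```

So each 120 ps step between frames corresponds to only a couple of centimetres of extra cable, which is why cable-length trimming is a practical way to stagger the gates.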

  17. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    Science.gov (United States)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing the electronics for an airborne spectrometer that would be low-cost but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. An analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight applications do not support the industry-standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  18. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    Allemand, R.; Bourdel, J.; Gariod, R.; Laval, M.; Levy, G.; Thomas, G.

    1975-01-01

    The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and the image amplification camera are at present the two main instruments with which acceptable spatial and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera and the electron amplifier tube camera using a semiconductor target (CdTe and HgI2 detectors).

  19. Dynamic Artificial Potential Fields for Autonomous Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    the implementation and evaluation of Artificial Potential Fields for automatic camera placement. We first describe the re-casting of the frame composition problem as the solution of two particles suspended in an Artificial Potential Field. We demonstrate the application of this technique to control both camera...
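
The abstract's idea of recasting camera placement as particles in an Artificial Potential Field can be sketched as gradient descent on a sum of attractive and repulsive potentials. The potential shapes, gains, and scene geometry below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch (assumed, not the paper's exact formulation): the camera is a
# particle pulled toward a goal viewpoint and pushed away from obstacles;
# each step moves it a short distance along the net force.

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.5, lr=0.1):
    """One descent step on the total potential field."""
    force = k_att * (goal - pos)                  # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                         # repulsion only within range d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return pos + lr * force

pos = np.array([0.0, 0.0])                        # initial camera position
goal = np.array([5.0, 3.0])                       # desired viewpoint
obstacles = [np.array([2.0, -0.5])]               # scene geometry (assumed)
for _ in range(200):
    pos = apf_step(pos, goal, obstacles)
# pos has now settled close to the goal viewpoint
```

With mild repulsion the particle converges to the goal; stronger repulsive gains or obstacles directly on the path introduce the classic local-minimum issues of potential-field methods.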

  20. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    Science.gov (United States)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.

  1. Optical flow estimation on image sequences with differently exposed frames

    Science.gov (United States)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
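
The key idea of avoiding an active data term for saturated areas can be illustrated with a masked brightness-constancy cost. The saturation threshold and the quadratic penalty are assumptions for illustration; the paper's actual cost functional is not reproduced here:

```python
import numpy as np

# Masked multi-frame data cost (illustrative: 8-bit data, saturation
# assumed near 255; the paper's actual functional differs).
SAT = 250

def data_cost(frames, warped):
    """Sum brightness-constancy residuals, skipping saturated pixels."""
    total = 0.0
    for f, w in zip(frames, warped):
        valid = (f < SAT) & (w < SAT)    # saturated pixels carry no data term
        resid = f.astype(float)[valid] - w.astype(float)[valid]
        total += float(np.sum(resid**2))
    return total

f = np.array([[100, 255], [50, 60]])     # 255 = saturated pixel
w = np.array([[ 90,   0], [50, 61]])     # warped frame (toy values)
cost = data_cost([f], [w])               # the saturated pixel is ignored
```

The saturated pixel pair (255 vs 0) would dominate an unmasked quadratic cost; masking removes it so only valid pixels drive the flow estimate.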

  2. A television/still camera with common optical system for reactor inspection

    International Nuclear Information System (INIS)

    Hughes, G.; McBane, P.

    1976-01-01

    One of the problems of reactor inspection is to obtain permanent high-quality records. Video recordings provide a record of poor quality but known content. Still cameras can be used, but the frame content is not predictable. Efforts have been made to combine T.V. viewing to align a still camera, but a simple combination does not provide the same frame size. The necessity to preset the still camera controls severely restricts the flexibility of operation. A camera has therefore been designed which allows a search operation using the T.V. system. When an anomaly is found, the still camera controls can be remotely set, an exact record obtained and the search operation continued without removal from the reactor. An application of this camera in the environment of the blanket gas region above the sodium region in PFR at 150 °C is described.

  3. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can be further divided into signal-dependent shot noise and signal-independent dark temporal noise. The most widely used methods for measuring camera noise characteristics are standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of the temporal noise of photo- and videocameras based on the automatic segmentation of nonuniform targets (ASNT); only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noises of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time needed to register and process the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds only. Also the
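
The linear dependence of temporal noise variance on signal that the abstract reports (the shot-noise signature of a Poisson process) can be demonstrated with a simulated two-frame photon-transfer measurement. The gain, dark noise level, and the simulation itself are assumptions; this is not the ASNT method:

```python
import numpy as np

# Simulated two-frame photon-transfer measurement (assumed camera model:
# Poisson shot noise times a fixed gain, plus Gaussian dark noise).
rng = np.random.default_rng(0)
gain = 2.0        # DN per electron (assumed)

means, variances = [], []
for electrons in [100, 400, 900, 1600]:
    # two frames of the same uniform exposure
    f1 = gain * rng.poisson(electrons, 10000) + rng.normal(0.0, 2.0, 10000)
    f2 = gain * rng.poisson(electrons, 10000) + rng.normal(0.0, 2.0, 10000)
    means.append(np.mean((f1 + f2) / 2))
    variances.append(np.var(f1 - f2) / 2)  # differencing removes fixed pattern

# Shot-noise limit: variance grows linearly with signal, slope ~ gain
slope, intercept = np.polyfit(means, variances, 1)
```

The fitted slope recovers the assumed gain, mirroring how measured noise-versus-signal curves follow the Poisson trend except near saturation.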

  4. Making Molecular Movies: 10,000,000,000,000 Frames per Second

    International Nuclear Information System (INIS)

    Gaffney, Kelly

    2006-01-01

    Movies have transformed our perception of the world. With slow motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten billionth of a foot with a frame rate of 10 trillion frames per second. SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.

  5. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  6. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  7. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in the medical endoscopic context, such as endoscopic surgical robotics or micro-invasive surgery.
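
The voltage-based frequency regulation described above can be sketched as a simple feedback loop: measure the line period, compare it to the desired one, and nudge the supply voltage. The inverse voltage-to-period model and the controller gain below are invented for illustration; in the real system the period is measured from the sensor's readout:

```python
# Feedback sketch of voltage-based camera synchronization (hypothetical
# model, not Awaiba's actual control core).

def measured_line_period(voltage):
    """Assumed monotone model: higher supply voltage -> shorter line period."""
    return 100.0 / voltage                     # microseconds (illustrative)

def lock_to_target(target_period, v=1.8, k_p=0.01, steps=100):
    """Nudge the supply voltage until the line period matches the target."""
    for _ in range(steps):
        err = measured_line_period(v) - target_period
        v += k_p * err                         # period too long -> raise voltage
    return v

v = lock_to_target(target_period=50.0)         # lock to a 50 us line period
```

With a small proportional gain the loop converges monotonically to the voltage whose period matches the target; the paper's Master-Slave scheme additionally aligns frame phase, which this sketch omits.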

  8. Analysis of dark current images of a CMOS camera during gamma irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Náfrádi, Gábor, E-mail: nafradi@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Czifrus, Szabolcs, E-mail: czifrus@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Kocsis, Gábor, E-mail: kocsis.gabor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Pór, Gábor, E-mail: por@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Szepesi, Tamás, E-mail: szepesi.tamas@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Zoletnik, Sándor, E-mail: zoletnik.sandor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary)

    2013-12-15

    Highlights: • Radiation tolerance of a fast framing CMOS camera EDICAM examined. • We estimate the expected gamma dose and spectrum of EDICAM with MCNP. • We irradiate EDICAM by 23.5 Gy in 70 min in a fission reactor. • Dose rate normalised average brightness of frames grows linearly with the dose. • Dose normalised average brightness of frames follows the dose rate time evolution. -- Abstract: We report on the behaviour of the dark current images of the Event Detection Intelligent Camera (EDICAM) when placed into an irradiation field of gamma rays. EDICAM is an intelligent fast framing CMOS camera operating in the visible spectral range, which is designed for the video diagnostic system of the Wendelstein 7-X (W7-X) stellarator. Monte Carlo calculations were carried out in order to estimate the expected gamma spectrum and dose for an entire year of operation in W7-X. EDICAM was irradiated in a pure gamma field in the Training Reactor of BME with a dose of approximately 23.5 Gy in 1.16 h. During the irradiation, numerous frame series were taken with the camera with exposure times of 20 μs, 50 μs, 100 μs, 1 ms, 10 ms and 100 ms. EDICAM withstood the irradiation, but suffered some dynamic range degradation. The behaviour of the dark current images during irradiation is described in detail. We found that the average brightness of dark current images depends on the total ionising dose that the camera is exposed to and the dose rate, as well as on the applied exposure times.

  9. 3D MODELLING OF AN INDOOR SPACE USING A ROTATING STEREO FRAME CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred to indoor spaces. Constant development of technology has a significant impact on people's knowledge of services such as location-awareness services in indoor spaces. Thus, it is required to develop a low-cost system to create 3D models of indoor spaces for services based on the indoor models. In this paper, we therefore introduce a rotating stereo frame camera system that has two cameras, and we generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations for input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate 3D models using images acquired by the system. Through these experiments, we ensure that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  10. Final Report for LDRD Project 02-FS-009 Gigapixel Surveillance Camera

    Energy Technology Data Exchange (ETDEWEB)

    Marrs, R E; Bennett, C L

    2010-04-20

    The threats of terrorism and proliferation of weapons of mass destruction add urgency to the development of new techniques for surveillance and intelligence collection. For example, the United States faces a serious and growing threat from adversaries who locate key facilities underground, hide them within other facilities, or otherwise conceal their location and function. Reconnaissance photographs are one of the most important tools for uncovering the capabilities of adversaries. However, current imaging technology provides only infrequent static images of a large area, or occasional video of a small area. We are attempting to add a new dimension to reconnaissance by introducing a capability for large area video surveillance. This capability would enable tracking of all vehicle movements within a very large area. The goal of our project is the development of a gigapixel video surveillance camera for high altitude aircraft or balloon platforms. From very high altitude platforms (20-40 km altitude) it would be possible to track every moving vehicle within an area of roughly 100 km x 100 km, about the size of the San Francisco Bay region, with a gigapixel camera. Reliable tracking of vehicles requires a ground sampling distance (GSD) of 0.5 to 1 m and a framing rate of approximately two frames per second (fps). For a 100 km x 100 km area the corresponding pixel count is 10 gigapixels for a 1-m GSD and 40 gigapixels for a 0.5-m GSD. This is an order of magnitude beyond the 1 gigapixel camera envisioned in our LDRD proposal. We have determined that an instrument of this capacity is feasible.
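
The pixel counts quoted above follow directly from dividing the area side length by the ground sampling distance; a quick sketch of the arithmetic:

```python
# Arithmetic check of the abstract's pixel counts: the total pixel count
# is (side length / ground sampling distance) squared.

def pixel_count(side_m, gsd_m):
    n_across = side_m / gsd_m        # pixels along one side of the area
    return n_across ** 2

ten_gp = pixel_count(100_000, 1.0)    # 100 km side at 1-m GSD
forty_gp = pixel_count(100_000, 0.5)  # 100 km side at 0.5-m GSD
```

At 1-m GSD the 100 km x 100 km area needs 10 gigapixels, and halving the GSD quadruples the count to 40 gigapixels, matching the figures in the abstract.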

  11. Gain uniformity, linearity, saturation and depletion in gated microchannel-plate x-ray framing cameras

    International Nuclear Information System (INIS)

    Landen, O.L.; Bell, P.M.; Satariano, J.J.; Oertel, J.A.; Bradley, D.K.

    1994-01-01

    The pulsed characteristics of gated, stripline configuration microchannel-plate (MCP) detectors used in X-ray framing cameras deployed on laser plasma experiments worldwide are examined in greater detail. The detectors are calibrated using short (20 ps) and long (500 ps) pulse X-ray irradiation and 3--60 ps, deep UV (202 and 213 nm), spatially-smoothed laser irradiation. Two-dimensional unsaturated gain profiles show 5 in irradiation and fitted using a discrete dynode model. Finally, a pump-probe experiment quantifying for the first time long-suspected gain depletion by strong localized irradiation was performed. The mechanism for the extra voltage and hence gain degradation is shown to be associated with intense MCP irradiation in the presence of the voltage pulse, at a fluence at least an order of magnitude above that necessary for saturation. Results obtained for both constant pump area and constant pump fluence are presented. The data are well modeled by calculating the instantaneous electrical energy loss due to the intense charge extraction at the pump site and then recalculating the gain downstream at the probe site given the pump-dependent degradation in voltage amplitude

  12. Long wavelength infrared camera (LWIRC): a 10 micron camera for the Keck Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Wishnow, E.H.; Danchi, W.C.; Tuthill, P.; Wurtz, R.; Jernigan, J.G.; Arens, J.F.

    1998-05-01

    The Long Wavelength Infrared Camera (LWIRC) is a facility instrument for the Keck Observatory designed to operate at the f/25 forward Cassegrain focus of the Keck I telescope. The camera operates over the wavelength band 7-13 µm using ZnSe transmissive optics. A set of filters, a circular variable filter (CVF), and a mid-infrared polarizer are available, as are three plate scales: 0.05″, 0.10″, 0.21″ per pixel. The camera focal plane array and optics are cooled using liquid helium. The system has been refurbished with a 128 x 128 pixel Si:As detector array. The electronics readout system used to clock the array is compatible with both the hardware and software of the other Keck infrared instruments NIRC and LWS. A new pre-amplifier/A-D converter has been designed and constructed which greatly decreases the system's susceptibility to noise.

  13. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    Science.gov (United States)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.; et al.

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during Survey orbit (approx. 3000 km) and High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels), allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.

  14. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.

  15. SPAD array chips with full frame readout for crystal characterization

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Peter; Blanco, Roberto; Sacco, Ilaria; Ritzert, Michael [Heidelberg University (Germany); Weyers, Sascha [Fraunhofer Institute for Microelectronic Circuits and Systems (Germany)

    2015-05-18

    We present single-photon-sensitive 2D camera chips containing 88x88 avalanche photodiodes which can be read out in full frame mode at up to 400,000 frames per second. The sensors have an imaging area of ~5 mm x 5 mm covered by square pixels of ~56 µm x 56 µm with a ~55% fill factor in the latest chip generation. The chips contain a self-triggering logic with selectable (column) multiplicities of up to >=4 hits within an adjustable coincidence time window. The photon accumulation time window is programmable as well. First prototypes have demonstrated low dark count rates of <50 kHz/mm² (SPAD area) at 10 °C for 10% masked pixels. One chip version contains an automated readout of the photon cluster position. The readout of the detailed photon distribution for single events allows the characterization of light sharing, optical crosstalk etc., in crystals or crystal arrays as they are used in PET instrumentation. This knowledge could lead to improvements in spatial or temporal resolution.

  16. Super-resolution processing for pulsed neutron imaging system using a high-speed camera

    International Nuclear Information System (INIS)

    Ishizuka, Ken; Kai, Tetsuya; Shinohara, Takenao; Segawa, Mariko; Mochiki, Koichi

    2015-01-01

    Super-resolution and center-of-gravity processing improve the resolution of neutron-transmitted images. These processing methods calculate the center-of-gravity pixel or sub-pixel of the point at which a neutron is converted into light by a scintillator. The conventional neutron-transmitted image is acquired using a high-speed camera by integrating many frames when a single frame does not provide a usable transmitted image. This succeeds in acquiring the transmitted image and calculating a spectrum by integrating frames of the same energy. However, because a high frame rate is required for neutron resonance absorption imaging, the number of pixels of the transmitted image decreases, and the resolution decreases to the limit of the camera performance. Therefore, we attempt to improve the resolution by integrating the frames after applying super-resolution or center-of-gravity processing. The processed results indicate that center-of-gravity processing can be effective in pulsed-neutron imaging with a high-speed camera. In addition, the results show that super-resolution processing is indirectly effective. A project to develop a real-time image data processing system has begun, and this system will be used at J-PARC in JAEA. (author)
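
The center-of-gravity step can be sketched as an intensity-weighted mean over the pixels of a single scintillation spot, yielding a sub-pixel event position. The 3x3 spot below is an illustrative example, not measured data:

```python
import numpy as np

# Sub-pixel event position as the intensity-weighted centre of a
# scintillation light spot.

def centroid(spot):
    """Return the intensity-weighted (row, col) position of a spot."""
    total = spot.sum()
    rows, cols = np.indices(spot.shape)
    return (rows * spot).sum() / total, (cols * spot).sum() / total

spot = np.array([[0, 1, 0],
                 [1, 4, 3],
                 [0, 1, 0]])
r, c = centroid(spot)   # column centroid sits right of centre (the 3)
```

Accumulating events at these sub-pixel positions, rather than whole pixels, is what lets the integrated image exceed the camera's native resolution.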

  17. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    corrections to be done at an even higher rate, more than one thousand times a second, and this is where OCam is essential. "The quality of the adaptive optics correction strongly depends on the speed of the camera and on its sensitivity," says Philippe Feautrier from the LAOG, France, who coordinated the whole project. "But these are a priori contradictory requirements, as in general the faster a camera is, the less sensitive it is." This is why cameras normally used for very high frame-rate movies require extremely powerful illumination, which is of course not an option for astronomical cameras. OCam and its CCD220 detector, developed by the British manufacturer e2v technologies, solve this dilemma, by being not only the fastest available, but also very sensitive, making a significant jump in performance for such cameras. Because of imperfect operation of any physical electronic devices, a CCD camera suffers from so-called readout noise. OCam has a readout noise ten times smaller than the detectors currently used on the VLT, making it much more sensitive and able to take pictures of the faintest of sources. "Thanks to this technology, all the new generation instruments of ESO's Very Large Telescope will be able to produce the best possible images, with an unequalled sharpness," declares Jean-Luc Gach, from the Laboratoire d'Astrophysique de Marseille, France, who led the team that built the camera. "Plans are now underway to develop the adaptive optics detectors required for ESO's planned 42-metre European Extremely Large Telescope, together with our research partners and the industry," says Hubin. Using sensitive detectors developed in the UK, with a control system developed in France, with German and Spanish participation, OCam is truly an outcome of a European collaboration that will be widely used and commercially produced. 
More information: The three French laboratories involved are the Laboratoire d'Astrophysique de Marseille (LAM/INSU/CNRS, Université de Provence

  18. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have achieved a large consensus for recreational purposes due to ongoing cost decreases, image resolution and frame rate increases, along with plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes. Characterizing and optimizing the instrumental errors of such a configuration makes it mandatory to assess the instrumental errors of both volumes. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (true distance between the two testing markers) was less than 3 mm and the error related to the working volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³), in agreement with the

  19. Single-frame 3D human pose recovery from multiple views

    NARCIS (Netherlands)

    Hofmann, M.; Gavrila, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body pose from multi-camera single-frame views. Pose recovery starts with a shape detection stage where candidate poses are generated based on hierarchical exemplar matching in the individual camera views. The hierarchy used in

  20. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
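
The temporal total variation (TV) criterion that the stabilization method minimizes can be illustrated by measuring the TV of a jittery camera path before and after smoothing. A moving average stands in for the paper's l1-optimized path, which is not reproduced here:

```python
import numpy as np

# Total variation (TV) of a 1D camera path before and after smoothing.
# Only the TV criterion is taken from the abstract; the moving average
# is a stand-in for the actual l1 path optimization.

def total_variation(path):
    """Sum of absolute frame-to-frame changes of a 1D camera path."""
    return float(np.abs(np.diff(path)).sum())

rng = np.random.default_rng(1)
shaky = np.cumsum(rng.normal(0.0, 1.0, 200))             # jittery x-trajectory
smooth = np.convolve(shaky, np.ones(15) / 15, mode='valid')
# the smoothed path has far lower TV, i.e. fewer shaky artifacts
```

A stabilizer then re-renders each frame under the transform that maps the shaky path onto the smooth one; paths with low TV correspond to steady pans with few abrupt jumps.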

  1. Navigation accuracy comparing non-covered frame and use of plastic sterile drapes to cover the reference frame in 3D acquisition.

    Science.gov (United States)

    Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa

    2017-09-01

    Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, thus presenting increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution with these techniques. Our objective was to quantify the expected accuracy error associated with 2MM and 4MM thickness Sterile-Z Patient Drape ® using Medtronic O-Arm ® Surgical Imaging with StealthStation ® S7 ® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation ® S7 ® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2MM drape and 4MM drape. The O-Arm ® gantry location and StealthStation ® S7 ® camera position were maintained and seven 3D acquisitions for each of the drape configurations were measured. Data were analyzed by a two-factor analysis of variance (ANOVA) and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape for each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM and the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to the no drape testing, regardless of camera distance. Medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far

  2. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions, including indoor and outdoor environments, varying illumination and the presence of in-scene motion, on varying computational platforms.
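The inter-frame blending mentioned above can be illustrated in one dimension: cross-fade the overlap region so exposure steps between frames disappear. A minimal sketch, with 1-D sample strips standing in for image rows:

```python
def feather_blend(strip_a, strip_b, overlap):
    """Blend two 1-D image strips whose last/first `overlap` samples cover the
    same scene region, cross-fading linearly so brightness steps between
    frames (e.g. from auto-exposure changes) are smoothed away."""
    head = strip_a[:-overlap]
    tail = strip_b[overlap:]
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)              # weight ramps from strip_a to strip_b
        a = strip_a[len(strip_a) - overlap + i]
        b = strip_b[i]
        blended.append((1 - w) * a + w * b)
    return head + blended + tail

# Two strips of the same scene, the second one brighter by 10 grey levels:
out = feather_blend([100.0] * 6, [110.0] * 6, overlap=4)
```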

  3. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  4. Wide-field time-correlated single photon counting (TCSPC) microscopy with time resolution below the frame exposure time

    Energy Technology Data Exchange (ETDEWEB)

    Hirvonen, Liisa M. [Department of Physics, King' s College London, Strand, London WC2R 2LS (United Kingdom); Petrášek, Zdeněk [Max Planck Institute of Biochemistry, Department of Cellular and Molecular Biophysics, Am Klopferspitz 18, D-82152 Martinsried (Germany); Suhling, Klaus, E-mail: klaus.suhling@kcl.ac.uk [Department of Physics, King' s College London, Strand, London WC2R 2LS (United Kingdom)

    2015-07-01

    Fast frame rate CMOS cameras in combination with photon counting intensifiers can be used for fluorescence imaging with single photon sensitivity at kHz frame rates. We show here how the phosphor decay of the image intensifier can be exploited for accurate timing of photon arrival well below the camera exposure time. This is achieved by taking ratios of the intensity of the photon events in two subsequent frames, and effectively allows wide-field TCSPC. This technique was used for measuring decays of ruthenium compound Ru(dpp) with lifetimes as low as 1 μs with 18.5 μs frame exposure time, including in living HeLa cells, using around 0.1 μW excitation power. We speculate that by using an image intensifier with a faster phosphor decay to match a higher camera frame rate, photon arrival time measurements on the nanosecond time scale could well be possible.
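The ratio-timing idea above can be sketched numerically. Assuming a single-exponential phosphor decay (the time constant and exposure values below are illustrative, not the paper's calibration), the split of one photon event's intensity across two consecutive frames fixes its arrival time, which a bisection search recovers:

```python
import math

def frame_intensities(t_arrival, T, tau):
    """Integrated phosphor signal in frame 1 ([0, T]) and frame 2 ([T, 2T])
    for a photon event at t_arrival, with exponential decay constant tau."""
    i1 = tau * (1.0 - math.exp(-(T - t_arrival) / tau))
    i2 = tau * math.exp(-(T - t_arrival) / tau) * (1.0 - math.exp(-T / tau))
    return i1, i2

def arrival_from_ratio(ratio, T, tau, iters=60):
    """Invert the (monotonically increasing) frame-2 / frame-1 intensity
    ratio by bisection to recover the arrival time within the exposure."""
    lo, hi = 0.0, T - 1e-12
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        i1, i2 = frame_intensities(mid, T, tau)
        if i2 / i1 < ratio:
            lo = mid          # event arrived later than mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T, tau = 18.5, 10.0                    # microseconds (illustrative values)
i1, i2 = frame_intensities(7.3, T, tau)
recovered = arrival_from_ratio(i2 / i1, T, tau)   # close to 7.3 us
```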

  5. Robotic-surgical instrument wrist pose estimation.

    Science.gov (United States)

    Fabel, Stephan; Baek, Kyungim; Berkelman, Peter

    2010-01-01

    The Compact Lightweight Surgery Robot from the University of Hawaii includes two teleoperated instruments and one endoscope manipulator which act in accord to perform assisted interventional medicine. The relative positions and orientations of the robotic instruments and endoscope must be known to the teleoperation system so that the directions of the instrument motions can be controlled to correspond closely to the directions of the motions of the master manipulators, as seen by the endoscope and displayed to the surgeon. If the manipulator bases are mounted in known locations and all manipulator joint variables are known, then the necessary coordinate transformations between the master and slave manipulators can be easily computed. The versatility and ease of use of the system can be increased, however, by allowing the endoscope or instrument manipulator bases to be moved to arbitrary positions and orientations without reinitializing each manipulator or remeasuring their relative positions. The aim of this work is to find the pose of the instrument end effectors using the video image from the endoscope camera. The P3P pose estimation algorithm is used with a Levenberg-Marquardt optimization to ensure convergence. The correct transformations between the master and slave coordinate frames can then be calculated and updated when the bases of the endoscope or instrument manipulators are moved to new, unknown, positions at any time before or during surgical procedures.
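Once each pose is estimated, updating the master-slave mapping is a matter of composing rigid transforms. A minimal sketch with 4x4 homogeneous matrices; the two world-frame poses below are hypothetical, not taken from the system:

```python
def mat_mul(A, B):
    """Multiply two 4x4 homogeneous transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(T):
    """Invert a rigid transform: inv([R t; 0 1]) = [R^T  -R^T t; 0 1]."""
    R = [[T[j][i] for j in range(3)] for i in range(3)]           # R transposed
    t = [-sum(R[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

# Hypothetical poses of endoscope and instrument in a common world frame:
T_world_endo = [[0.0, -1.0, 0.0, 0.1],
                [1.0,  0.0, 0.0, 0.2],
                [0.0,  0.0, 1.0, 0.0],
                [0.0,  0.0, 0.0, 1.0]]
T_world_inst = [[1.0, 0.0, 0.0, 0.4],
                [0.0, 1.0, 0.0, 0.2],
                [0.0, 0.0, 1.0, 0.1],
                [0.0, 0.0, 0.0, 1.0]]
# Instrument pose as seen from the endoscope frame:
T_endo_inst = mat_mul(rigid_inverse(T_world_endo), T_world_inst)
```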

  6. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  7. The test beamline of the European Spallation Source – Instrumentation development and wavelength frame multiplication

    International Nuclear Information System (INIS)

    Woracek, R.; Hofmann, T.; Bulat, M.; Sales, M.; Habicht, K.; Andersen, K.; Strobl, M.

    2016-01-01

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.
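The chopper timing behind such a WFM system rests on the neutron velocity-wavelength relation v [m/s] ≈ 3956 / λ [Å]. A small sketch of the time-of-flight spread of the quoted 1.6-10 Å band; the 50 m flight path is an assumed illustrative value, not the TBL's geometry:

```python
H_OVER_M = 3956.034  # h/m_n in units of (m/s) * Angstrom

def tof_seconds(wavelength_angstrom, flight_path_m):
    """Neutron time of flight over a given path, from the de Broglie
    relation v [m/s] = 3956 / lambda [Angstrom]."""
    velocity = H_OVER_M / wavelength_angstrom
    return flight_path_m / velocity

# Hypothetical 50 m flight path: arrival-time spread of the 1.6-10 A band.
L = 50.0
band_width_s = tof_seconds(10.0, L) - tof_seconds(1.6, L)
```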

  8. The test beamline of the European Spallation Source – Instrumentation development and wavelength frame multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Woracek, R., E-mail: robin.woracek@esss.se [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Hofmann, T.; Bulat, M. [Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner Platz 1, 14109 Berlin (Germany); Sales, M. [Technical University of Denmark, Fysikvej, 2800 Kgs. Lyngby (Denmark); Habicht, K. [Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner Platz 1, 14109 Berlin (Germany); Andersen, K. [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Strobl, M. [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Technical University of Denmark, Fysikvej, 2800 Kgs. Lyngby (Denmark)

    2016-12-11

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.

  9. Seismic response and damage detection analyses of an instrumented steel moment-framed building

    Science.gov (United States)

    Rodgers, J.E.; Celebi, M.

    2006-01-01

    The seismic performance of steel moment-framed buildings has been of particular interest since brittle fractures were discovered at the beam-column connections in a number of buildings following the M 6.7 Northridge earthquake of January 17, 1994. A case study of the seismic behavior of an extensively instrumented 13-story steel moment frame building located in the greater Los Angeles area of California is described herein. Response studies using frequency domain, joint time-frequency, system identification, and simple damage detection analyses are performed using an extensive strong motion dataset dating from 1971 to the present, supported by engineering drawings and results of postearthquake inspections. These studies show that the building's response is more complex than would be expected from its highly symmetrical geometry. The response is characterized by low damping in the fundamental mode, larger accelerations in the middle and lower stories than at the roof and base, extended periods of vibration after the cessation of strong input shaking, beating in the response, elliptical particle motion, and significant torsion during strong shaking at the top of the concrete piers which extend from the basement to the second floor. The analyses conducted indicate that the response of the structure was elastic in all recorded earthquakes to date, including Northridge. Also, several simple damage detection methods employed did not indicate any structural damage or connection fractures. The combination of a large, real structure and low instrumentation density precluded the application of many recently proposed advanced damage detection methods in this case study. Overall, however, the findings of this study are consistent with the limited code-compliant postearthquake intrusive inspections conducted after the Northridge earthquake, which found no connection fractures or other structural damage. © ASCE.
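The frequency-domain part of such a response study reduces, at its simplest, to locating the dominant spectral peak of a recorded acceleration trace. A minimal, dependency-free sketch on a synthetic record; the 0.5 Hz fundamental and 20 Hz sampling rate are illustrative, not the building's values:

```python
import cmath
import math

def dominant_frequency(signal, sample_rate_hz):
    """Return the frequency (Hz) of the largest DFT peak, excluding DC.
    A naive O(N^2) DFT keeps the sketch free of external dependencies."""
    n = len(signal)
    best_bin, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_bin, best_mag = k, abs(coeff)
    return best_bin * sample_rate_hz / n

# Synthetic roof-level record: 0.5 Hz fundamental sampled at 20 Hz for 10 s.
fs, n = 20.0, 200
record = [math.sin(2 * math.pi * 0.5 * t / fs) for t in range(n)]
```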

  10. Vertically Integrated Edgeless Photon Imaging Camera

    Energy Technology Data Exchange (ETDEWEB)

    Fahim, Farah [Fermilab; Deptuch, Grzegorz [Fermilab; Shenai, Alpana [Fermilab; Maj, Piotr [AGH-UST, Cracow; Kmon, Piotr [AGH-UST, Cracow; Grybos, Pawel [AGH-UST, Cracow; Szczygiel, Robert [AGH-UST, Cracow; Siddons, D. Peter [Brookhaven; Rumaiz, Abdul [Brookhaven; Kuczewski, Anthony [Brookhaven; Mead, Joseph [Brookhaven; Bradford, Rebecca [Argonne; Weizeorick, John [Argonne

    2017-01-01

    The Vertically Integrated Photon Imaging Chip - Large (VIPIC-L) is a large area, small pixel (65 μm), 3D integrated, photon counting ASIC with zero-suppressed or full frame dead-time-less data readout. It features a data throughput of 14.4 Gbps per chip with a full frame readout speed of 56 kframes/s in the imaging mode. VIPIC-L contains a 192 x 192 pixel array; the total size of the chip is 1.248 cm x 1.248 cm with only a 5 μm periphery, and it contains about 120M transistors. A 1.3M pixel camera module will be developed by arranging a 6 x 6 array of 3D VIPIC-Ls bonded to a large area silicon sensor on the analog side and to a readout board on the digital side. The readout board hosts a bank of FPGAs, one per VIPIC-L, to allow processing of up to 0.7 Tbps of raw data produced by the camera.

  11. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    Energy Technology Data Exchange (ETDEWEB)

    Roos, E. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nebeker, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-08

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera and the test results revealed that the camera exceeded presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government export regulations. These changes were directed by Export Control regulations on the export of high technology items that can be used to develop military weapons. The IR camera was on the list that export controls required. The ISTC and Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  12. Development of an Algorithm for Heart Rate Measurement Using a Mobile Phone Camera

    Directory of Open Access Journals (Sweden)

    D. A. Laure

    2014-01-01

    Full Text Available Nowadays there exist many different ways to measure a person’s heart rate. One of them assumes the usage of a mobile phone built-in camera. This method is easy to use and does not require any additional skills or special devices for heart rate measurement. It requires only a mobile cellphone with a built-in camera and a flash. The main idea of the method is to detect changes in finger skin color that occur due to blood pulsation. The measurement process is simple: the user covers the camera lens with a finger and the application on the mobile phone starts catching and analyzing frames from the camera. Heart rate can be calculated by analyzing average red component values of frames taken by the mobile cellphone camera that contain images of an area of the skin. In this paper the authors review the existing algorithms for heart rate measurement with the help of a mobile phone camera and propose their own algorithm which is more efficient than the reviewed algorithms.
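The red-channel analysis described above can be sketched as follows; the simple local-maximum peak rule and the synthetic 1.2 Hz pulsation are illustrative simplifications, not the authors' algorithm:

```python
import math

def heart_rate_bpm(mean_red, fps):
    """Estimate heart rate from per-frame mean red-channel values by finding
    local maxima above the series mean and averaging the peak spacing."""
    baseline = sum(mean_red) / len(mean_red)
    peaks = [i for i in range(1, len(mean_red) - 1)
             if mean_red[i] > baseline
             and mean_red[i] > mean_red[i - 1]
             and mean_red[i] >= mean_red[i + 1]]
    if len(peaks) < 2:
        return None                      # not enough beats to estimate a rate
    spacing = (peaks[-1] - peaks[0]) / (len(peaks) - 1)   # frames per beat
    return 60.0 * fps / spacing

# Synthetic finger video: 30 fps, blood pulsation at 1.2 Hz (72 bpm).
fps = 30.0
reds = [128 + 10 * math.sin(2 * math.pi * 1.2 * i / fps) for i in range(300)]
```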

  13. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented

  14. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-01-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect the vacuum vessel internal structures in both the visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diam fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35-mm Nikon F3 still camera, or (5) a 16-mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented

  15. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    Science.gov (United States)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time a 391-megapixel CMOS sensor has been used as the panchromatic (PAN) sensor, an industry record. Along with CMOS technology comes a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal to noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD based aerial sensors.

  16. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yunsu Bok

    2014-11-01

    Full Text Available This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  17. Analysis of Camera Parameters Value in Various Object Distances Calibration

    International Nuclear Information System (INIS)

    Yusoff, Ahmad Razali; Ariff, Mohd Farid Mohd; Idris, Khairulnizam M; Majid, Zulkepli; Setan, Halim; Chong, Albert K

    2014-01-01

    In photogrammetric applications, good camera parameters are needed for mapping purposes, for example with an Unmanned Aerial Vehicle (UAV) equipped with non-metric camera devices. Simple camera calibration is a common laboratory procedure for obtaining camera parameter values. In aerial mapping, interior camera parameter values from close-range camera calibration are used to correct the image error. However, the causes and effects of the calibration steps used to obtain an accurate mapping need to be analyzed. Therefore, this research contributes an analysis of camera parameters obtained with a portable calibration frame of 1.5 × 1 meter dimension. Object distances of two, three, four, five, and six meters are the research focus. Results are analyzed to find the changes in image and camera parameter values. Hence, the calibration parameters of a camera are considered to differ depending on the type of calibration parameters and the object distance

  18. Location of frame overlap choppers on pulsed source instruments

    International Nuclear Information System (INIS)

    Narehood, D.G.; Pearce, J.V.; Sokol, P.E.

    2002-01-01

    A detailed study has been performed to investigate the effect of frame overlap in a cold neutron chopper spectrometer. The basic spectrometer is defined by two high-speed choppers, one near the moderator to shape the pulse from the moderator, and one near the sample to define energy resolution. Using ray-tracing timing diagrams, we have observed that there are regions along the guide where the trajectories of neutrons with different velocities converge temporally at characteristic points along the spectrometer. At these points of convergence, a frame overlap chopper would be totally ineffective, allowing neutrons of all velocities to pass through. Conversely, at points where trajectories of different velocity neutrons are divergent, a frame overlap chopper is most effective. An analytical model to describe this behaviour has been developed, and leads us to the counterintuitive conclusion that the optimum position for a frame overlap chopper is as close to the initial chopper as possible. We further demonstrate that detailed Monte Carlo simulations produce results which are consistent with this model
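The convergence points described above follow directly from the timing geometry: a slow neutron from one pulse and a fast neutron emitted one period T later coincide at a distance L = v_slow · v_fast · T / (v_fast − v_slow) from the source. A minimal sketch, with illustrative velocities and period:

```python
def overlap_distance(v_slow, v_fast, period_s):
    """Distance from the source at which a slow neutron from pulse k meets a
    fast neutron emitted one period later (pulse k+1). A frame overlap
    chopper placed at this point cannot separate the two frames."""
    # v_slow * t = v_fast * (t - period)  =>  t = v_fast * period / (v_fast - v_slow)
    t_meet = v_fast * period_s / (v_fast - v_slow)
    return v_slow * t_meet

# Illustrative: 1000 m/s and 2000 m/s neutrons, 10 ms pulse period.
L_conv = overlap_distance(1000.0, 2000.0, 0.010)   # 20 m downstream
```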

  19. Stroboscope Based Synchronization of Full Frame CCD Sensors

    OpenAIRE

    Shen, Liang; Feng, Xiaobing; Zhang, Yuan; Shi, Min; Zhu, Dengming; Wang, Zhaoqi

    2017-01-01

    The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope based synchronization approach for the charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equi...

  20. Demo : an embedded vision system for high frame rate visual servoing

    NARCIS (Netherlands)

    Ye, Z.; He, Y.; Pieters, R.S.; Mesman, B.; Corporaal, H.; Jonker, P.P.

    2011-01-01

    The frame rate of commercial off-the-shelf industrial cameras is breaking the threshold of 1000 frames-per-second, the sample rate required in high performance motion control systems. On the one hand, it enables computer vision as a cost-effective feedback source; on the other hand, it imposes

  1. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  2. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  3. Real-Time Acquisition of High Quality Face Sequences from an Active Pan-Tilt-Zoom Camera

    DEFF Research Database (Denmark)

    Haque, Mohammad A.; Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Traditional still-camera-based facial image acquisition systems in surveillance applications produce low-quality face images, mainly due to the distance between the camera and the subjects of interest. Furthermore, people in such videos usually move around and change their head poses and facial expressions. This paper presents a pan-tilt-zoom (PTZ) camera-based real-time high-quality face image acquisition system, which utilizes the pan-tilt-zoom parameters of a camera to focus on a human face in a scene and employs a face quality assessment method to log the best-quality faces from the captured frames. The system consists of four modules: face detection, camera control, face tracking, and face quality assessment before logging. Experimental results show that the proposed system can effectively log high-quality faces from the active camera in real time (an average of 61.74 ms was spent per frame) with an accuracy of 85.27% compared to human-annotated data.

  4. System for whole body imaging and count profiling with a scintillation camera

    International Nuclear Information System (INIS)

    Kaplan, E.; Cooke, M.B.D.

    1976-01-01

    The present invention relates to a method of and apparatus for the radionuclide imaging of the whole body of a patient using an unmodified scintillation camera which permits a patient to be continuously moved under or over the stationary camera face along one axis at a time, parallel passes being made to increase the dimension of the other axis. The system includes a unique electrical circuit which makes it possible to digitally generate new matrix coordinates by summing the coordinates of a first fixed reference frame and the coordinates of a second moving reference frame. 19 claims, 7 figures
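    The claimed circuit's coordinate summing can be sketched in a few lines. The function name and frame convention below are hypothetical, purely to illustrate how adding a fixed camera-frame coordinate and a moving table-frame coordinate yields whole-body matrix coordinates:

```python
def whole_body_coords(event_xy, table_offset_xy):
    """Map a scintillation event from the camera's fixed reference frame
    into the whole-body matrix by summing it with the moving reference
    frame's current offset (illustrative model of the patent's circuit)."""
    ex, ey = event_xy
    ox, oy = table_offset_xy
    return ex + ox, ey + oy

# As the table moves along one axis, the same camera pixel lands in new
# matrix rows, building up the whole-body image pass by pass.
mapped = whole_body_coords((10, 20), (0, 100))
```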

  5. Inspecting rapidly moving surfaces for small defects using CNN cameras

    Science.gov (United States)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected in real time for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz on line cameras, numbers far beyond what available line cameras offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.
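    The bottleneck argument can be made concrete with the figure of merit the article names, the ratio of surface speed to lateral feature size. A minimal sketch (the two-samples-per-feature criterion is an added assumption, not from the article):

```python
def required_line_rate(surface_speed_mps, feature_size_m, samples_per_feature=2):
    """Minimum line rate so that a defect moving past the camera spans the
    desired number of scan lines (Nyquist-style criterion, illustrative)."""
    return samples_per_feature * surface_speed_mps / feature_size_m

# A 10 m/s wire with 100 um defects needs ~200 kHz at 2 lines per feature,
# well above typical line-camera rates; on-chip CNN processing sidesteps
# the camera-to-computer data transport that imposes this limit.
rate = required_line_rate(10.0, 100e-6)
```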

  6. Analysis of gait using a treadmill and a Time-of-flight camera

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2009-01-01

    We present a system that analyzes human gait using a treadmill and a Time-of-flight camera. The camera provides spatial data with local intensity measures of the scene, and data are collected over several gait cycles. These data are then used to model and analyze the gait. For each frame...

  7. Applying UV cameras for SO2 detection to distant or optically thick volcanic plumes

    Science.gov (United States)

    Kern, Christoph; Werner, Cynthia; Elias, Tamar; Sutton, A. Jeff; Lübcke, Peter

    2013-01-01

    Ultraviolet (UV) camera systems represent an exciting new technology for measuring two dimensional sulfur dioxide (SO2) distributions in volcanic plumes. The high frame rate of the cameras allows the retrieval of SO2 emission rates at time scales of 1 Hz or higher, thus allowing the investigation of high-frequency signals and making integrated and comparative studies with other high-data-rate volcano monitoring techniques possible. One drawback of the technique, however, is the limited spectral information recorded by the imaging systems. Here, a framework for simulating the sensitivity of UV cameras to various SO2 distributions is introduced. Both the wavelength-dependent transmittance of the optical imaging system and the radiative transfer in the atmosphere are modeled. The framework is then applied to study the behavior of different optical setups and used to simulate the response of these instruments to volcanic plumes containing varying SO2 and aerosol abundances located at various distances from the sensor. Results show that UV radiative transfer in and around distant and/or optically thick plumes typically leads to a lower sensitivity to SO2 than expected when assuming a standard Beer–Lambert absorption model. Furthermore, camera response is often non-linear in SO2 and dependent on distance to the plume and plume aerosol optical thickness and single scatter albedo. The model results are compared with camera measurements made at Kilauea Volcano (Hawaii) and a method for integrating moderate resolution differential optical absorption spectroscopy data with UV imagery to retrieve improved SO2 column densities is discussed.
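    The "standard Beer–Lambert absorption model" that the paper shows to overestimate sensitivity for distant or optically thick plumes can be sketched as follows. This is the baseline retrieval the radiative-transfer simulations correct; the cross-section value is illustrative, not taken from the paper:

```python
import math

def so2_column_beer_lambert(i_plume, i_background, sigma_cm2=3.0e-19):
    """Baseline Beer-Lambert SO2 column retrieval (molecules/cm^2) from
    plume and clear-sky intensities. Radiative transfer in and around
    real plumes makes the true camera response deviate from this model."""
    tau = math.log(i_background / i_plume)  # apparent optical depth
    return tau / sigma_cm2

# 20% apparent absorption relative to the clear-sky background
col = so2_column_beer_lambert(80.0, 100.0)
```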

  8. Implementation of an image acquisition and processing system based on FlexRIO, CameraLink and areaDetector

    Energy Technology Data Exchange (ETDEWEB)

    Esquembri, S.; Ruiz, M. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Barrera, E., E-mail: eduardo.barrera@upm.es [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Sanz, D.; Bustos, A. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Castro, R.; Vega, J. [National Fusion Laboratory, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • The system presented acquires and processes images from any CameraLink-compliant camera. • The frame grabber implemented with FlexRIO technology has image time-stamping and preprocessing capabilities. • The system is integrated into EPICS using areaDetector for flexible configuration of the image acquisition and processing chain. • It is fully compatible with the architecture of the ITER Fast Controllers. - Abstract: Image processing systems are commonly used in current physics experiments, such as nuclear fusion experiments. These experiments usually require multiple cameras with different resolutions, frame rates and, frequently, different software drivers. The integration of heterogeneous types of cameras without a unified hardware and software interface increases the complexity of the acquisition system. This paper presents the implementation of a distributed image acquisition and processing system for CameraLink cameras. This system implements a camera frame grabber using Field Programmable Gate Arrays (FPGAs), a reconfigurable hardware platform that allows for image acquisition and real-time preprocessing. The frame grabber is integrated into the Experimental Physics and Industrial Control System (EPICS) using the areaDetector EPICS software module, which offers a common interface shared among tens of cameras to configure the image acquisition and process these images in a distributed control system. The use of areaDetector also allows the image processing to be parallelized and concatenated using multiple computers, areaDetector plugins, and the areaDetector standard data type, NDArrays. The architecture developed is fully compatible with ITER Fast Controllers, and the entire system has been validated using a camera hardware simulator that streams videos from fusion experiment databases.

  9. A wide field X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.; Turner, M.J.L.; Willingale, R.

    1980-01-01

    A wide field of view X-ray camera based on the Dicke or Coded Mask principle is described. It is shown that this type of instrument is more sensitive than a pin-hole camera, or than a scanning survey of a given region of sky for all wide field conditions. The design of a practical camera is discussed and the sensitivity and performance of the chosen design are evaluated by means of computer simulations. The Wiener Filter and Maximum Entropy methods of deconvolution are described and these methods are compared with each other and cross-correlation using data from the computer simulations. It is shown that the analytic expressions for sensitivity used by other workers are confirmed by the simulations, and that ghost images caused by incomplete coding can be substantially eliminated by the use of the Wiener Filter and the Maximum Entropy Method, with some penalty in computer time for the latter. The cyclic mask configuration is compared with the simple mask camera. It is shown that when the diffuse X-ray background dominates, the simple system is more sensitive and has the better angular resolution. When sources dominate the simple system is less sensitive. It is concluded that the simple coded mask camera is the best instrument for wide field imaging of the X-ray sky. (orig.)
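    The Wiener-filter deconvolution compared in the paper can be sketched with a standard frequency-domain implementation. This is a generic textbook version, not the authors' code, and the random binary mask below merely stands in for a coded aperture:

```python
import numpy as np

def wiener_deconvolve(image, mask_psf, nsr=1e-3):
    """Frequency-domain Wiener filter: divide out the mask's transfer
    function, regularised by a noise-to-signal ratio so near-zero
    frequencies do not blow up (the source of coded-mask ghost images)."""
    H = np.fft.fft2(mask_psf, s=image.shape)
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Encode a point source through a random binary mask (circular convolution),
# then recover it: the reconstruction peaks at the source position.
rng = np.random.default_rng(0)
mask = (rng.random((16, 16)) > 0.5).astype(float)
scene = np.zeros((16, 16))
scene[5, 9] = 1.0
detector = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(mask)))
recon = wiener_deconvolve(detector, mask)
peak = np.unravel_index(np.argmax(recon), recon.shape)
```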

  10. Analysis of the three-dimensional trajectories of dusts observed with a stereoscopic fast framing camera in the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

    Shoji, M., E-mail: shoji@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan); Masuzaki, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan); Tanaka, Y. [Kanazawa University, Kakuma, Kanazawa 920-1192 (Japan); Pigarov, A.Yu.; Smirnov, R.D. [University of California at San Diego, La Jolla, CA 92093 (United States); Kawamura, G.; Uesugi, Y.; Yamada, H. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan)

    2015-08-15

    The three-dimensional trajectories of dusts have been observed with two stereoscopic fast framing cameras installed in upper and outer viewports in the Large Helical Device (LHD). The observations show that the dust trajectories are located in the divertor legs and in an ergodic layer around the main plasma confinement region. While it is found that most of the dusts approximately move along the magnetic field lines with acceleration, there are some dusts which have sharply curved trajectories crossing over the magnetic field lines. A dust transport simulation code was modified to investigate the dust trajectories in fully three-dimensional geometries such as LHD plasmas. It can explain the general trend of most of the observed dust trajectories by the effect of the plasma flow in the peripheral plasma. However, the behavior of some dusts with sharply curved trajectories is not consistent with the simulations.

  11. High-Speed Videography Instrumentation And Procedures

    Science.gov (United States)

    Miller, C. E.

    1982-02-01

    High-speed videography has been an electronic analog of low-speed film cameras, but with the advantages of instant replay and simplicity of operation. Recent advances have pushed frame rates into the realm of the rotating prism camera. Some characteristics of videography systems are discussed in conjunction with applications in sports analysis, and with sports equipment testing.

  12. IEEE 1394 CAMERA IMAGING SYSTEM FOR BROOKHAVEN'S BOOSTER APPLICATION FACILITY BEAM DIAGNOSTICS

    International Nuclear Information System (INIS)

    BROWN, K.A.; FRAK, B.; GASSNER, D.; HOFF, L.; OLSEN, R.H.; SATOGATA, T.; TEPIKIAN, S.

    2002-01-01

    Brookhaven's Booster Applications Facility (BAF) will deliver resonant extracted heavy ion beams from the AGS Booster to short-exposure fixed-target experiments located at the end of the BAF beam line. The facility is designed to deliver a wide range of heavy ion species over a range of intensities from 10^3 to over 10^8 ions/pulse, and over a range of energies from 0.1 to 3.0 GeV/nucleon. With these constraints we have designed instrumentation packages which can deliver the maximum amount of dynamic range at a reasonable cost. Through the use of high-quality optics systems and neutral-density light filters we will achieve 4 to 5 orders of magnitude in light collection. By using digital IEEE 1394 camera systems we are able to eliminate the frame-grabber stage in processing and directly transfer data at maximum rates of 400 Mb/sec. In this note we give a detailed description of the system design and discuss the parameters used to develop the system specifications. We also discuss the IEEE 1394 camera software interface and the high-level user interface.

  13. Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD

    Science.gov (United States)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

    2007-01-01

    We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec, and is small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous shots of up to 1,000 frames. In an experiment, video footage was captured at an athletics meet. Because of the high-speed shooting, even detailed movements of the athletes' muscles were captured. This camera can capture clear slow-motion videos, enabling previously impossible live footage to be imaged for various TV broadcasting programs.

  14. PC-AT to gamma camera interface ANUGAMI-S

    International Nuclear Information System (INIS)

    Bhattacharya, Sadhana; Gopalakrishnan, K.R.

    1997-01-01

    PC-AT to gamma camera interface is an image acquisition system used in nuclear medicine centres and hospitals. The interface hardware and acquisition software have been designed and developed to meet most of the routine clinical applications using a gamma camera. The state-of-the-art design of the interface provides quality improvement in addition to image acquisition, by applying on-line uniformity correction, which is essential for gamma camera applications in nuclear medicine. The improvement in image quality has been achieved by image acquisition in a positionally varying and sliding energy window. It supports all acquisition modes, viz. static, dynamic and gated acquisition modes, with and without uniformity correction. The user interface provides acquisition in various user-selectable frame sizes, orientations and colour palettes. A complete emulation of the camera console has been provided along with a persistence scope and acquisition parameter display. It is a universal system which provides a modern, cost-effective and easily maintainable solution for interfacing any gamma camera to a PC or upgrading an analog gamma camera. (author). 4 refs., 3 figs

  15. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    A color line scan camera family which is available with 6000, 8000 or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation is described in this paper. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Mpixels/sec. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
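    The on-board 12-to-8-bit conversion via a user-defined look-up table can be sketched as follows; the gamma value and function name are illustrative, not the camera's defaults:

```python
def make_gamma_lut(gamma=1.0 / 2.2, in_bits=12, out_bits=8):
    """Build a user-defined look-up table of the kind the camera applies on
    board: one 8-bit output code for each of the 4096 possible 12-bit inputs."""
    in_max = (1 << in_bits) - 1
    out_max = (1 << out_bits) - 1
    return [round(out_max * (code / in_max) ** gamma) for code in range(in_max + 1)]

# The LUT is computed once, then conversion is a single table look-up
# per pixel, fast enough for the camera's 24 Mpixels/sec video stream.
lut = make_gamma_lut()
```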

  16. On-Orbit Camera Misalignment Estimation Framework and Its Application to Earth Observation Satellite

    Directory of Open Access Journals (Sweden)

    Seungwoo Lee

    2015-03-01

    Despite efforts to precisely align imaging sensors and attitude sensors before launch, the accuracy of pre-launch alignment is limited. The misalignment between the attitude frame and the camera frame is especially important, as it is related to the localization error of the spacecraft, which is one of the essential factors of satellite image quality. In this paper, a framework for camera misalignment estimation is presented with its application to a high-resolution earth-observation satellite, Deimos-2. The framework intends to provide a solution for estimation and correction of the camera misalignment of a spacecraft, covering image acquisition planning through to the mathematical solution of the camera misalignment. Considerations for effective image acquisition planning to obtain reliable results are discussed, followed by a detailed description of a practical method for extracting many GCPs automatically using reference ortho-photos. Patterns of localization errors that commonly occur due to camera misalignment are also investigated. A mathematical model for camera misalignment estimation is described comprehensively. The results of simulation experiments showing the validity and accuracy of the misalignment estimation model are provided. The proposed framework was applied to Deimos-2, and the real-world data and results from Deimos-2 are presented.
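    A generic way to pose this estimation problem, standing in for the paper's mathematical model: given line-of-sight vectors predicted from the attitude frame and the same vectors observed via GCPs in the camera frame, solve for the best-fit rotation with the standard Kabsch/SVD method. All names and the test rotation below are illustrative:

```python
import numpy as np

def estimate_misalignment(v_cam, v_att):
    """Least-squares rotation (Kabsch/SVD) taking attitude-frame unit
    vectors to the corresponding camera-frame vectors observed via GCPs.
    A generic stand-in for the paper's misalignment estimation model."""
    B = v_cam.T @ v_att
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Recover a known small boresight rotation from noiseless vector pairs
th = np.deg2rad(0.1)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
v_att = np.random.default_rng(1).normal(size=(20, 3))
v_att /= np.linalg.norm(v_att, axis=1, keepdims=True)
v_cam = v_att @ R_true.T
R_est = estimate_misalignment(v_cam, v_att)
```

With real GCPs the residuals are noisy and the fit is repeated over many acquisitions, which is why the paper stresses acquisition planning.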

  17. Taking it all in : special camera films in 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2006-07-15

    Details of a 360-degree digital camera designed by Immersive Media Telemmersion were presented. The camera has been employed extensively in the United States for homeland security and intelligence-gathering purposes. In Canada, the cameras are now being used by the oil and gas industry. The camera has 11 lenses pointing in all directions and generates high resolution movies that can be analyzed frame-by-frame from every angle. Global positioning satellite data can be gathered during filming so that operators can pinpoint any location. The 11 video streams use more than 100 million pixels per second. After filming, the system displays synchronized, high-resolution video streams, capturing a full motion spherical world complete with directional sound. It can be viewed on a computer monitor, video screen, or head-mounted display. Pembina Pipeline Corporation recently used the Telemmersion system to plot a proposed pipeline route between Alberta's Athabasca region and Edmonton. It was estimated that more than $50,000 was saved by using the camera. The resulting video has been viewed by Pembina's engineering, environmental and geotechnical groups who were able to accurately note the route's river crossings. The cameras were also used to estimate timber salvage. Footage was then given to the operations group, to help staff familiarize themselves with the terrain, the proposed route's right-of-way, and the number of water crossings and access points. Oil and gas operators have also used the equipment on a recently acquired block of land to select well sites. 4 figs.

  18. A framed, 16-image Kirkpatrick-Baez x-ray microscope

    Science.gov (United States)

    Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.; Glebov, V. Yu.; Peng, B.; Regan, S. P.; Sangster, T. C.; Stoeckl, C.

    2017-09-01

    A 16-image Kirkpatrick-Baez (KB)-type x-ray microscope consisting of compact KB mirrors [F. J. Marshall, Rev. Sci. Instrum. 83, 10E518 (2012)] has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ˜30 ps. Images are arranged four to a strip with ˜60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ˜15 ps is achieved. A framed resolution of ˜6-μm is achieved with this combination in a 400-μm region of laser-plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester's OMEGA Laser System [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. The unprecedented time and spatial resolutions achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. These core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield allow for inference of core pressures, currently exceeding 50 Gbar in OMEGA cryogenic target implosions [Regan et al., Phys. Rev. Lett. 117, 025001 (2016)].
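    The interleaving of strips and frames described above can be sketched numerically, using the record's nominal values (four gated strips, four frames per strip at ~60 ps spacing, strips offset by ~15 ps); the perfectly uniform cadence is the idealised case:

```python
# Idealised timing schedule of the framed 16-image KB microscope:
# strip s starts at s * 15 ps, and its four frames are 60 ps apart.
STRIP_OFFSET_PS = 15
FRAME_SPACING_PS = 60

frame_times = sorted(strip * STRIP_OFFSET_PS + frame * FRAME_SPACING_PS
                     for strip in range(4) for frame in range(4))
# Interleaving the strips yields 16 frames at a uniform 15 ps cadence.
```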

  19. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    International Nuclear Information System (INIS)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-01-01

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs

  20. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    Energy Technology Data Exchange (ETDEWEB)

    Chai, Kil-Byoung; Bellan, Paul M. [Applied Physics, Caltech, 1200 E. California Boulevard, Pasadena, California 91125 (United States)

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  1. Measurement of the timing behaviour of off-the-shelf cameras

    Science.gov (United States)

    Schatz, Volker

    2017-04-01

    This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity at O(10^-3) to O(10^-2) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
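    The measurement principle, inferring timing from the overlap of a delayed light pulse with the exposure window via summed pixel values, can be sketched with an idealised rectangular model. All timing values below are hypothetical, not from the paper:

```python
def overlap_signal(delay, trigger_delay=5.0, exposure=20.0, pulse=2.0):
    """Normalised pixel sum versus source delay for an ideal camera: the
    overlap of a rectangular light pulse [delay, delay + pulse] with the
    exposure window [trigger_delay, trigger_delay + exposure] (all in us).
    Sweeping `delay` traces out this function; the rising edge locates the
    trigger delay and the plateau width gives the exposure time."""
    lo = max(delay, trigger_delay)
    hi = min(delay + pulse, trigger_delay + exposure)
    return max(0.0, hi - lo) / pulse

sweep = [overlap_signal(0.5 * k) for k in range(80)]
```

The deviations the paper reports (residual sensitivity after the nominal exposure, non-linear sensitivity variations during it) are precisely departures from this rectangular idealisation.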

  2. Analyzing Gait Using a Time-of-Flight Camera

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2009-01-01

    An algorithm is created, which performs human gait analysis using spatial data and amplitude images from a Time-of-flight camera. For each frame in a sequence the camera supplies cartesian coordinates in space for every pixel. By using an articulated model the subject pose is estimated in the depth map in each frame. The pose estimation is based on likelihood, contrast in the amplitude image, smoothness and a shape prior used to solve a Markov random field. Based on the pose estimates, and the prior that movement is locally smooth, a sequential model is created, and a gait analysis is done on this model. The output data are: speed, cadence (steps per minute), step length, stride length (a stride being two consecutive steps, also known as a gait cycle), and range of motion (angles of joints). The created system produces good output data of the described output parameters and requires no user...

  3. Performance characterization of UV science cameras developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-07-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-α and to detect the Hanle effect in the line core. Due to the nature of Lyman-α polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. The CLASP cameras were designed to operate with ≤ 10 e-/pixel/second dark current, ≤ 25 e- read noise, a gain of 2.0 ± 0.5, and ≤ 1.0% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.
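    The rotating-waveplate modulation can be illustrated with a toy demodulation: for an ideal continuously rotating half-wave plate followed by an analyzer, the signal is modulated at four times the waveplate angle, and Q/I, U/I follow from a least-squares fit. This is a textbook idealisation, not the CLASP pipeline:

```python
import numpy as np

def demodulate_qu(theta, counts):
    """Fit counts ~ a0 + a1*cos(4*theta) + a2*sin(4*theta) and return
    (Q/I, U/I) = (a1/a0, a2/a0). Idealised rotating-waveplate model."""
    A = np.column_stack([np.ones_like(theta),
                         np.cos(4 * theta),
                         np.sin(4 * theta)])
    a0, a1, a2 = np.linalg.lstsq(A, counts, rcond=None)[0]
    return a1 / a0, a2 / a0

# Synthetic exposures, synchronized to the waveplate angle as in CLASP,
# carrying 0.1% and 0.2% polarization signals
theta = np.linspace(0, np.pi, 64, endpoint=False)
counts = 1000.0 * (1 + 0.001 * np.cos(4 * theta) + 0.002 * np.sin(4 * theta))
q, u = demodulate_qu(theta, counts)
```

Recovering signals at the 0.1% level is why photon noise, and hence duty cycle and the camera noise figures above, dominate the design.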

  4. A passive terahertz video camera based on lumped element kinetic inductance detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Wood, Ken [QMC Instruments Ltd., School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Grainger, William [Rutherford Appleton Laboratory, STFC, Swindon SN2 1SZ (United Kingdom); Mauskopf, Philip [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); School of Earth Science and Space Exploration, Arizona State University, Tempe, Arizona 85281 (United States); Spencer, Locke [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada)

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.
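
The quoted noise equivalent temperature difference is per frame; if the frame noise is uncorrelated (white), co-adding frames improves it as 1/sqrt(N). A small sketch of that scaling, using the figures quoted above:

```python
import math

def netd_after_coadd(netd_per_frame, n_frames):
    """NETD after averaging n frames, assuming uncorrelated (white) frame noise."""
    return netd_per_frame / math.sqrt(n_frames)

# With ~0.1 K NETD per frame at 2 Hz, one second of data (2 frames)
# would reach ~0.07 K, and 50 s of data (100 frames) ~0.01 K --
# provided the noise really is uncorrelated frame to frame.
print(netd_after_coadd(0.1, 2))   # ≈ 0.071
print(netd_after_coadd(0.1, 100)) # ≈ 0.01
```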

  5. A passive terahertz video camera based on lumped element kinetic inductance detectors

    International Nuclear Information System (INIS)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian; Wood, Ken; Grainger, William; Mauskopf, Philip; Spencer, Locke

    2016-01-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  6. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    Science.gov (United States)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), ISS-Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages (∼10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. 
Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and
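
The roughly tenfold resolution advantage over ISS-LIS and GLM follows from simple pinhole geometry. A sketch with illustrative optics (these are not the actual crew-camera specifications):

```python
def ground_sample_distance(altitude_m, focal_length_m, pixel_pitch_m):
    """Nadir ground footprint of one pixel under the pinhole approximation."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Hypothetical example: a 400 mm lens with 5 um pixels imaging from
# ~400 km altitude resolves ~5 m per pixel at nadir -- orders of
# magnitude finer than the 4-8 km pixels of ISS-LIS and GLM.
gsd = ground_sample_distance(400e3, 0.4, 5e-6)
print(gsd)  # ≈ 5.0 m
```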

  7. The contribution to the modal analysis using an infrared camera

    Directory of Open Access Journals (Sweden)

    Dekys Vladimír

    2018-01-01

    Full Text Available The paper deals with modal analysis using an infrared camera. The test objects were excited by the modal exciter with narrowband noise and the response was registered as a frame sequence by the high speed infrared camera FLIR SC7500. The resonant frequencies and the modal shapes were determined from the infrared spectrum recordings. Lock-in technology has also been used. The experimental results were compared with calculated natural frequencies and modal shapes.

  8. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

    Full Text Available We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects is of utmost importance in a crowded scene. Our technique assumes a partially overlapped multi-camera setup where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (target) in new frames. A color pattern update scheme has been considered to further optimize the efficiency of the object tracking when the object pattern changes due to object motion in the fields of view of the cameras. An evaluation of our approach is presented with results on the PETS2007 dataset.

  9. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    Directory of Open Access Journals (Sweden)

    Wei Feng

    2016-03-01

    Full Text Available High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture such phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera.
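
The forward model behind per-pixel coded exposure can be simulated in a few lines: the slow sensor integrates several high-speed sub-frames per readout, but the DMD opens each pixel for only a chosen subset of those sub-frames, so one captured frame carries per-pixel temporal samples of the fast scene. The random one-hot codes and array sizes below are illustrative, not the paper's design, and no reconstruction step is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 25 fps sensor integrating T = 4 sub-frames of a 100 fps scene.
T, H, W = 4, 8, 8
scene = rng.random((T, H, W))            # high-speed sub-frames
codes = np.zeros((T, H, W))              # per-pixel binary exposure codes
open_slot = rng.integers(0, T, (H, W))   # sub-frame each pixel samples
codes[open_slot, np.arange(H)[:, None], np.arange(W)] = 1.0

# Single coded exposure: each pixel integrates only its open sub-frame(s).
captured = (codes * scene).sum(axis=0)
```

With one-hot codes each captured pixel equals one sub-frame value, which is the information a decoder would exploit to rebuild the fast sequence.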

  10. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R ∼ 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40,500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulence codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor

  11. What about getting physiological information into dynamic gamma camera studies

    International Nuclear Information System (INIS)

    Kiuru, A.; Nickles, R. J.; Holden, J. E.; Polcyn, R. E.

    1976-01-01

    A general technique has been developed for the multiplexing of time dependent analog signals into the individual frames of a gamma camera dynamic function study. A pulse train, frequency-modulated by the physiological signal, is capacitively coupled to the preamplifier servicing any one of the outer phototubes of the camera head. These negative tail pulses imitate photoevents occurring at a point outside of the camera field of view, chosen to occupy a data cell in an unused corner of the computer-stored square image. By defining a region of interest around this cell, the resulting time-activity curve displays the physiological variable in temporal synchrony with the radiotracer distribution. (author)
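
The multiplexing idea reduces to counting: because the injected pulse train is frequency-modulated by the physiological signal, the number of counts landing in the reserved corner cell of each frame tracks the signal, and decoding is just counts-per-frame back to frequency. A numerical sketch with illustrative numbers (not from the paper):

```python
import numpy as np

frame_time = 0.1                                   # s per dynamic frame
t = np.arange(0, 10, frame_time)
signal = 1.0 + 0.5 * np.sin(2 * np.pi * 0.2 * t)   # e.g. a respiration trace
base_rate = 100.0                                  # injected pulses/s at signal = 1

# Counts accumulated in the corner ROI of each frame (noise-free model).
counts = base_rate * signal * frame_time

# Decoding: the ROI time-activity curve recovers the physiological signal.
recovered = counts / (base_rate * frame_time)
```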

  12. Photogrammetry of the Map Instrument in a Cryogenic Vacuum Environment

    Science.gov (United States)

    Hill, M.; Packard, E.; Pazar, R.

    2000-01-01

    MAP Instrument requirements dictated that the instrument's Focal Plane Assembly (FPA) and Thermal Reflector System (TRS) maintain a high degree of structural integrity at operational temperatures (photogrammetry camera. This paper will discuss MAP's Instrument requirements, how those requirements were verified using photogrammetry, and the test setup used to provide the environment and camera movement needed to verify the instrument's requirements.

  13. Colors and Photometry of Bright Materials on Vesta as Seen by the Dawn Framing Camera

    Science.gov (United States)

    Schroeder, S. E.; Li, J.-Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.; hide

    2012-01-01

    The Dawn spacecraft has been in orbit around the asteroid Vesta since July 2011. The on-board Framing Camera has acquired thousands of high-resolution images of the regolith-covered surface through one clear and seven narrow-band filters in the visible and near-IR wavelength range. It has observed bright and dark materials that have a range of reflectance that is unusually wide for an asteroid. Material brighter than average is predominantly found on crater walls, and in ejecta surrounding craters in the southern hemisphere. Most likely, the brightest material identified on the Vesta surface so far is located on the inside of a crater at 64.27deg S, 1.54deg. The apparent brightness of a regolith is influenced by factors such as particle size, mineralogical composition, and viewing geometry. As such, the presence of bright material can indicate differences in lithology and/or degree of space weathering. We retrieve the spectral and photometric properties of various bright terrains from false-color images acquired in the High Altitude Mapping Orbit (HAMO). We find that most bright material has a deeper 1-μm pyroxene band than average. However, the aforementioned brightest material appears to have a 1-μm band that is actually less deep, a result that awaits confirmation by the on-board VIR spectrometer. This site may harbor a class of material unique for Vesta. We discuss the implications of our spectral findings for the origin of bright materials.
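
The pyroxene band-depth comparison above uses a standard definition: the reflectance at the band center relative to the interpolated continuum. A minimal sketch with hypothetical reflectance values (these are not Dawn Framing Camera measurements):

```python
def band_depth(continuum_reflectance, band_reflectance):
    """Absorption band depth: 1 - R_band / R_continuum.

    A deeper band (larger value) at 1 um is the pyroxene signature
    discussed in the abstract.
    """
    return 1.0 - band_reflectance / continuum_reflectance

# Hypothetical bright material: continuum reflectance 0.40, reflectance
# 0.28 at the 1-um band center, giving a band depth of ~30%.
print(band_depth(0.40, 0.28))  # ≈ 0.30
```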

  14. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Curt Allen; Terence Davies; Frans Janson; Ronald Justin; Bruce Marshall; Oliver Sweningsen; Perry Bell; Roger Griffith; Karla Hagans; Richard Lerche

    2004-01-01

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations

  15. Development of nuclear imaging instrument and software

    International Nuclear Information System (INIS)

    Kim, Jang Hee; Chung Jae Myung; Nam, Sang Won; Chang, Hyung Uk

    1999-03-01

    In medical diagnosis, nuclear medicine instruments using radioactive isotopes are commonly utilized. In other countries, the medical application and development of the most advanced nuclear medical instruments, such as Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET), have been extensively carried out. In Korea, however, such highly expensive instruments have all been imported, at a cost in foreign currency. Since 1997, much effort has been devoted to developing radionuclide medical instruments and driving domestic production, in order to establish our own technologies and to balance international payments, with the support of the Ministry of Science and Technology. At present, 180 nuclear imaging instruments are in operation and 60 of them are analog cameras. An analog camera requires a vector X-Y monitor for image display. Since the analog camera signal cannot be processed in digital form, transferring and storing the image data is difficult: the image displayed on the monitor must be stored on Polaroid or X-ray film. To remove these disadvantages, we developed a computer interface system that makes the performance of an analog camera comparable with that of a digital camera. The final objective of the research is to use the interface system developed here to reconstruct the image data transmitted to a personal computer in the form of a generalized data file

  16. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  17. Integration of USB and firewire cameras in machine vision applications

    Science.gov (United States)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper will briefly describe a couple of the consumer digital standards, and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  18. Quotation and Framing

    DEFF Research Database (Denmark)

    Petersen, Nils Holger

    2010-01-01

    In Black Angels the composer – among other well-known pieces of music – quotes the medieval Dies irae sequence and the second movement of Schubert's string quartet in D minor (D. 810). The musical and intermedial references are framed with striking modernistic sounds exploring instrumental possibilities...

  19. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with ≤ 10 e-/pixel/second dark current, ≤ 25 e- read noise, a gain of 2.0 +/- 0.5 and ≤ 1.0 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera; dark current, read noise, camera gain and residual non-linearity.

  20. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides technical description regarding the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument provides capture of video images available in CCIR format. Two memory planes each with a capacity of 512 x 512 x 8 bit data enable storage of two video image frames. The stored image can be processed on-line and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC Add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs
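
On-line subtraction of two 8-bit frame planes, as described above, has to cope with negative differences. A common convention is to add a mid-scale offset and clip to the 8-bit range; the sketch below uses that convention for illustration (the abstract does not specify how the VFP represents signed differences):

```python
import numpy as np

def subtract_frames(frame_a, frame_b, offset=128):
    """Compare two 8-bit frames by subtraction.

    An offset keeps negative differences representable in 8 bits, and the
    result is clipped to 0..255. (The offset convention is an assumption,
    not necessarily what the BARC instrument does.)
    """
    diff = frame_a.astype(np.int16) - frame_b.astype(np.int16) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)

# Two 512 x 512 x 8-bit planes, matching the VFP's memory organization.
a = np.full((512, 512), 130, dtype=np.uint8)
b = np.full((512, 512), 120, dtype=np.uint8)
print(subtract_frames(a, b)[0, 0])  # → 138 (offset 128 + difference 10)
```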

  1. High image quality sub 100 picosecond gated framing camera development

    International Nuclear Information System (INIS)

    Price, R.H.; Wiedwald, J.D.

    1983-01-01

    A major challenge for laser fusion is the study of the symmetry and hydrodynamic stability of imploding fuel capsules. Framed x-radiographs of 10-100 ps duration, excellent image quality, minimum geometrical distortion (< 1%), dynamic range greater than 1000, and more than 200 x 200 pixels are required for this application. Recent progress on a gated proximity focused intensifier which meets these requirements is presented

  2. Measurements of plasma termination in ICRF heated long pulse discharges with fast framing cameras in the Large Helical Device

    International Nuclear Information System (INIS)

    Shoji, Mamoru; Kasahara, Hiroshi; Tanaka, Hirohiko

    2015-01-01

    The termination process of long pulse plasma discharges in the Large Helical Device (LHD) has been observed with fast framing cameras, which show that the reason for the termination of the discharges has changed with increased plasma heating power, improvements to the plasma heating systems, changes to the divertor configuration, etc. For long pulse discharges in FYs 2010-2012, the main reason triggering the plasma termination was a reduction of ICRF heating power accompanied by a rise of iron ion emission due to electric breakdown in an ICRF antenna. In the experimental campaign in FY 2013, the duration of ICRF heated long pulse plasma discharges was extended to about 48 minutes with a plasma heating power of ∼1.2 MW and a line-averaged electron density of ∼1.2 × 10¹⁹ m⁻³. The termination of these discharges was triggered by the release of large amounts of carbon dust from closed divertor regions, indicating that control of dust formation in the divertor regions is indispensable for extending the duration of long pulse discharges. (author)

  3. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as an information-storing medium. It allows us to take 10 holograms of an object, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to obtain these results easily, no moving parts are used in the set-up [fr]

  4. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. The individual cameras of the device stand on a hexapod mount that is fully capable of achieving sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod also allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself using observed stars independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics were designed within our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  5. Recording of radiation-induced optical density changes in doped agarose gels with a CCD camera

    International Nuclear Information System (INIS)

    Tarte, B.J.; Jardine, P.A.; Van Doorn, T.

    1996-01-01

    Full text: Spatially resolved dose measurement with iron-doped agarose gels is continuing to be investigated for applications in radiotherapy dosimetry. It has previously been proposed to use optical methods, rather than MRI, for dose measurement with such gels, and this has been investigated using a spectrophotometer (Appleby A and Leghrouz A, Med Phys, 18:309-312, 1991). We have previously studied the use of a pencil beam laser for such optical density measurement of gels and are currently investigating charge-coupled device (CCD) camera imaging for the same purpose, with the advantages of higher data acquisition rates and potentially greater spatial resolution. The gels used in these studies were poured, irradiated and optically analysed in Perspex casts providing gel sections 1 cm thick and up to 20 cm x 30 cm in dimension. The gels were also infused with a metal indicator dye (xylenol orange) to render the radiation induced oxidation of the iron in the gel sensitive to optical radiation, specifically in the green spectral region. Data acquisition with the CCD camera involved illumination of the irradiated gel section with a diffuse white light source, with the light from the plane of the gel section focussed onto the CCD array with a manual zoom lens. The light was also filtered with a green colour glass filter to maximise the contrast between unirradiated and irradiated gels. The CCD camera (EG&G Reticon MC4013) featured a 1024 x 1024 pixel array and was interfaced to a PC via a frame grabber acquisition board with 8 bit resolution. The performance of the gel dosimeter was appraised in mapping of physical and dynamic wedged 6 MV X-ray fields. The results from the CCD camera detection system were compared with both ionisation chamber data and laser based optical density measurements of the gels. Cross beam profiles were extracted from each measurement system at a particular depth (e.g. 2.3 cm for the physical wedge field) for direct comparison.
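
Converting the CCD images described above into an optical density map is a per-pixel Beer-Lambert calculation against a flood-field reference: OD = -log10(I/I0). A minimal sketch with made-up intensities (not data from this study):

```python
import numpy as np

def optical_density(gel_image, flood_image):
    """Per-pixel optical density of a gel section.

    gel_image is the transmitted-light image of the irradiated gel;
    flood_image is the same illumination with no absorbing gel (or an
    unirradiated reference). OD = -log10(I / I0).
    """
    return -np.log10(gel_image / flood_image)

# Illustrative values: 10% transmission -> OD = 1; 50% -> OD ≈ 0.301.
od = optical_density(np.array([20.0, 100.0]), np.array([200.0, 200.0]))
print(od)
```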

  6. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing and surveillance systems for monitoring human behaviour, as well as biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations for HMA systems specifically in biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for an HMA system for biomedical applications.

  7. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the worldwide level of industrial development requires considerably higher standards of quality for manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Thanks to their properties (beams are easily obtained, and the materials they penetrate are very well discriminated), thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of neutron imaging system. For real-time investigations, tube-type cameras, CCD cameras and, recently, CID cameras are used, capturing the image from an appropriate scintillator via a mirror. The analog signal of the camera is then converted into a digital signal by the signal-processing electronics included in the camera. The image acquisition card, or frame grabber, in a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electric motors that move the object table horizontally and vertically and rotate it. Based on this system, many static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects are done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique.

  8. Multi-view 3D human pose estimation combining single-frame recovery, temporal integration and model adaptation

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrilla, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single frame pose recovery, temporal integration and model adaptation. Single frame pose recovery consists of a hypothesis

  9. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Muhammad Sajjad

    2014-02-01

    Full Text Available Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of the OOMP helps produce a HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.

  10. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of the OOMP helps produce a HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.
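
The OMP baseline that OOMP refines is compact enough to sketch: greedily pick the dictionary atom most correlated with the residual, then re-fit all selected coefficients by least squares. This is plain OMP for illustration, not the authors' OOMP/Batch-OMP implementation; the demo uses an orthonormal dictionary, where recovery of the sparse code is exact:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Plain orthogonal matching pursuit.

    D: dictionary with unit-norm columns (atoms); y: observed signal;
    n_nonzero: sparsity level. Returns the sparse coefficient vector.
    """
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all chosen coefficients jointly (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Demo: recover a 2-sparse code over an orthonormal dictionary.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((20, 20)))
x_true = np.zeros(20)
x_true[[3, 17]] = [1.5, -2.0]
x_hat = omp(D, D @ x_true, n_nonzero=2)
```

For overcomplete dictionaries (as in K-SVD-based SR), recovery is only guaranteed under incoherence conditions, which is the gap OOMP's sparsity detection targets.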

  11. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands. The camera operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  12. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    Science.gov (United States)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E allows for a relatively high (30%) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with ≤10 e-/pixel/second dark current, ≤25 e- read noise, a gain of 2.0 and ≤0.1% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.

  13. X-Ray Powder Diffraction with Guinier - Haegg Focusing Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Allan

    1970-12-15

    The Guinier - Haegg focusing camera is discussed with reference to its use as an instrument for rapid phase analysis. An actual camera and the alignment procedure employed in its setting up are described. The results obtained with the instrument are compared with those obtained with Debye - Scherrer cameras and powder diffractometers. Exposure times of 15 - 30 minutes with compounds of simple structure are roughly one-sixth of those required for Debye - Scherrer patterns. Coupled with the lower background resulting from the use of a monochromatic X-ray beam, the shorter exposure time gives a ten-fold increase in sensitivity for the detection of minor phases as compared with the Debye - Scherrer camera. Attention is paid to the precautions taken to obtain reliable Bragg angles from Guinier - Haegg film measurements, with particular reference to calibration procedures. The evaluation of unit cell parameters from Guinier - Haegg data is discussed together with the application of tests for the presence of angle-dependent systematic errors. It is concluded that with proper calibration procedures and least squares treatment of the data, accuracies of the order of 0.005% are attainable. A compilation of diffraction data for a number of compounds examined in the Active Central Laboratory at Studsvik is presented to exemplify the scope of this type of powder camera.

  14. X-Ray Powder Diffraction with Guinier - Haegg Focusing Cameras

    International Nuclear Information System (INIS)

    Brown, Allan

    1970-12-01

    The Guinier - Haegg focusing camera is discussed with reference to its use as an instrument for rapid phase analysis. An actual camera and the alignment procedure employed in its setting up are described. The results obtained with the instrument are compared with those obtained with Debye - Scherrer cameras and powder diffractometers. Exposure times of 15 - 30 minutes with compounds of simple structure are roughly one-sixth of those required for Debye - Scherrer patterns. Coupled with the lower background resulting from the use of a monochromatic X-ray beam, the shorter exposure time gives a ten-fold increase in sensitivity for the detection of minor phases as compared with the Debye - Scherrer camera. Attention is paid to the precautions taken to obtain reliable Bragg angles from Guinier - Haegg film measurements, with particular reference to calibration procedures. The evaluation of unit cell parameters from Guinier - Haegg data is discussed together with the application of tests for the presence of angle-dependent systematic errors. It is concluded that with proper calibration procedures and least squares treatment of the data, accuracies of the order of 0.005% are attainable. A compilation of diffraction data for a number of compounds examined in the Active Central Laboratory at Studsvik is presented to exemplify the scope of this type of powder camera

  15. Proposed patient motion monitoring system using feature point tracking with a web camera.

    Science.gov (United States)

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all the frames. The software generates a text file that contains the calculated motion for each frame and saves the recording as a compressed audio video interleave (AVI) file. The proposed patient motion monitoring system, built around a web camera, is simple and convenient to set up and increases the safety of treatment delivery.
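The pyramidal Lucas-Kanade tracker used here (available in OpenCV as `cv2.calcOpticalFlowPyrLK`) reduces, at each pyramid level, to a small least-squares solve on image gradients. A minimal single-level sketch in NumPy, run on a synthetic blob shifted by a known sub-pixel motion (all values are illustrative assumptions, not parameters of this system):

```python
import numpy as np

def gaussian_blob(cx, cy, size=48, sigma=6.0):
    """Smooth synthetic 'feature' image centered at (cx, cy)."""
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

# Frame 2 is frame 1 with the feature shifted by a small sub-pixel motion d.
d_true = np.array([0.4, 0.25])                          # (dx, dy) in pixels
I = gaussian_blob(24.0, 24.0)                           # frame at time t
J = gaussian_blob(24.0 + d_true[0], 24.0 + d_true[1])   # frame at time t+1

# Single-level Lucas-Kanade step over the whole window:
# solve G d = -b, with structure tensor G and gradient/temporal mismatch b.
Iy, Ix = np.gradient(I)
It = J - I
G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
b = np.array([np.sum(Ix * It), np.sum(Iy * It)])
d_est = np.linalg.solve(G, -b)
print(d_est)  # close to (0.4, 0.25)
```

The pyramidal version iterates this step from coarse to fine resolution so that motions larger than a pixel or two remain within the linearization range at each level.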

  16. Nanosecond framing photography for laser-produced interstreaming plasmas

    International Nuclear Information System (INIS)

    McLean, E.A.; Ripin, B.H.; Stamper, J.A.; Manka, C.K.; Peyser, T.A.

    1988-01-01

    Using a fast-gated (120 psec-5 nsec) microchannel-plate optical camera (gated optical imager), framing photographs have been taken of the rapidly streaming laser plasma (∼5 × 10⁷ cm/sec) passing through a vacuum or a background gas, with and without a magnetic field. Observations of Large-Larmor-Radius Interchange Instabilities are presented

  17. Event Detection Intelligent Camera: Demonstration of flexible, real-time data taking and processing

    Energy Technology Data Exchange (ETDEWEB)

    Szabolics, Tamás, E-mail: szabolics.tamas@wigner.mta.hu; Cseh, Gábor; Kocsis, Gábor; Szepesi, Tamás; Zoletnik, Sándor

    2015-10-15

    Highlights: • We present a description of EDICAM's operation principles. • Firmware test results. • Software test results. • Further developments. - Abstract: An innovative fast camera (EDICAM – Event Detection Intelligent CAMera) was developed by MTA Wigner RCP in the last few years. This new concept was designed for intelligent event-driven processing, able to detect predefined events and track objects in the plasma. The camera provides a moderate frame rate of 400 Hz at full frame resolution (1280 × 1024), and readout of smaller regions of interest can be done in the 1–140 kHz range even during exposure of the full image. One of the most important advantages of this hardware is a 10 Gbit/s optical link which ensures very fast communication and data transfer between the PC and the camera, enabling two levels of processing: primitive algorithms in the camera hardware and high-level processing in the PC. This camera hardware has successfully proven able to monitor the plasma in several fusion devices, for example ASDEX Upgrade, KSTAR and COMPASS, with the first version of the firmware. A new firmware and software package is under development. It allows predefined events to be detected in real time, so the camera can change its own operation or give warnings, e.g. to the safety system of the experiment. The EDICAM system can handle a huge amount of data (up to TBs) at a high data rate (950 MB/s) and will be used as the central element of the 10-camera overview video diagnostic system of the Wendelstein 7-X (W7-X) stellarator. This paper presents key elements of the newly developed built-in intelligence, stressing the revolutionary new features, and the results of testing the different software elements.

  18. Single photon detection and localization accuracy with an ebCMOS camera

    Energy Technology Data Exchange (ETDEWEB)

    Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Dominjon, A., E-mail: agnes.dominjon@nao.ac.jp [Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France)

    2015-07-01

    CMOS sensor technologies evolve very fast and today offer very promising solutions to issues faced by imaging camera systems. CMOS sensors are very attractive for fast and sensitive imaging thanks to their low pixel noise (1 e-) and the possibility of backside illumination. The ebCMOS group of IPNL has produced a camera system dedicated to Low Light Level detection, based on a 640 kPixel ebCMOS with its acquisition system. After reviewing the detection principle of an ebCMOS and the characteristics of our prototype, we compare our camera with other imaging systems. We compare the identification efficiency and the localization accuracy of a point source for four different photo-detection devices: the scientific CMOS (sCMOS), the Charge Coupled Device (CCD), the Electron Multiplying CCD (emCCD) and the Electron Bombarded CMOS (ebCMOS). Our ebCMOS camera is able to identify a single photon source in less than 10 ms with a localization accuracy better than 1 µm. We also report efficiency measurements and the false-positive identification rate of the ebCMOS camera when identifying hundreds of single-photon sources in parallel. About 700 spots are identified with a detection efficiency higher than 90% and a false-positive percentage lower than 5%. With these measurements, we show that our target tracking algorithm can be implemented in real time at 500 frames per second under a photon flux of the order of 8000 photons per frame. These results demonstrate that the ebCMOS camera concept, with its single photon detection and target tracking algorithm, is one of the best devices for low-light and fast applications such as bioluminescence imaging, quantum dot tracking or adaptive optics.
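The abstract does not specify how the point sources are localized to sub-micron accuracy; one common sub-pixel approach for isolated spots is intensity-weighted centroiding, sketched below on a synthetic spot. This is purely an illustrative assumption, not the IPNL implementation:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y) of a background-subtracted spot."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    total = img.sum()
    return np.array([np.sum(xs * img) / total, np.sum(ys * img) / total])

# Synthetic single-photon spot with a known sub-pixel center.
true_xy = np.array([10.3, 12.7])
ys, xs = np.mgrid[0:32, 0:32].astype(float)
spot = np.exp(-((xs - true_xy[0]) ** 2 + (ys - true_xy[1]) ** 2) / (2 * 2.0 ** 2))
print(centroid(spot))  # ~ [10.3, 12.7]
```

On a real sensor the attainable accuracy of such an estimator is limited by photon shot noise, pixel noise and background, which is why the comparison across sCMOS, CCD, emCCD and ebCMOS devices in the abstract is informative.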

  19. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers.

  20. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. In contrast to traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis, focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  1. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Directory of Open Access Journals (Sweden)

    Gustavo R D Bernardina

    Full Text Available Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. In contrast to traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis, focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  2. Radiation damage of the PCO Pixelfly VGA CCD camera of the BES system on KSTAR tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Náfrádi, Gábor, E-mail: nafradi@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Kovácsik, Ákos, E-mail: kovacsik.akos@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Pór, Gábor, E-mail: por@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Lampert, Máté, E-mail: lampert.mate@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Un Nam, Yong, E-mail: yunam@nfri.re.kr [NFRI, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon 305-806 (Korea, Republic of); Zoletnik, Sándor, E-mail: zoletnik.sandor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary)

    2015-01-11

    A PCO Pixelfly VGA CCD camera, which is part of the Beam Emission Spectroscopy (BES) diagnostic system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device used for spatial calibrations, suffered serious radiation damage: white pixel defects were generated in it. The main goal of this work was to identify the origin of the radiation damage and to give solutions to avoid it. A Monte Carlo N-Particle eXtended (MCNPX) model was built using the Monte Carlo Modeling Interface Program (MCAM), and calculations were carried out to predict the neutron and gamma-ray fields at the camera position. Besides the MCNPX calculations, pure gamma-ray irradiations of the CCD camera were carried out in the Training Reactor of BME. Before, during and after the irradiations, numerous frames were taken with the camera with 5 s exposure times. The evaluation of these frames showed that at the applied high gamma-ray dose (1.7 Gy) and dose rate levels (up to 2 Gy/h) the number of white pixels did not increase. We found that the origin of the white pixel generation was neutron-induced thermal hopping of the electrons, which means that in the future only neutron shielding is necessary around the CCD camera. Another solution could be to replace the CCD camera with a more radiation-tolerant one, for example a suitable CMOS camera, or to apply both solutions simultaneously.

  3. Performance and quality control of nuclear medicine instrumentation

    International Nuclear Information System (INIS)

    Paras, P.

    1981-01-01

    The status and the recent developments of nuclear medicine instrumentation performance, with an emphasis on gamma-camera performance, are discussed as the basis for quality control. New phantoms and techniques for the measurement of gamma-camera performance parameters are introduced and their usefulness for quality control is discussed. Tests and procedures for dose calibrator quality control are included. Also, the principles of quality control, tests, equipment and procedures for each type of instrument are reviewed, and minimum requirements for an effective quality assurance programme for nuclear medicine instrumentation are suggested. (author)

  4. Online Tracking of Outdoor Lighting Variations for Augmented Reality with Moving Cameras

    OpenAIRE

    Liu , Yanli; Granier , Xavier

    2012-01-01

    In augmented reality, one of the key tasks in achieving a convincing visual appearance consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination depends largely on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key ide...

  5. Design and Construction of an X-ray Lightning Camera

    Science.gov (United States)

    Schaal, M.; Dwyer, J. R.; Rassoul, H. K.; Uman, M. A.; Jordan, D. M.; Hill, J. D.

    2010-12-01

    A pinhole-type camera was designed and built for the purpose of producing high-speed images of the x-ray emissions from rocket-and-wire-triggered lightning. The camera consists of 30 7.62-cm diameter NaI(Tl) scintillation detectors, each sampling at 10 million frames per second. The steel structure of the camera is encased in 1.27-cm thick lead, which blocks x-rays that are less than 400 keV, except through a 7.62-cm diameter “pinhole” aperture located at the front of the camera. The lead and steel structure is covered in 0.16-cm thick aluminum to block RF noise, water and light. All together, the camera weighs about 550-kg and is approximately 1.2-m x 0.6-m x 0.6-m. The image plane, which is adjustable, was placed 32-cm behind the pinhole aperture, giving a field of view of about ±38° in both the vertical and horizontal directions. The elevation of the camera is adjustable between 0 and 50° from horizontal and the camera may be pointed in any azimuthal direction. In its current configuration, the camera’s angular resolution is about 14°. During the summer of 2010, the x-ray camera was located 44-m from the rocket-launch tower at the UF/Florida Tech International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, FL and several rocket-triggered lightning flashes were observed. In this presentation, I will discuss the design, construction and operation of this x-ray camera.

  6. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    Directory of Open Access Journals (Sweden)

    Nicholas T. Bott

    2017-06-01

    Full Text Available Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high-frame-rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as
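The two reliability statistics reported above can be sketched as follows. The study uses Siegel and Castellan's kappa; the standard two-rater Cohen's kappa below is a simplified stand-in for it, and all data values are made up for illustration:

```python
import numpy as np

def cohens_kappa(a, b):
    """Two-rater Cohen's kappa for binary labels (chance-corrected agreement)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                      # observed agreement
    p1a, p1b = np.mean(a), np.mean(b)         # per-rater rates of label 1
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)    # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical trial-by-trial scoring decisions from two raters.
rater_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
rater_b = [1, 1, 0, 0, 0, 0, 1, 1, 0, 1]
kappa = cohens_kappa(rater_a, rater_b)
print(kappa)  # 0.8

# Pearson correlation between two scoring pipelines
# (e.g. 60 FPS automated vs 3 FPS manual novelty preference scores).
scores_60fps = np.array([0.71, 0.64, 0.58, 0.69, 0.75])
scores_3fps = np.array([0.70, 0.62, 0.60, 0.68, 0.77])
r = np.corrcoef(scores_60fps, scores_3fps)[0, 1]
print(r)  # strong positive correlation
```

Kappa corrects raw agreement for agreement expected by chance, which is why it is preferred over simple percent agreement for inter-rater reliability.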

  7. Camera Traps Can Be Heard and Seen by Animals

    Science.gov (United States)

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  8. Camera traps can be heard and seen by animals.

    Directory of Open Access Journals (Sweden)

    Paul D Meek

    Full Text Available Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  9. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  10. Construction of a frameless camera-based stereotactic neuronavigator.

    Science.gov (United States)

    Cornejo, A; Algorri, M E

    2004-01-01

    We built an infrared vision system to be used as the real-time 3D motion sensor in a prototype low-cost, high-precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. We present our choice of technology, including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high-bandwidth computer communication with the cameras and real-time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high-precision, real-time requirements of neuronavigation systems, we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications.
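The stereo triangulation mentioned above can be sketched with the standard linear (DLT) method: each pixel observation contributes two homogeneous linear constraints on the 3D point, and the least-squares solution is the smallest singular vector of the stacked system. The camera matrices and marker position below are hypothetical, not taken from the prototype:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical rig: identical intrinsics K, second camera offset 10 cm along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 2.0])   # an IR marker 2 m in front of the rig
x1, x2 = project(P1, X_true), project(P2, X_true)
X_rec = triangulate(P1, P2, x1, x2)
print(X_rec)  # ~ [0.2, -0.1, 2.0]
```

With noiseless observations the recovery is exact up to numerical precision; in practice calibration error and detection noise in the marker centroids dominate the accuracy, which is why the paper treats calibration and characterization separately.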

  11. Universal crystal cooling device for precession cameras, rotation cameras and diffractometers

    International Nuclear Information System (INIS)

    Hajdu, J.; McLaughlin, P.J.; Helliwell, J.R.; Sheldon, J.; Thompson, A.W.

    1985-01-01

    A versatile crystal cooling device is described for macromolecular crystallographic applications in the 290 to 80 K temperature range. It utilizes a fluctuation-free cold-nitrogen-gas supply, an insulated Mylar crystal cooling chamber and a universal ball joint, which connects the cooling chamber to the goniometer head and the crystal. The ball joint is a novel feature over all previous designs. As a result, the device can be used on various rotation cameras, precession cameras and diffractometers. The lubrication of the interconnecting parts with graphite allows the cooling chamber to remain stationary while the crystal and goniometer rotate. The construction allows for 360° rotation of the crystal around the goniometer axis and permits any settings on the arcs and slides of the goniometer head (even if working at 80 K). There are no blind regions associated with the frame holding the chamber. Alternatively, the interconnecting ball joint can be tightened and fixed. This results in a setup similar to the construction described by Bartunik and Schubert where the cooling chamber rotates with the crystal. The flexibility of the system allows for the use of the device on most cameras or diffractometers. This device has been installed at the protein crystallographic stations of the Synchrotron Radiation Source at Daresbury Laboratory and in the Laboratory of Molecular Biophysics, Oxford. Several data sets have been collected with processing statistics typical of data collected without a cooling chamber. Tests using the full white beam of the synchrotron also look promising. (orig./BHO)

  12. THE EXAMPLE OF USING THE XIAOMI CAMERAS IN INVENTORY OF MONUMENTAL OBJECTS - FIRST RESULTS

    Directory of Open Access Journals (Sweden)

    J. S. Markiewicz

    2017-11-01

    Full Text Available At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained from three image sources: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and a medium-format camera (Hasselblad-Hd4). In order to check how the results obtained from the sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the estimated distortion in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are the outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology and the National Museum in Warsaw.

  13. The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results

    Science.gov (United States)

    Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.

    2017-11-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained from three image sources: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and a medium-format camera (Hasselblad-Hd4). In order to check how the results obtained from the sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the estimated distortion in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are the outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology and the National Museum in Warsaw.

  14. Framing Vision: An Examination of Framing, Sensegiving, and Sensemaking during a Change Initiative

    Science.gov (United States)

    Hamilton, William

    2016-01-01

    The purpose of this short article is to review the findings from an instrumental case study that examines how a college president used what this article refers to as "frame alignment processes" to mobilize internal and external support for a college initiative--one that achieved success under the current president. Specifically, I…

  15. Preflight Calibration Test Results for Optical Navigation Camera Telescope (ONC-T) Onboard the Hayabusa2 Spacecraft

    Science.gov (United States)

    Kameda, S.; Suzuki, H.; Takamatsu, T.; Cho, Y.; Yasuda, T.; Yamada, M.; Sawada, H.; Honda, R.; Morota, T.; Honda, C.; Sato, M.; Okumura, Y.; Shibasaki, K.; Ikezawa, S.; Sugita, S.

    2017-07-01

    The optical navigation camera telescope (ONC-T) is a telescopic framing camera with seven colors onboard the Hayabusa2 spacecraft, launched on December 3, 2014. The main objectives of this instrument are to optically navigate the spacecraft to asteroid Ryugu and to conduct multi-band mapping of the asteroid. We conducted performance tests of the instrument before its installation on the spacecraft. We evaluated the dark current and bias level, and obtained data on the dependency of the dark current on the temperature of the charge-coupled device (CCD). The bias level depends strongly on the temperature of the electronics package but only weakly on the CCD temperature. The dark-reference data, which are obtained simultaneously with observation data, can be used to estimate the dark current and bias level. A long front hood is used on ONC-T to reduce stray light at the expense of flatness in the peripheral area of the field of view (FOV). The central area of the FOV has flat sensitivity, and the limb darkening has been measured with an integrating sphere. The ONC-T has a wheel with seven bandpass filters and a panchromatic glass window. We measured the spectral sensitivity using an integrating sphere and obtained the sensitivity of all the pixels. We also measured the point-spread function using a star simulator. Measurement results indicate that the full width at half maximum is less than two pixels for all the bandpass filters and over the temperature range expected in the mission phase, except for short periods of time during touchdowns.

  16. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    Science.gov (United States)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution, reducing the computational cost. In that method, however, we only considered the enhancement of a texture image. In this study, we modified the texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that fine detail of the low-resolution video could be reproduced, unlike with bicubic interpolation, and that the required bandwidth of the video camera could be reduced to about 1/5. The peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, relative to images processed with bicubic interpolation, and the average PSNRs were higher than those obtained with Freeman's well-known patch-based super-resolution method, while the computational time of our method was reduced to almost 1/10 of that of Freeman's method.
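    The PSNR figure used for these comparisons is a simple function of the mean squared error; a minimal sketch with synthetic images (not the paper's data):

    ```python
    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two same-sized images."""
        mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
    print(round(psnr(ref, noisy), 1))  # roughly 34 dB for sigma = 5 noise
    ```

    A 6 dB PSNR gain, as reported for the trained frames, corresponds to halving the root-mean-square error.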

  17. Explosives Instrumentation Group Trial 6/77-Propellant Fire Trials (Series Two).

    Science.gov (United States)

    1981-10-01

    frames/s. A 19 mm Sony U-Matic video cassette recorder (VCR) and camera were used to view the hearth from a tower 100 m from ground-zero (GZ). Normal...camera started. This procedure permitted increased recording time of the event. A 19 mm Sony U-Matic VCR and camera was used to view the container...

  18. Explosive Transient Camera (ETC) Program

    Science.gov (United States)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  19. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a dedicated hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
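    The exposure-bracketing merge at the heart of such HDR pipelines (Debevec-style triangle weighting, here simplified by assuming a linear sensor response) can be sketched as:

    ```python
    import numpy as np

    def merge_hdr(frames, exposures):
        """Combine LDR frames (linear sensor response assumed) into a radiance map.

        Each pixel's radiance estimate is a weighted average of frame value /
        exposure time, with a triangle weight that de-emphasises pixels near
        the saturation and noise ends of the 8-bit range.
        """
        acc = np.zeros(frames[0].shape, dtype=np.float64)
        wsum = np.zeros_like(acc)
        for img, t in zip(frames, exposures):
            z = img.astype(np.float64)
            w = np.minimum(z, 255.0 - z) + 1e-6   # triangle weighting
            acc += w * z / t
            wsum += w
        return acc / wsum

    # Synthetic scene: a radiance ramp captured at three exposure times
    radiance = np.linspace(10.0, 4000.0, 256)
    exposures = [1 / 15, 1 / 60, 1 / 250]
    frames = [np.clip(radiance * t, 0, 255) for t in exposures]
    est = merge_hdr(frames, exposures)    # recovers the ramp despite clipping
    ```

    The FPGA implementation described in the record performs this combination in hardware on three parallel video streams; the exposure times here are invented for illustration.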

  20. Frames and semi-frames

    International Nuclear Information System (INIS)

    Antoine, Jean-Pierre; Balazs, Peter

    2011-01-01

    Loosely speaking, a semi-frame is a generalized frame for which one of the frame bounds is absent. More precisely, given a total sequence in a Hilbert space, we speak of an upper (resp. lower) semi-frame if only the upper (resp. lower) frame bound is valid. Equivalently, for an upper semi-frame, the frame operator is bounded, but has an unbounded inverse, whereas a lower semi-frame has an unbounded frame operator, with a bounded inverse. We study mostly upper semi-frames, both in the continuous and discrete case, and give some remarks for the dual situation. In particular, we show that reconstruction is still possible in certain cases.
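    In standard notation, the definition being relaxed here is the frame inequality:

    ```latex
    % A sequence (psi_k) in a Hilbert space H is a frame if there exist
    % bounds 0 < m <= M < infinity such that, for every f in H,
    \[
      m\,\|f\|^{2} \;\le\; \sum_{k} \bigl|\langle f,\psi_{k}\rangle\bigr|^{2} \;\le\; M\,\|f\|^{2}.
    \]
    % An upper semi-frame keeps only the upper bound M (bounded frame operator,
    % unbounded inverse); a lower semi-frame keeps only the lower bound m.
    ```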

  1. Use of a color CMOS camera as a colorimeter

    Science.gov (United States)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.
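    A common way to turn such a camera into a colorimeter, sketched here under the assumption of a linear sensor and with invented patch values, is to fit a 3×3 matrix from camera RGB to CIE XYZ over a set of measured display patches:

    ```python
    import numpy as np

    # Hypothetical calibration data: camera RGB responses and reference XYZ
    # values for a handful of display patches (as would be measured with a
    # spectroradiometer). All numbers are illustrative only.
    rgb = np.array([[0.9, 0.1, 0.1],
                    [0.2, 0.8, 0.1],
                    [0.1, 0.1, 0.9],
                    [0.5, 0.5, 0.5],
                    [0.7, 0.6, 0.2]])
    true_M = np.array([[0.41, 0.36, 0.18],   # stand-in for the real sensor matrix
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]])
    xyz = rgb @ true_M.T                     # synthetic "measured" XYZ values

    # Least-squares fit of the 3x3 colorimetric matrix: xyz ≈ rgb @ M.T
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    M = M.T
    # M now converts any camera RGB triplet to an XYZ estimate
    ```

    With co-located RGB pixels, as in the sensor described, this transform can be applied per pixel without demosaicing artifacts.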

  2. Radiometric calibration of wide-field camera system with an application in astronomy

    Science.gov (United States)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single-Lens Reflex (DSLR) camera.
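    As a toy version of CRF estimation from an exposure series (assuming a pure gamma-type response, which real CRF estimators such as the methods surveyed here generalise):

    ```python
    import numpy as np

    # One pixel observed at several exposure times t with a gamma-type CRF:
    # z = (E * t) ** (1 / gamma).  Taking logs, log z = (log E + log t) / gamma,
    # so gamma is recoverable from the slope of log z against log t.
    gamma_true, E = 2.2, 0.3                     # hypothetical response and radiance
    t = np.array([1 / 500, 1 / 125, 1 / 30, 1 / 8, 1 / 2])
    z = (E * t) ** (1.0 / gamma_true)            # simulated pixel brightnesses

    slope, intercept = np.polyfit(np.log(t), np.log(z), 1)
    gamma_est = 1.0 / slope                      # recovered response exponent
    ```

    Long-exposure astronomical frames add noise and blur that break this clean log-linear picture, which is the motivation the record gives for adapting the standard estimators.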

  3. High spatial resolution infrared camera as ISS external experiment

    Science.gov (United States)

    Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan

    A high spatial resolution infrared camera is proposed as an ISS external experiment for monitoring global climate changes, using ISS internal and external resources (e.g. data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven, advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage a mass memory is required. Access to actual attitude data is highly desired to produce geo-referenced maps, if possible by on-board processing.

  4. Low power multi-camera system and algorithms for automated threat detection

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithm consume additional power and increase the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
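    The claimed up-to-N-fold saving is duty-cycle arithmetic; a back-of-envelope sketch with invented power figures:

    ```python
    # Duty-cycle model for the N-fold power saving: if only one of N identical
    # cameras (plus its processing) is powered at a time, average sensing power
    # drops from N*P to roughly P, limited by any always-on controller overhead.
    def average_power(n_cameras, p_camera_watts, p_overhead_watts=0.0):
        baseline = n_cameras * p_camera_watts + p_overhead_watts  # all cameras on
        cycled = p_camera_watts + p_overhead_watts                # one on at a time
        return baseline, cycled

    baseline, cycled = average_power(6, 2.5, 0.4)   # hypothetical 6-camera rig
    print(round(baseline, 1), round(cycled, 1), round(baseline / cycled, 1))
    # 15.4 2.9 5.3  -- approaching the 6x bound as overhead shrinks
    ```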

  5. The AOTF-Based NO2 Camera

    Science.gov (United States)

    Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.

    2017-12-01

    In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems, …). Air quality models generally rely on a limited number of monitoring stations, which neither capture the whole pattern nor allow for full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aimed at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in volcanic and industrial sulfur emissions monitoring) as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of the formation of NO2 in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
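    The two-wavelength retrieval principle shared with the SO2 camera is Beer-Lambert differential absorption; a numerical sketch with illustrative (not measured) values:

    ```python
    import numpy as np

    # Beer-Lambert sketch of the two-wavelength slant column retrieval.
    # All numbers below are illustrative, not instrument values.
    dsigma = 2.0e-19      # differential NO2 cross section, cm^2 / molecule
    scd_true = 5.0e16     # slant column density, molecules / cm^2

    I0 = 1000.0                               # background intensity (both bands)
    I_on = I0 * np.exp(-dsigma * scd_true)    # band where NO2 absorbs
    I_off = I0                                # nearby reference band

    # Invert Beer-Lambert: SCD = ln(I_off / I_on) / delta-sigma
    scd = np.log(I_off / I_on) / dsigma
    ```

    The AOTF's advantage over fixed filters, per the record, is that the "on" and "off" bands can be tuned finely enough to resolve the molecule's spectral structure.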

  6. A CCD camera probe for a superconducting cyclotron

    International Nuclear Information System (INIS)

    Marti, F.; Blue, R.; Kuchar, J.; Nolen, J.A.; Sherrill, B.; Yurkon, J.

    1991-01-01

    The traditional internal beam probes in cyclotrons have consisted of a differential element, a wire or thin strip, and a main probe with several fingers to determine the vertical distribution of the beam. The resolution of these probes is limited, especially in the vertical direction. The authors have developed a probe for their K1200 superconducting cyclotron based on a CCD TV camera that works in a 6 T magnetic field. The camera looks at the beam spot on a scintillating screen. The TV image is processed by a frame grabber that digitizes and displays the image in pseudocolor in real time. This probe has much better resolution than traditional probes. They can see beams with total currents as low as 0.1 pA, with position resolution of about 0.05 mm

  7. A directional fast neutron detector using scintillating fibers and an intensified CCD camera system

    International Nuclear Information System (INIS)

    Holslin, Daniel; Armstrong, A.W.; Hagan, William; Shreve, David; Smith, Scott

    1994-01-01

    We have been developing and testing a scintillating fiber detector (SFD) for use as a fast neutron sensor which can discriminate against neutrons entering at angles non-parallel to the fiber axis (''directionality''). The detector/convertor component is a fiber bundle constructed of plastic scintillating fibers each measuring 10 cm long and either 0.3 mm or 0.5 mm in diameter. Extensive Monte Carlo simulations were made to optimize the bundle response to a range of fast neutron energies and to intense fluxes of high energy gamma-rays. The bundle is coupled to a set of gamma-ray insensitive electro-optic intensifiers whose output is viewed by a CCD camera directly coupled to the intensifiers. Two types of CCD cameras were utilized: 1) a standard, interline RS-170 camera with electronic shuttering and 2) a high-speed (up to 850 frames/s) field-transfer camera. Measurements of the neutron detection efficiency and directionality were made using 14 MeV neutrons, and the response to gamma-rays was measured using intense fluxes from radioisotopic sources (up to 20 R/h). Recently, the detector was constructed and tested using a large 10 cm by 10 cm square fiber bundle coupled to a 10 cm diameter GEN I intensifier tube. We present a description of the various detector systems and report the results of experimental tests. ((orig.))

  8. Establishing imaging sensor specifications for digital still cameras

    Science.gov (United States)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor or one of the newer Foveon buried-photodiode sensors. There is a strong tendency for consumers to consider only the number of mega-pixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, pro-consumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
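    Two of the sensor figures this kind of analysis depends on can be computed directly; a sketch with hypothetical pixel values (the full-well density, pitch and read noise below are assumptions, not figures from the paper):

    ```python
    import math

    # Full-well capacity scales with pixel area; dynamic range is the ratio of
    # full well to read noise, expressed in dB or photographic stops.
    full_well_per_cm2 = 1.5e12        # electrons per cm^2 (assumed)
    pixel_pitch_um = 6.0              # pixel pitch (assumed)
    read_noise_e = 8.0                # read noise, electrons rms (assumed)

    pixel_area_cm2 = (pixel_pitch_um * 1e-4) ** 2
    full_well = full_well_per_cm2 * pixel_area_cm2      # electrons per pixel
    dr_db = 20.0 * math.log10(full_well / read_noise_e)
    dr_stops = math.log2(full_well / read_noise_e)
    print(round(full_well), round(dr_db, 1), round(dr_stops, 1))  # 540000 96.6 16.0
    ```

    This is why shrinking pixels to raise the mega-pixel count, with full-well density roughly fixed, trades away dynamic range and exposure latitude.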

  9. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by, on average, distances of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G in the stationary sensors, and model AF-S ED 24 mm 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four days of manual RAMMER operation in the campaigns of 2012 and 2013. On Feb. 18th the data set comprises 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  10. Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera

    Science.gov (United States)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

    The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on axis field of view of 3 arc-minutes by 3 arc-minutes.

  11. ARNICA, the Arcetri near-infrared camera: Astronomical performance assessment.

    Science.gov (United States)

    Hunt, L. K.; Lisi, F.; Testi, L.; Baffa, C.; Borelli, S.; Maiolino, R.; Moriondo, G.; Stanga, R. M.

    1996-01-01

    The Arcetri near-infrared camera ARNICA was built as a users' instrument for the Infrared Telescope at Gornergrat (TIRGO), and is based on a 256x256 NICMOS 3 detector. In this paper, we discuss ARNICA's optical and astronomical performance at the TIRGO and at the William Herschel Telescope on La Palma. Optical performance is evaluated in terms of plate scale, distortion, point spread function, and ghosting. Astronomical performance is characterized by camera efficiency, sensitivity, and spatial uniformity of the photometry.

  12. The visible intensified cameras for plasma imaging in the TJ-II stellarator

    International Nuclear Information System (INIS)

    Cal, E. de la; Carralero, D.; Pablos, J.L. de; Alonso, A.; Rios, L.; Garcia Sanchez, P.; Hidalgo, C.

    2011-01-01

    Visible cameras are widely used in fusion experiments for diagnosis and for machine safety issues. They are generally used to monitor the plasma emission, but are also sensitive to surface blackbody radiation and Bremsstrahlung. Fast or high-speed cameras capable of operating in the 10^5 frames per second range are today commercially available and offer plasma fusion researchers the opportunity for two-dimensional (2D) imaging of fast phenomena such as turbulence, ELMs, disruptions, dust, etc. The tracking of these fast phenomena requires short exposure times down to the μs range, and the light intensity can often be near the signal-to-noise ratio limit, especially in low plasma emission regions such as the far SOL. Additionally, when using interference filters to monitor, e.g., impurity line emission, the photon flux is strongly reduced and the emission cannot be imaged at high speed. Therefore, the use of image intensifiers that amplify the light intensity onto the camera sensor can be of great help. The present work describes the use of intensifiers in the visible fast cameras of the TJ-II stellarator. We have achieved spectroscopic plasma imaging of filtered impurity atomic line emission at short exposure times down to the 10 μs range, depending on atomic line and concentration. Additionally, plasma movies at rates of 2×10^5 frames per second, near the camera operation limit, can be recorded with exposure times well below 1 μs with sufficient signal-to-noise ratio. Although increasing degradation of the image quality appears when raising the light amplification, an effective gain of up to two orders of magnitude in light intensity is feasible for many applications (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  13. The visible intensified cameras for plasma imaging in the TJ-II stellarator

    Energy Technology Data Exchange (ETDEWEB)

    Cal, E. de la; Carralero, D.; Pablos, J.L. de; Alonso, A.; Rios, L.; Garcia Sanchez, P.; Hidalgo, C. (Laboratorio Nacional de Fusion, Asociacion Euratom-Ciemat, Av. Complutense 22, E-28040 Madrid)

    2011-09-15

    Visible cameras are widely used in fusion experiments for diagnosis and for machine safety issues. They are generally used to monitor the plasma emission, but are also sensitive to surface blackbody radiation and Bremsstrahlung. Fast or high-speed cameras capable of operating in the 10^5 frames per second range are today commercially available and offer plasma fusion researchers the opportunity for two-dimensional (2D) imaging of fast phenomena such as turbulence, ELMs, disruptions, dust, etc. The tracking of these fast phenomena requires short exposure times down to the μs range, and the light intensity can often be near the signal-to-noise ratio limit, especially in low plasma emission regions such as the far SOL. Additionally, when using interference filters to monitor, e.g., impurity line emission, the photon flux is strongly reduced and the emission cannot be imaged at high speed. Therefore, the use of image intensifiers that amplify the light intensity onto the camera sensor can be of great help. The present work describes the use of intensifiers in the visible fast cameras of the TJ-II stellarator. We have achieved spectroscopic plasma imaging of filtered impurity atomic line emission at short exposure times down to the 10 μs range, depending on atomic line and concentration. Additionally, plasma movies at rates of 2×10^5 frames per second, near the camera operation limit, can be recorded with exposure times well below 1 μs with sufficient signal-to-noise ratio. Although increasing degradation of the image quality appears when raising the light amplification, an effective gain of up to two orders of magnitude in light intensity is feasible for many applications (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  14. Demonstration of the CDMA-mode CAOS smart camera.

    Science.gov (United States)

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode, with a controlled factor-of-200 optical attenuation of the scene irradiance, to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by the CMOS sensor are used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode using Walsh-design CAOS pixel codes of up to 4096 bits in length, with a maximum 10 kHz code bit rate giving a 0.4096 second CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one micro-mirror square pixel of 13.68 μm side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright-light, spectrally diverse targets.
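    The Walsh-coded acquisition and correlation decoding can be illustrated in miniature (16 "CAOS pixels" instead of 3600; a sketch of the principle, not the instrument's DSP):

    ```python
    import numpy as np

    def walsh_matrix(order):
        """Sylvester-construction Hadamard matrix; its rows serve as +/-1 Walsh-type codes."""
        H = np.array([[1.0]])
        for _ in range(order):
            H = np.block([[H, H], [H, -H]])
        return H

    # Each pixel's irradiance is time-modulated by its own code, the single
    # point detector records the coded sum over time, and correlating the
    # detector time series against each code recovers the per-pixel values.
    H = walsh_matrix(4)                  # 16 orthogonal codes of length 16
    pixels = np.arange(1.0, 17.0)        # unknown pixel irradiances (toy values)
    detector_signal = H.T @ pixels       # summed, code-multiplexed time series
    recovered = (H @ detector_signal) / H.shape[0]   # correlation receiver
    ```

    Orthogonality of the codes (H Hᵀ = n I) is what lets one photodetector with a high-DR ADC serve every pixel, which is the source of the camera's dynamic-range advantage.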

  15. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T1/2 upon count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on these effects. We produced a point source of 18.5 to 185 MBq of 99mTcO4- in 2 mL syringes, and acquired 30 frames of dynamic images with 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame with 10-second intervals in the first experiment (p>0.05). But there were significantly low average count rates with the sources over 111 MBq activity at 60 seconds/frame (p<0.01). In the second analysis, linear regression of the count rates of the 5 gamma cameras acquired during 90 minutes showed that the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, while its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. We could not find abnormal fluctuation in the χ² test of count rates (p>0.02), and we found homogeneity of variance among the gamma cameras in Levene's F-test (p>0.05). In the correlation analysis, the only significant relation was a negative correlation between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, calculating the T1/2 error for gradient changes from -0.25% to +0.25% showed that the error grows as T1/2 lengthens or the gradient increases. Estimating this for the fourth camera, which had the highest gradient, no T1/2 error was seen within 60 minutes. In conclusion, strict quality control of radiation measurement is necessary for scintillation gamma cameras in the medical field. Especially, we found a
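
The analysis above hinges on how the decaying count rate, governed by T1/2, interacts with the fitted gradient. As a hedged illustration, the following sketch generates noiseless decaying counts and recovers the half-life by a log-linear least-squares fit; the nominal Tc-99m half-life of about 360.6 min and the frame times are illustrative values, not numbers from the paper.

```python
import math

TC99M_HALF_LIFE_MIN = 360.6  # nominal Tc-99m half-life (~6.01 h)

def expected_counts(c0, t_min, half_life_min=TC99M_HALF_LIFE_MIN):
    """Counts remaining after t minutes of radioactive decay."""
    return c0 * 2.0 ** (-t_min / half_life_min)

def fitted_half_life(times_min, counts):
    """Least-squares slope of ln(counts) vs time gives -ln(2) / T1/2."""
    n = len(times_min)
    tm = sum(times_min) / n
    ym = sum(math.log(c) for c in counts) / n
    slope = (sum((t - tm) * (math.log(c) - ym)
                 for t, c in zip(times_min, counts))
             / sum((t - tm) ** 2 for t in times_min))
    return -math.log(2.0) / slope

times = [10.0 * i for i in range(10)]              # a 90-minute acquisition
counts = [expected_counts(1e6, t) for t in times]  # noiseless decay curve
# fitted_half_life(times, counts) recovers ~360.6 min on noiseless data
```

On real data, any camera-dependent drift in counting efficiency (the "gradient" above) biases this slope and hence the apparent T1/2, which is why the paper ties T1/2 error to the regression gradient.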

  16. Development of low-cost high-performance multispectral camera system at Banpil

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity of less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system, and their forecast cost structure, is presented.

  17. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

    Following the previous work, counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the ''LIST'' mode, in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the ''HISTOGRAM'' mode, in which image data were stored in a core memory as digital images and the images were then transferred to magnetic disk or tape on the frame timing signal. Firstly, the counting rates stored in the buffer memory were measured as a function of the display event rates of the scintillation camera for the two modes. For both modes, stored counting rates (M) were expressed by the following formula: M=N(1-Nτ), where N was the display event rate of the camera and τ was the resolving time, including analog-to-digital conversion time and memory cycle time. The resolving time for each mode may have been different, but it was about 10 μs for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory such as magnetic disk or tape was considered for the two modes. For the ''LIST'' mode, the maximum value of stored counting rates from the camera was expressed in terms of the size of the buffer memory and the access time and data transfer rate of the external memory. For the ''HISTOGRAM'' mode, the minimum frame time was determined by the same quantities. In our system, the maximum stored counting rate was about 17,000 counts/sec with a buffer size of 2,000 words, and the minimum frame time was about 130 msec with a buffer size of 1024 words. These values agree well with the calculated ones. From the present analysis, design of camera-computer systems for quantitative dynamic imaging becomes possible, and future improvements are suggested. (author)
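
The counting-loss relation above, M = N(1 - Nτ), is straightforward to evaluate and to invert for the display rate. A minimal sketch using the ~10 μs resolving time quoted in the abstract (the inversion takes the physical low-rate root of the quadratic):

```python
import math

TAU = 10e-6  # resolving time of about 10 microseconds, as quoted above

def stored_rate(n_display, tau=TAU):
    """Stored counting rate M = N(1 - N*tau) for display event rate N."""
    return n_display * (1.0 - n_display * tau)

def display_rate(m_stored, tau=TAU):
    """Invert M = N(1 - N*tau); the low-rate root of the quadratic
    tau*N^2 - N + M = 0 is the physical one (valid while M < 1/(4*tau))."""
    return (1.0 - math.sqrt(1.0 - 4.0 * tau * m_stored)) / (2.0 * tau)

# stored_rate(5000.0) gives 4750.0 counts/s (5% loss);
# display_rate(4750.0) recovers ~5000 counts/s.
```

Note that the formula caps the stored rate at N = 1/(2τ); system limits such as buffer size and disk transfer rate, as discussed in the abstract, lower the achievable maximum further.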

  18. Counting neutrons with a commercial S-CMOS camera

    Science.gov (United States)

    Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega

    2018-01-01

    It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen and off-line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We report on this ongoing work, which is a result of the collaboration between ESS Bilbao and the ILL. The main progress to be reported concerns the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, all the processing must happen on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it into a reasonable "neutron impact" data flow as from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is in principle rather high, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with, and results will be shown for different setups improving the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be efficient in their recognition of neutron signals and in their rejection of noise signals (internal and external to the camera), but also simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we give an overview of the part of the road that has already been walked.
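
A minimal sketch of the kind of on-line impact recognition described here: threshold a frame and count connected bright blobs. This is a generic Python illustration, not the ILL/ESS Bilbao FPGA algorithm; the frame contents and threshold are invented.

```python
def count_flashes(frame, thresh):
    """Count 4-connected bright blobs (candidate neutron flashes) in a
    grayscale frame given as a list of rows of pixel values."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]

    def flood(r, c):
        # Iterative flood fill marking one connected bright region.
        stack = [(r, c)]
        while stack:
            i, j = stack.pop()
            if (0 <= i < rows and 0 <= j < cols
                    and not seen[i][j] and frame[i][j] > thresh):
                seen[i][j] = True
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

    n = 0
    for i in range(rows):
        for j in range(cols):
            if frame[i][j] > thresh and not seen[i][j]:
                n += 1
                flood(i, j)
    return n

frame = [[0] * 8 for _ in range(8)]
frame[1][1] = frame[1][2] = 200   # one two-pixel flash
frame[5][5] = 180                 # a second, isolated flash
# count_flashes(frame, 50) -> 2
```

The real challenge, as the abstract notes, is making such logic simple enough for FPGA implementation while still rejecting camera noise.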

  20. Pseudo real-time coded aperture imaging system with intensified vidicon cameras

    International Nuclear Information System (INIS)

    Han, K.S.; Berzins, G.J.

    1977-01-01

    A coded image displayed on a TV monitor was used to directly reconstruct a decoded image. Both the coded and the decoded images were viewed with intensified vidicon cameras. The coded aperture was a 15-element nonredundant pinhole array. The coding and decoding were accomplished simultaneously during the scanning of a single 16-msec TV frame
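
The simultaneous coding/decoding can be illustrated with a toy correlation model: each pinhole casts a shifted copy of the scene, and shifting back by every pinhole offset and summing concentrates true sources into a peak. The four pinhole offsets and grid size below are invented (cyclic shifts keep the toy exact); this is not the paper's 15-element array.

```python
import numpy as np

HOLES = [(0, 0), (1, 2), (3, 1), (5, 5)]   # toy pinhole offsets
N = 11                                     # grid size (shifts taken mod 11)

def encode(obj):
    """Coded image: one shifted copy of the scene per pinhole."""
    return sum(np.roll(obj, h, axis=(0, 1)) for h in HOLES)

def decode(coded):
    """Correlation decoding: shift back by each pinhole offset and sum.
    True sources add coherently; cross terms spread into weak side lobes."""
    return sum(np.roll(coded, (-r, -c), axis=(0, 1)) for r, c in HOLES)

obj = np.zeros((N, N))
obj[5, 5] = 10.0                  # a single point source
dec = decode(encode(obj))
# dec peaks at (5, 5) with value len(HOLES) * 10; all side lobes are weaker
```

Because the pairwise offset differences of the toy pinhole set are all distinct, each side lobe carries only one cross term, which is the property a nonredundant array provides.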

  1. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is far more than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  2. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    Science.gov (United States)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-François; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-07-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
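
In its simplest form, the emission-rate retrieval shared by these cameras integrates the SO2 column density along a transect perpendicular to transport and multiplies by the plume speed. A hedged sketch with invented numbers (real retrievals add plume-velocity and light-dilution corrections):

```python
SECONDS_PER_DAY = 86400.0

def emission_rate_kg_s(columns_kg_m2, pixel_width_m, plume_speed_m_s):
    """Integrate SO2 column density across a plume cross section and
    multiply by the transport speed to get a mass flux in kg/s."""
    cross_section_kg_m = sum(columns_kg_m2) * pixel_width_m  # kg per metre
    return cross_section_kg_m * plume_speed_m_s

def kg_s_to_t_d(rate_kg_s):
    """Convert kg/s to tonnes per day, the unit quoted in the abstract."""
    return rate_kg_s * SECONDS_PER_DAY / 1000.0

# 50 plume pixels of 2e-3 kg/m^2 column density, 5 m wide, 10 m/s wind:
rate = emission_rate_kg_s([2e-3] * 50, 5.0, 10.0)   # 5 kg/s
# kg_s_to_t_d(rate) -> 432 t/d
```

The intercomparison's spread among instruments comes largely from the two inputs this sketch takes for granted: where the transect is drawn and how the plume speed is estimated.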

  3. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, G., E-mail: giuliana.rizzo@pi.infn.it [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Batignani, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Benkechkache, M.A. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); University Constantine 1, Department of Electronics in the Science and Technology Faculty, I-25017, Constantine (Algeria); Bettarini, S.; Casarosa, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Comotti, D. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Dalla Betta, G.-F. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); TIFPA INFN, I-38123 Trento (Italy); Fabris, L. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); Forti, F. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Grassi, M.; Lodola, L.; Malcovati, P. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Manghisoni, M. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); and others

    2016-07-11

    The INFN PixFEL project is developing the fundamental building blocks for a large-area X-ray imaging camera to be deployed at next-generation free electron laser (FEL) facilities with unprecedented intensity. Improvements in performance beyond the state of the art in imaging instrumentation will be explored by adopting advanced technologies like active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large-area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine-pitch active edge thick sensor is being optimized to cope with a very high intensity photon flux, up to 10⁴ photons per pixel, in the range from 1 to 10 keV. A low-noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low-power 10-bit analog-to-digital conversion at up to 5 MHz, has been realized at a 110 μm pitch in a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high-density memories. In the long run the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation both in burst mode, as at the European X-FEL, and in continuous mode with the high frame rates anticipated for future FEL facilities.

  5. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    Full Text Available This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from the performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power compared to the single FPGA chip with hardware modules and a soft-core processor.

  6. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    Science.gov (United States)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  7. PANIC: A General-purpose Panoramic Near-infrared Camera for the Calar Alto Observatory

    Science.gov (United States)

    Cárdenas Vázquez, M.-C.; Dorner, B.; Huber, A.; Sánchez-Blanco, E.; Alter, M.; Rodríguez Gómez, J. F.; Bizenberger, P.; Naranjo, V.; Ibáñez Mengual, J.-M.; Panduro, J.; García Segura, A. J.; Mall, U.; Fernández, M.; Laun, W.; Ferro Rodríguez, I. M.; Helmling, J.; Terrón, V.; Meisenheimer, K.; Fried, J. W.; Mathar, R. J.; Baumeister, H.; Rohloff, R.-R.; Storz, C.; Verdes-Montenegro, L.; Bouy, H.; Ubierna, M.; Fopp, P.; Funke, B.

    2018-02-01

    PANIC is the new PAnoramic Near-Infrared Camera for Calar Alto and is a project jointly developed by the MPIA in Heidelberg, Germany, and the IAA in Granada, Spain, for the German-Spanish Astronomical Center at Calar Alto Observatory (CAHA; Almería, Spain). This new instrument works with the 2.2 m and 3.5 m CAHA telescopes, covering a field of view of 30 × 30 arcmin and 15 × 15 arcmin, respectively, with a sampling of 4096 × 4096 pixels. It is designed for the spectral bands from Z to Ks and can also be equipped with narrowband filters. The instrument was delivered to the observatory in 2014 October and was commissioned at both telescopes between 2014 November and 2015 June. Science verification at the 2.2 m telescope was carried out during the second semester of 2015, and the instrument is now in full operation. We describe the design, assembly, integration, and verification process, the final laboratory tests, and the PANIC instrument performance. We also present first-light data obtained during the commissioning and preliminary results of the scientific verification. The final optical model and the theoretical performance of the camera were updated according to the as-built data. The laboratory tests were made with a star simulator. Finally, the commissioning phase was done at both telescopes to validate the camera's real performance on sky. The final laboratory tests confirmed the expected camera performance, complying with the scientific requirements. The commissioning phase on sky has been accomplished.

  8. The wavelength frame multiplication chopper system for the ESS test beamline at the BER II reactor—A concept study of a fundamental ESS instrument principle

    International Nuclear Information System (INIS)

    Strobl, M.; Bulat, M.; Habicht, K.

    2013-01-01

    Contributing to the design update phase of the European Spallation Source ESS–scheduled to start operation in 2019–a test beamline is under construction at the BER II research reactor at Helmholtz Zentrum Berlin (HZB). This beamline offers experimental test capabilities of instrument concepts viable for the ESS. The experiments envisaged at this dedicated beamline comprise testing of components as well as of novel experimental approaches and methods taking advantage of the long pulse characteristic of the ESS source. Therefore the test beamline will be equipped with a sophisticated chopper system that provides the specific time structure of the ESS and enables variable wavelength resolutions via wavelength frame multiplication (WFM), a fundamental instrument concept beneficial for a number of instruments at ESS. We describe the unique chopper system developed for these purposes, which allows constant wavelength resolution for a wide wavelength band. Furthermore we discuss the implications for the conceptual design for related instrumentation at the ESS
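
Chopper timing in a WFM system is laid out in time-of-flight/wavelength space via the de Broglie relation λ = h·t/(m_n·L), numerically λ[Å] ≈ 3956·t[s]/L[m]. A minimal helper with an illustrative flight path (the distance is an assumption, not the test beamline's geometry):

```python
H_PLANCK = 6.62607015e-34      # Planck constant, J s
M_NEUTRON = 1.67492749804e-27  # neutron mass, kg

def tof_wavelength_angstrom(t_s, flight_path_m):
    """Neutron wavelength (Angstrom) arriving at time t over a path L,
    from lambda = h * t / (m_n * L)."""
    return H_PLANCK / (M_NEUTRON * flight_path_m) * t_s * 1e10

# tof_wavelength_angstrom(0.01, 10.0) -> about 3.96 Angstrom
```

Each WFM sub-frame selects a slice of this time/wavelength line with its own opening window, which is how the chopper system keeps the relative wavelength resolution constant across a wide band.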

  9. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    International Nuclear Information System (INIS)

    Crawford, E.A.

    1992-01-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four-frame device, similar in design to those discussed in an earlier paper [E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)], as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera improves on earlier implementations in several significant respects. It was designed and used from the onset of the LSX experiments with a video frame capture system, so that an instant visual record of the shot was available to the machine operator while also facilitating quantitative interpretation of the intensity information recorded in the images. The camera was installed in the end region of the LSX, on axis, approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with ''particle dumps'' at this axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. The overall performance of the camera, video capture system, and sweeper is discussed.

  10. Space telescope phase B definition study. Volume 2A: Science instruments, f48/96 planetary camera

    Science.gov (United States)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

    The analysis and preliminary design of the f48/96 planetary camera for the space telescope are discussed. The camera design is for application to the axial module position of the optical telescope assembly.

  11. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    Science.gov (United States)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower-altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  12. MicroCameras and Photometers (MCP) on board the TARANIS satellite

    Science.gov (United States)

    Farges, T.; Hébert, P.; Le Mer-Dachard, F.; Ravel, K.; Gaillac, S.

    2017-12-01

    TARANIS (Tool for the Analysis of Radiations from lightNing and Sprites) is a CNES micro-satellite. Its main objective is to study impulsive transfers of energy between the Earth's atmosphere and the space environment. It will be sun-synchronous at an altitude of 700 km. It will be launched in 2019 for at least 2 years. Its payload is composed of several electromagnetic instruments covering different wavelengths (from gamma-rays to radio waves, including optical). TARANIS instruments are currently in the calibration and qualification phase. The purpose here is to present the MicroCameras and Photometers (MCP) design, to show its performance after its recent characterization, and finally to discuss the scientific objectives and how we want to address them with MCP observations. The MicroCameras, developed by Sodern, are dedicated to the spatial description of TLEs and their parent lightning. They are able to differentiate sprites and lightning thanks to two narrow bands ([757-767 nm] and [772-782 nm]) that provide simultaneous pairs of images of an event. Simulation results of the differentiation method will be shown. After calibration and tests, the MicroCameras have now been delivered to the CNES for integration on the payload. The Photometers, developed by Bertin Technologies, will provide temporal measurements and spectral characteristics of TLEs and lightning. They are key instruments because of their capability to detect TLEs on board and then switch all the instruments of the scientific payload into their high-resolution acquisition mode. The Photometers use four spectral bands in the [170-260 nm], [332-342 nm], [757-767 nm] and [600-900 nm] ranges and have the same field of view as the cameras. The remote-controlled parameters of the on-board TLE detection algorithm have been tuned before launch using the electronic board and simulated or real event waveforms. After calibration, the Photometers are now going through the environmental tests. They will be delivered to the CNES for integration on the

  13. Automatic helmet-wearing detection for law enforcement using CCTV cameras

    Science.gov (United States)

    Wonghabut, P.; Kumphong, J.; Satiennam, T.; Ung-arunyawee, R.; Leelapatra, W.

    2018-04-01

    The objective of this research is to develop an application for enforcing helmet wearing using CCTV cameras. The developed application aims to help law enforcement by police, eventually changing risk behaviours and consequently reducing the number of accidents and their severity. Conceptually, the application software, implemented in C++ using the OpenCV library, uses two CCTV cameras with different angles of view. Video frames recorded by the wide-angle CCTV camera are used to detect motorcyclists. If any motorcyclist without a helmet is found, the zoomed (narrow-angle) CCTV camera is activated to capture an image of the violating motorcyclist and the motorcycle license plate in real time. Captured images are managed in a MySQL database for ticket issuing. The results show that the developed program is able to detect 81% of motorcyclists on various motorcycle types during daytime and night-time. The validation results reveal that the program achieves 74% accuracy in detecting motorcyclists without helmets.
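The two-camera control flow described above (wide-angle detection, narrow-angle capture of violators only) can be sketched in a few lines. The detector and camera classes below are hypothetical stand-ins for illustration, not the paper's C++/OpenCV implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    has_helmet: bool
    bbox: tuple  # (x, y, w, h) in wide-angle frame coordinates

def detect_motorcyclists(frame):
    """Stand-in for the paper's OpenCV-based motorcyclist/helmet detector."""
    return frame.get("detections", [])

class FakeZoomCamera:
    """Stand-in for the narrow-angle CCTV camera used for evidence capture."""
    def capture(self, bbox):
        return {"plate_image": b"jpeg-bytes", "bbox": bbox}

def process_frame(wide_frame, zoom_camera):
    """Run detection on the wide-angle frame; trigger the zoom camera only for violators."""
    tickets = []
    for det in detect_motorcyclists(wide_frame):
        if not det.has_helmet:
            tickets.append(zoom_camera.capture(det.bbox))
    return tickets

# One frame containing a compliant rider and one violator
wide_frame = {"detections": [Detection(True, (0, 0, 40, 80)),
                             Detection(False, (60, 10, 40, 80))]}
tickets = process_frame(wide_frame, FakeZoomCamera())  # captures the violator only
```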

  14. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  15. Real-time pedestrian detection with the videos of car camera

    Directory of Open Access Journals (Sweden)

    Yunling Zhang

    2015-12-01

    Full Text Available Pedestrians in the vehicle path are in danger of being hit, causing severe injury to pedestrians and vehicle occupants. Therefore, real-time pedestrian detection with the video of a vehicle-mounted camera is of great significance to vehicle–pedestrian collision warning and to the traffic safety of self-driving cars. In this article, a real-time scheme was proposed based on integral channel features and a graphics processing unit (GPU). The proposed method does not need to resize the input image. Moreover, the computationally expensive convolution between the detectors and the input image was converted into the dot product of two larger matrices, which can be computed efficiently on a GPU. The experiments showed that the proposed method could detect pedestrians in the video of a car camera at 20+ frames per second with acceptable error rates. Thus, it can be applied to real-time detection tasks with car camera videos.
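The convolution-to-matrix-product conversion mentioned above is commonly implemented via an im2col unrolling: every image patch becomes a row of a large matrix, and the convolution reduces to one dot product. A minimal NumPy sketch of the idea (not the authors' GPU code) is:

```python
import numpy as np

def im2col(image, kh, kw):
    """Unroll every (kh x kw) patch of `image` into a row (valid positions only)."""
    H, W = image.shape
    rows = []
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            rows.append(image[i:i + kh, j:j + kw].ravel())
    return np.array(rows)

def conv2d_as_matmul(image, kernel):
    """Valid 2D cross-correlation expressed as a single matrix-vector product."""
    kh, kw = kernel.shape
    cols = im2col(image, kh, kw)      # (num_positions, kh*kw)
    out = cols @ kernel.ravel()       # one big dot product
    H, W = image.shape
    return out.reshape(H - kh + 1, W - kw + 1)

# Example: 4x4 image, 2x2 kernel computing img[i,j] - img[i+1,j+1]
img = np.arange(16.0).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])
out = conv2d_as_matmul(img, k)
```

On a GPU, the single large matrix product maps well onto highly parallel hardware, which is the point the abstract makes.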

  16. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    Science.gov (United States)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  17. 2D turbulence structure observed by a fast framing camera system in linear magnetized device PANTA

    International Nuclear Information System (INIS)

    Ohdachi, Satoshi; Inagaki, S.; Kobayashi, T.; Goto, M.

    2015-01-01

    Mesoscale structures, such as the zonal flow and the streamer, play an important role in drift-wave turbulence. The interaction of mesoscale structures and turbulence is not only an interesting phenomenon but also a key to understanding turbulence-driven transport in magnetically confined plasmas. In the cylindrical magnetized device PANTA, the interaction of the streamer and the drift wave has been found by bi-spectrum analysis of the turbulence. In order to study the mesoscale physics directly, the 2D turbulence is studied by a fast-framing visible camera system viewing from a window located at the end plate of the device. The plasma parameters are as follows: Te ∼ 3 eV, n ∼ 1×10^19 m^-3, Ti ∼ 0.3 eV, B = 900 G, neutral pressure P_n = 0.8 mTorr, a ∼ 6 cm, L = 4 m, helicon source (7 MHz, 3 kW). The fluctuating component of the visible image is decomposed by the Fourier-Bessel expansion method. Several rotating modes are observed simultaneously. From the images, the m = 1 (f ∼ 0.7 kHz) and m = 2, 3 (f ∼ -3.4 kHz) components, which rotate in opposite directions, can be easily distinguished. Though the modes rotate steadily most of the time, there are periods when a radially complicated node structure forms (for example, the m = 3 component at t = 142.5∼6 in the figure) and the coherent mode structures are disturbed. A new rotation period then starts again, with a phase different from that of the initial rotation, until the next event happens. The typical time interval between events is 0.5 to 1.0 times one rotation of the slow m = 1 mode. The wave-wave interaction might be interrupted occasionally. Detailed analysis of the turbulence using the imaging technique will be discussed. (author)
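The azimuthal mode numbers m quoted above come from decomposing each camera frame into angular harmonics. A simplified azimuthal-only stand-in for the Fourier-Bessel expansion (sampling one radius and Fourier-transforming in the poloidal angle) can be sketched as:

```python
import numpy as np

def azimuthal_mode_amplitudes(image, cx, cy, radius, m_max=4, n_theta=256):
    """Sample the image on a circle and Fourier-decompose in the poloidal angle.

    Simplified stand-in for the Fourier-Bessel method in the abstract:
    only the azimuthal (m) structure at one radius is resolved.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    x = (cx + radius * np.cos(theta)).astype(int)
    y = (cy + radius * np.sin(theta)).astype(int)
    signal = image[y, x]
    coeffs = np.fft.rfft(signal) / n_theta
    return np.abs(coeffs[: m_max + 1])  # |c_m| for m = 0..m_max

# Synthetic frame with a dominant m = 2 structure
N = 128
yy, xx = np.mgrid[0:N, 0:N]
phi = np.arctan2(yy - N // 2, xx - N // 2)
frame = 1.0 + 0.5 * np.cos(2 * phi)
amps = azimuthal_mode_amplitudes(frame, N // 2, N // 2, 40)
```

Repeating this for consecutive frames gives the phase of each mode versus time, from which the rotation frequencies and directions follow.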

  18. Underwater television camera for monitoring inner side of pressure vessel

    International Nuclear Information System (INIS)

    Takayama, Kazuhiko.

    1997-01-01

    An underwater television support device, equipped with a rotatable and vertically movable underwater television camera, and an underwater television camera controlling device, which monitors images of the inside of the reactor core photographed by the camera and controls the position of the camera and the underwater light, are disposed on the upper lattice plate of a reactor pressure vessel. The two are electrically connected by a cable so that the inside of the reactor core can be observed rapidly with the underwater television camera. Reproducibility is extremely good because the camera position and image information are efficiently concentrated during inspection and observation. As a result, the number of steps in a periodical inspection can be reduced, shortening the inspection schedule. Since fuel assemblies need not be withdrawn over a wide region of the reactor core and the device can be used with the fuel assemblies left in place in the reactor, it is suitable for inspection of detectors for nuclear instrumentation. (N.H.)

  19. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
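The triangulation step described above (recovering 3D point locations from two camera views) can be illustrated with the standard linear (DLT) method on synthetic cameras. This is a generic sketch of the technique, not the authors' algorithm:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two camera views.

    P1, P2: 3x4 projection matrices; x1, x2: 2D image points (pixels).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic cameras: identity pose, and a 1-unit baseline along x
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 5.0])
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noiseless points the recovery is exact; the accuracy issues reported in the abstract arise once the relative camera pose itself must be estimated from tracked surface points.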

  20. X/Ka Celestial Frame Improvements: Vision to Reality

    Science.gov (United States)

    Jacobs, C. S.; Bagri, D. S.; Britcliffe, M. J.; Clark, J. E.; Franco, M. M.; Garcia-Miro, C.; Goodhart, C. E.; Horiuchi, S.; Lowe, S. T.; Moll, V. E.; et al.

    2010-01-01

    In order to extend the International Celestial Reference Frame from its S/X-band (2.3/8.4 GHz) basis to a complementary frame at X/Ka-band (8.4/32 GHz), we began in mid-2005 an ongoing series of X/Ka observations using NASA's Deep Space Network (DSN) radio telescopes. Over the course of 47 sessions, we have detected 351 extra-galactic radio sources covering the full 24 hours of right ascension and declinations down to -45 degrees. Angular source position accuracy is at the part-per-billion level. We developed an error budget which shows that the main errors arise from limited sensitivity, mismodeling of the troposphere, uncalibrated instrumental effects, and the lack of a southern baseline. Recent work has improved sensitivity by improving pointing calibrations and by increasing the data rate four-fold. Troposphere calibration has been demonstrated at the mm level. Construction of instrumental phase calibrators and new digital baseband filtering electronics began in recent months. We will discuss the expected effect of these improvements on the X/Ka frame.

  1. Development of X-ray CCD camera system with high readout rate using ASIC

    International Nuclear Information System (INIS)

    Nakajima, Hiroshi; Matsuura, Daisuke; Anabuki, Naohisa; Miyata, Emi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Katayama, Haruyoshi

    2009-01-01

    We report on the development of an X-ray charge-coupled device (CCD) camera system with a high readout rate using an application-specific integrated circuit (ASIC) and the Camera Link standard. The distinctive ΔΣ-type analog-to-digital converter is introduced into the chip to achieve effective noise shaping and to obtain high resolution with relatively simple circuits. The unit test proved a moderately low equivalent input noise of 70 μV at a high readout pixel rate of 625 kHz, while the entire chip consumes only 100 mW. The Camera Link standard was applied for the connectivity between the camera system and frame grabbers. In the initial test of the whole system, we adopted a P-channel CCD with a thick depletion layer developed for the X-ray CCD camera onboard the next Japanese X-ray astronomical satellite. The characteristic X-rays from ^109Cd were successfully read out, resulting in an energy resolution of 379 (±7) eV (FWHM) at 22.1 keV, that is, ΔE/E = 1.7%, with a readout rate of 44 kHz.

  2. Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)

    Science.gov (United States)

    Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph

    2015-01-01

    The optical design of the wide field of view refractive camera, 34 degrees diagonal field, for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics resulting in a reduction of average spot size by sixty percent in the final design. An external long wavelength pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.

  3. Artificial frame filling using adaptive neural fuzzy inference system for particle image velocimetry dataset

    Science.gov (United States)

    Akdemir, Bayram; Doǧan, Sercan; Aksoy, Muharrem H.; Canli, Eyüp; Özgören, Muammer

    2015-03-01

    Liquid behaviors are very important in many areas, especially in mechanical engineering. A fast camera is one way to observe and study liquid behavior: it traces dust or colored markers traveling in the liquid and takes as many pictures per second as possible. Every image is a large data structure because of its resolution. For fast liquid velocities, it is not easy to produce a fluent frame sequence from the captured images. Artificial intelligence is widely used in science to solve nonlinear problems, and the adaptive neural fuzzy inference system (ANFIS) is a common approach in the literature. Any particle velocity in a liquid has a two-dimensional speed and its derivatives. In this study, an ANFIS was used to create an artificial frame between the previous and subsequent frames, offline, using velocities and vorticities to compute a crossing-point vector between the previous and subsequent points. The ANFIS fills virtual frames among the real frames in order to improve image continuity, which makes the images much more understandable at chaotic or vortical points. After applying the ANFIS, the image dataset doubles in size and alternates between virtual and real frames. The obtained results were evaluated using R2 testing and mean squared error. R2, a statistical measure of similarity, was 0.82, 0.81, 0.85 and 0.8 for the velocities and their derivatives, respectively.
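As a rough illustration of virtual-frame filling and the R2 evaluation, the sketch below uses simple linear averaging as a stand-in for the ANFIS (which is not reproduced here), interpolating a synthetic velocity field between two real frames:

```python
import numpy as np

def midpoint_frame(prev_field, next_field):
    """Insert a virtual frame between two velocity fields.

    Linear stand-in for the ANFIS interpolator in the paper: the virtual
    frame is simply the average of the previous and subsequent fields.
    """
    return 0.5 * (prev_field + next_field)

def r2_score(actual, predicted):
    """Coefficient of determination, the similarity measure used in the paper."""
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic sequence: a velocity field evolving smoothly in time
grid = np.linspace(0, 2 * np.pi, 64)
def field(t):
    return np.sin(grid[:, None] + t) * np.cos(grid[None, :] - t)

# Interpolate between t = 0 and t = 1; compare with the true field at t = 0.5
virtual = midpoint_frame(field(0.0), field(1.0))
score = r2_score(field(0.5), virtual)
```

The true ANFIS uses velocities and vorticities as inputs rather than plain averaging, which is why it handles the chaotic and vortical regions better than this linear baseline would.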

  4. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging.

    Science.gov (United States)

    Andreozzi, Jacqueline M; Zhang, Rongxiao; Glaser, Adam K; Jarvis, Lesley A; Pogue, Brian W; Gladstone, David J

    2015-02-01

    To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. The
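The 327-count detection threshold quoted above follows directly from the 0.5% criterion applied to the 16-bit full scale:

```python
# Detection threshold used in the study: 0.5% of the 16-bit full scale
full_scale = 2**16 - 1               # 65535 counts
threshold = int(0.005 * full_scale)  # 327 counts
```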

  5. Strategic options towards an affordable high-performance infrared camera

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining the low cost of the CMOS sensor success story has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512 pixel InGaAs uncooled system with high sensitivity, low noise, high speed (500 frames per second (FPS) at full resolution), and low power consumption. It supports market adoption not only by demonstrating the high-performance IR imaging capability and value add demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for consumer-facing application industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, readout electronics compatible with multiple focal plane arrays, and dense or ultra-small pixel pitch devices.

  6. Teen drivers' awareness of vehicle instrumentation in naturalistic research.

    Science.gov (United States)

    Ehsani, J P; Haynie, D; Ouimet, M C; Zhu, C; Guillaume, C; Klauer, S G; Dingus, T; Simons-Morton, B G

    2017-12-01

    Naturalistic driving methods require the installation of instruments and cameras in vehicles to record driving behavior. A critical, yet unexamined issue in naturalistic driving research is the extent to which the vehicle instruments and cameras used for naturalistic methods change human behavior. We sought to describe the degree to which teenage participants' self-reported awareness of vehicle instrumentation changed over time, and whether that awareness was associated with driving behaviors. Forty-two newly-licensed teenage drivers participated in an 18-month naturalistic driving study. Data on driving behaviors, including crash/near-crash and elevated gravitational force (g-force) event rates, were collected over the study period. At the end of the study, participants were asked to rate the extent to which they were aware of instruments in the vehicle at four time points. They were also asked to describe their own and their passengers' perceptions of the instrumentation in the vehicle during an in-depth interview. The number of critical event button presses was used as a secondary measure of camera awareness. The association between self-reported awareness of the instrumentation and objectively measured driving behaviors was tested using correlations and linear mixed models. Most participants reported that their awareness of vehicle instrumentation declined across the duration of the 18-month study. Their awareness increased in response to their passengers' concerns about the cameras or if they were involved in a crash. The number of critical event button presses was initially high and declined rapidly. There was no correlation between drivers' awareness of instrumentation and their crash and near-crash rate or elevated g-force event rate. Awareness was not associated with crash and near-crash rates or elevated g-force event rates, consistent with having no effect on this measure of driving performance. Naturalistic driving studies are likely to yield

  7. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    Science.gov (United States)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

    This work employs a complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform the image scan without the difficulty of translating the receiver above the target as traditional LLS imaging systems do. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling-shutter operation of the CMOS camera. The fan-beam lasers were turned on/off to illuminate narrow zones on the target in close correspondence with the exposure lines during the rolling of the camera's electronic shutter. Frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and non-scanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
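The ROI-based scheme (five row bands, each read out while only the matching stripe of the target is illuminated, then reassembled) can be simulated with arrays. The function below is an illustrative model of the acquisition logic, not the camera's actual API:

```python
import numpy as np

def roi_scan(scene, illumination_bands):
    """Simulate ROI-based band-by-band acquisition.

    The sensor's rows are split into equal bands; each band is read out
    while only the matching stripe of the scene is illuminated, and the
    bands are reassembled into one composite frame.
    """
    H, W = scene.shape
    n = len(illumination_bands)
    band_h = H // n
    composite = np.zeros_like(scene)
    for k, lit in enumerate(illumination_bands):
        r0, r1 = k * band_h, (k + 1) * band_h
        if lit:  # laser fan beam synchronized with this ROI
            composite[r0:r1] = scene[r0:r1]
    return composite

# A full scan: every band illuminated in turn reproduces the whole scene
scene = np.arange(25, dtype=float).reshape(5, 5)
frame = roi_scan(scene, [True] * 5)
```

In the real system each band is acquired at a different instant, so only the illuminated stripe contributes signal while backscatter from the unlit water column is suppressed.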

  8. Pulse-dilation enhanced gated optical imager with 5 ps resolution (invited)

    Energy Technology Data Exchange (ETDEWEB)

    Hilsabeck, T. J.; Kilkenny, J. D. [General Atomics, P.O. Box 85608, San Diego, California 92186-5608 (United States); Hares, J. D.; Dymoke-Bradshaw, A. K. L. [Kentech Instruments Ltd., Wallingford, Oxfordshire OX10 (United Kingdom); Bell, P. M.; Koch, J. A.; Celliers, P. M.; Bradley, D. K.; McCarville, T.; Pivovaroff, M.; Soufli, R.; Bionta, R. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2010-10-15

    A 5 ps gated framing camera was demonstrated using the pulse-dilation of a drifting electron signal. The pulse-dilation is achieved by accelerating a photoelectron derived information pulse with a time varying potential [R. D. Prosser, J. Phys. E 9, 57 (1976)]. The temporal dependence of the accelerating potential causes a birth time dependent axial velocity dispersion that spreads the pulse as it transits a drift region. The expanded pulse is then imaged with a conventional gated microchannel plate based framing camera and the effective gating time of the combined instrument is reduced over that of the framing camera alone. In the drift region, electron image defocusing in the transverse or image plane is prevented with a large axial magnetic field. Details of the unique issues associated with rf excited photocathodes were investigated numerically and a prototype instrument based on this principle was recently constructed. Temporal resolution of the instrument was measured with a frequency tripled femtosecond laser operating at 266 nm. The system demonstrated 20x temporal magnification and the results are presented here. X-ray image formation strategies and photometric calculations for inertial confinement fusion implosion experiments are also examined.
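The effective gate of the combined instrument is the conventional MCP framing camera's gate width divided by the temporal magnification of the drift region. The 100 ps MCP gate width below is an assumed typical value for illustration, not a figure from the paper; the 20x magnification is reported:

```python
# Pulse-dilation arithmetic: effective gate = MCP gate / temporal magnification
mcp_gate_ps = 100.0    # assumed gate width of the conventional MCP framing camera
magnification = 20.0   # demonstrated temporal magnification of the drift region
effective_gate_ps = mcp_gate_ps / magnification  # 5 ps, the quoted resolution
```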

  9. Resolved spectrophotometric properties of the Ceres surface from Dawn Framing Camera images

    Science.gov (United States)

    Schröder, S. E.; Mottola, S.; Carsenty, U.; Ciarniello, M.; Jaumann, R.; Li, J.-Y.; Longobardo, A.; Palmer, E.; Pieters, C.; Preusker, F.; Raymond, C. A.; Russell, C. T.

    2017-05-01

    We present a global spectrophotometric characterization of the Ceres surface using Dawn Framing Camera (FC) images. We identify the photometric model that yields the best results for photometrically correcting images. Corrected FC images acquired on approach to Ceres were assembled into global maps of albedo and color. Generally, albedo and color variations on Ceres are muted. The albedo map is dominated by a large, circular feature in Vendimia Planitia, known from HST images (Li et al., 2006), and dotted by smaller bright features mostly associated with fresh-looking craters. The dominant color variation over the surface is represented by the presence of "blue" material in and around such craters, which has a negative spectral slope over the visible wavelength range when compared to average terrain. We also mapped variations of the phase curve by employing an exponential photometric model, a technique previously applied to asteroid Vesta (Schröder et al., 2013b). The surface of Ceres scatters light differently from Vesta in the sense that the ejecta of several fresh-looking craters may be physically smooth rather than rough. High albedo, blue color, and physical smoothness all appear to be indicators of youth. The blue color may result from the desiccation of ejected material that is similar to the phyllosilicates/water ice mixtures in the experiments of Poch et al. (2016). The physical smoothness of some blue terrains would be consistent with an initially liquid condition, perhaps as a consequence of impact melting of subsurface water ice. We find red terrain (positive spectral slope) near Ernutet crater, where De Sanctis et al. (2017) detected organic material. The spectrophotometric properties of the large Vendimia Planitia feature suggest it is a palimpsest, consistent with the Marchi et al. (2016) impact basin hypothesis. 
The central bright area in Occator crater, Cerealia Facula, is the brightest on Ceres with an average visual normal albedo of about 0.6 at

  10. Exploring CEO’s Leadership Frames and E-Commerce Adoption among Bruneian SMEs

    Directory of Open Access Journals (Sweden)

    Afzaal H. Seyal

    2012-04-01

    Full Text Available The study examines the leadership styles of 250 CEOs in the adoption of electronic commerce (EC) among Bruneian SMEs. The study uses Bolman and Deal's instrument to measure leadership frames and found that the majority (70%) of the leaders practice all four frames and can be considered effective leaders. The paired human and symbolic frames of leadership remain dominant. In addition, the structural, human resource and symbolic frames are ranked highest among the multiple (three) frames used. However, the paired leadership frames (human and symbolic) were found to be a significant predictor of EC adoption among Bruneian SMEs. Based upon the analysis and conclusions, some recommendations are made for the relevant authorities.

  11. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Sven Fleck

    2006-12-01

    Full Text Available Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results are transmitted, which are on a higher level of abstraction. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. Also a georeferenced live visualization embedded in Google Earth is presented.

  12. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Fleck Sven

    2007-01-01

    Full Text Available Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results are transmitted, which are on a higher level of abstraction. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. Also a georeferenced live visualization embedded in Google Earth is presented.

  13. SOFIA science instruments: commissioning, upgrades and future opportunities

    Science.gov (United States)

    Smith, Erin C.; Miles, John W.; Helton, L. Andrew; Sankrit, Ravi; Andersson, B. G.; Becklin, Eric E.; De Buizer, James M.; Dowell, C. D.; Dunham, Edward W.; Güsten, Rolf; Harper, Doyal A.; Herter, Terry L.; Keller, Luke D.; Klein, Randolf; Krabbe, Alfred; Logsdon, Sarah; Marcum, Pamela M.; McLean, Ian S.; Reach, William T.; Richter, Matthew J.; Roellig, Thomas L.; Sandell, Göran; Savage, Maureen L.; Temi, Pasquale; Vacca, William D.; Vaillancourt, John E.; Van Cleve, Jeffrey E.; Young, Erick T.

    2014-07-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is the world's largest airborne observatory, featuring a 2.5 meter effective aperture telescope housed in the aft section of a Boeing 747SP aircraft. SOFIA's current instrument suite includes: FORCAST (Faint Object InfraRed CAmera for the SOFIA Telescope), a 5-40 μm dual band imager/grism spectrometer developed at Cornell University; HIPO (High-speed Imaging Photometer for Occultations), a 0.3-1.1 μm imager built by Lowell Observatory; GREAT (German Receiver for Astronomy at Terahertz Frequencies), a multichannel heterodyne spectrometer covering 60-240 μm, developed by a consortium led by the Max Planck Institute for Radio Astronomy; FLITECAM (First Light Infrared Test Experiment CAMera), a 1-5 μm wide-field imager/grism spectrometer developed at UCLA; FIFI-LS (Far-Infrared Field-Imaging Line Spectrometer), a 42-200 μm IFU grating spectrograph completed by the University of Stuttgart; and EXES (Echelon-Cross-Echelle Spectrograph), a 5-28 μm high-resolution spectrometer designed at the University of Texas and being completed by UC Davis and NASA Ames Research Center. HAWC+ (High-resolution Airborne Wideband Camera) is a 50-240 μm imager that was originally developed at the University of Chicago as a first-generation instrument (HAWC), and is being upgraded at JPL to add polarimetry and new detectors developed at Goddard Space Flight Center (GSFC). SOFIA will continually update its instrument suite with new instrumentation, technology demonstration experiments and upgrades to the existing instruments. This paper details the current instrument capabilities and status, as well as the plans for future instrumentation.

  14. Picosecond X-ray streak camera dynamic range measurement

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.; Gontier, D.; Raimbourg, J.; Rubbelynck, C.; Trosseille, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Fronty, J.-P.; Goulmy, C. [Photonis SAS, Avenue Roger Roncier, BP 520, 19106 Brive Cedex (France)

    2016-09-15

    Streak cameras are widely used to record the spatio-temporal evolution of laser-induced plasma. A prototype picosecond X-ray streak camera has been developed and tested by the Commissariat à l’Énergie Atomique et aux Énergies Alternatives to meet the specific needs of the Laser MegaJoule. The dynamic range of this instrument is measured with picosecond X-ray pulses generated by the interaction of a laser beam with a copper target. The required value of 100 is reached only in the configurations combining the slowest sweeping speed with optimization of the streak tube electron throughput by an appropriate choice of the high voltages applied to its electrodes.

  15. I Am Not a Camera: On Visual Politics and Method. A Response to Roy Germano

    NARCIS (Netherlands)

    Yanow, D.

    2014-01-01

    No observational method is "point and shoot." Even bracketing interpretive methodologies and their attendant philosophies, a researcher, including an experimentalist, always frames observation in terms of the topic of interest. I cannot ever be "just a camera lens," not as researcher and not as

  16. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of their better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of the MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. an incandescent lamp), and the irradiance has to be measured in W/m2 instead of lux (lumen/m2). Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visual range. 
Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera
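The radiometric requirement described above, quantifying how much of a broadband source's output actually falls in the SWIR band in W/m2 rather than lux, can be illustrated by integrating Planck's law over the band. This is a hedged sketch: the 2856 K lamp temperature (CIE illuminant A) and the 0.9-1.7 µm band limits are illustrative assumptions, not values from the paper.

```python
import numpy as np

H, C, KB = 6.62607e-34, 2.99792e8, 1.38065e-23  # Planck, light speed, Boltzmann (SI)

def band_exitance_w_m2(t_kelvin, lam_lo_m, lam_hi_m, n=20000):
    """Blackbody spectral exitance integrated over [lam_lo, lam_hi], in W/m^2."""
    lam = np.linspace(lam_lo_m, lam_hi_m, n)
    # Planck's law for spectral exitance M(lambda, T); expm1 keeps the
    # denominator accurate at long wavelengths
    m = 2 * np.pi * H * C**2 / (lam**5 * np.expm1(H * C / (lam * KB * t_kelvin)))
    return np.trapz(m, lam)

# Assumed 2856 K incandescent source; fraction of its exitance in an
# assumed 0.9-1.7 um SWIR band
total = band_exitance_w_m2(2856.0, 1e-7, 1e-4)
swir = band_exitance_w_m2(2856.0, 0.9e-6, 1.7e-6)
```

As a sanity check, `total` should agree with the Stefan-Boltzmann law, and roughly 40% of such a lamp's output lands in the SWIR band, which is why an incandescent lamp is a workable SWIR source.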

  17. Volcano monitoring with an infrared camera: first insights from Villarrica Volcano

    Science.gov (United States)

    Rosas Sotomayor, Florencia; Amigo Ramos, Alvaro; Velasquez Vargas, Gabriela; Medina, Roxana; Thomas, Helen; Prata, Fred; Geoffroy, Carolina

    2015-04-01

    This contribution focuses on the first trials of almost 24/7 monitoring of Villarrica volcano with an infrared camera. Results must be compared with other SO2 remote sensing instruments, such as DOAS and the UV camera, for the daytime measurements. Infrared remote sensing of volcanic emissions is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when access to the vent is difficult, during volcanic crises, and at night. In recent years, a ground-based infrared camera (Nicair) has been developed by Nicarnica Aviation, which quantifies SO2 and ash in volcanic plumes based on the infrared radiance at specific wavelengths through the application of filters. Three Nicair1 (first model) cameras have been acquired by the Geological Survey of Chile in order to study degassing of active volcanoes. Several trials with the instruments have been performed on northern Chilean volcanoes, and have proven that the intervals of retrieved SO2 concentrations and fluxes are as expected. Measurements were also performed at Villarrica volcano, where a location to install a "fixed" camera, 8 km from the crater, was identified: a coffee house with electrical power, a wifi network, polite and committed owners, and a full view of the volcano summit. The first measurements are being made and processed in order to obtain full days and weeks of SO2 emissions, analyze data transfer and storage, improve the remote control of the instrument and notebook in case of breakdown, and add web-cam/GoPro support. The goal of the project is to implement a fixed station to monitor and study Villarrica volcano with a Nicair1, integrating and comparing these results with other remote sensing instruments. 
This work also aims to strengthen bonds with the community by developing teaching material and giving talks to communicate volcanic hazards and other geoscience topics to the people who live "just around the corner" from one of the most active volcanoes

  18. Radiation area monitoring by wireless-communicating area monitor with surveillance camera

    International Nuclear Information System (INIS)

    Shimura, Mitsuo; Kobayashi, Hiromitsu; Kitahara, Hideki; Kobayashi, Hironobu; Okamoto, Shinji

    2004-01-01

    Aiming at dose reduction and work efficiency improvement for nuclear power plants that have high-dose regions, we have developed a wireless-communicating Area Monitor with Surveillance Camera system and performed an on-site test. We are now deploying this Area Monitor with Surveillance Camera for use as a TV camera in the controlled area, which enables a personal computer to simultaneously display two or more dose values and live site images on the screen. For the radiation detector of this Area Monitor System, our wireless-communicating dosimeter is utilized. Image data are transmitted via a wireless Local Area Network (LAN). As a test result, image transmission at a maximum of 20 frames per second has been realized, which shows that this concept is a practical application. Remote-site monitoring from an office desk located within the non-controlled area has also been realized, adopting Japan's wireless phone system, PHS (Personal Handy-phone System), as the transmission interface. (author)

  19. Vibration extraction based on fast NCC algorithm and high-speed camera.

    Science.gov (United States)

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images on the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
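The NCC template tracking and the local-search speed-up described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; all function names are hypothetical, and the subpixel refinement step is omitted.

```python
import numpy as np

def ncc_score_map(frame, template):
    """Brute-force normalized cross-correlation of `template` at every
    valid position in `frame`; scores lie in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.full((frame.shape[0] - th + 1, frame.shape[1] - tw + 1), -1.0)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = frame[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom > 0:
                out[y, x] = (wz * t).sum() / denom
    return out

def track(frame, template, prev_yx=None, radius=8):
    """One displacement extraction. Restricting the scan to a window
    around the previous match mimics the local-search speed-up."""
    if prev_yx is not None:
        y0 = max(prev_yx[0] - radius, 0)
        x0 = max(prev_yx[1] - radius, 0)
        sub = frame[y0:y0 + template.shape[0] + 2 * radius,
                    x0:x0 + template.shape[1] + 2 * radius]
        scores = ncc_score_map(sub, template)
        y, x = np.unravel_index(scores.argmax(), scores.shape)
        return (y0 + y, x0 + x)
    scores = ncc_score_map(frame, template)
    return tuple(np.unravel_index(scores.argmax(), scores.shape))
```

The normalization by local mean and energy is what makes the score robust to the illumination changes expected outdoors, which plain correlation is not.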

  20. Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam

    International Nuclear Information System (INIS)

    Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A.E.; Engelhardt, M.

    2005-01-01

    When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The principal set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2x10^7 cm^-2 s^-1, which makes it possible to observe sequences in reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300x1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, it must be less due to the inherent noise level from the intensifier. The obtained results should be seen as a starting point for meeting the different requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operation principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves and camshaft with a micro-channel-plate-intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points

  1. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    Science.gov (United States)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-Francois; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-01-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
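The emission-rate derivation mentioned above, integrating SO2 column densities along a transect across the plume and scaling by plume speed, can be sketched as follows. This is a hypothetical helper for illustration; the spatial integration and velocity determination methods that actually differed between groups are reduced here to their simplest form.

```python
import numpy as np

def so2_emission_rate_tpd(columns_kg_m2, pixel_width_m, plume_speed_m_s):
    """SO2 emission rate in tonnes/day from a transect of column densities
    (kg/m^2) sampled perpendicular to the plume axis."""
    # integrate across the transect -> kg of SO2 per metre of plume length
    kg_per_m = np.sum(columns_kg_m2) * pixel_width_m
    # advect at the plume speed -> kg/s, then convert to tonnes/day
    kg_per_s = kg_per_m * plume_speed_m_s
    return kg_per_s * 86400.0 / 1000.0
```

The sketch makes the two dominant error sources visible: the transect integral (spatial integration method) and the plume speed, which is exactly where the intercompared systems diverged.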

  2. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    Science.gov (United States)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  3. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  4. Indoor calibration for stereoscopic camera STC: a new method

    Science.gov (United States)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of planetary surfaces from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a single sensor. Its optical model is based on a brand-new concept to minimize mass and volume and to allow push-frame imaging. This model required the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with a source/target essentially placed at infinity. This auxiliary indoor setup permits, on one side, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other side, it allows replication of different viewing angles for the considered targets. Neglecting for the sake of simplicity the curvature of Mercury, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  5. Use and maintenance of nuclear medicine instruments in Southeast Asia

    International Nuclear Information System (INIS)

    1983-02-01

    Nuclear medicine instruments are rather sophisticated. They are difficult to maintain in effective working condition, especially in developing countries. The present document describes a survey conducted in Bangladesh, India, Malaysia, Pakistan, the Philippines, Singapore, Sri Lanka and Thailand from October 1977 to March 1978 on the use and maintenance of nuclear medicine equipment. The survey evaluated the existing problems of instrument maintenance in the 8 countries visited. The major instruments in use were (1) scintillation probe counters, (2) well scintillation counters and (3) rectilinear scanners. Gamma cameras were not widely available in the region at the time of the survey. Most of the surveyed instruments were kept in a detrimental environment, resulting in a high failure rate that caused a relatively high instrument unavailability of 11%. Inefficient bureaucratic handling of repair cases, difficulties with the supply of spare and replacement parts, and lack of training proved to be the main reasons for long periods of instrument inoperation. Remedial actions, based on the survey data, have been initiated

  6. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier to the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We present an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we describe the two main components of automatic calibration. The first is the intra-camera geometry estimation, which leads to an estimate of the tilt angle, focal length and camera height and is important for the conversion from pixels to meters and vice versa. The second is the inter-camera topology inference, which leads to an estimate of the distance between cameras and is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
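The pixels-to-meters conversion that the estimated tilt angle, focal length and camera height enable can be sketched for a flat ground plane as follows. This is an illustrative helper under a pinhole-camera, flat-ground assumption; the names and signature are hypothetical, not the paper's API.

```python
import math

def row_to_ground_distance(v, cy, f_px, tilt_rad, height_m):
    """Horizontal distance (m) on a flat ground plane for image row v,
    given the principal-point row cy, focal length in pixels, camera
    tilt below the horizon (rad), and camera height (m)."""
    # angle of the pixel's viewing ray below the horizontal
    depression = tilt_rad + math.atan2(v - cy, f_px)
    if depression <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return height_m / math.tan(depression)
```

Rows further below the horizon map to nearer ground points, which is why the tilt/height/focal-length triple is sufficient for the pixel-to-meter scale at every row.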

  7. High-frame-rate digital radiographic videography

    Science.gov (United States)

    King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott

    1994-10-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM, demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100 microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  8. Development of a portable x-ray tv camera set

    International Nuclear Information System (INIS)

    Panityotai, J.

    1990-01-01

    A portable X-ray TV camera set was developed using a 24 V battery as the power supply unit. The development aims at a non-film X-radiographic technique with low radiation exposure. The machine is able to capture one X-radiographic frame at a time with a resolution of 256 x 256 pixels at 64 gray levels. The investigation shows a horizontal resolution of 0.6 lines per millimeter and a vertical resolution of 0.7 lines per millimeter

  9. ACCURACY ASSESSMENT OF GO PRO HERO 3 (BLACK CAMERA IN UNDERWATER ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    P. Helmholz

    2016-06-01

    Full Text Available Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of underwater photogrammetry, especially with respect to the fact that the change of medium when submerged in water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 Black). The test sets included the handling of the camera in a controlled manner, where the camera was only dunked into the water tank using 7MP and 12MP resolution, and a rough handling, where the camera was shaken as well as being removed from the waterproof case using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7MP test series, the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12MP test series the maximum rms value is 0.653 mm.

  10. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    Science.gov (United States)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of underwater photogrammetry, especially with respect to the fact that the change of medium when submerged in water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 Black). The test sets included the handling of the camera in a controlled manner, where the camera was only dunked into the water tank using 7MP and 12MP resolution, and a rough handling, where the camera was shaken as well as being removed from the waterproof case using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7MP test series, the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12MP test series the maximum rms value is 0.653 mm.

  11. 3D shape measurement for moving scenes using an interlaced scanning colour camera

    International Nuclear Information System (INIS)

    Cao, Senpeng; Cao, Yiping; Lu, Mingteng; Zhang, Qican

    2014-01-01

    A Fourier transform deinterlacing algorithm (FTDA) is proposed to eliminate the blurring and dislocation of the fringe patterns on a moving object captured by an interlaced scanning colour camera in phase measuring profilometry (PMP). Each frame of greyscale fringes from the three colour channels of every colour fringe image is divided into even and odd field fringes, each of which is processed by the FTDA. The six deinterlaced fringes obtained from one colour fringe image form two sets of three-step phase-shifted greyscale fringes, with which two 3D shapes corresponding to two different moments are reconstructed by PMP within one frame period. The deinterlaced fringe is theoretically identical to the exact frame fringe at the same moment. Simulation and experiments show the method's feasibility and validity. The method doubles the time resolution, maintains the precision of traditional phase measuring profilometry, and has potential applications in 3D shape measurement of moving and online objects. (paper)
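The three-step PMP step that each deinterlaced fringe set feeds into can be sketched as follows. This shows the standard wrapped-phase formula for three fringes with 2π/3 phase shifts; it is a generic illustration, and the function name is not from the paper.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe intensities with phase shifts of
    -2*pi/3, 0 and +2*pi/3. With I_k = A + B*cos(phi + d_k):
    I1 - I3 = sqrt(3)*B*sin(phi) and 2*I2 - I1 - I3 = 3*B*cos(phi),
    so atan2 of the scaled terms recovers phi."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Because the formula needs three fringes per reconstruction, the six deinterlaced fringes per colour frame naturally split into two phase maps, which is where the doubled time resolution comes from.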

  12. The Atacama Cosmology Telescope: Instrument

    Science.gov (United States)

    Thornton, Robert J.; Atacama Cosmology Telescope Team

    2010-01-01

    The 6-meter Atacama Cosmology Telescope (ACT) is making detailed maps of the Cosmic Microwave Background at Cerro Toco in northern Chile. In this talk, I focus on the design and operation of the telescope and its commissioning instrument, the Millimeter Bolometer Array Camera. The camera contains three independent sets of optics that operate at 148 GHz, 217 GHz, and 277 GHz with arcminute resolution, each of which couples to a 1024-element array of Transition Edge Sensor (TES) bolometers. I will report on the camera performance, including the beam patterns, optical efficiencies, and detector sensitivities. Under development for ACT is a new polarimeter based on feedhorn-coupled TES devices that have improved sensitivity and are planned to operate at 0.1 K.

  13. Digital quality control of the camera computer interface

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.

    1983-01-01

    A brief description is given of how the gamma camera-computer interface works and what kind of errors can occur. Quality control tests of the interface are then described which include 1) tests of static performance e.g. uniformity, linearity, 2) tests of dynamic performance e.g. basic timing, interface count-rate, system count-rate, 3) tests of special functions e.g. gated acquisition, 4) tests of the gamma camera head, and 5) tests of the computer software. The tests described are mainly acceptance and routine tests. Many of the tests discussed are those recommended by an IAEA Advisory Group for inclusion in the IAEA control schedules for nuclear medicine instrumentation. (U.K.)

  14. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dong Seop Kim

    2018-03-01

    Full Text Available Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible-light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible-light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors, such as illumination change and the brightness of the background, make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  15. Instruments to measure behavioural and psychological symptoms of dementia.

    Science.gov (United States)

    van der Linde, Rianne M; Stephan, Blossom C M; Dening, Tom; Brayne, Carol

    2014-03-01

    Reliable and valid measurement of behavioural and psychological symptoms of dementia (BPSD) is important for research and clinical practice. Here we provide an overview of the different instruments and discuss issues involved in choosing the most appropriate instrument to measure BPSD in research. A list of BPSD instruments was generated. For each instrument, Pubmed and SCOPUS were searched for articles that reported on its use or quality. Eighty-three instruments used to measure BPSD were identified. Instruments differ in length and detail, in whether the interview is with participants, with informants or by observation, in the target sample, and in the time frames for use. Reliability and validity are generally good, but reported in few independent samples. When choosing a BPSD instrument for research, the research question should be carefully scrutinised, and the symptoms of interest, population, quality, detail, time frame and practical issues should be considered. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Ultrafast streak and framing technique for the observation of laser driven shock waves in transparent solid targets

    International Nuclear Information System (INIS)

    Van Kessel, C.G.M.; Sachsenmaier, P.; Sigel, R.

    1975-01-01

    Shock waves driven by laser ablation in plane transparent plexiglass and solid hydrogen targets have been observed with streak and framing techniques using a high speed image converter camera, and a dye laser as a light source. The framing pictures have been made by mode locking the dye laser and using a wide streak slit. In both materials a growing hemispherical shock wave is observed with the maximum velocity at the onset of laser radiation. (author)

  17. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
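The remapping step described above, sampling one pixel layout at fractional coordinates of another grid, is typically done with bilinear interpolation. A minimal sketch in Python with NumPy (the function name and API are illustrative, not the authors' code):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample `img` at fractional coordinates (x, y) using bilinear
    interpolation -- the sub-pixel step needed when remapping a
    retina-like pixel layout onto a rectangular grid."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    # Blend the four neighbouring pixels by their fractional distances.
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```

In a real remap, this sampler would be called once per destination pixel with the transformed source coordinates.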

  18. Instrument Remote Control via the Astronomical Instrument Markup Language

    Science.gov (United States)

    Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard

    1998-01-01

    The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture has been developed that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human-readable, manner. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (APIs) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data, all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control apply to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.

  19. Instrumentation optimization for positron emission mammography

    International Nuclear Information System (INIS)

    Moses, William W.; Qi, Jinyi

    2003-01-01

    The past several years have seen designs for PET cameras optimized to image the breast, commonly known as Positron Emission Mammography or PEM cameras. The guiding principle behind PEM instrumentation is that a camera whose field of view is restricted to a single breast has higher performance and lower cost than a conventional PET camera. The most common geometry is a pair of parallel planes of detector modules, although geometries that encircle the breast have also been proposed. The ability of the detector modules to measure the depth of interaction (DOI) is also a relevant feature. This paper finds that while both the additional solid angle coverage afforded by encircling the breast and the decreased blurring afforded by the DOI measurement improve performance, the ability to measure DOI is more important than the ability to encircle the breast.

  20. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    Science.gov (United States)

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  1. Framing the frame

    Directory of Open Access Journals (Sweden)

    Todd McElroy

    2007-08-01

    Full Text Available We examined how the goal of a decision task influences the perceived positive or negative valence of the alternatives, and thereby the likelihood and direction of framing effects. In Study 1 we manipulated the goal to increase, decrease or maintain the commodity in question and found that when the goal of the task was to increase the commodity, a framing effect consistent with those typically observed in the literature was found. When the goal was to decrease, a framing effect opposite to the typical findings was observed, whereas when the goal was to maintain, no framing effect was found. When we examined the decisions of the entire population, we did not observe a framing effect. In Study 2, we provided participants with a similar decision task except that in this situation the goal was ambiguous, allowing us to observe participants' self-imposed goals and how they influenced choice preferences. The findings from Study 2 demonstrated individual variability in the imposed goal and provided a conceptual replication of Study 1.

  2. Studies on a silicon-photomultiplier-based camera for Imaging Atmospheric Cherenkov Telescopes

    Science.gov (United States)

    Arcaro, C.; Corti, D.; De Angelis, A.; Doro, M.; Manea, C.; Mariotti, M.; Rando, R.; Reichardt, I.; Tescaro, D.

    2017-12-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) represent a class of instruments dedicated to the ground-based observation of cosmic VHE gamma-ray emission, based on the detection of the Cherenkov radiation produced in the interaction of gamma rays with the Earth's atmosphere. One of the key elements of such instruments is a pixelized focal-plane camera consisting of photodetectors. To date, photomultiplier tubes (PMTs) have been the common choice given their high photon detection efficiency (PDE) and fast time response. Recently, silicon photomultipliers (SiPMs) have been emerging as an alternative. This rapidly evolving technology has strong potential to become superior to that based on PMTs in terms of PDE, which would further improve the sensitivity of IACTs, while also promising a price reduction per square millimeter of detector area. We are working to develop a SiPM-based module for the focal-plane cameras of the MAGIC telescopes, to probe this technology for IACTs with large focal-plane cameras of an area of a few square meters. We describe the solutions we are exploring in order to balance competitive performance with a minimal impact on the overall MAGIC camera design, using ray tracing simulations. We further present a comparative study of the overall light throughput based on Monte Carlo simulations and considering the properties of the major hardware elements of an IACT.

  3. Framing the frame

    OpenAIRE

    Todd McElroy; John J. Seta

    2007-01-01

    We examined how the goal of a decision task influences the perceived positive or negative valence of the alternatives, and thereby the likelihood and direction of framing effects. In Study 1 we manipulated the goal to increase, decrease or maintain the commodity in question and found that when the goal of the task was to increase the commodity, a framing effect consistent with those typically observed in the literature was found. When the goal was to decrease, a framing effect opposite to the ty...

  4. Ultrafast gated intensifier design for laser fusion x-ray framing applications

    International Nuclear Information System (INIS)

    Price, R.H.; Wiedwald, J.D.; Kalibjian, R.; Thomas, S.W.; Cook, W.M.

    1983-01-01

    A major challenge for laser fusion is the study of the symmetry and the hydrodynamic stability of imploding fuel capsules. Streaked x-radiography, in one space and one time dimension, does not provide sufficient information. Two-dimensional (spatial) frames of 10 to 100 ps duration are required, with good image quality, minimum geometrical distortion (approximately 1%), dynamic range greater than 1000 and more than 200 x 200 pixels. A gated transmission line imager (TLI) can meet these requirements with frame times between 30 and 100 ps. An instrument of this type is now being developed. Progress on this instrument, including theory of operation, ultrafast pulse generation and propagation, component integration, and high resolution phosphor screen development, is presented.

  5. The opto-cryo-mechanical design of the short wavelength camera for the CCAT Observatory

    Science.gov (United States)

    Parshley, Stephen C.; Adams, Joseph; Nikola, Thomas; Stacey, Gordon J.

    2014-07-01

    The CCAT observatory is a 25-m class Gregorian telescope designed for submillimeter observations that will be deployed at Cerro Chajnantor (~5600 m) in the high Atacama Desert region of Chile. The Short Wavelength Camera (SWCam) for CCAT is an integral part of the observatory, enabling the study of star formation at high and low redshifts. SWCam will be a facility instrument, available at first light and operating in the telluric windows at wavelengths of 350, 450, and 850 μm. In order to trace the large curvature of the CCAT focal plane, and to suit the available instrument space, SWCam is divided into seven sub-cameras, each configured to a particular telluric window. A fully refractive optical design in each sub-camera will produce diffraction-limited images. The material of choice for the optical elements is silicon, due to its excellent transmission in the submillimeter and its high index of refraction, enabling thin lenses of a given power. The cryostat's vacuum windows double as the sub-cameras' field lenses and are ~30 cm in diameter. The other lenses are mounted at 4 K. The sub-cameras will share a single cryostat providing thermal intercepts at 80, 15, 4, 1 and 0.1 K, with cooling provided by pulse tube cryocoolers and a dilution refrigerator. The use of the intermediate temperature stage at 15 K minimizes the load at 4 K and reduces operating costs. We discuss our design requirements, specifications, key elements and expected performance of the optical, thermal and mechanical design for the short wavelength camera for CCAT.

  6. Remote removal of an obstruction from FFTF [Fast Flux Test Facility] in-service inspection camera track

    International Nuclear Information System (INIS)

    Gibbons, P.W.

    1990-11-01

    Remote techniques and special equipment were used to clear the path of a closed-circuit television camera system that travels on a monorail track around the reactor vessel support arm structure. A tangle of wire-wrapped instrumentation tubing had been inadvertently inserted through a dislocated guide-tube expansion joint and into the camera track area. An externally driven auger device, mounted on the track ahead of the camera to view the procedure, was used to retrieve the tubing. 6 figs

  7. Evaluation of tomographic ISOCAM Park II gamma camera parameters using Monte Carlo method

    International Nuclear Information System (INIS)

    Oramas Polo, Ivón

    2015-01-01

    In this paper the evaluation of tomographic ISOCAM Park II gamma camera parameters was performed using the Monte Carlo code SIMIND. The parameters uniformity, resolution and contrast were evaluated by Jaszczak phantom simulation. In addition the qualitative assessment of the center of rotation was performed. The results of the simulation are compared and evaluated against the specifications of the manufacturer of the gamma camera and taking into account the National Protocol for Quality Control of Nuclear Medicine Instruments of the Cuban Medical Equipment Control Center. A computational Jaszczak phantom model with three different distributions of activity was obtained. They can be used to perform studies with gamma cameras. (author)

  8. Development of an ultra-fast X-ray camera using hybrid pixel detectors

    International Nuclear Information System (INIS)

    Dawiec, A.

    2011-05-01

    The aim of the project, of which the work described in this thesis is a part, was to design a high-speed X-ray camera using hybrid pixels, applied to biomedical imaging and materials science. The hybrid pixel technology indeed meets the requirements of these two research fields, particularly by providing energy selection and low-dose imaging capabilities. In this thesis, high frame rate X-ray imaging based on the XPAD3-S photon-counting chip is presented. Within a collaboration between CPPM, ESRF and SOLEIL, three XPAD3 cameras were built. Two of them are operated at beamlines of the ESRF and SOLEIL synchrotron facilities, and the third is embedded in the PIXSCAN II irradiation setup at CPPM. The XPAD3 camera is a large-surface X-ray detector composed of eight detection modules of seven XPAD3-S chips each, with a high-speed data acquisition system. The readout architecture of the camera is based on the PCI Express interface and on programmable FPGA chips. The camera achieves a readout speed of 240 images/s, with the maximum number of images limited by the RAM of the acquisition PC. The performance of the device was characterized by carrying out several high-speed imaging experiments using the PIXSCAN II irradiation setup described in the last chapter of this thesis. (author)

  9. NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. P. Kersting

    2012-07-01

    Full Text Available Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
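The GPS/INS-assisted point positioning underlying such ISO procedures can be written compactly in the following generic textbook form (the symbols here are illustrative and not necessarily the authors' notation):

```latex
% Ground point I from a GPS/INS-assisted camera observation at time t:
%   r^m_I            ground coordinates of point I in the mapping frame
%   r^m_b(t), R^m_b(t)  GPS/INS-derived position and rotation of the IMU body frame
%   a^b_c, R^b_c     lever-arm offset and boresight rotation of camera c (mounting parameters)
%   r^c_i            image-point vector of point i in the camera frame; \lambda a scale factor
r^m_I = r^m_b(t) + R^m_b(t)\left( a^b_c + \lambda\, R^b_c\, r^c_i \right)
```

In a multi-camera system, one such equation is written per camera, sharing $r^m_b(t)$ and $R^m_b(t)$, which is what couples the per-camera mounting parameters in a single-step adjustment.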

  10. New Method for the Calibration of Multi-Camera Mobile Mapping Systems

    Science.gov (United States)

    Kersting, A. P.; Habib, A.; Rau, J.

    2012-07-01

    Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.

  11. First- and third-party ground truth for key frame extraction from consumer video clips

    Science.gov (United States)

    Costello, Kathleen; Luo, Jiebo

    2007-02-01

    Extracting key frames (KF) from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. KF extraction is not a new problem. However, current literature has focused mainly on sports or news video. In the consumer space, the biggest challenges for key frame selection from consumer videos are the unconstrained content and the lack of any pre-imposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truths will be discussed.
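Ground truths like these are used to judge automated selectors; the simplest such selector keeps a frame whenever it differs sufficiently from the last kept frame. A minimal sketch (the threshold and difference measure are illustrative, not the study's criteria):

```python
import numpy as np

def extract_key_frames(frames, threshold=0.1):
    """Naive key frame selector: keep a frame when its mean absolute
    difference from the last kept frame exceeds `threshold` (as a
    fraction of the 8-bit intensity range). Always keeps frame 0."""
    key_indices = [0]
    last = frames[0].astype(float)
    for i, frame in enumerate(frames[1:], start=1):
        frame = frame.astype(float)
        if np.mean(np.abs(frame - last)) / 255.0 > threshold:
            key_indices.append(i)
            last = frame
    return key_indices
```

Real algorithms add content and quality cues (faces, sharpness, camera motion); this baseline only measures raw pixel change.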

  12. Construction Cluster Volume I [Wood Structural Framing].

    Science.gov (United States)

    Pennsylvania State Dept. of Justice, Harrisburg. Bureau of Correction.

    The document is the first of a series, to be integrated with a G.E.D. program, containing instructional materials at the basic skills level for the construction cluster. It focuses on wood structural framing and contains 20 units: (1) occupational information; (2) blueprint reading; (3) using leveling instruments and laying out building lines; (4)…

  13. Online tracking of outdoor lighting variations for augmented reality with moving cameras.

    Science.gov (United States)

    Liu, Yanli; Granier, Xavier

    2012-04-01

    In augmented reality, one of the key tasks in achieving a convincing visual consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination is largely dependent on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable ones. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated using an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.
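The estimation step can be illustrated with a toy linear least-squares fit: given per-feature observations and a binary sun-visibility term, solve for the two relative intensities. This is a simplified model with illustrative names, not the paper's full formulation:

```python
import numpy as np

def estimate_light_intensities(albedo, sun_visibility, observed):
    """Least-squares estimate of relative sun and sky intensities from
    per-feature observations, assuming the simple model
        observed_k ~= albedo_k * (w_sun * sun_visibility_k + w_sky).
    Returns the pair (w_sun, w_sky)."""
    # Stack the two unknowns' coefficients column-wise and solve A w = b.
    A = np.column_stack([albedo * sun_visibility, albedo])
    w, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return w
```

In the paper's setting, the inputs would come from tracked planar feature points, with unreliable features filtered out before the fit.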

  14. The status of MUSIC: the multiwavelength sub-millimeter inductance camera

    Science.gov (United States)

    Sayers, Jack; Bockstiegel, Clint; Brugger, Spencer; Czakon, Nicole G.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Gill, Amandeep K.; Glenn, Jason; Golwala, Sunil R.; Hollister, Matthew I.; Lam, Albert; LeDuc, Henry G.; Maloney, Philip R.; Mazin, Benjamin A.; McHugh, Sean G.; Miller, David A.; Mroczkowski, Anthony K.; Noroozian, Omid; Nguyen, Hien Trong; Schlaerth, James A.; Siegel, Seth R.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas

    2014-08-01

    The Multiwavelength Sub/millimeter Inductance Camera (MUSIC) is a four-band photometric imaging camera operating from the Caltech Submillimeter Observatory (CSO). MUSIC is designed to utilize 2304 microwave kinetic inductance detectors (MKIDs), with 576 MKIDs for each observing band centered on 150, 230, 290, and 350 GHz. MUSIC's field of view (FOV) is 14' square, and the point-spread functions (PSFs) in the four observing bands have 45'', 31'', 25'', and 22'' full-widths at half maximum (FWHM). The camera was installed in April 2012 with 25% of its nominal detector count in each band, and has subsequently completed three short sets of engineering observations and one longer duration set of early science observations. Recent results from on-sky characterization of the instrument during these observing runs are presented, including achieved map-based sensitivities from deep integrations, along with results from lab-based measurements made during the same period. In addition, recent upgrades to MUSIC, which are expected to significantly improve the sensitivity of the camera, are described.

  15. Family Of Calibrated Stereometric Cameras For Direct Intraoral Use

    Science.gov (United States)

    Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon

    1983-07-01

    In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.

  16. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios, where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an also variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  17. Development of the monitoring system of plasma behavior using a CCD camera in the GAMMA 10 tandem mirror

    International Nuclear Information System (INIS)

    Kawano, Hirokazu; Nakashima, Yousuke; Higashizono, Yuta

    2007-01-01

    In the central cell of the GAMMA 10 tandem mirror, a medium-speed camera (CCD camera, 400 frames per second, 216 x 640 pixels) has been installed for the observation of plasma behavior. This camera system is designed for monitoring the plasma position and movement over the whole discharge duration. The captured two-dimensional (2-D) images are automatically displayed just after the plasma shot and stored sequentially shot by shot. This system has been established as a helpful tool for optimizing the plasma production and heating systems by measuring the plasma behavior under several experimental conditions. The camera system shows that the intensity of the visible light emission on the central-cell limiter accompanying central electron cyclotron heating (C-ECH) correlates with the wall conditioning and the immersion length of a movable limiter (iris limiter) in the central cell. (author)

  18. Using a Smartphone Camera for Nanosatellite Attitude Determination

    Science.gov (United States)

    Shimmin, R.

    2014-09-01

    The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
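The Moon-vector step described above, automatic thresholding followed by centroiding, can be sketched in a few lines. This is a hedged illustration: the function names, threshold value, and pinhole-model focal length are assumptions, not the PhoneSat flight code:

```python
import numpy as np

def bright_centroid(image, threshold=200):
    """Centroid of pixels at or above `threshold` -- the thresholding
    and centroiding step for locating a bright object such as the Moon
    in a grayscale camera image. Returns (x, y) or None."""
    ys, xs = np.nonzero(image >= threshold)
    if xs.size == 0:
        return None  # no bright object found
    return float(xs.mean()), float(ys.mean())

def pixel_to_unit_vector(cx, cy, width, height, focal_px):
    """Map a pixel coordinate to a unit direction in the camera frame,
    assuming a pinhole model with the principal point at the image
    centre. `focal_px` (focal length in pixels) is a hypothetical
    calibration value."""
    v = np.array([cx - width / 2.0, cy - height / 2.0, focal_px])
    return v / np.linalg.norm(v)
```

Feeding the resulting direction into an attitude filter, together with a second reference vector (e.g. from a magnetometer), is what makes it usable for attitude determination.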

  19. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images, so that panoramas reflect the actual luminance more faithfully. This compensates for the limitations of making stitched images look realistic through smoothing alone. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
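Retrieving values proportional to scene luminance from raw sensor counts typically combines a dark-frame subtraction with a flat-field division that removes vignetting. A generic sketch of such a radiometric correction (not the paper's exact calibration pipeline; names are illustrative):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Classic radiometric correction: subtract the dark frame, then
    divide by the normalised flat field to undo vignetting and
    per-pixel gain differences, recovering values proportional to
    scene luminance."""
    flat_net = flat.astype(float) - dark
    # Normalise so the mean gain is 1; clamp to avoid division by zero.
    gain = flat_net / flat_net.mean()
    return (raw.astype(float) - dark) / np.maximum(gain, 1e-6)
```

Applying the same correction to every sensor before blending puts all seven cameras on a common radiometric scale, which is the premise behind luminance-consistent stitching.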

  20. An Optional Instrument for European Insurance Contract Law

    Directory of Open Access Journals (Sweden)

    Helmut Heiss

    2010-08-01

Full Text Available The Principles of European Insurance Contract Law, also referred to using the acronym PEICL, were published in September 2009. They are the result of ten years of academic work undertaken by the "Restatement of European Insurance Contract Law" Project Group. In the time since its establishment in 1999, the project has been transformed from a stand-alone project into a part of the CoPECL (Common Principles of European Contract Law) network, drafting a specific part of the Common Frame of Reference. Having continually worked under the guiding principle that "the law of insurance [in Europe] must be one," it now represents a serious option for providing Europe with a single legal framework for insurance contracts. Despite the European Council's proclamations that the Common Frame of Reference will remain a non-binding instrument, the implementation of one or more optional instruments in the future does not appear to be improbable considering recent developments. The possibility of an optional instrument has been expressed more than once by the European Commission in its Action Plan and Communication on European Contract Law. Other indications in favour of an optional instrument include the European Parliament's repeated references to the Common Frame of Reference as providing, at the very least, a model for a future optional instrument, as well as the EESC's earlier proposal of an optional instrument as an alternative to standardising insurance contract law. The preparation by the EESC of another (own-initiative) opinion on European contract law is underway, and its presentation is anticipated in 2010. Hence, the optional instrument is evidently the subject of serious political deliberation. Using Article 1:102, the Principles of European Insurance Contract Law represent a prototype for such an instrument.

  1. An Optional Instrument for European Insurance Contract Law

    Directory of Open Access Journals (Sweden)

    Mandeep Lakhan

    2010-08-01

Full Text Available The Principles of European Insurance Contract Law, also referred to using the acronym PEICL, were published in September 2009. They are the result of ten years of academic work undertaken by the "Restatement of European Insurance Contract Law" Project Group. In the time since its establishment in 1999, the project has been transformed from a stand-alone project into a part of the CoPECL (Common Principles of European Contract Law) network, drafting a specific part of the Common Frame of Reference. Having continually worked under the guiding principle that "the law of insurance [in Europe] must be one," it now represents a serious option for providing Europe with a single legal framework for insurance contracts. Despite the European Council's proclamations that the Common Frame of Reference will remain a non-binding instrument, the implementation of one or more optional instruments in the future does not appear to be improbable considering recent developments. The possibility of an optional instrument has been expressed more than once by the European Commission in its Action Plan and Communication on European Contract Law. Other indications in favour of an optional instrument include the European Parliament's repeated references to the Common Frame of Reference as providing, at the very least, a model for a future optional instrument, as well as the EESC's earlier proposal of an optional instrument as an alternative to standardising insurance contract law. The preparation by the EESC of another (own-initiative) opinion on European contract law is underway, and its presentation is anticipated in 2010. Hence, the optional instrument is evidently the subject of serious political deliberation. Using Article 1:102, the Principles of European Insurance Contract Law represent a prototype for such an instrument.

  2. Automatic Moving Object Segmentation for Freely Moving Cameras

    Directory of Open Access Journals (Sweden)

    Yanli Wan

    2014-01-01

Full Text Available This paper proposes a new moving object segmentation algorithm for freely moving cameras, which are common in outdoor surveillance systems, car built-in surveillance systems, and robot navigation systems. A two-layer affine transformation model optimization method is proposed for camera motion compensation, where the outer-layer iteration filters out non-background feature points and the inner-layer iteration estimates a refined affine model based on the RANSAC method. The feature points are then classified into foreground and background according to the detected motion information. A geodesic graph-cut algorithm is then employed to extract the moving foreground based on the classified features. Unlike existing methods based on global optimization or long-term feature point tracking, our algorithm operates on only two successive frames to segment the moving foreground, which makes it suitable for online video processing applications. The experimental results demonstrate the effectiveness of our algorithm in terms of both high accuracy and fast speed.
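
The RANSAC-based inner-layer estimation described above can be illustrated with a toy pure-Python version: repeatedly fit an exact affine model to three random correspondences and keep the model with the most inliers. Function names, thresholds, and the test data are illustrative, not taken from the paper.

```python
import random

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule (raises
    ZeroDivisionError when the system is singular, e.g. for a
    collinear sample)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    cols = []
    for col in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][col] = b[r]
        cols.append(det(m) / d)
    return cols

def affine_from_3(src, dst):
    """Exact 2D affine model from a minimal sample of 3 correspondences."""
    A = [[x, y, 1.0] for x, y in src]
    return (solve3(A, [x for x, _ in dst]),   # a, b, tx
            solve3(A, [y for _, y in dst]))   # c, d, ty

def apply_affine(model, p):
    (a, b, tx), (c, d, ty) = model
    x, y = p
    return (a * x + b * y + tx, c * x + d * y + ty)

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """Keep the affine model that agrees with the most correspondences."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        idx = rng.sample(range(len(src)), 3)
        try:
            model = affine_from_3([src[i] for i in idx],
                                  [dst[i] for i in idx])
        except ZeroDivisionError:
            continue  # degenerate (collinear) minimal sample
        inliers = [i for i in range(len(src))
                   if max(abs(u - v) for u, v in
                          zip(apply_affine(model, src[i]), dst[i])) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers

# Five feature points moved by a pure translation (+5, +3), plus one
# outlier standing in for a non-background (foreground) feature point:
src = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5), (20, 20)]
dst = [(5, 3), (15, 3), (5, 13), (15, 13), (10, 8), (0, 0)]
model, inliers = ransac_affine(src, dst)
print(len(inliers))  # 5 of the 6 matches fit one affine (background) model
```

Points that fail the inlier test under the best background model are exactly the ones the outer layer would treat as foreground candidates.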

  3. Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera

    Science.gov (United States)

    Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.

    2017-12-01

From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture winter precipitation particle images from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, MASC results have so far usually been presented monthly or seasonally, with particle sizes given as histograms; no previous study has used the MASC to analyze a single storm, and none has used it to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of the new technique for PSD computation based on MASC images. Overall, this is the first study of the MASC-based PSD. We present PSD MASC experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the MASC-measured single-camera PSDs is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over three cameras) with that of a collocated 2D Video Disdrometer, and observe good agreement between the two sets of results.
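
A PSD computation of the general kind described reduces to binning each particle's size and normalizing counts by sampling volume and bin width to get a number concentration N(D). The sketch below is generic, not the authors' algorithm, and the 0.02 m³ sampling volume is an arbitrary illustrative value, not a MASC specification.

```python
def compute_psd(diameters_mm, bin_edges_mm, sample_volume_m3):
    """Number concentration N(D) per size bin, in m^-3 mm^-1:
    counts in each [edge_i, edge_{i+1}) bin divided by the sampled
    volume and the bin width."""
    counts = [0] * (len(bin_edges_mm) - 1)
    for d in diameters_mm:
        for i in range(len(counts)):
            if bin_edges_mm[i] <= d < bin_edges_mm[i + 1]:
                counts[i] += 1
                break
    return [c / (sample_volume_m3 * (bin_edges_mm[i + 1] - bin_edges_mm[i]))
            for i, c in enumerate(counts)]

edges = [0.0, 1.0, 2.0, 4.0]              # bin edges, mm
sizes = [0.5, 0.7, 1.5, 2.5, 3.0, 3.9]    # particle maximum dimensions, mm
print(compute_psd(sizes, edges, 0.02))    # ~[100.0, 50.0, 75.0]
```

Computing this separately from each of the three cameras' size estimates is one way to check the single-camera self-consistency the abstract mentions.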

  4. Technical and instrumental prerequisites for single-port laparoscopic solo surgery: state of art.

    Science.gov (United States)

    Kim, Say-June; Lee, Sang Chul

    2015-04-21

With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the reasonable replacement of a human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provides stable, fixed operative images under the control of the operator. SPSS therefore primarily benefits from preserving the operator's eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS requires more actions from the operator than surgery with a human assistant, these difficulties seem easily overcome by the more stable operative images and the reduced need for lens cleaning and camera repositioning. When the operation is expected to be difficult and demanding, the SPSS process can be assisted by the addition of another instrument holder besides the camera holder.

  5. Onboard calibration igneous targets for the Mars Science Laboratory Curiosity rover and the Chemistry Camera laser induced breakdown spectroscopy instrument

    Energy Technology Data Exchange (ETDEWEB)

    Fabre, C., E-mail: cecile.fabre@g2r.uhp-nancy.fr [G2R, Nancy Universite (France); Maurice, S.; Cousin, A. [IRAP, Toulouse (France); Wiens, R.C. [LANL, Los Alamos, NM (United States); Forni, O. [IRAP, Toulouse (France); Sautter, V. [MNHN, Paris (France); Guillaume, D. [GET, Toulouse (France)

    2011-03-15

Accurate characterization of the Chemistry Camera (ChemCam) laser-induced breakdown spectroscopy (LIBS) on-board composition targets is of prime importance for the ChemCam instrument. The Mars Science Laboratory (MSL) science and operations teams expect ChemCam to provide the first compositional results at remote distances (1.5-7 m) during the in situ analyses of the Martian surface starting in 2012. Thus, establishing LIBS reference spectra from appropriate calibration standards must be undertaken diligently. Considering the global mineralogy of the Martian surface, and the possible landing sites, three specific compositions of igneous targets have been determined. Picritic, noritic, and shergottic glasses have been produced, along with a Macusanite natural glass. A sample of each target will fly on the MSL Curiosity rover deck, 1.56 m from the ChemCam instrument, and duplicates are available on the ground. Duplicates are considered to be identical, as the relative standard deviation (RSD) of the composition dispersion is around 8%. Electronic microprobe and laser ablation inductively coupled plasma mass spectrometry (LA ICP-MS) analyses give evidence that the chemical composition of the four silicate targets is very homogeneous at microscopic scales larger than the instrument spot size, with RSD < 5% for concentration variations > 0.1 wt.% using electronic microprobe, and < 10% for concentration variations > 0.01 wt.% using LA ICP-MS. The LIBS campaign on the igneous targets performed under flight-like Mars conditions establishes reference spectra for the entire mission. The LIBS spectra between 240 and 900 nm are extremely rich, with hundreds of high signal-to-noise lines and a dynamic range sufficient to identify unambiguously major, minor and trace elements. For instance, a first LIBS calibration curve has been established for strontium from [Sr] = 284 ppm to [Sr] = 1480 ppm, showing the potential for future calibrations of other major or minor elements.
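
A calibration curve of the kind mentioned for strontium is, at its simplest, a linear least-squares fit of emission-line intensity against known concentration, inverted to predict concentration from a measured intensity. Only the 284 and 1480 ppm endpoints come from the text; the other concentrations and all intensities below are invented for illustration.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

conc = [284.0, 600.0, 1000.0, 1480.0]         # Sr standards, ppm
intensity = [1420.0, 3000.0, 5000.0, 7400.0]  # made-up line intensities
a, b = linear_fit(conc, intensity)
# Invert the fitted curve to predict [Sr] from a measured intensity:
print((4000.0 - b) / a)  # 800.0 ppm for this synthetic data
```

In practice the line intensity would first be background-subtracted and normalized, but the concentration retrieval step is this inversion.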

  6. Onboard calibration igneous targets for the Mars Science Laboratory Curiosity rover and the Chemistry Camera laser induced breakdown spectroscopy instrument

    International Nuclear Information System (INIS)

    Fabre, C.; Maurice, S.; Cousin, A.; Wiens, R.C.; Forni, O.; Sautter, V.; Guillaume, D.

    2011-01-01

Accurate characterization of the Chemistry Camera (ChemCam) laser-induced breakdown spectroscopy (LIBS) on-board composition targets is of prime importance for the ChemCam instrument. The Mars Science Laboratory (MSL) science and operations teams expect ChemCam to provide the first compositional results at remote distances (1.5-7 m) during the in situ analyses of the Martian surface starting in 2012. Thus, establishing LIBS reference spectra from appropriate calibration standards must be undertaken diligently. Considering the global mineralogy of the Martian surface, and the possible landing sites, three specific compositions of igneous targets have been determined. Picritic, noritic, and shergottic glasses have been produced, along with a Macusanite natural glass. A sample of each target will fly on the MSL Curiosity rover deck, 1.56 m from the ChemCam instrument, and duplicates are available on the ground. Duplicates are considered to be identical, as the relative standard deviation (RSD) of the composition dispersion is around 8%. Electronic microprobe and laser ablation inductively coupled plasma mass spectrometry (LA ICP-MS) analyses give evidence that the chemical composition of the four silicate targets is very homogeneous at microscopic scales larger than the instrument spot size, with RSD < 5% for concentration variations > 0.1 wt.% using electronic microprobe, and < 10% for concentration variations > 0.01 wt.% using LA ICP-MS. The LIBS campaign on the igneous targets performed under flight-like Mars conditions establishes reference spectra for the entire mission. The LIBS spectra between 240 and 900 nm are extremely rich, with hundreds of high signal-to-noise lines and a dynamic range sufficient to identify unambiguously major, minor and trace elements. For instance, a first LIBS calibration curve has been established for strontium from [Sr] = 284 ppm to [Sr] = 1480 ppm, showing the potential for future calibrations of other major or minor elements.

  7. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    Science.gov (United States)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

A Fizeau interferometer with instantaneous phase-shifting capability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10^-5 s is used with a pixelated phase-mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface, it is possible to make the reference and object beams with orthogonal polarization states coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.
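
A pixelated phase-mask camera of the general kind described recovers phase because each super-pixel carries micro-polarizers that yield four interferograms shifted by 90°, which feed the standard four-bucket arctangent. The layout and notation below are generic, not specific to this instrument.

```python
import math

def four_bucket_phase(i0, i90, i180, i270):
    """Wrapped phase (radians) from four samples of the fringe signal
    taken at phase shifts of 0, 90, 180, and 270 degrees."""
    return math.atan2(i270 - i90, i0 - i180)

# Samples of I(shift) = 2 + cos(phase + shift) for a true phase of pi/4:
phase = math.pi / 4
samples = [2 + math.cos(phase + s) for s in
           (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
print(four_bucket_phase(*samples))  # recovers ~0.7854 (pi/4)
```

Multiplying the recovered phase by lambda/(4*pi) (for a double-pass Fizeau geometry) converts it to an optical path length difference.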

  8. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    Science.gov (United States)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US among patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in a primary care setting. The objective of this study is to compare results from a low-cost thermal camera with those from a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with the nerve function affected by DPN. The limitations of low-cost cameras for DPN imaging include lower resolution (fewer active pixels), frame rate, and thermal sensitivity. We integrated two FLIR Lepton modules (80x60 active pixels, 50° HFOV) into the imaging system; subjects aged 35-76 were recruited. The difference in temperature measurements between cameras was calculated for each subject, and the results show that the difference between the two cameras (mean difference=0.4, p-value=0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.

  9. Some relationship between G-frames and frames

    Directory of Open Access Journals (Sweden)

    Mehdi Rashidi-Kouchi

    2015-06-01

Full Text Available In this paper we prove that every g-Riesz basis for a Hilbert space $H$ with respect to $K$, under an additional condition, is a Riesz basis for the Hilbert $B(K)$-module $B(H,K)$. This extends [A. Askarizadeh, M. A. Dehghan, {\em G-frames as special frames}, Turk. J. Math., 35 (2011), 1-11]. We also derive similar results for g-orthonormal and orthogonal bases. Some relationships between dual frames, dual g-frames, exact frames, and exact g-frames are presented as well.

  10. SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)

    Science.gov (United States)

    Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter

    2016-08-01

    The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was of a Prime focus camera for the 74" telescope, an equatorial design made by Grubb Parsons, where it would employ a 61mmx61mm detector to cover a 23 arcmin diameter field of view. However, while in the design phase, SAAO embarked on the process of acquiring a bespoke 1-metre robotic alt-az telescope with a 43 arcmin field of view, which needs a homegrown instrument suite. The Prime focus camera design was thus adapted for use on either telescope, increasing the detector size to 92mmx92mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74 inch the available envelope is constrained by the optical footprint of the secondary, if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600mm diameter x 250mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10mm thick window and a tip/tilt mechanism for the detector into 100mm depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter with a thickness of less than 20mm has been designed in-house, using a sliding curtain mechanism to cover an aperture of 125mmx125mm, while the filter wheel has been replaced with 2 peripheral filter cartridges (6 filters each) and a gripper to move a filter into the beam. We intend using through-vacuum wall PCB technology across the cryostat vacuum interface, instead of traditional hermetic connector-based wiring. 

  11. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    Science.gov (United States)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to SWIR fast cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imagery. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on-board thanks to an FPGA. We will show its performance and present its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark current, and readout speed, based on the SNAKE SWIR detector from Sofradir. The camera is called C-RED 2. The C-RED 2 characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the framework of the CPER.

  12. SALTICAM: $0.5M acquisition camera: every big telescope should have one

    Science.gov (United States)

    O'Donoghue, Darragh; Bauermeister, Etienne; Carter, David B.; Evans, Geoffrey P.; Koorts, Willie P.; O'Connor, James; Osman, Faranah; van der Merwe, Stan; Bigelow, Bruce C.

    2003-03-01

    The Southern African Large Telescope (SALT) is a 10-m class telescope presently under construction at Sutherland in South Africa. It is designed along the lines of the Hobby-Eberly Telescope (HET) at McDonald Observatory in West Texas. SALTICAM will be the Acquisition Camera and simple Science Imager (ACSI) for this telescope. It will also function as the Verification Instrument (VI) to check the performance of the telescope during commissioning. In VI mode, SALTICAM will comprise a filter unit, shutter and cryostat with a 2x1 mosaic of 2k x 4k x 15 micron pixel CCDs. It will be mounted at the f/4.2 corrected prime focus of the telescope. In ACSI mode it will be fed by a folding flat located close to the exit pupil of the telescope. ACSI mode will have the same functional components as VI mode but it will in addition be garnished with focal conversion lenses to re-image the corrected prime focal plane at f/2. The lenses will be made from UV transmitting crystals as the wavelength range for which the instrument is designed will span 320 to 950 nm. In addition to acting as Verification Instrument and Acquisition Camera, SALTICAM will perform simple science imaging in support of other instruments, but will also have a high time resolution capability which is not widely available on large telescopes. This paper will describe the design of the instrument, emphasizing features of particular interest.

  13. High speed motion-picture photography. Instrumentation and application

    International Nuclear Information System (INIS)

    Bertin-Maghit, G.; Delli, C.; Falgayrettes, M.

    1981-01-01

Filming technology at 5,000 frames/second is presented in this paper for the determination of the volume and the expansion speed of a gas bubble in water. The high speed 16 mm movie camera, fitted with ultra-wide angle lenses, is placed in front of a side light facing the bubble. Ten fast 60 ms flashes, released in succession, illuminate the bubble.

  14. A New Instrument for the IRTF: the MIT Optical Rapid Imaging System (MORIS)

    Science.gov (United States)

    Gulbis, Amanda A. S.; Elliot, J. L.; Rojas, F. E.; Bus, S. J.; Rayner, J. T.; Stahlberger, W. E.; Tokunaga, A. T.; Adams, E. R.; Person, M. J.

    2010-10-01

NASA's 3-m Infrared Telescope Facility (IRTF) on Mauna Kea, HI plays a leading role in obtaining planetary science observations. However, there has been no capability for high-speed, visible imaging from this telescope. Here we present a new IRTF instrument, MORIS, the MIT Optical Rapid Imaging System. MORIS is based on POETS (Portable Occultation Eclipse and Transit Systems; Souza et al., 2006, PASP, 118, 1550). Its primary component is an Andor iXon camera, a 512x512 array of 16-micron pixels with high quantum efficiency, low read noise, low dark current, and full-frame readout rates of between 3.5 Hz (6 e-/pixel read noise) and 35 Hz (49 e-/pixel read noise at electron-multiplying gain=1). User-selectable binning and subframing can increase the cadence to a few hundred Hz. An electron-multiplying mode can be employed for photon counting, effectively reducing the read noise to sub-electron levels at the expense of dynamic range. Data cubes, or individual frames, can be triggered to nanosecond accuracy using a GPS. MORIS is mounted on the side-facing window of SpeX (Rayner et al. 2003, PASP, 115, 362), allowing simultaneous near-infrared and visible observations. The mounting box contains 3:1 reducing optics to produce a 60 arcsec x 60 arcsec field of view at f/12.7. It hosts a ten-slot filter wheel, with Sloan g', r', i', and z', VR, Johnson V, and long-pass red filters. We describe the instrument design, components, and measured characteristics. We report results from the first science observations, a 24 June 2008 stellar occultation by Pluto. We also discuss a recent overhaul of the optical path, performed in order to eliminate scattered light. This work is supported in part by NASA Planetary Major Equipment grant NNX07AK95G. We are indebted to the University of Hawai'i Institute for Astronomy machine shop, in particular Randy Chung, for fabricating instrument components.

  15. Infrared Camera Diagnostic for Heat Flux Measurements on NSTX

    International Nuclear Information System (INIS)

    D. Mastrovito; R. Maingi; H.W. Kugel; A.L. Roquemore

    2003-01-01

An infrared imaging system has been installed on NSTX (National Spherical Torus Experiment) at the Princeton Plasma Physics Laboratory to measure the surface temperatures on the lower divertor and center stack. The imaging system is based on an Indigo Alpha 160 x 128 microbolometer camera with 12 bits/pixel operating in the 7-13 μm range with a 30 Hz frame rate and a dynamic temperature range of 0-700 °C. From these data and knowledge of graphite thermal properties, the heat flux is derived with a classic one-dimensional conduction model. Preliminary results of heat flux scaling are reported.
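
A one-dimensional conduction inversion of the kind mentioned is often implemented with the Cook-Felderman discretization for a semi-infinite solid; the sketch below is that generic method, not the NSTX analysis code, and the material constants are placeholder graphite-like values.

```python
import math

def cook_felderman(times, temps, rho, c, k):
    """Surface heat flux q(t_n) in W/m^2 from a surface-temperature
    history (temps[i] at times[i]), assuming 1D conduction into a
    semi-infinite solid with density rho, specific heat c, and
    conductivity k (Cook-Felderman discretization)."""
    coeff = 2.0 * math.sqrt(rho * c * k / math.pi)
    tn = times[-1]
    total = 0.0
    for i in range(1, len(times)):
        total += (temps[i] - temps[i - 1]) / (
            math.sqrt(tn - times[i - 1]) + math.sqrt(tn - times[i]))
    return coeff * total

# Self-check against the analytic constant-flux solution: applying q0
# to a semi-infinite solid gives T(t) = T0 + 2*q0*sqrt(t/(pi*rho*c*k)),
# so inverting that temperature history should recover q0.
rho, c, k = 1700.0, 710.0, 100.0        # placeholder graphite-like values
q0 = 1.0e6                              # 1 MW/m^2
times = [j * 1e-3 for j in range(201)]  # 0.2 s sampled at 1 kHz
temps = [2.0 * q0 * math.sqrt(t / (math.pi * rho * c, k)[0] / (math.pi * rho * c * k) ** 0) for t in times] if False else \
        [2.0 * q0 * math.sqrt(t / (math.pi * rho * c * k)) for t in times]
print(cook_felderman(times, temps, rho, c, k))  # close to q0
```

The semi-infinite assumption holds while the thermal penetration depth stays small compared with the tile thickness, which is why short-pulse data suit this model.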

  16. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    Science.gov (United States)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity reliable and to reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The key-frame interval distance is also taken into account to keep the accumulated propagation errors under control and to guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the depth maps of non-key-frames are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from two adjacent key-frames. The experimental results show that the proposed scheme performs better than existing 2D-to-3D schemes with a fixed key-frame interval.
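
A toy version of the adaptive key-frame idea reads as follows: walk the sequence accumulating a per-frame change score (standing in for clustered color variation plus motion intensity), and declare a new key-frame when the accumulated change exceeds a threshold or the interval hits a maximum, while enforcing a minimum spacing. The scoring and thresholds are illustrative, not from the paper.

```python
def select_key_frames(change_scores, threshold=10.0, min_gap=2, max_gap=8):
    """Indices of key frames chosen adaptively from per-frame change
    scores: a key frame is placed when accumulated change reaches
    `threshold` (or the gap reaches `max_gap`), but never closer than
    `min_gap` to the previous key frame."""
    keys = [0]                     # the first frame is always a key frame
    accumulated = 0.0
    for i in range(1, len(change_scores)):
        accumulated += change_scores[i]
        gap = i - keys[-1]
        if gap >= min_gap and (accumulated >= threshold or gap >= max_gap):
            keys.append(i)
            accumulated = 0.0      # restart accumulation at the new key
    return keys

# Low motion at first, a burst of change mid-sequence, then calm again:
scores = [0, 1, 1, 1, 9, 9, 1, 1, 1, 1, 1, 1, 1]
print(select_key_frames(scores))  # [0, 4, 6]
```

Key frames cluster around the burst of change, which is exactly where fixed-interval selection would let propagation errors accumulate.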

  17. Quality assurance of imaging instruments for nuclear medicine

    International Nuclear Information System (INIS)

    Sera, T.; Csernay, L.

    1993-01-01

Advanced quality control and assurance techniques for imaging instrumentation used in medical diagnosis are reviewed. The measurement systems for the homogeneity, linearity, geometrical resolution, energy resolution, sensitivity, and pulse yield output of gamma camera detectors are presented in detail. The two most important sets of quality control standards and tests, those of the National Electrical Manufacturers' Association (NEMA) and the International Atomic Energy Agency, are described. Their use in gamma camera calibration is proposed. (R.P.) 22 refs.; 1 tab
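
One concrete example of a NEMA-style gamma-camera homogeneity figure is the integral uniformity, 100 * (max - min) / (max + min), computed over the useful field of view of a smoothed flood-field image. A minimal sketch on a flat list of per-pixel counts (in practice NEMA prescribes specific smoothing and field-of-view masking first):

```python
def integral_uniformity(counts):
    """Integral uniformity in percent over a list of flood-field
    pixel counts: 100 * (max - min) / (max + min)."""
    hi, lo = max(counts), min(counts)
    return 100.0 * (hi - lo) / (hi + lo)

flood = [980, 1000, 1020, 990, 1010]
print(round(integral_uniformity(flood), 2))  # 2.0 percent
```

Lower values indicate a flatter detector response; a sudden rise in this number between routine flood acquisitions is a typical quality-control trigger.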

  18. A quantitative preliminary evaluation of nuclear medicine instruments in the Philippines

    International Nuclear Information System (INIS)

    Valdezco, E.M.; Caseria, E.S.; Lopez, L.B.; Pasion, I.S.; Linilitan, V.E.

    1986-01-01

This paper presents the results of a survey conducted on several nuclear medicine centers in Metro Manila, as well as one in Baguio City, to assess the performance of nuclear medicine instruments and the extent of the quality procedures being carried out. It was revealed that prompt and competent service seems to be a major problem. Of the eleven sites visited, 4 had cameras only, 4 had cameras with computers, 3 had rectilinear scanners only, and 1 had cameras plus rectilinear scanners. (ELC) 8 figs

  19. Status of the Dark Energy Survey Camera (DECam) Project

    Energy Technology Data Exchange (ETDEWEB)

Flaugher, Brenna L.; Abbott, Timothy M.C.; Angstadt, Robert; Annis, Jim; Antonik, Michelle L.; Bailey, Jim; Ballester, Otger; Bernstein, Joseph P.; Bernstein, Rebecca; Bonati, Marco; Bremer, Gale; /Fermilab /Cerro-Tololo InterAmerican Obs. /ANL /Texas A-M /Michigan U. /Illinois U., Urbana /Ohio State U. /University Coll. London /LBNL /SLAC /IFAE

    2012-06-29

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  20. Status of the Dark Energy Survey Camera (DECam) project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; McLean, Ian S.; Ramsay, Suzanne K.; Abbott, Timothy M. C.; Angstadt, Robert; Takami, Hideki; Annis, Jim; Antonik, Michelle L.; Bailey, Jim; Ballester, Otger; Bernstein, Joseph P.; Bernstein, Rebecca A.; Bonati, Marco; Bremer, Gale; Briones, Jorge; Brooks, David; Buckley-Geer, Elizabeth J.; Campa, Juila; Cardiel-Sas, Laia; Castander, Francisco; Castilla, Javier; Cease, Herman; Chappa, Steve; Chi, Edward C.; da Costa, Luis; DePoy, Darren L.; Derylo, Gregory; de Vincente, Juan; Diehl, H. Thomas; Doel, Peter; Estrada, Juan; Eiting, Jacob; Elliott, Anne E.; Finley, David A.; Flores, Rolando; Frieman, Josh; Gaztanaga, Enrique; Gerdes, David; Gladders, Mike; Guarino, V.; Gutierrez, G.; Grudzinski, Jim; Hanlon, Bill; Hao, Jiangang; Holland, Steve; Honscheid, Klaus; Huffman, Dave; Jackson, Cheryl; Jonas, Michelle; Karliner, Inga; Kau, Daekwang; Kent, Steve; Kozlovsky, Mark; Krempetz, Kurt; Krider, John; Kubik, Donna; Kuehn, Kyler; Kuhlmann, Steve E.; Kuk, Kevin; Lahav, Ofer; Langellier, Nick; Lathrop, Andrew; Lewis, Peter M.; Lin, Huan; Lorenzon, Wolfgang; Martinez, Gustavo; McKay, Timothy; Merritt, Wyatt; Meyer, Mark; Miquel, Ramon; Morgan, Jim; Moore, Peter; Moore, Todd; Neilsen, Eric; Nord, Brian; Ogando, Ricardo; Olson, Jamieson; Patton, Kenneth; Peoples, John; Plazas, Andres; Qian, Tao; Roe, Natalie; Roodman, Aaron; Rossetto, B.; Sanchez, E.; Soares-Santos, Marcelle; Scarpine, Vic; Schalk, Terry; Schindler, Rafe; Schmidt, Ricardo; Schmitt, Richard; Schubnell, Mike; Schultz, Kenneth; Selen, M.; Serrano, Santiago; Shaw, Terri; Simaitis, Vaidas; Slaughter, Jean; Smith, R. Christopher; Spinka, Hal; Stefanik, Andy; Stuermer, Walter; Sypniewski, Adam; Talaga, R.; Tarle, Greg; Thaler, Jon; Tucker, Doug; Walker, Alistair R.; Weaverdyck, Curtis; Wester, William; Woods, Robert J.; Worswick, Sue; Zhao, Allen

    2012-09-24

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  1. Status of MUSIC, the MUltiwavelength Sub/millimeter Inductance Camera

    Science.gov (United States)

    Golwala, Sunil R.; Bockstiegel, Clint; Brugger, Spencer; Czakon, Nicole G.; Day, Peter K.; Downes, Thomas P.; Duan, Ran; Gao, Jiansong; Gill, Amandeep K.; Glenn, Jason; Hollister, Matthew I.; LeDuc, Henry G.; Maloney, Philip R.; Mazin, Benjamin A.; McHugh, Sean G.; Miller, David; Noroozian, Omid; Nguyen, Hien T.; Sayers, Jack; Schlaerth, James A.; Siegel, Seth; Vayonakis, Anastasios K.; Wilson, Philip R.; Zmuidzinas, Jonas

    2012-09-01

We present the status of MUSIC, the MUltiwavelength Sub/millimeter Inductance Camera, a new instrument for the Caltech Submillimeter Observatory. MUSIC is designed to have a 14' diffraction-limited field of view instrumented with 2304 detectors in 576 spatial pixels and four spectral bands at 0.87, 1.04, 1.33, and 1.98 mm. MUSIC will be used to study dusty star-forming galaxies, galaxy clusters via the Sunyaev-Zeldovich effect, and star formation in our own and nearby galaxies. MUSIC uses broadband superconducting phased-array slot-dipole antennas to form beams, lumped-element on-chip bandpass filters to define spectral bands, and microwave kinetic inductance detectors to sense incoming light. The focal plane is fabricated in 8 tiles consisting of 72 spatial pixels each. It is coupled to the telescope via an ambient-temperature ellipsoidal mirror and a cold reimaging lens. A cold Lyot stop sits at the image of the primary mirror formed by the ellipsoidal mirror. Dielectric and metal-mesh filters are used to block thermal infrared and out-of-band radiation. The instrument uses a pulse tube cooler and a 3He/3He/4He closed-cycle cooler to cool the focal plane to below 250 mK. A multilayer shield attenuates Earth's magnetic field. Each focal plane tile is read out by a single pair of coaxes and a HEMT amplifier. The readout system consists of 16 copies of custom-designed ADC/DAC and IF boards coupled to the CASPER ROACH platform. We focus on recent updates on the instrument design and results from the commissioning of the full camera in 2012.

  2. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

In order to reduce the time needed for core verification and visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. The resulting underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, about six spent-fuel IDs can be identified at a time from a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20x zoom. Noise caused by radiation at levels below 15 Gy/h does not affect the images. (author)

  3. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

One of the earth observing instruments on the HY-1 satellite, which will be launched in 2001, the multi-spectral CCD camera system, was developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). In a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coastal zone dynamic mapping and ocean water color monitoring, which include the pollution of offshore and coastal zones, plant cover, water color, ice, underwater terrain, suspended sediment, mudflats, soil and vapor gross. The multi-spectral camera system is composed of four monocolor CCD cameras, which are line-array-based, 'push-broom' scanning cameras, each covering one of the four spectral bands. The camera system adopts field-of-view registration; that is, each camera scans the same region at the same moment. Each of them contains optics, a focal plane assembly, electrical circuits, installation structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) offset of the central wavelength is better than 5 nm; (2) degree of polarization is less than 0.5%; (3) signal-to-noise ratio is about 1000; (4) dynamic range is better than 2000:1; (5) registration precision is better than 0.3 pixel; (6) quantization is 12 bit.

  4. Performance Characterization of the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) CCD Cameras

    Science.gov (United States)

    Joiner, R. K.; Kobayashi, K.; Winebarger, A. R.; Champey, P. R.

    2014-12-01

The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a sounding rocket instrument currently being developed by NASA's Marshall Space Flight Center (MSFC) and the National Astronomical Observatory of Japan (NAOJ). The goal of this instrument is to observe and detect the Hanle effect in scattered Lyman-alpha UV (121.6 nm) light emitted by the Sun's chromosphere, in order to measure the magnetic field in this region. To make accurate measurements of this effect, the performance characteristics of the three on-board charge-coupled devices (CCDs) must meet certain requirements. These characteristics include quantum efficiency, gain, dark current, noise, and linearity, and each must meet predetermined requirements to achieve satisfactory performance for the mission. The cameras must operate with a gain of no greater than 2 e-/DN, a noise level less than 25 e-, a dark current level of less than 10 e-/pixel/s, and a residual non-linearity of less than 1%. Determining these characteristics involves performing a series of tests with each of the cameras in a high-vacuum environment. Here we present the methods and results of each of these performance tests for the CLASP flight cameras.
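Gain in e-/DN is commonly measured with a photon-transfer test: for shot-noise-limited flat fields, the gain is the ratio of mean signal to signal variance, with a flat-field pair differenced to cancel fixed-pattern noise. Below is a minimal sketch of that standard test, not the CLASP team's actual pipeline; the gain, illumination level, and bias values are illustrative only.

```python
import numpy as np

def photon_transfer_gain(flat1, flat2, bias):
    """Estimate camera gain (e-/DN) from a flat-field pair via the
    photon-transfer method: for shot-noise-limited signal,
    gain = mean_signal / variance. Differencing the two flats removes
    fixed-pattern noise (so the difference variance is divided by 2)."""
    s1 = flat1.astype(float) - bias
    s2 = flat2.astype(float) - bias
    mean_signal = 0.5 * (s1.mean() + s2.mean())
    var_signal = np.var(s1 - s2) / 2.0
    return mean_signal / var_signal

# Synthetic flats: Poisson shot noise, hypothetical gain of 2 e-/DN.
rng = np.random.default_rng(0)
true_gain = 2.0                      # e-/DN (assumed for the demo)
electrons = 10000.0                  # mean illumination in electrons
f1 = rng.poisson(electrons, (256, 256)) / true_gain + 100.0
f2 = rng.poisson(electrons, (256, 256)) / true_gain + 100.0
print(round(photon_transfer_gain(f1, f2, bias=100.0), 2))
```

The same flat-field pairs, taken over a range of illumination levels, also yield the read noise and full-well capacity from the intercept and turnover of the photon-transfer curve.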

  5. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

This paper describes a CAMAC-based video digitizer with region-of-interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom-built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that can be recorded is limited only by the available transient-recorder memory. To conserve memory, the CAMAC video synchronizer module allows the selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying the lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross-sectional slice of the plasma. Since all the data in the digitizer memory are synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage.
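The memory-conservation argument is simple arithmetic: a fixed transient-recorder memory holds more fields when each field contributes fewer digitized lines and samples. A hedged sketch with hypothetical numbers (the actual DIII-D recorder capacity and line counts are not given in the abstract):

```python
def fields_recordable(mem_samples, lines_per_field, samples_per_line):
    """Memory budgeting for the ROI feature: digitizing only selected
    lines (and samples per line) of each field stretches a fixed
    transient-recorder memory over more fields."""
    return mem_samples // (lines_per_field * samples_per_line)

# Hypothetical 1 M-sample recorder: full 240-line fields vs a 16-line ROI.
full = fields_recordable(1_000_000, 240, 512)
roi = fields_recordable(1_000_000, 16, 512)
print(full, roi)   # the ROI extends the record from 8 to 122 fields
```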

  6. Capstan to be used with a camera for rapid cycling bubble chambers

    CERN Document Server

    CERN PhotoLab

    1978-01-01

To achieve the high-speed film transport required for the high camera rates (15 and 25 Hz for LEBC and RCBC, respectively), a new drive mechanism was developed, which moved the frames (up to about 110 mm x 90 mm) by rotating a capstan stepwise through 60 deg to bring the next face into position for photography (see also photo 7801001). Details are given, for instance, in J.L. Benichou et al., Nucl. Instrum. Methods 190 (1981) 487

  7. Evaluation of Large-Scale Wing Vortex Wakes from Multi-Camera PIV Measurements in Free-Flight Laboratory

    Science.gov (United States)

    Carmer, Carl F. v.; Heider, André; Schröder, Andreas; Konrath, Robert; Agocs, Janos; Gilliot, Anne; Monnier, Jean-Claude

Multiple-vortex systems of aircraft wakes have been investigated experimentally in a unique large-scale laboratory facility, the free-flight B20 catapult bench at ONERA Lille. 2D/2C PIV measurements have been performed in a translating reference frame, which provided time-resolved cross-velocity observations of the vortex systems in a Lagrangian frame normal to the wake axis. A PIV setup using a moving multiple-camera array and a variable double-frame time delay has been employed successfully. The large-scale quasi-2D structures of the wake-vortex system have been identified using the Q criterion based on the 2D velocity gradient tensor ∇_H u, thus illustrating the temporal development of unequal-strength corotating vortex pairs in aircraft wakes for nondimensional times tU0/b ≲ 45.
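The vortex-identification step can be illustrated with the standard Q criterion, Q = ½(‖Ω‖² − ‖S‖²), computed from the in-plane velocity gradient tensor; positive Q marks rotation-dominated regions such as vortex cores. A minimal NumPy sketch on a synthetic vortex (illustrative only; the exact 2D variant used in the study may differ in normalization):

```python
import numpy as np

def q_criterion_2d(u, v, dx=1.0, dy=1.0):
    """Q criterion from the in-plane velocity gradient tensor:
    Q = 0.5*(||Omega||^2 - ||S||^2), with S and Omega the symmetric
    and antisymmetric parts of grad(u). Q > 0 marks vortex cores."""
    dudy, dudx = np.gradient(u, dy, dx)   # axis 0 is y, axis 1 is x
    dvdy, dvdx = np.gradient(v, dy, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)             # symmetric (strain) part
    w12 = 0.5 * (dudy - dvdx)             # antisymmetric (rotation) part
    return 0.5 * (2*w12**2 - (s11**2 + s22**2 + 2*s12**2))

# Synthetic Gaussian-core vortex: Q should be positive at the centre.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
r2 = x**2 + y**2
u = -y * np.exp(-r2)
v = x * np.exp(-r2)
Q = q_criterion_2d(u, v, dx=2/63, dy=2/63)
print(Q[32, 32] > 0)
```

In a PIV context, `u` and `v` would be the measured in-plane velocity components on the interrogation grid.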

  8. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

Full Text Available This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR) images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files on CompactFlash cards. A second-order transformation was used to align the color and NIR images to achieve subpixel alignment in four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft) and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
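The second-order transformation used to co-register the color and NIR bands amounts to fitting two six-coefficient polynomials to matched control points by least squares. A hedged sketch of that fit (synthetic, normalized point data; not the authors' actual code):

```python
import numpy as np

def fit_second_order(src, dst):
    """Least-squares fit of a second-order polynomial transform mapping
    src (N,2) control points onto dst (N,2); needs at least 6 points.
    Used here to model the NIR-to-color band registration."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_x, coef_y

def apply_second_order(coef_x, coef_y, pts):
    """Apply the fitted transform to an (N,2) array of points."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x*y, x**2, y**2])
    return np.column_stack([A @ coef_x, A @ coef_y])

# Synthetic check: recover a known transform from 20 control points.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 1.0, (20, 2))       # normalized coordinates
cx = np.array([0.01, 1.002, 0.003, 0.004, -0.002, 0.005])
cy = np.array([-0.02, 0.001, 0.998, 0.0, 0.001, 0.0])
dst = apply_second_order(cx, cy, src)
fx, fy = fit_second_order(src, dst)
resid = np.abs(apply_second_order(fx, fy, src) - dst).max()
print(resid < 1e-9)
```

In practice the control points would come from matched features between the two images, and the fitted transform would be used to resample the NIR band onto the color band's pixel grid.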

  9. Distributed Framework for Dynamic Telescope and Instrument Control

    Science.gov (United States)

    Ames, Troy J.; Case, Lynne

    2002-01-01

Traditionally, instrument command and control systems have been developed specifically for a single instrument. Such solutions are frequently expensive and are inflexible to support the next instrument development effort. NASA Goddard Space Flight Center is developing an extensible framework, known as Instrument Remote Control (IRC), that applies to any kind of instrument that can be controlled by a computer. IRC combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms. The IRC framework provides the ability to communicate with components anywhere on a network using the JXTA protocol for dynamic discovery of distributed components. JXTA (see http://www.jxta.org) is a generalized protocol that allows any devices connected by a network to communicate in a peer-to-peer manner. IRC uses JXTA to advertise a device's IML and discover devices of interest on the network. Devices can join or leave the network and thus join or leave the instrument control environment of IRC. Currently, several astronomical instruments are working with the IRC development team to develop custom components for IRC to control their instruments. These instruments include: the High resolution Airborne Wideband Camera (HAWC), a first-light instrument for the Stratospheric Observatory for Infrared Astronomy (SOFIA); the Submillimeter And Far Infrared Experiment (SAFIRE), a Principal Investigator instrument for SOFIA; and the Fabry-Perot Interferometer Bolometer Research Experiment (FIBRE), a prototype of the SAFIRE instrument, used at the Caltech Submillimeter Observatory (CSO). Most recently, we have

  10. SHOK—The First Russian Wide-Field Optical Camera in Space

    Science.gov (United States)

    Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.

    2018-02-01

Onboard the Lomonosov spacecraft are installed two fast, fixed, very wide-field SHOK cameras. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each of the cameras falls within the gamma-ray burst detection area of other devices located onboard the Lomonosov spacecraft. SHOK provides measurements of optical emission with a magnitude limit of ˜9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths in a very wide field of view (1000 square degrees per camera), and for the detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emission from the gamma-ray burst error boxes detected by the BDRG device, implemented by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft has two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a high-speed 11-megapixel CCD. Each SHOK device is a monoblock, consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.

  11. On transforms between Gabor frames and wavelet frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Goh, Say Song

    2013-01-01

We describe a procedure that enables us to construct dual pairs of wavelet frames from certain dual pairs of Gabor frames. Applying the construction to Gabor frames generated by appropriate exponential B-splines gives wavelet frames generated by functions whose Fourier transforms are compactly supported splines with geometrically distributed knot sequences. There is also a reverse transform, which yields pairs of dual Gabor frames when applied to certain wavelet frames.
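For reference, the standard definitions behind these constructions (textbook material, not taken from this abstract): a sequence {f_k} in L²(ℝ) is a frame if it satisfies the frame inequality, and the Gabor and wavelet systems are generated by modulation/translation and dilation/translation, respectively:

```latex
% Frame inequality: {f_k} is a frame for L^2(R) with bounds 0 < A <= B.
A\,\|f\|^2 \;\le\; \sum_{k}\bigl|\langle f, f_k\rangle\bigr|^2 \;\le\; B\,\|f\|^2
\qquad \text{for all } f \in L^2(\mathbb{R}).

% Gabor system generated by g in L^2(R) with parameters a, b > 0:
\bigl\{\, e^{2\pi i m b x}\, g(x - n a) \,\bigr\}_{m,n \in \mathbb{Z}}

% Wavelet system generated by \psi with dilation a > 1, translation b > 0:
\bigl\{\, a^{j/2}\, \psi(a^{j} x - k b) \,\bigr\}_{j,k \in \mathbb{Z}}
```

A dual pair of frames is two frames {f_k}, {g_k} with f = Σ_k ⟨f, g_k⟩ f_k for all f; the procedure in the abstract transports this duality from the Gabor to the wavelet setting.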

  12. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    Science.gov (United States)

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both of the proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented with, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
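The three pooling methods compared each reduce a frames × dimensions activation matrix to a single clip-level feature vector. A minimal sketch (illustrative shapes and random data, not the authors' implementation):

```python
import numpy as np

def pool(activations, method):
    """Aggregate per-frame sparse-feature activations, shape
    (n_frames, n_dims), into one clip-level vector using the pooling
    methods compared in the abstract: max, average, or standard
    deviation over the time (frame) axis."""
    if method == "max":
        return activations.max(axis=0)
    if method == "avg":
        return activations.mean(axis=0)
    if method == "std":
        return activations.std(axis=0)
    raise ValueError(f"unknown pooling method: {method}")

rng = np.random.default_rng(0)
acts = rng.random((100, 16))          # 100 frames, 16-dim dictionary
for m in ("max", "avg", "std"):
    print(m, pool(acts, m).shape)
```

Standard-deviation pooling captures the temporal variability of each dictionary atom's activation, which the paper found more discriminative for instruments than the max or mean alone.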

  13. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated at the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding
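Radial symmetric and tangential distortion of the kind discussed above is conventionally modeled with a Brown-style polynomial. The sketch below shows how a k1 on the order of 1e-6 per mm² produces a roughly 5 μm shift near the image corner; all coefficients and coordinates are hypothetical, and BLUH's additional-parameter set differs in detail from this generic model.

```python
import numpy as np

def radial_tangential_correction(x, y, k1, k2, p1, p2):
    """Generic Brown-style distortion of image coordinates (in mm,
    relative to the principal point): radial terms k1, k2 and
    tangential terms p1, p2. Returns the distorted coordinates."""
    r2 = x**2 + y**2
    radial = 1.0 + k1*r2 + k2*r2**2
    xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x**2)
    yd = y*radial + p1*(r2 + 2*y**2) + 2*p2*x*y
    return xd, yd

# Hypothetical corner point at r ~ 18 mm with a purely radial k1 term.
x, y = np.array([12.0]), np.array([13.4])      # mm from principal point
k1 = 8.6e-7                                    # per mm^2 (assumed)
xd, yd = radial_tangential_correction(x, y, k1, 0.0, 0.0, 0.0)
shift_um = 1000.0 * np.hypot(xd - x, yd - y)[0]
print(round(shift_um, 1))                      # ~5 micrometres
```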

  14. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple-camera network with non-overlapping fields of view (FOV). Visible trajectories within a camera's FOV are assumed to be measured with respect to the camera's local co-ordinate system.

  15. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes

    Science.gov (United States)

    Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio

    2017-12-01

A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum recording is carried out on the time-sequential principle. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands without orientation are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations, and afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI), on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5-pixel size. The empirical assessment proved the performance and showed that, with the novel method, most parts of

  16. Lock-in thermography using a cellphone attachment infrared camera

    Science.gov (United States)

    Razani, Marjan; Parkhimchyk, Artur; Tabatabaei, Nima

    2018-03-01

Lock-in thermography (LIT) is a thermal-wave-based, non-destructive testing technique which has been widely utilized in research settings for the characterization and evaluation of biological and industrial materials. However, despite promising research outcomes, the widespread adoption of LIT in industry, and its commercialization, is hindered by the high cost of the infrared cameras used in LIT setups. In this paper, we report on the feasibility of using inexpensive cellphone-attachment infrared cameras for performing LIT. While the cost of such cameras is over two orders of magnitude less than their research-grade counterparts, our experimental results on a block sample with subsurface defects and a tooth with early dental caries suggest that acceptable performance can be achieved through careful instrumentation and implementation of proper data acquisition and image processing steps. We anticipate this study will pave the way for the development of low-cost thermography systems and their commercialization as inexpensive tools for non-destructive testing of industrial samples as well as affordable clinical devices for diagnostic imaging of biological tissues.
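The lock-in principle behind LIT is per-pixel demodulation of the thermal image stack at the modulation frequency: correlating each pixel's time series with sine and cosine references yields amplitude and phase images, with phase largely insensitive to surface emissivity. A minimal NumPy sketch on synthetic data (not the authors' processing chain; frame rate, modulation frequency, and amplitudes are illustrative):

```python
import numpy as np

def lockin_demodulate(frames, f_mod, fps):
    """Per-pixel lock-in demodulation of a thermal image stack, shape
    (n_frames, h, w): project each pixel's time series onto sin/cos
    references at the modulation frequency f_mod (Hz)."""
    n = frames.shape[0]
    t = np.arange(n) / fps
    ref_c = np.cos(2*np.pi*f_mod*t)
    ref_s = np.sin(2*np.pi*f_mod*t)
    # Project the stack onto the references (sum over the time axis).
    i_img = np.tensordot(ref_c, frames, axes=(0, 0)) * 2.0 / n
    q_img = np.tensordot(ref_s, frames, axes=(0, 0)) * 2.0 / n
    return np.hypot(i_img, q_img), np.arctan2(q_img, i_img)

# Synthetic stack: 1 Hz modulation at 30 fps over exactly 10 periods,
# amplitude 2, phase lag 0.5 rad, on top of a DC offset.
fps, f_mod, n = 30.0, 1.0, 300
t = np.arange(n) / fps
px = 2.0 * np.cos(2*np.pi*f_mod*t - 0.5)
frames = px[:, None, None] * np.ones((1, 8, 8)) + 10.0
amp, ph = lockin_demodulate(frames, f_mod, fps)
print(round(amp[0, 0], 2), round(ph[0, 0], 2))
```

Averaging over an integer number of modulation periods, as here, rejects the DC offset exactly; real LIT systems also synchronize the heat source to the camera clock.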

  17. Lock-in thermography using a cellphone attachment infrared camera

    Directory of Open Access Journals (Sweden)

    Marjan Razani

    2018-03-01

Full Text Available Lock-in thermography (LIT is a thermal-wave-based, non-destructive testing technique which has been widely utilized in research settings for the characterization and evaluation of biological and industrial materials. However, despite promising research outcomes, the widespread adoption of LIT in industry, and its commercialization, is hindered by the high cost of the infrared cameras used in LIT setups. In this paper, we report on the feasibility of using inexpensive cellphone-attachment infrared cameras for performing LIT. While the cost of such cameras is over two orders of magnitude less than their research-grade counterparts, our experimental results on a block sample with subsurface defects and a tooth with early dental caries suggest that acceptable performance can be achieved through careful instrumentation and implementation of proper data acquisition and image processing steps. We anticipate this study will pave the way for the development of low-cost thermography systems and their commercialization as inexpensive tools for non-destructive testing of industrial samples as well as affordable clinical devices for diagnostic imaging of biological tissues.

  18. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    Science.gov (United States)

    Haubeck, K.; Prinz, T.

    2013-08-01

The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - when choppy or windy weather conditions make an accurate nadir-waypoint flight impossible - two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM directly depends on the UAV flight altitude.
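The closing remark about the base-height ratio follows from standard stereo error propagation: σ_Z ≈ (Z²/(B·f))·σ_p, so a small photobase B makes height precision degrade quadratically with flight altitude Z. A hedged sketch with hypothetical numbers (the actual rig's base, focal length, and matching noise are not stated in the abstract):

```python
def height_std(flight_alt_m, base_m, focal_px, parallax_std_px):
    """Standard error of a stereo-derived height:
    sigma_Z = (Z^2 / (B * f)) * sigma_parallax, with Z the distance to
    the ground (m), B the stereo base (m), f the focal length (px),
    and sigma_parallax the image-matching noise (px)."""
    return flight_alt_m**2 / (base_m * focal_px) * parallax_std_px

# Hypothetical rig: 0.2 m base, 3000 px focal length, 0.5 px noise.
for z in (10.0, 30.0):
    print(z, round(height_std(z, 0.2, 3000.0, 0.5), 3))
```

Tripling the flight altitude in this example degrades the height precision ninefold, which is why the authors tie DTM accuracy directly to flying height.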

  19. An electronic pan/tilt/magnify and rotate camera system

    International Nuclear Information System (INIS)

    Zimmermann, S.; Martin, H.L.

    1992-01-01

A new camera system has been developed for omnidirectional image-viewing applications that provides pan, tilt, magnify, and rotational orientation within a hemispherical field of view (FOV) without any moving parts. The imaging device is based on the fact that the image from a fish-eye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fish-eye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment. As a result, this device can accomplish the functions of pan, tilt, rotation, and magnification throughout a hemispherical FOV without the need for any mechanical devices. Multiple images, each with different image magnifications and pan-tilt-rotate parameters, can be obtained from a single camera.
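The transformation at the heart of the device maps each requested viewing direction back to a source pixel in the circular fish-eye image. A minimal sketch assuming an ideal equidistant fish-eye projection, r = f·θ (the abstract does not specify the actual lens model or circuitry, so this is illustrative only):

```python
import numpy as np

def fisheye_lookup(pan, tilt, f):
    """Map a viewing direction (pan, tilt in radians) to coordinates in
    an equidistant fish-eye image (r = f*theta from the image centre).
    This is the core of an electronic pan/tilt: for each output pixel's
    direction, read the source pixel found here (no moving parts)."""
    # Unit ray for the requested direction (z = lens optical axis).
    x = np.cos(tilt) * np.sin(pan)
    y = np.sin(tilt)
    z = np.cos(tilt) * np.cos(pan)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle off the optical axis
    phi = np.arctan2(y, x)                    # azimuth in the image plane
    r = f * theta
    return r * np.cos(phi), r * np.sin(phi)

u, v = fisheye_lookup(pan=0.0, tilt=0.0, f=500.0)
print(u, v)   # the on-axis ray maps to the image centre
```

In a full dewarper this lookup runs for every pixel of the output frame, with the pan/tilt/rotate/zoom parameters folded into the ray direction, followed by interpolation in the source image.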

  20. Soft x-ray camera for internal shape and current density measurements on a noncircular tokamak

    International Nuclear Information System (INIS)

    Fonck, R.J.; Jaehnig, K.P.; Powell, E.T.; Reusch, M.; Roney, P.; Simon, M.P.

    1988-05-01

Soft x-ray measurements of the internal plasma flux surface shape in principle allow a determination of the plasma current density distribution, and provide a necessary monitor of the degree of internal elongation of tokamak plasmas with a noncircular cross section. A two-dimensional, tangentially viewing, soft x-ray pinhole camera has been fabricated to provide internal shape measurements on the PBX-M tokamak. It consists of a scintillator at the focal plane of a foil-filtered pinhole camera, which is, in turn, fiber-optically coupled to an intensified framing video camera (Δt ≥ 3 ms). Automated data acquisition is performed on a stand-alone image-processing system, and data archiving and retrieval take place on an optical disk video recorder. The entire diagnostic is controlled via a PDP-11/73 microcomputer. The derivation of the poloidal emission distribution from the measured image is done by fitting to model profiles. 10 refs., 4 figs

  1. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Bell, P; Griffith, R; Hagans, K; Lerche, R; Allen, C; Davies, T; Janson, F; Justin, R; Marshall, B; Sweningsen, O

    2004-01-01

The National Ignition Facility (NIF) is under construction at the Lawrence Livermore National Laboratory (LLNL) for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras, a temporal calibration is required. This article describes a technique for generating trains of precisely timed, short-duration optical pulses (optical comb generators) that are suitable for temporal calibrations. These optical comb generators (Figure 1) are used with the LLNL optical streak cameras. They are small, portable light sources that produce a series of temporally short, uniformly spaced, optical pulses. Comb generators have been produced with 0.1, 0.5, 1, 3, 6, and 10-GHz pulse trains of 780-nm wavelength light with individual pulse durations of ∼25-ps FWHM. Signal output is via a fiber-optic connector. The signal is transported from comb generator to streak camera through multi-mode, graded-index optical fibers. At the NIF, ultra-fast streak cameras are used by Laser Fusion Program experimentalists to record fast transient optical signals. Their temporal resolution is unmatched by any other transient recorder. Their ability to spatially discriminate an image along the input slit allows them to function as a one-dimensional image recorder, time-resolved spectrometer, or multichannel transient recorder. Depending on the choice of photocathode, they can be made sensitive to photon energies from 1.1 eV to 30 keV and beyond. Comb generators perform two important functions for LLNL streak-camera users. First, comb generators are used as precision time-mark generators for calibrating streak camera sweep rates. Accuracy is achieved by averaging many streak camera images of comb generator signals. Time-base calibrations with portable comb generators are easily done both in the calibration laboratory and in situ. Second, comb signals are applied
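With comb signals as time marks, the sweep-rate calibration reduces to dividing the known comb period by the measured pixel spacing of the marks on the streak record. A hedged sketch with hypothetical peak positions (the NIF procedure averages many images and is more elaborate):

```python
import numpy as np

def sweep_rate_ps_per_px(peak_px, comb_ghz):
    """Streak-camera sweep-rate calibration from a comb-generator
    record: the comb period (1/f) divided by the mean pixel spacing of
    the time-mark peaks (least-squares slope of position vs index)."""
    period_ps = 1e3 / comb_ghz                    # 10 GHz -> 100 ps marks
    idx = np.arange(len(peak_px))
    spacing_px = np.polyfit(idx, peak_px, 1)[0]   # mean spacing in px
    return period_ps / spacing_px

# Hypothetical record: 10-GHz comb peaks every ~40 px with small jitter.
rng = np.random.default_rng(2)
peaks = 55.0 + 40.0*np.arange(12) + rng.normal(0, 0.3, 12)
print(round(sweep_rate_ps_per_px(peaks, comb_ghz=10.0), 2))  # ~2.5 ps/px
```

A nonlinear sweep would show up as curvature in the position-vs-index fit, which is why real calibrations fit the mark positions with a higher-order polynomial.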

  2. The Atacama Cosmology Telescope: The Receiver and Instrumentation

    Science.gov (United States)

    Swetz, D. S.; Ade, P. A. R.; Amiri, M.; Appel, J. W.; Burger, B.; Devlin, M. J.; Dicker, S. R.; Doriese, W. B.; Essinger-Hileman, T.; Fisher, R. P.; hide

    2010-01-01

The Atacama Cosmology Telescope was designed to measure small-scale anisotropies in the Cosmic Microwave Background and detect galaxy clusters through the Sunyaev-Zel'dovich effect. The instrument is located on Cerro Toco in the Atacama Desert, at an altitude of 5190 meters. A six-meter off-axis Gregorian telescope feeds a new type of cryogenic receiver, the Millimeter Bolometer Array Camera. The receiver features three 1000-element arrays of transition-edge sensor bolometers for observations at 148 GHz, 218 GHz, and 277 GHz. Each detector array is fed by free-space mm-wave optics. Each frequency band has a field of view of approximately 22' x 26'. The telescope was commissioned in 2007 and has completed its third year of operations. We discuss the major components of the telescope, camera, and related systems, and summarize the instrument performance.

  3. High Precision Sunphotometer using Wide Dynamic Range (WDR) Camera Tracking

    Science.gov (United States)

    Liss, J.; Dunagan, S. E.; Johnson, R. R.; Chang, C. S.; LeBlanc, S. E.; Shinozuka, Y.; Redemann, J.; Flynn, C. J.; Segal-Rosenhaimer, M.; Pistone, K.; Kacenelenbogen, M. S.; Fahey, L.

    2016-12-01

    The NASA Ames Sun-photometer-Satellite Group, the DOE PNNL Atmospheric Sciences and Global Change Division, and NASA Goddard's AERONET (AErosol RObotic NETwork) team recently collaborated on the development of a new airborne sunphotometry instrument that provides information on gases and aerosols extending far beyond what can be derived from discrete-channel direct-beam measurements, while preserving or enhancing many of the desirable AATS features (e.g., compactness, versatility, automation, reliability). The enhanced instrument combines the sun-tracking ability of the current 14-channel NASA Ames AATS-14 with the sky-scanning ability of the ground-based AERONET Sun/sky photometers, while extending both AATS-14 and AERONET capabilities by providing full spectral information from the UV (350 nm) to the SWIR (1,700 nm). Strengths of this measurement approach include many more wavelengths (isolated from gas absorption features) that may be used to characterize aerosols, and detailed (oversampled) measurements of the absorption features of specific gas constituents. The Sky Scanning Sun Tracking Airborne Radiometer (3STAR) replicates the radiometer functionality of the AATS-14 instrument but incorporates modern COTS technologies for all instrument subsystems. A 19-channel radiometer bundle design is borrowed from a commercial water-column radiance instrument manufactured by Biospherical Instruments of San Diego, California (Morrow and Hooker) and developed using NASA funds under the Small Business Innovative Research (SBIR) program. The 3STAR design also incorporates the latest in robotic motor technology, embodied in rotary actuators from Oriental Motor Corp. having better than 15 arc seconds of positioning accuracy. The control system was designed, tested, and simulated using a hybrid-dynamical modeling methodology. The design also replaces the classic quadrant detector tracking sensor with a

  4. Attribute Framing and Goal Framing Effects in Health Decisions.

    Science.gov (United States)

    Krishnamurthy, Parthasarathy; Carter, Patrick; Blair, Edward

    2001-07-01

    Levin, Schneider, and Gaeth (LSG, 1998) have distinguished among three types of framing (risky choice, attribute, and goal framing) to reconcile conflicting findings in the literature. In the research reported here, we focus on attribute and goal framing. LSG propose that positive frames should be more effective than negative frames in the context of attribute framing, and negative frames should be more effective than positive frames in the context of goal framing. We test this framework by manipulating frame valence (positive vs negative) and frame type (attribute vs goal) in a unified context with common procedures. We also argue that the nature of effects in a goal-framing context may depend on the extent to which the research topic has "intrinsic self-relevance" to the population. In the context of medical decision making, we operationalize low intrinsic self-relevance by using student subjects and high intrinsic self-relevance by using patients. As expected, we find complete support for the LSG framework under low intrinsic self-relevance and modified support for the LSG framework under high intrinsic self-relevance. Overall, our research appears to confirm and extend the LSG framework. Copyright 2001 Academic Press.

  5. Effects of Camera Arrangement on Perceptual-Motor Performance in Minimally Invasive Surgery

    Science.gov (United States)

    Delucia, Patricia R.; Griswold, John A.

    2011-01-01

    Minimally invasive surgery (MIS) is performed for a growing number of treatments. Whereas open surgery requires large incisions, MIS relies on small incisions through which instruments are inserted and tissues are visualized with a camera. MIS results in benefits for patients compared with open surgery, but degrades the surgeon's perceptual-motor…

  6. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection are important tasks for robot navigation. Many feature-matching techniques have been proposed previously; this paper proposes an improved feature matching between successive video frames that uses a neural network methodology to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned a distance based on the Kinect depth data, which can be used by the robot to determine the navigation path, along with obstacle detection applications.
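
    The depth-assignment step described above can be sketched as a lookup of the Kinect depth map at each feature location, discarding the sensor's invalid (zero) readings. A minimal illustration with made-up values; the function and the toy depth map are assumptions, not the paper's implementation:

```python
def feature_depths(features, depth_map, invalid=0):
    """Pair each (x, y) feature with the depth reading at that pixel,
    dropping the sensor's invalid (zero) readings."""
    out = []
    for x, y in features:
        d = depth_map[y][x]  # row-major depth frame, values in mm
        if d != invalid:
            out.append((x, y, d))
    return out

# Toy 3x4 depth map (mm); the zero at row 0, column 1 is a sensor hole.
depth = [[800, 0, 810, 820],
         [805, 807, 812, 818],
         [900, 905, 910, 915]]
matched = feature_depths([(1, 0), (2, 1), (0, 2)], depth)
```

    The surviving (x, y, depth) triples are what a navigation module would consume for path planning and obstacle detection.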

  7. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done via the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics that include both quality and speed parameters.
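
    A combined benchmarking metric of the kind proposed here can be sketched as a weighted mean of normalized quality and speed sub-scores. The metric names and the 50/50 weighting below are purely illustrative assumptions, not the weighting defined in the paper:

```python
def combined_score(quality, speed, w_quality=0.5):
    """Weighted mean of normalized quality and speed sub-scores (1 = best)."""
    q = sum(quality.values()) / len(quality)  # average quality sub-score
    s = sum(speed.values()) / len(speed)      # average speed sub-score
    return w_quality * q + (1.0 - w_quality) * s

score = combined_score(
    {"sharpness": 0.8, "noise": 0.6, "color": 0.7},  # quality metrics
    {"shot_to_shot": 0.9, "autofocus": 0.5},         # speed metrics
)
```

    Shifting `w_quality` lets a benchmark emphasize image quality or responsiveness, which is the trade-off the paper's combined metrics are meant to expose.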

  8. Initial clinical experience with dedicated ultra fast solid state cardiac gamma camera

    International Nuclear Information System (INIS)

    Aland, Nusrat; Lele, V.

    2010-01-01

    Full text: To analyze the imaging and diagnostic performance of a new dedicated ultra fast solid state detector gamma camera and compare it with a standard dual detector gamma camera in myocardial perfusion imaging. Material and Methods: In total, 900 patients underwent myocardial perfusion imaging between 1st February 2010 and 29th August 2010, using either a stress/rest or rest/stress protocol. There was no age or gender bias (there were 630 males and 270 females). 5 and 15 mCi of 99mTc-Tetrofosmin/MIBI were injected for the 1st and 2nd part of the study, respectively. The waiting period after injection was 20 min for regular stress, 40 min for pharmacological stress, and 40 min after rest injection. Acquisition was performed on the solid state detector gamma camera for a duration of 5 min and 3 min for the 1st and 2nd part, respectively. Interpretation of myocardial perfusion was done, and the QGS/QPS protocol was used for EF analysis. Out of these, 20 random patients underwent back-to-back myocardial perfusion SPECT imaging on a standard dual detector gamma camera on the same day. There was no age or gender bias (there were 9 males, 11 females). Acquisition time was 20 min for each part of the study. Interpretation was done using Autocard, and EF analyses used 4D-MSPECT. Images obtained were then compared with those of the solid state detector gamma camera. Result: Good quality and high count myocardial perfusion images were obtained with a smaller amount of tracer activity on the solid state detector gamma camera. Obese patients also showed good quality images with less tracer activity. Compared to the conventional dual detector gamma camera, images were brighter and showed better contrast with the solid state gamma camera. The right ventricle was better visualized. Analysis of diastolic dysfunction was possible with 16-frame gated studies with the solid state gamma camera. The shorter acquisition time with a comfortable position reduced the possibility of patient motion.
All cardiac views were obtained with no movement of the

  9. The influence of disturbing effects on the performance of a wide field coded mask X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.R.; Turner, M.J.L.; Willingale, R.

    1985-01-01

    The coded aperture telescope, or Dicke camera, is seen as an instrument suitable for many applications in X-ray and gamma ray imaging. In this paper, the effects of a partially obscuring window, mask support, or collimator, a detector with limited spatial resolution, and motion of the camera during image integration are considered using a computer simulation of the performance of such a camera. Cross correlation and the Wiener filter are used to deconvolve the data. It is shown that while these effects cause a degradation in performance, this is in no case catastrophic. Deterioration of the image is shown to be greatest where strong sources are present in the field of view and is quite small (approximately 10%) when diffuse background is the major element. A comparison between the cyclic mask camera and the single mask camera is made under various conditions, and it is shown that the single mask camera has a moderate advantage, particularly when imaging a wide field of view. (orig.)
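
    The cross-correlation decoding mentioned above can be illustrated in one dimension with a cyclic mask built from the (7, 3, 1) difference set, whose flat off-peak autocorrelation gives uniform sidelobes. A simplified sketch, leaving out the window obscuration, detector blur, and camera motion that are the paper's actual subject:

```python
def encode(flux, mask):
    """Detector counts from a cyclic coded mask: each source position
    shifts the mask pattern across the detector."""
    n = len(mask)
    return [sum(flux[s] * mask[(i + s) % n] for s in range(n)) for i in range(n)]

def decode(detector, mask):
    """Cyclic cross-correlation with a balanced (mean-subtracted) array."""
    n = len(mask)
    g = [m - sum(mask) / n for m in mask]
    return [sum(detector[j] * g[(j + k) % n] for j in range(n)) for k in range(n)]

# Open positions from the (7, 3, 1) cyclic difference set {1, 2, 4}:
mask = [0, 1, 1, 0, 1, 0, 0]
flux = [0, 0, 0, 0, 0, 10, 0]          # single point source at position 5
sky = decode(encode(flux, mask), mask)
peak = max(range(7), key=lambda k: sky[k])  # recovered source position
```

    With the balanced decoding array, the point source reappears as a sharp peak over flat sidelobes; the disturbing effects studied in the paper blur and bias exactly this reconstruction.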

  10. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between

  11. Collection of in-Field Impact Loads Acting on a Rugby Wheelchair Frame

    Directory of Open Access Journals (Sweden)

    Francesco Bettella

    2018-02-01

    Full Text Available This work was part of a wider project oriented to the improvement of residual neuromuscular skills in disabled athletes playing wheelchair rugby: the Italian national wheelchair rugby team was involved, and the tests made it possible to analyse the impact loads on a rugby wheelchair frame. The frame of an offensive rugby wheelchair model, made by the OffCarr company, was instrumented with four strain gauge bridges at four different points. Three test types were then conducted in the laboratory: two static calibrations with the application of known loads, the first with a horizontal load and the second with a vertical load, and a dynamic horizontal calibration, impacting against a fixed load cell in order to validate the results of the horizontal static calibration. Finally, a test session took place in the field with the collaboration of two team players. The test consisted of voluntary frontal impacts between the two players, starting 6 meters apart. The opponent of the instrumented wheelchair was a defender. From this test, the value of the horizontal load received by the frame at the instant of impact was quantified. Moreover, the vertical load acting on the wheelchair during the rebound of the player after the hit was also evaluated: this information was useful to the wheelchair frame manufacturer for proper static, impact, and fatigue design.
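
    The static calibration described above reduces to fitting a gain and offset between the known applied loads and the strain gauge bridge output, which can then be inverted during the field tests. A minimal sketch with hypothetical calibration numbers; the actual bridge sensitivities are not given in the abstract:

```python
def calibrate(loads_N, bridge_mV):
    """Least-squares gain (mV/N) and offset (mV) from a static
    calibration with known applied loads."""
    n = len(loads_N)
    mL, mV = sum(loads_N) / n, sum(bridge_mV) / n
    gain = (sum((l - mL) * (v - mV) for l, v in zip(loads_N, bridge_mV))
            / sum((l - mL) ** 2 for l in loads_N))
    return gain, mV - gain * mL

def to_load(signal_mV, gain, offset):
    """Convert a field-test bridge signal back into a load."""
    return (signal_mV - offset) / gain

# Hypothetical static calibration: known loads vs. bridge output.
gain, offset = calibrate([0, 100, 200, 300], [0.5, 2.5, 4.5, 6.5])
impact = to_load(10.5, gain, offset)  # load for a 10.5 mV impact signal
```

    The same fit would be repeated per bridge and per direction (horizontal and vertical), with the dynamic load-cell test checking that the static gain still holds under impact.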

  12. Nonmonotonic belief state frames and reasoning frames

    NARCIS (Netherlands)

    Engelfriet, J.; Herre, H.; Treur, J.

    1995-01-01

    In this paper five levels of specification of nonmonotonic reasoning are distinguished. The notions of semantical frame, belief state frame and reasoning frame are introduced and used as a semantical basis for the first three levels. Moreover, the semantical connections between the levels are

  13. Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras

    Science.gov (United States)

    Gómez Martínez, Harvey; Giorgi, Gabriele; Eissfeller, Bernd

    2017-10-01

    This paper presents a methodology for estimating the position and orientation of a rocket body in orbit - the target - undergoing a roto-translational motion, with respect to a chaser spacecraft, whose task is to match the target dynamics for a safe rendezvous. During the rendezvous maneuver the chaser employs a Time-of-Flight camera that acquires a point cloud of 3D coordinates mapping the sensed target surface. Once the system identifies the target, it initializes the chaser-to-target relative position and orientation. After initialization, a tracking procedure enables the system to sense the evolution of the target's pose between frames. The proposed algorithm is evaluated using simulated point clouds, generated with a CAD model of the Cosmos-3M upper stage and the PMD CamCube 3.0 camera specifications.

  14. Safeguards instrumentation: past, present, future

    International Nuclear Information System (INIS)

    Higinbotham, W.A.

    1982-01-01

    Instruments are essential for accounting, for surveillance and for protection of nuclear materials. The development and application of such instrumentation is reviewed, with special attention to international safeguards applications. Active and passive nondestructive assay techniques are some 25 years of age. The important advances have been in learning how to use them effectively for specific applications, accompanied by major advances in radiation detectors, electronics, and, more recently, in mini-computers. The progress in seals has been disappointingly slow. Surveillance cameras have been widely used for many applications other than safeguards. The revolution in TV technology will have important implications. More sophisticated containment/surveillance equipment is being developed but has yet to be exploited. On the basis of this history, some expectations for instrumentation in the near future are presented

  15. A new approach to the form and position error measurement of the auto frame surface based on laser

    Science.gov (United States)

    Wang, Hua; Li, Wei

    2013-03-01

    The auto frame is a very large workpiece, with a length of up to 12 meters and a width of up to 2 meters, so it is clearly inconvenient, and not automatic, to measure such a large workpiece by independent manual operation. In this paper we propose a new approach to reconstruct the 3D model of the large workpiece, especially the auto truck frame, based on multiple pulsed lasers, for the purpose of measuring the form and position errors. In a concerned area, it needs just one high-speed camera and two lasers. It is a fast, high-precision, and economical approach.

  16. 21 CFR 882.4560 - Stereotaxic instrument.

    Science.gov (United States)

    2010-04-01

    ...) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system. (b) Classification. Class II (performance standards). ...

  17. A reference Pelton turbine - High speed visualization in the rotating frame

    Science.gov (United States)

    Solemslie, Bjørn W.; Dahlhaug, Ole G.

    2016-11-01

    To enable a detailed study of the flow mechanisms affecting the flow within the reference Pelton runner designed at the Waterpower Laboratory (NTNU), a flow visualization system has been developed. The system enables high speed filming of the hydraulic surface of a single bucket in the rotating frame of reference. It is built with an angular borescope adapter entering the turbine along the rotational axis and a borescope embedded within a bucket. A stationary high speed camera located outside the turbine housing has been connected to the optical arrangement by a non-contact coupling. The viewpoint of the system includes the whole hydraulic surface of one half of a bucket. The system has been designed to minimize the amount of vibrations and to ensure that the vibrations felt by the borescope are the same as those affecting the camera. The preliminary results captured with the system are promising and enable a detailed study of the flow within the turbine.

  18. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and it was concluded that consumer grade digital cameras are expected to become a useful photogrammetric device for various close range application fields. On the other hand, mobile phone cameras with 10 megapixels appeared on the market in Japan. In these circumstances, we are faced with the question of whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, a comparative evaluation between mobile phone cameras and consumer grade digital cameras is presented in this paper with respect to lens distortion, reliability, stability, and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer grade digital cameras and to develop the market in digital photogrammetric fields.

  19. Optimizing Low Light Level Imaging Techniques and Sensor Design Parameters using CCD Digital Cameras for Potential NASA Earth Science Research aboard a Small Satellite or ISS

    Data.gov (United States)

    National Aeronautics and Space Administration — For this project, the potential of using state-of-the-art aerial digital framing cameras that have time delayed integration (TDI) to acquire useful low light level...

  20. Face antispoofing based on frame difference and multilevel representation

    Science.gov (United States)

    Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad

    2017-07-01

    Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made by fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., print photo or replay attacks) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of from individual frames. We also used a multilevel representation that divides the frame difference into multiple blocks. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) have then been applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms other state-of-the-art methods in different media and quality metrics.
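
    The front end of the pipeline (frame difference plus multiblock division) can be sketched as follows; the block-mean feature used here is a simple stand-in for the LBP, LPQ, and BSIF descriptors applied per block in the paper:

```python
def frame_difference(prev, curr):
    """Absolute difference between two grayscale frames (lists of rows)."""
    return [[abs(a - b) for a, b in zip(r0, r1)] for r0, r1 in zip(prev, curr)]

def block_means(diff, by, bx):
    """Mean motion energy per block of a by x bx grid."""
    h, w = len(diff), len(diff[0])
    bh, bw = h // by, w // bx
    return [sum(diff[y][x]
                for y in range(r * bh, (r + 1) * bh)
                for x in range(c * bw, (c + 1) * bw)) / (bh * bw)
            for r in range(by) for c in range(bx)]

# Motion only in the top-left quadrant of a toy 4x4 frame pair:
prev = [[0] * 4 for _ in range(4)]
curr = [[8, 8, 0, 0], [8, 8, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
feats = block_means(frame_difference(prev, curr), 2, 2)
```

    The per-block feature vectors would then be ranked by Fisher score and fed to the SVM classifier, as described above.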

  1. Reducing flicker due to ambient illumination in camera captured images

    Science.gov (United States)

    Kim, Minwoong; Bengtson, Kurt; Li, Lisa; Allebach, Jan P.

    2013-02-01

    The flicker artifact dealt with in this paper is the scanning distortion that arises when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line-by-line in a frame; therefore, time differences exist between the lines. This mechanism causes a captured image to be corrupted by the change of illumination. This phenomenon is called the flicker artifact. The non-content area of the captured image is used to estimate a flicker signal, which is key to compensating for the flicker artifact. The average signal of the non-content area, taken along the scan direction, has local extrema where the peaks of flicker exist. The locations of the extrema are very useful information for estimating the desired distribution of pixel intensities under the assumption that the flicker artifact does not exist. The flicker-reduced images produced by our approach clearly demonstrate the reduced flicker artifact, based on visual observation.
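
    The flicker-signal estimate described above can be sketched as a per-row gain computed from the non-content margin: each scan line's blank-area mean is compared with the overall mean, giving a multiplicative correction for that row. A toy illustration with made-up margin values, not the paper's estimator:

```python
def flicker_gains(margin_rows):
    """Per-row multiplicative correction estimated from a non-content margin.

    Each entry of `margin_rows` holds the blank-area pixels of one scan line;
    rows brighter than average get a gain below 1, darker rows above 1."""
    means = [sum(r) / len(r) for r in margin_rows]
    overall = sum(means) / len(means)
    return [overall / m for m in means]

# Hypothetical margin whose brightness oscillates with the AC illumination:
gains = flicker_gains([[100, 100], [80, 80], [120, 120], [100, 100]])
```

    Multiplying each image row by its gain flattens the illumination ripple along the scan direction, which is the effect the paper's compensation aims for.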

  2. Quality control of nuclear medicine instruments, 1991

    International Nuclear Information System (INIS)

    1996-12-01

    This document gives detailed guidance on the quality control of various instruments used in nuclear medicine. A first preliminary document was drawn up in 1979. A revised and extended version, incorporating recommended procedures, test schedules and protocols was prepared in 1982. The first edition of 'Quality Control of Nuclear Medicine Instruments', IAEA-TECDOC-317, was printed in late 1984. Recent advances in the field of nuclear medicine imaging made it necessary to add a chapter on Camera-Computer Systems and another on SPECT Systems

  3. Quality control of nuclear medicine instruments 1991

    International Nuclear Information System (INIS)

    1991-05-01

    This document gives detailed guidance on the quality control of various instruments used in nuclear medicine. A first preliminary document was drawn up in 1979. A revised and extended version, incorporating recommended procedures, test schedules and protocols was prepared in 1982. The first edition of "Quality Control of Nuclear Medicine Instruments", IAEA-TECDOC-317, was printed in late 1984. Recent advances in the field of nuclear medicine imaging made it necessary to add a chapter on Camera-Computer Systems and another on SPECT Systems. Figs and tabs

  4. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  5. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

    Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position sensitive detector yields the coordinates of the gamma ray interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computing, estimation of the point of emission, generation of the image, and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to understanding of the basic camera design problems. This report describes a PC based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for the computation of coordinates and spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA, and SPECT studies. The acquired/simulated data is processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
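
    The coordinate-computing step that such a simulator must model is, in its simplest form, classic Anger logic: the interaction position is estimated as the signal-weighted centroid of the photomultiplier outputs. A minimal sketch with made-up PMT positions and signals; the specific algorithms implemented in SIMCAM are not detailed in the abstract:

```python
def anger_position(signals, pmt_xy):
    """Signal-weighted centroid of PMT outputs (classic Anger logic)."""
    total = sum(signals)
    x = sum(s * p[0] for s, p in zip(signals, pmt_xy)) / total
    y = sum(s * p[1] for s, p in zip(signals, pmt_xy)) / total
    return x, y

# Three PMTs; the event lies closest to the tube seeing the most light.
pos = anger_position([1.0, 1.0, 2.0], [(0, 0), (10, 0), (5, 10)])
```

    Spatial distortion removal and energy correction, also simulated in SIMCAM, are refinements layered on top of this raw centroid estimate.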

  6. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    Science.gov (United States)

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method: We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results: Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion: Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
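
    The accuracy and precision figures reported above correspond to two standard summary statistics over per-frame tracking errors, which can be sketched as follows (the error values are hypothetical, not the study's data):

```python
def rmse(errors_mm):
    """Accuracy: root-mean-square error of tracked vs. true positions."""
    return (sum(e * e for e in errors_mm) / len(errors_mm)) ** 0.5

def precision_sd(errors_mm):
    """Precision: standard deviation of the same errors."""
    m = sum(errors_mm) / len(errors_mm)
    return (sum((e - m) ** 2 for e in errors_mm) / len(errors_mm)) ** 0.5

errs = [0.1, -0.2, 0.2, -0.1]  # hypothetical per-frame errors in mm
acc, prec = rmse(errs), precision_sd(errs)
```

    RMSE captures systematic plus random error, while the SD isolates the random scatter; reporting both, as the study does, separates bias from repeatability.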

  7. Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.

    Science.gov (United States)

    Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael

    2016-11-01

    To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.
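
    The key idea, that each row of a rolling shutter image is captured at a slightly different time, can be expressed as a simple per-row timestamp. A sketch with a hypothetical camera whose rows span the full frame period; real sensors also have blanking intervals, which this ignores:

```python
def row_timestamp(frame_start_s, row, row_readout_s):
    """Capture time of one image row under a rolling shutter: rows are
    exposed in rapid succession, so each row is effectively a 1-D sample
    in time rather than part of a single global snapshot."""
    return frame_start_s + row * row_readout_s

# Hypothetical 120 Hz camera whose 480 rows span the full frame period:
row_dt = (1.0 / 120.0) / 480.0
t_mid = row_timestamp(0.0, 240, row_dt)  # time of the middle row
```

    Treating rows as individual time samples is what lets a cluster of such cameras deliver pose updates far above the nominal frame rate.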

  8. A new apparatus for track-analysis in nuclear track emulsion based on a CCD-camera device

    International Nuclear Information System (INIS)

    Ganssauge, E.

    1993-01-01

    A CCD camera-based, image-analyzing system for automatic evaluation of nuclear track emulsion chambers is presented. The stage of a normal microscope moves using three remote-controlled stepping motors with a step size of 0.25 μm. A CCD camera is mounted on top of the microscope in order to register the nuclear emulsion. The camera has a resolution capable of differentiating single emulsion grains (0.6 μm). The camera picture is transformed from analogue to digital signals and stored by a frame grabber. Some background picture elements can be eliminated by applying cuts on grey levels. The central computer processes the picture and correlates the single picture points, the coordinates, and the grey levels, such that in the end one has a unique assignment of each picture point to an address on the hard disk for a given plate. After repetition of this procedure for several plates, by means of appropriate software (for instance our vertex program [1]), the coordinates of the points are combined into tracks, and a variety of distributions, such as pseudorapidity distributions, can be calculated and presented on the terminal. (author)

  9. Development of an integrated response generator for Si/CdTe semiconductor Compton cameras

    International Nuclear Information System (INIS)

    Odaka, Hirokazu; Sugimoto, Soichiro; Ishikawa, Shin-nosuke; Katsuta, Junichiro; Koseki, Yuu; Fukuyama, Taro; Saito, Shinya; Sato, Rie; Sato, Goro; Watanabe, Shin

    2010-01-01

    We have developed an integrated response generator based on Monte Carlo simulation for Compton cameras composed of silicon (Si) and cadmium telluride (CdTe) semiconductor detectors. In order to construct an accurate detector response function, the simulation is required to include a comprehensive treatment of the semiconductor detector devices and the data processing system in addition to simulating particle tracking. Although CdTe is an excellent semiconductor material for detection of soft gamma rays, its ineffective charge transport property distorts its spectral response. We investigated the response of CdTe pad detectors in the simulation and present our initial results here. We also performed the full simulation of prototypes of Si/CdTe semiconductor Compton cameras and report on the reproducibility of detection efficiencies and angular resolutions of the cameras, both of which are essential performance parameters of astrophysical instruments.
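    For context, the kinematic core of event reconstruction in such Si/CdTe Compton cameras is the scattering-angle relation between the two energy deposits; a minimal sketch (the function name and the idealized two-site, full-absorption event are our assumptions):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def compton_angle_deg(e_scatter, e_absorb):
    """Compton scattering angle reconstructed from a two-site event:
    e_scatter deposited in the Si scatterer, e_absorb fully absorbed
    in the CdTe layer (both in keV).
    Uses cos(theta) = 1 - m_e*c^2 * (1/E2 - 1/(E1 + E2))."""
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb - 1.0 / (e_scatter + e_absorb))
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))
```

    Response effects such as the incomplete charge collection in CdTe noted above smear exactly these energy deposits, which is why the Monte Carlo response function must model the detectors and not only particle tracking.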

  10. Development, characterization, and modeling of a tunable filter camera

    Science.gov (United States)

    Sartor, Mark Alan

    1999-10-01

    This paper describes the development, characterization, and modeling of a Tunable Filter Camera (TFC). The TFC is a new multispectral instrument with electronically tuned spectral filtering and low-light-level sensitivity. It represents a hybrid between hyperspectral and multispectral imaging spectrometers that incorporates advantages from each, addressing issues such as complexity, cost, lack of sensitivity, and adaptability. These capabilities allow the TFC to be applied to low-altitude video surveillance for real-time spectral and spatial target detection and image exploitation. Described herein are the theory and principles of operation for the TFC, which includes a liquid crystal tunable filter, an intensified CCD, and a custom apochromatic lens. The results of proof-of-concept testing and characterization of two prototype cameras are included, along with a summary of the design analyses for the development of a multiple-channel system. A significant result of this effort was the creation of a system-level model, which was used to facilitate development and predict performance. It includes models for the liquid crystal tunable filter and intensified CCD. Such modeling was necessary in the design of the system and is useful for evaluation of the system in remote-sensing applications. Also presented are characterization data from component testing, including quantitative results for linearity, signal-to-noise ratio (SNR), and radiometric response. These data were used to help refine and validate the model. For a pre-defined source, the spatial and spectral response and the noise of the camera system can now be predicted. The innovation that sets this development apart is that the instrument has been designed for integrated, multi-channel operation for the express purpose of real-time detection/identification in low-light-level conditions. Many of the requirements for the TFC were derived from this mission. In order to provide
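    System-level sensitivity predictions of the kind described above typically rest on a per-pixel noise budget; a generic shot-noise-plus-read-noise sketch (not the author's actual model, whose details are not given in the record):

```python
import math

def camera_snr(signal_e, dark_e, read_noise_e):
    """Per-pixel SNR with Poisson (shot) noise on signal and dark
    electrons plus Gaussian read noise; all quantities in electrons."""
    return signal_e / math.sqrt(signal_e + dark_e + read_noise_e ** 2)
```

    In the shot-noise-limited regime the SNR grows as the square root of the collected signal, while at low light levels the read-noise term dominates, which is why intensified detectors matter for low-light-level operation.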

  11. Single-photon sensitive fast ebCMOS camera system for multiple-target tracking of single fluorophores: application to nano-biophotonics

    Science.gov (United States)

    Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi

    2011-03-01

    Nano-biophotonics applications will benefit from new fluorescent microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) applied to large biological structures (membranes) at fast frame rates (1000 Hz). This trend pushes photon detectors toward the single-photon counting regime and camera acquisition systems toward real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper takes a different approach from Electron Multiplied CCD (EMCCD) technology and tries to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to respond to this challenge, thanks to the linear gain of the accelerating high voltage of the photo-cathode, the possible ultra-fast frame rate of CMOS sensors, and single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.

  12. Geocam Space: Enhancing Handheld Digital Camera Imagery from the International Space Station for Research and Applications

    Science.gov (United States)

    Stefanov, William L.; Lee, Yeon Jin; Dille, Michael

    2016-01-01

    information native to the data makes it difficult to integrate astronaut photographs with other georeferenced data to facilitate quantitative analysis such as urban land cover/land use classification, change detection, or geologic mapping. The manual determination of image centerpoints is both time and labor-intensive, leading to delays in releasing geolocated and cataloged data to the public, such as the timely use of data for disaster response. The GeoCam Space project was funded by the ISS Program in 2015 to develop an on-orbit hardware and ground-based software system for increasing the efficiency of geolocating astronaut photographs from the ISS (Fig. 1). The Intelligent Robotics Group at NASA Ames Research Center leads the development of both the ground and on-orbit systems in collaboration with the ESRS Unit. The hardware component consists of modified smartphone elements including cameras, central processing unit, wireless Ethernet, and an inertial measurement unit (gyroscopes/accelerometers/magnetometers) reconfigured into a compact unit that attaches to the base of the current Nikon D4 camera - and its replacement, the Nikon D5 - and connects using the standard Nikon peripheral connector or USB port. This provides secondary, side and downward facing cameras perpendicular to the primary camera pointing direction. The secondary cameras observe calibration targets with known internal X, Y, and Z position affixed to the interior of the ISS to determine the camera pose corresponding to each image frame. This information is recorded by the GeoCam Space unit and indexed for correlation to the camera time recorded for each image frame. Data - image, EXIF header, and camera pose information - is transmitted to the ground software system (GeoRef) using the established Ku-band USOS downlink system. Following integration on the ground, the camera pose information provides an initial geolocation estimate for the individual film frame. 
This new capability represents a significant

  13. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  14. Experimental Studies on Damage Detection in Frame Structures Using Vibration Measurements

    Directory of Open Access Journals (Sweden)

    Giancarlo Fraraccio

    2010-01-01

    Full Text Available This paper presents an experimental study of frequency- and time-domain identification algorithms and discusses their effectiveness in structural health monitoring of frame structures using acceleration input and response data. Three algorithms were considered: (1) a frequency domain decomposition algorithm (FDD), (2) a time domain Observer Kalman IDentification algorithm (OKID), and (3) a subsequent physical parameter identification algorithm (MLK). Through experimental testing of a four-story steel frame model on a uniaxial shake table, the inherent complications of physical instrumentation and testing are explored. Primarily, this study aims to provide a dependable first-order and second-order identification of said test structure in a fully instrumented state. Once the characteristics (i.e., the stiffness matrix) of a benchmark structure have been determined, structural damage can be detected by a change in the identified structural stiffness matrix. This work also analyzes the stability of the identified structural stiffness matrix with respect to fluctuations of input excitation magnitude and frequency content in an experimental setting.
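    The final step, flagging damage from a change in the identified stiffness, can be sketched as a per-storey comparison against the benchmark identification (the storey-wise stiffness representation and the 5% threshold are illustrative assumptions, not from the paper):

```python
def damage_indicators(k_baseline, k_current, threshold=0.05):
    """Per-storey damage flags: the relative drop of each identified
    storey stiffness against the benchmark (healthy) identification.
    Returns (relative_drop, is_damaged) for each storey."""
    flags = []
    for kb, kc in zip(k_baseline, k_current):
        drop = (kb - kc) / kb
        flags.append((drop, drop > threshold))
    return flags
```

    For a four-storey model, a 20% stiffness reduction identified in the second storey would be flagged while undamaged storeys, whose identified stiffness fluctuates within the threshold, are not.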

  15. Invited Article: Deep Impact instrument calibration.

    Science.gov (United States)

    Klaasen, Kenneth P; A'Hearn, Michael F; Baca, Michael; Delamere, Alan; Desnoyer, Mark; Farnham, Tony; Groussin, Olivier; Hampton, Donald; Ipatov, Sergei; Li, Jianyang; Lisse, Carey; Mastrodemos, Nickolaos; McLaughlin, Stephanie; Sunshine, Jessica; Thomas, Peter; Wellnitz, Dennis

    2008-09-01

    Calibration of NASA's Deep Impact spacecraft instruments allows reliable scientific interpretation of the images and spectra returned from comet Tempel 1. Calibrations of the four onboard remote sensing imaging instruments have been performed in the areas of geometric calibration, spatial resolution, spectral resolution, and radiometric response. Error sources such as noise (random, coherent, encoding, data compression), detector readout artifacts, scattered light, and radiation interactions have been quantified. The point spread functions (PSFs) of the medium resolution instrument and its twin impactor targeting sensor are near the theoretical minimum [ approximately 1.7 pixels full width at half maximum (FWHM)]. However, the high resolution instrument camera was found to be out of focus with a PSF FWHM of approximately 9 pixels. The charge coupled device (CCD) read noise is approximately 1 DN. Electrical cross-talk between the CCD detector quadrants is correctable to <2 DN. The IR spectrometer response nonlinearity is correctable to approximately 1%. Spectrometer read noise is approximately 2 DN. The variation in zero-exposure signal level with time and spectrometer temperature is not fully characterized; currently corrections are good to approximately 10 DN at best. Wavelength mapping onto the detector is known within 1 pixel; spectral lines have a FWHM of approximately 2 pixels. About 1% of the IR detector pixels behave badly and remain uncalibrated. The spectrometer exhibits a faint ghost image from reflection off a beamsplitter. Instrument absolute radiometric calibration accuracies were determined generally to <10% using star imaging. Flat-field calibration reduces pixel-to-pixel response differences to approximately 0.5% for the cameras and <2% for the spectrometer. A standard calibration image processing pipeline is used to produce archival image files for analysis by researchers.

  16. Invited Article: Deep Impact instrument calibration

    International Nuclear Information System (INIS)

    Klaasen, Kenneth P.; Mastrodemos, Nickolaos; A'Hearn, Michael F.; Farnham, Tony; Groussin, Olivier; Ipatov, Sergei; Li Jianyang; McLaughlin, Stephanie; Sunshine, Jessica; Wellnitz, Dennis; Baca, Michael; Delamere, Alan; Desnoyer, Mark; Thomas, Peter; Hampton, Donald; Lisse, Carey

    2008-01-01

    Calibration of NASA's Deep Impact spacecraft instruments allows reliable scientific interpretation of the images and spectra returned from comet Tempel 1. Calibrations of the four onboard remote sensing imaging instruments have been performed in the areas of geometric calibration, spatial resolution, spectral resolution, and radiometric response. Error sources such as noise (random, coherent, encoding, data compression), detector readout artifacts, scattered light, and radiation interactions have been quantified. The point spread functions (PSFs) of the medium resolution instrument and its twin impactor targeting sensor are near the theoretical minimum [∼1.7 pixels full width at half maximum (FWHM)]. However, the high resolution instrument camera was found to be out of focus with a PSF FWHM of ∼9 pixels. The charge coupled device (CCD) read noise is ∼1 DN. Electrical cross-talk between the CCD detector quadrants is correctable to <2 DN. The IR spectrometer response nonlinearity is correctable to ∼1%. Spectrometer read noise is ∼2 DN. The variation in zero-exposure signal level with time and spectrometer temperature is not fully characterized; currently corrections are good to ∼10 DN at best. Wavelength mapping onto the detector is known within 1 pixel; spectral lines have a FWHM of ∼2 pixels. About 1% of the IR detector pixels behave badly and remain uncalibrated. The spectrometer exhibits a faint ghost image from reflection off a beamsplitter. Instrument absolute radiometric calibration accuracies were determined generally to <10% using star imaging. Flat-field calibration reduces pixel-to-pixel response differences to ∼0.5% for the cameras and <2% for the spectrometer. A standard calibration image processing pipeline is used to produce archival image files for analysis by researchers.

  17. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and they are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover, and atmospheric visibility that ensure the safety of pilots and planes. Although there are instruments available on the market to measure those parameters, their relatively high cost makes them unavailable to many local aerodromes. In this work we present a new prototype which has recently been developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new development consists of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height, and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
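    In the simplest two-camera configuration, the parallax measurement at the heart of such an instrument reduces to the standard stereo relation; a sketch under the simplifying assumption of parallel, vertically pointing cameras (the paper's actual geometry is more complex):

```python
def cloud_base_height(baseline_m, focal_px, disparity_px):
    """Cloud-base height h = B * f / d for two parallel, vertically
    pointing cameras: B = baseline in metres, f = focal length in
    pixels, d = measured parallax (disparity) in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px
```

    With a 100 m baseline and a 2000 px focal length, a cloud feature shifted by 100 px between the two sky photographs sits at a 2000 m base height; lower clouds produce larger disparities and are therefore measured more precisely.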

  18. Media Framing

    DEFF Research Database (Denmark)

    Pedersen, Rasmus T.

    2017-01-01

    The concept of media framing refers to the way in which the news media organize and provide meaning to a news story by emphasizing some parts of reality and disregarding other parts. These patterns of emphasis and exclusion in news coverage create frames that can have considerable effects on news consumers' perceptions and attitudes regarding the given issue or event. This entry briefly elaborates on the concept of media framing, presents key types of media frames, and introduces the research on media framing effects.

  19. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.

    Science.gov (United States)

    Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments were only possible through closed-source, expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  20. Procedure for fully automatic orientation of camera in digital close-range photogrammetry

    Science.gov (United States)

    Huang, Yong Ru; Trinder, John C.

    1994-03-01

    This paper presents an automatic camera-orientation procedure developed for a digital close-range photogrammetric system. In this application, small bright balls mounted on a calibration frame serve as control points, since their shape in an image is invariant to the camera position: they are always imaged as circles. To recognize the circles in the image, an edge detection algorithm is exploited to extract the circular edges with subpixel accuracy. The circles are recognized by matching the shape of these edges with the shape of an ideal circular target. The central location of the circles and their diameters can then be determined from these edge points. Identifying the circles, that is, arranging the list of circles in the image in the order of the corresponding balls in the 3D world, is a problem of artificial intelligence. A fast search is described that exploits the available information to limit the number of possible alternative orders of the targets, so the search can be performed efficiently. The identification results in the correct numbers being attached to the corresponding circles. Finally, the precise camera parameters are calculated by bundle adjustment.
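    The centre-and-diameter step can be sketched as follows, using the centroid of the subpixel edge points and their mean distance to it (adequate when the full circular edge is evenly sampled; the paper's exact estimator is not specified):

```python
import math

def fit_circle(points):
    """Estimate centre and diameter of a circular target from subpixel
    edge points (x, y): centroid for the centre, mean centre-to-point
    distance for the radius."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / n
    return (cx, cy), 2.0 * r
```

    A least-squares circle fit (e.g. the Kasa method) would be more robust to partial or unevenly sampled edges, at the cost of solving a small linear system.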

  1. Framing effects over time: comparing affective and cognitive news frames

    NARCIS (Netherlands)

    Lecheler, S.; Matthes, J.

    2012-01-01

    A growing number of scholars examine the duration of framing effects. However, duration is likely to differ from frame to frame, depending on how strong a frame is. This strength is likely to be enhanced by adding emotional components to a frame. By means of an experimental survey design (n = 111),

  2. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  3. CARMENES instrument overview

    Science.gov (United States)

    Quirrenbach, A.; Amado, P. J.; Caballero, J. A.; Mundt, R.; Reiners, A.; Ribas, I.; Seifert, W.; Abril, M.; Aceituno, J.; Alonso-Floriano, F. J.; Ammler-von Eiff, M.; Antona Jiménez, R.; Anwand-Heerwart, H.; Azzaro, M.; Bauer, F.; Barrado, D.; Becerril, S.; Béjar, V. J. S.; Benítez, D.; Berdiñas, Z. M.; Cárdenas, M. C.; Casal, E.; Claret, A.; Colomé, J.; Cortés-Contreras, M.; Czesla, S.; Doellinger, M.; Dreizler, S.; Feiz, C.; Fernández, M.; Galadí, D.; Gálvez-Ortiz, M. C.; García-Piquer, A.; García-Vargas, M. L.; Garrido, R.; Gesa, L.; Gómez Galera, V.; González Álvarez, E.; González Hernández, J. I.; Grözinger, U.; Guàrdia, J.; Guenther, E. W.; de Guindos, E.; Gutiérrez-Soto, J.; Hagen, H.-J.; Hatzes, A. P.; Hauschildt, P. H.; Helmling, J.; Henning, T.; Hermann, D.; Hernández Castaño, L.; Herrero, E.; Hidalgo, D.; Holgado, G.; Huber, A.; Huber, K. F.; Jeffers, S.; Joergens, V.; de Juan, E.; Kehr, M.; Klein, R.; Kürster, M.; Lamert, A.; Lalitha, S.; Laun, W.; Lemke, U.; Lenzen, R.; López del Fresno, Mauro; López Martí, B.; López-Santiago, J.; Mall, U.; Mandel, H.; Martín, E. L.; Martín-Ruiz, S.; Martínez-Rodríguez, H.; Marvin, C. J.; Mathar, R. J.; Mirabet, E.; Montes, D.; Morales Muñoz, R.; Moya, A.; Naranjo, V.; Ofir, A.; Oreiro, R.; Pallé, E.; Panduro, J.; Passegger, V.-M.; Pérez-Calpena, A.; Pérez Medialdea, D.; Perger, M.; Pluto, M.; Ramón, A.; Rebolo, R.; Redondo, P.; Reffert, S.; Reinhardt, S.; Rhode, P.; Rix, H.-W.; Rodler, F.; Rodríguez, E.; Rodríguez-López, C.; Rodríguez-Pérez, E.; Rohloff, R.-R.; Rosich, A.; Sánchez-Blanco, E.; Sánchez Carrasco, M. A.; Sanz-Forcada, J.; Sarmiento, L. F.; Schäfer, S.; Schiller, J.; Schmidt, C.; Schmitt, J. H. M. M.; Solano, E.; Stahl, O.; Storz, C.; Stürmer, J.; Suárez, J. C.; Ulbrich, R. G.; Veredas, G.; Wagner, K.; Winkler, J.; Zapatero Osorio, M. R.; Zechmeister, M.; Abellán de Paco, F. J.; Anglada-Escudé, G.; del Burgo, C.; Klutsch, A.; Lizon, J. L.; López-Morales, M.; Morales, J. C.; Perryman, M. A. 
C.; Tulloch, S. M.; Xu, W.

    2014-07-01

    This paper gives an overview of the CARMENES instrument and of the survey that will be carried out with it during the first years of operation. CARMENES (Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Echelle Spectrographs) is a next-generation radial-velocity instrument under construction for the 3.5m telescope at the Calar Alto Observatory by a consortium of eleven Spanish and German institutions. The scientific goal of the project is conducting a 600-night exoplanet survey targeting ~ 300 M dwarfs with the completed instrument. The CARMENES instrument consists of two separate echelle spectrographs covering the wavelength range from 0.55 to 1.7 μm at a spectral resolution of R = 82,000, fed by fibers from the Cassegrain focus of the telescope. The spectrographs are housed in vacuum tanks providing the temperature-stabilized environments necessary to enable a 1 m/s radial velocity precision employing a simultaneous calibration with an emission-line lamp or with a Fabry-Perot etalon. For mid-M to late-M spectral types, the wavelength range around 1.0 μm (Y band) is the most important wavelength region for radial velocity work. Therefore, the efficiency of CARMENES has been optimized in this range. The CARMENES instrument consists of two spectrographs, one equipped with a 4k x 4k pixel CCD for the range 0.55 - 1.05 μm, and one with two 2k x 2k pixel HgCdTe detectors for the range from 0.95 - 1.7μm. Each spectrograph will be coupled to the 3.5m telescope with two optical fibers, one for the target, and one for calibration light. The front end contains a dichroic beam splitter and an atmospheric dispersion corrector, to feed the light into the fibers leading to the spectrographs. Guiding is performed with a separate camera; on-axis as well as off-axis guiding modes are implemented. Fibers with octagonal cross-section are employed to ensure good stability of the output in the presence of residual guiding errors. The

  4. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of human beings from video recordings. IPPG technology, with advantages such as non-contact measurement, low cost, and easy operation, has become a research hot spot in the field of biomedicine. However, the noise contributed by non-microarterial areas cannot simply be removed, because of the uneven distribution of micro-arteries and the differing signal strength of each region; this results in a low signal-to-noise ratio of IPPG signals and low accuracy of the derived heart rate. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals from the sub-regions of the face using a weighted average. First, we obtain the regions of interest (ROI) of the subject's face from the camera. Second, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60x60 pixel blocks. Third, the weight of the PPG signal of each sub-region is calculated based on the signal-to-noise ratio of that sub-region. Finally, we combine the IPPG signals from all tracked ROIs using a weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
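    The third and fourth steps above amount to an SNR-weighted combination of the per-block traces; a minimal sketch (uniform trace length is assumed, and the paper's own SNR estimator is not reproduced here):

```python
def weighted_ippg(signals, snrs):
    """Combine per-subregion IPPG traces with weights proportional to
    each subregion's estimated SNR. `signals` is a list of equal-length
    sample lists, `snrs` the matching list of SNR estimates."""
    total = sum(snrs)
    weights = [s / total for s in snrs]
    n = len(signals[0])
    return [sum(w * sig[i] for w, sig in zip(weights, signals))
            for i in range(n)]
```

    Blocks dominated by non-microarterial skin receive low weights and contribute little to the combined trace, which is what raises the SNR relative to a plain spatial average.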

  5. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants a long time ago. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  6. Riesz frames and approximation of the frame coefficients

    DEFF Research Database (Denmark)

    Casazza, P.; Christensen, Ole

    1998-01-01

    A frame is a family {f_i}_{i=1}^∞ of elements in a Hilbert space H with the property that every element in H can be written as an (infinite) linear combination of the frame elements. Frame theory describes how one can choose the corresponding coefficients, which are called frame coefficients. From the mathematical point of view this is gratifying, but for applications it is a problem that the calculation requires inversion of an operator on H. The projection method is introduced in order to avoid this problem. The basic idea is to consider finite subfamilies {f_i}_{i=1}^n of the frame and the orthogonal projection P_n onto their span. For f in H, P_n f has a representation as a linear combination of f_i, i=1,2,...,n, and the corresponding coefficients can be calculated using finite-dimensional methods. We find conditions implying that those coefficients converge to the correct frame coefficients as n→∞, in which case we have...
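    To make the notions concrete, here is a finite-dimensional toy computation of the frame coefficients <x, S⁻¹f_i> via the frame operator S = Σ f_i f_iᵀ, worked in R² where S can be inverted by hand (purely illustrative; the paper concerns infinite frames, where precisely this inversion is the obstacle the projection method avoids):

```python
def frame_coefficients_2d(frame, x):
    """Frame coefficients <x, S^{-1} f_i> for a frame of vectors in R^2,
    inverting the 2x2 frame operator S = sum_i f_i f_i^T directly."""
    a = sum(fx * fx for fx, fy in frame)
    b = sum(fx * fy for fx, fy in frame)
    d = sum(fy * fy for fx, fy in frame)
    det = a * d - b * b  # nonzero iff the family spans R^2
    # S^{-1} applied to x
    sx = (d * x[0] - b * x[1]) / det
    sy = (-b * x[0] + a * x[1]) / det
    return [fx * sx + fy * sy for fx, fy in frame]
```

    By construction, summing c_i * f_i over the returned coefficients reconstructs x exactly, which is the defining reconstruction property of the frame coefficients.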

  7. Frames of exponentials:lower frame bounds for finite subfamilies, and approximation of the inverse frame operator

    DEFF Research Database (Denmark)

    Christensen, Ole; Lindner, Alexander M

    2001-01-01

    We give lower frame bounds for finite subfamilies of a frame of exponentials {e^{iλ_k(·)}}_{k∈Z} in L²(−π,π). We also present a method for approximation of the inverse frame operator corresponding to {e^{iλ_k(·)}}_{k∈Z}, where knowledge of the frame bounds for...

  8. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
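    The inter-camera quaternion analyzed above is the relative attitude between the two star camera heads; computing it from the two absolute attitude quaternions can be sketched as follows (Hamilton convention with (w, x, y, z) ordering is assumed here; GRACE Level-1B conventions may differ):

```python
def quat_mul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def inter_camera(q1, q2):
    """Relative attitude q1^{-1} * q2 between two camera heads; for
    unit quaternions the inverse is simply the conjugate."""
    w, x, y, z = q1
    return quat_mul((w, -x, -y, -z), q2)
```

    For rigidly mounted heads this quaternion should be constant up to noise and thermal drift, which is why its auto-covariance is a sensitive probe of the per-rev errors discussed in the paper.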

  9. An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training.

    Science.gov (United States)

    Loukas, Constantinos; Lahanas, Vasileios; Georgiou, Evangelos

    2013-12-01

    Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has not received much attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential for achieving interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues, based solely on visual information obtained from the endoscopic camera. Two different tracking algorithms are combined for estimating the 3D pose of the surgical instrument with respect to the camera. The first tracker creates an adaptive model of a colour strip attached to the distal part of the tool (close to the tip). The second algorithm tracks the endoscopic shaft, using a combined Hough-Kalman approach. The 3D pose is estimated with perspective geometry, using appropriate measurements extracted by the two trackers. The method has been validated on several complex image sequences for its tracking efficiency, pose estimation accuracy and applicability in AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are also supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment. The experiments yielded promising results regarding the potential of applying AR technologies for laparoscopic skills training, based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated trajectory of the instruments may also be used for surgical gesture interpretation and assessment. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the camera system will be controlled by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond the manufacturer's calibration. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to carry a current calibration sticker from the Standards Laboratory during any acceptance testing

  11. Study on the communication technology of instrument based on LabVIEW

    International Nuclear Information System (INIS)

    Jiang Wei; Lai Qinggui; Zhang Xiaobo

    2012-01-01

The hardware and software architecture for communicating with general-purpose instruments via LabVIEW is discussed, and several approaches to realizing remote communication are compared. In the control and measurement system of the LIA, LabVIEW-based communication has been realized among a large number of instruments with various interfaces; this paper presents the hardware and software framework for that instrument communication. (authors)

  12. Images of Edge Turbulence in NSTX

    International Nuclear Information System (INIS)

    Zweben, S.J.; Bush, C.E.; Maqueda, R.; Munsat, T.; Stotler, D.; Lowrance, J.; Mastracola, V.; Renda, G.

    2004-01-01

The 2-D structure of edge plasma turbulence has been measured in the National Spherical Torus Experiment (NSTX) by viewing the emission of the Dα spectral line of deuterium. Images have been made at framing rates of up to 250,000 frames/sec using an ultra-high speed CCD camera developed by Princeton Scientific Instruments. A sequence of images showing the transition between L-mode and H-mode states is shown

  13. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    Science.gov (United States)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common prerequisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system; the transformation from camera to laser therefore contains the accumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame, using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement is explored to collect more useful calibration data. This results in a better intersensor calibration, allowing better coloring of the point clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
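The absolute orientation problem mentioned above has a classical closed-form solution; the sketch below (an illustration under my own assumptions, not the authors' pipeline) uses the SVD-based Kabsch/Horn method to recover the rigid transform between corresponding laser-frame and camera-frame points.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): solving the absolute
# orientation problem -- the rigid transform (R, t) mapping laser-frame points
# onto camera-frame points -- with the SVD-based Kabsch/Horn method.
def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t, for 3xN point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 10))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-1.0], [2.0]])
R_est, t_est = rigid_transform(P, R_true @ P + t_true)
```

With noisy correspondences from the ranging-pole tip, the same least-squares formulation applies; only the point pairs change.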

  14. Prime tight frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Miller, Christopher; Okoudjou, Kasso A.

    2014-01-01

We introduce a class of finite tight frames called prime tight frames and prove some of their elementary properties. In particular, we show that any finite tight frame can be written as a union of prime tight frames. We then characterize all prime harmonic tight frames and use this characterization to suggest effective analysis and synthesis computation strategies for such frames. Finally, we describe all prime frames constructed from the spectral tetris method, and, as a byproduct, we obtain a characterization of when the spectral tetris construction works for redundancies below two.
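As a concrete illustration of a finite tight frame (my example, not taken from the paper), the three "Mercedes-Benz" unit vectors in R^2 form a tight frame with frame bound A = 3/2, giving exact reconstruction from the frame coefficients:

```python
import numpy as np

# Worked example: three unit vectors at 120-degree spacing form a tight frame
# in R^2: the frame operator S = sum_i f_i f_i^T equals A*I with A = 3/2, so
# every x satisfies x = (2/3) * sum_i <x, f_i> f_i.
angles = [np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3]
F = np.array([[np.cos(a), np.sin(a)] for a in angles])  # rows are frame vectors
S = F.T @ F                                             # frame operator
x = np.array([0.7, -1.2])
coeffs = F @ x                                          # frame coefficients <x, f_i>
x_rec = (2.0 / 3.0) * F.T @ coeffs                      # exact reconstruction
```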

  15. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  16. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

A Kerr cell activated by infrared pulses from a mode-locked Nd:glass laser acts as an ultra-fast periodic shutter with an opening time of a few picoseconds. Associated with an S.T.L. camera, it forms a picosecond camera allowing very fast effects to be studied [fr]

  17. Quantum frames

    Science.gov (United States)

    Brown, Matthew J.

    2014-02-01

The framework of quantum frames can help unravel some of the interpretive difficulties in the foundations of quantum mechanics. In this paper, I begin by tracing the origins of this concept in Bohr's discussion of quantum theory and his theory of complementarity. Engaging with various interpreters and followers of Bohr, I argue that the correct account of quantum frames must be extended beyond literal space-time reference frames to frames defined by relations between a quantum system and the exosystem or external physical frame, of which measurement contexts are a particularly important example. This approach provides superior solutions to key EPR-type measurement and locality paradoxes.

  18. Frame scaling function sets and frame wavelet sets in Rd

    International Nuclear Information System (INIS)

    Liu Zhanwei; Hu Guoen; Wu Guochang

    2009-01-01

In this paper, we classify frame wavelet sets and frame scaling function sets in higher dimensions. First, we obtain a necessary condition for a set to be a frame wavelet set. Then, we present a necessary and sufficient condition for a set to be a frame scaling function set, and we give a property of frame scaling function sets. Corresponding examples are given in each section to illustrate the theory.

  19. Signal Conditioning in Process of High Speed Imaging

    Directory of Open Access Journals (Sweden)

    Libor Hargas

    2015-01-01

The accuracy of kinematic analysis with a camera system depends on the frame rate of the camera used. A specific case of kinematic analysis arises in medical research focused on microscopic objects moving at high frequencies (cilia of the respiratory epithelium). The signal acquired by a high speed video acquisition system contains a very large amount of data. This paper describes the hardware, signal conditioning and software used for image acquisition through a digital camera, intelligent illumination dimming hardware control and ROI statistics creation. All software parts are realized as virtual instruments.

  20. Relationship between the Pedaling Biomechanics and Strain of Bicycle Frame during Submaximal Tests

    OpenAIRE

    Manolova, Aneliya; Crequy, Samuel; Lestriez, Philippe; Debraux, Pierre; Bertucci, William

    2015-01-01

The aim of this study was to analyse the effect of the forces applied to the pedals and cranks on the strain imposed on an instrumented bicycle motocross (BMX) frame. Using results from a finite element analysis to determine the localisation of highest stress, eight strain gauges were located on the down tube, the seat tube and the right chain stay. Before the pedaling tests, static loads were applied to the frame during bench tests. Two pedaling conditions were analysed. In the first, the rider...

  1. Renovation of PARR instrumentation and controls

    International Nuclear Information System (INIS)

    Karim, A.; Haq, I.; Akhtar, K.M.; Alam, G.D.

    1987-01-01

The Pakistan research reactor (PARR) was commissioned in 1965 and has since been operated in accordance with requirements. It was proposed that the controls and instrumentation be modernized to the current state of technology and to meet more stringent safety and operational needs. A computer has been added for data acquisition, logging and analysis. A closed circuit television system has been installed to monitor personnel access to the reactor building and to view the reactor core with an underwater camera. This report gives a brief account of the old instrumentation and some details of the new replacements. (orig./A.B)

  2. Safeguarding on-power fuelled reactors - instrumentation and techniques

    International Nuclear Information System (INIS)

    Waligura, A.; Konnov, Y.; Smith, R.M.; Head, D.A.

    1977-01-01

Instrumentation and techniques applicable to safeguarding reactors that are fuelled on-power, particularly the CANDU type, have been developed. A demonstration is being carried out at the Douglas Point Nuclear Generating Station in Canada. Irradiated nuclear materials in certain areas - the reactor and spent fuel storage bays - are monitored using photographic and television cameras, and seals. Item accounting is applied by counting spent-fuel bundles during transfer from the reactor to the storage bay and by placing these spent-fuel bundles in a sealed enclosure. Provision is made for inspection and verification of the bundles before sealing. The reactor's power history is recorded by a track-etch power monitor. Redundancy is provided so that the failure of any single piece of equipment does not invalidate the entire safeguards system. Several safeguards instruments and devices have been developed and evaluated. These include a super-8 mm surveillance camera system, a television surveillance system, a spent-fuel bundle counter, a device to detect dummy fuel bundles, a cover for enclosing a stack of spent-fuel bundles, and a seal suitable for underwater installation and ultrasonic interrogation. The information provided by these different instruments should increase the effectiveness of Agency safeguards and, when used in combination with other measures, will facilitate inspection at reactor sites

  3. Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes

    Science.gov (United States)

    Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James

    2017-01-01

    A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
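The linearity-with-exposure property highlighted above can be checked with a simple fit of mean raw digital number (DN) against exposure time; the sketch below uses synthetic DN values (dark offset and gain chosen arbitrarily), not measured Raspberry Pi data.

```python
import numpy as np

# Hypothetical linearity check of the kind described: fit mean raw DN against
# exposure time and confirm the fit is essentially perfect (R^2 ~ 1).
# Synthetic sensor: dark offset 64 DN, responsivity 2000 DN/s.
exposure_s = np.array([0.001, 0.002, 0.004, 0.008, 0.016])
mean_dn = 64.0 + 2000.0 * exposure_s          # a perfectly linear response
slope, offset = np.polyfit(exposure_s, mean_dn, 1)
fit = offset + slope * exposure_s
r2 = 1.0 - np.sum((mean_dn - fit) ** 2) / np.sum((mean_dn - mean_dn.mean()) ** 2)
```

On real raw frames one would average a region of interest per exposure and subtract a dark frame first; departures of R^2 from 1 then flag nonlinearity.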

  4. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.

  5. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

In this project, a radiation tolerant camera that withstands a total dose of 10^6 - 10^8 rad was developed. To develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were decided and the design was performed. A vidicon tube was selected for use as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt control) was designed on the remote control concept. Two types of radiation tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  6. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

In this project, a radiation tolerant camera that withstands a total dose of 10^6 - 10^8 rad was developed. To develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were decided and the design was performed. A vidicon tube was selected for use as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt control) was designed on the remote control concept. Two types of radiation tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  7. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

One of the fastest growing consumer markets today is camera phones. Over the past few years total volume has grown rapidly, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras has been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality in a small size. They also need to be reliable, and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  8. Robust super-resolution by fusion of interpolated frames for color and grayscale images

    Directory of Open Access Journals (Sweden)

Barry Karch

    2015-04-01

Multi-frame super-resolution (SR) processing seeks to overcome undersampling issues that can lead to undesirable aliasing artifacts. The key to effective multi-frame SR is accurate subpixel inter-frame registration. This accurate registration is challenging when the motion does not obey a simple global translational model and may include local motion. SR processing is further complicated when the camera uses a division-of-focal-plane (DoFP) sensor, such as the Bayer color filter array. Various aspects of these SR challenges have been previously investigated. Fast SR algorithms tend to have difficulty accommodating complex motion and DoFP sensors. Furthermore, methods that can tolerate these complexities tend to be iterative in nature and may not be amenable to real-time processing. In this paper, we present a new fast approach for performing SR in the presence of these challenging imaging conditions. We refer to the new approach as Fusion of Interpolated Frames (FIF) SR. The FIF SR method decouples the demosaicing, interpolation, and restoration steps to simplify the algorithm. Frames are first individually demosaiced and interpolated to the desired resolution. Next, FIF uses a novel weighted sum of the interpolated frames to fuse them into an improved resolution estimate. Finally, restoration is applied to deconvolve the modeled system PSF. The proposed FIF approach has a lower computational complexity than most iterative methods, making it a candidate for real-time implementation. We provide a detailed description of the FIF SR method and show experimental results using synthetic and real datasets in both constrained and complex imaging scenarios. The experiments include airborne grayscale imagery and Bayer color array images with affine background motion plus local motion.
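A toy version of the fusion step might look as follows. This is not the authors' code: nearest-neighbour upsampling stands in for a proper interpolator, the weights are arbitrary, and demosaicing and restoration are omitted.

```python
import numpy as np

# Minimal sketch of the fusion-of-interpolated-frames idea: each low-resolution
# frame is interpolated to the target grid, then the results are combined by a
# normalized weighted sum (restoration/deconvolution omitted).
def fuse_interpolated(frames, weights, scale=2):
    up = [np.kron(f, np.ones((scale, scale))) for f in frames]  # nearest-neighbour
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, up))

# Two constant 4x4 "frames" fused 1:3 onto an 8x8 grid.
frames = [np.full((4, 4), 10.0), np.full((4, 4), 20.0)]
sr = fuse_interpolated(frames, weights=[1.0, 3.0])
```

In the real method the per-pixel weights come from registration confidence, and a deconvolution of the modeled system PSF follows the fusion.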

  9. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    Science.gov (United States)

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. To compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  10. A study on obstacle detection method of the frontal view using a camera on highway

    Science.gov (United States)

    Nguyen, Van-Quang; Park, Jeonghyeon; Seo, Changjun; Kim, Heungseob; Boo, Kwangsuck

    2018-03-01

In this work, we introduce an approach to detecting vehicles for driver assistance or warning systems. Such a system must detect both lane boundaries (left and right) and discover vehicles ahead of the test vehicle. In this study we therefore use a camera installed on the windscreen of the test vehicle. Images from the camera are used to detect three lanes and to detect multiple vehicles. In lane detection, line detection and vanishing point estimation are used. For vehicle detection, we combine horizontal and vertical edge detection: the horizontal edges are used to generate vehicle candidates, and vertical edge detection is then used to verify those candidates. The proposed algorithm works with a 480 × 640 image frame resolution. The system was tested on a highway in Korea.
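The horizontal/vertical edge split can be sketched with simple finite differences (an illustrative toy, not the paper's detector): horizontal edges respond to row-to-row intensity changes such as a vehicle's bumper or shadow, vertical edges to column-to-column changes at the vehicle's sides.

```python
import numpy as np

# Toy horizontal/vertical edge maps from first differences. In a candidate
# scheme like the one described, strong horizontal-edge rows propose vehicle
# candidates and vertical edges at the flanks confirm them.
def edge_maps(img):
    img = img.astype(float)
    h = np.zeros_like(img)
    v = np.zeros_like(img)
    h[1:, :] = np.abs(np.diff(img, axis=0))   # row-to-row changes
    v[:, 1:] = np.abs(np.diff(img, axis=1))   # column-to-column changes
    return h, v

img = np.zeros((6, 6))
img[3:, 2:5] = 1.0        # a bright block: top edge at row 3, sides at cols 2 and 5
h, v = edge_maps(img)
```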

  11. Temperature resolution enhancing of commercially available THz passive cameras due to computer processing of images

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2014-06-01

As is well known, the passive THz camera is a promising tool for security applications. It allows a concealed object to be seen without contact with a person, and the camera poses no danger to the person. The efficiency of a passive THz camera depends on its temperature resolution. This characteristic determines the limits of concealed-object detection: the minimal size of the object, the maximal detection distance, and the image detail. One probable way to enhance image quality is computer processing of the image. Using computer processing of THz images of objects concealed on the human body, one may improve them many times over. Consequently, the instrumental resolution of such a device may be increased without additional engineering effort. We demonstrate new possibilities for seeing clothing details that the raw images produced by THz cameras do not reveal. We achieve good image quality by applying various spatial filters, with the aim of demonstrating the independence of the processed images from the particular mathematical operations. This result demonstrates the feasibility of seeing such objects. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China).
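As a minimal example of the kind of spatial filtering mentioned (my sketch; the abstract does not specify which filters were applied), a 3x3 mean filter suppresses an isolated hot pixel while preserving total flux:

```python
import numpy as np

# Illustrative 3x3 mean filter implemented with edge padding. Smoothing like
# this trades spatial detail for reduced pixel noise -- one of the simplest
# spatial filters one might apply to a noisy THz image.
def mean_filter3(img):
    img = img.astype(float)
    out = np.zeros_like(img)
    pad = np.pad(img, 1, mode='edge')
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += pad[1 + dy : 1 + dy + img.shape[0],
                       1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

noisy = np.zeros((5, 5))
noisy[2, 2] = 9.0           # an isolated "hot" pixel
smooth = mean_filter3(noisy)
```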

  12. Riesz Frames and Approximation of the Frame Coefficients

    DEFF Research Database (Denmark)

    Christensen, Ole

    1996-01-01

A frame is a family of elements in a Hilbert space with the property that every element in the Hilbert space can be written as an (infinite) linear combination of the frame elements. Frame theory describes how one can choose the corresponding coefficients, which are called frame coefficients. From the mathematical point of view this is gratifying, but for applications it is a problem that the calculation requires inversion of an operator on the Hilbert space. The projection method is introduced in order to avoid this problem. The basic idea is to consider finite subfamilies of the frame and the orthogonal projection onto their span. For f in H, P_n f has a representation as a linear combination of f_i, i=1,2,...,n, and the corresponding coefficients can be calculated using finite dimensional methods. We find conditions implying that those coefficients converge to the correct frame coefficients as n goes to infinity.
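A small worked example of frame coefficients (mine, not from the paper): for a frame with synthesis matrix F (columns are the frame vectors) and frame operator S = F F^T, the frame coefficients of f are <f, S^{-1} f_i>, and summing them against the frame vectors reconstructs f exactly — the computation whose operator inversion the projection method is designed to avoid in the infinite-dimensional case.

```python
import numpy as np

# Finite-dimensional frame-coefficient computation via the canonical dual.
F = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # three frame vectors (columns) in R^2
S = F @ F.T                              # frame operator; invertible => a frame
f = np.array([2.0, -1.0])
coeffs = F.T @ np.linalg.solve(S, f)     # frame coefficients <f, S^{-1} f_i>
f_rec = F @ coeffs                       # f = sum_i <f, S^{-1} f_i> f_i
```

In infinite dimensions S cannot be inverted directly; the projection method replaces it with finite sections like the 2x2 system solved here.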

  13. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    Science.gov (United States)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2015-06-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging which results in high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinearity behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
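The beat-frequency construction described above can be demonstrated numerically (an illustrative sketch with arbitrary frequencies and offset, not the experimental parameters): raising a two-tone drive to a superlinear power ~1.7 mixes the tones and creates a spectral component at the difference frequency |f1 - f2|, slow enough for a low-frame-rate camera to follow.

```python
import numpy as np

# Superlinear response (exponent ~1.7, as measured for the wafer) to two
# modulation tones produces a beat at |f1 - f2|. All values are illustrative.
fs = 100_000.0                        # "detector" sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
f1, f2 = 4000.0, 4010.0               # two high modulation frequencies
drive = 2.1 + np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
pl = drive ** 1.7                     # superlinear "photoluminescence"; offset keeps drive > 0
spectrum = np.abs(np.fft.rfft(pl - pl.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
beat_bin = np.argmin(np.abs(freqs - abs(f2 - f1)))   # bin nearest 10 Hz
```

A linear response (exponent 1) would put no power at the difference frequency; the beat exists only because of the nonlinearity.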

  14. Characterization results from several commercial soft X-ray streak cameras

    Science.gov (United States)

    Stradling, G. L.; Studebaker, J. K.; Cavailler, C.; Launspach, J.; Planes, J.

The spatio-temporal performance of four soft X-ray streak cameras has been characterized. The objective in evaluating the performance capability of these instruments is to enable us to optimize experiment designs, to encourage quantitative analysis of streak data, and to educate the ultra-high-speed photography and photonics community about the X-ray detector performance that is available. These measurements have been made collaboratively over the space of two years at the Forge pulsed X-ray source at Los Alamos and at the Ketjak laser facility at CEA Limeil-Valenton. The X-ray pulse lengths used for these measurements at these facilities were 150 psec and 50 psec, respectively. The results are presented as dynamically measured modulation transfer functions. Limiting temporal resolution values were also calculated. Emphasis is placed upon shot-noise statistical limitations in the analysis of the data. Space charge repulsion in the streak tube limits the peak flux at ultra-short experiment durations. This limit results in a reduction of total signal and a decrease in signal-to-noise ratio in the streak image. The four cameras perform well, with 20 lp/mm resolution discernible in data from the French C650X, the Hadland X-Chron 540 and the Hamamatsu C1936X streak cameras. The Kentech X-ray streak camera has lower modulation and does not resolve below 10 lp/mm, but has a longer photocathode.

  15. Framing of mobility items: a source of poor agreement between preference-based health-related quality of life instruments in a population of individuals receiving assisted ventilation.

    Science.gov (United States)

    Hannan, Liam M; Whitehurst, David G T; Bryan, Stirling; Road, Jeremy D; McDonald, Christine F; Berlowitz, David J; Howard, Mark E

    2017-06-01

    To explore the influence of descriptive differences in items evaluating mobility on index scores generated from two generic preference-based health-related quality of life (HRQoL) instruments. The study examined cross-sectional data from a postal survey of individuals receiving assisted ventilation in two state/province-wide home mechanical ventilation services, one in British Columbia, Canada and the other in Victoria, Australia. The Assessment of Quality of Life 8-dimension (AQoL-8D) and the EQ-5D-5L were included in the data collection. Graphical illustrations, descriptive statistics, and measures of agreement [intraclass correlation coefficients (ICCs) and Bland-Altman plots] were examined using index scores derived from both instruments. Analyses were performed on the full sample as well as subgroups defined according to respondents' self-reported ability to walk. Of 868 individuals receiving assisted ventilation, 481 (55.4%) completed the questionnaire. Mean index scores were 0.581 (AQoL-8D) and 0.566 (EQ-5D-5L) with 'moderate' agreement demonstrated between the two instruments (ICC = 0.642). One hundred fifty-nine (33.1%) reported level 5 ('I am unable to walk about') on the EQ-5D-5L Mobility item. The walking status of respondents had a marked influence on the comparability of index scores, with a larger mean difference (0.206) and 'slight' agreement (ICC = 0.386) observed when the non-ambulant subgroup was evaluated separately. This study provides further evidence that between-measure discrepancies between preference-based HRQoL instruments are related in part to the framing of mobility-related items. Longitudinal studies are necessary to determine the responsiveness of preference-based HRQoL instruments in cohorts that include non-ambulant individuals.
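The agreement measures used in such analyses can be sketched as follows (with invented scores, not the study's data): Bland-Altman bias (mean difference) and 95% limits of agreement between two instruments' index scores.

```python
import numpy as np

# Bland-Altman quantities for paired index scores from two instruments.
# The score arrays below are hypothetical, purely for illustration.
a = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.59])   # e.g. AQoL-8D-style scores
b = np.array([0.58, 0.52, 0.69, 0.51, 0.60, 0.57])   # e.g. EQ-5D-5L-style scores
diff = a - b
bias = diff.mean()                                   # systematic difference
sd = diff.std(ddof=1)                                # SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd   # 95% limits of agreement
```

Subgrouping by walking status, as the study does, amounts to recomputing these quantities on the ambulant and non-ambulant subsets separately.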

  16. Frames and counter-frames giving meaning to dementia: a framing analysis of media content.

    Science.gov (United States)

    Van Gorp, Baldwin; Vercruysse, Tom

    2012-04-01

Media tend to reinforce the stigmatization of dementia as one of the most dreaded diseases in western society, which may have repercussions on the quality of life of those with the illness. Persons with dementia, but also those around them, become imbued with the idea that life comes to an end as soon as the diagnosis is pronounced. The aim of this paper is to understand the dominant images related to dementia by means of an inductive framing analysis. The sample is composed of newspaper articles from six Belgian newspapers (2008-2010) and a convenience sample of popular images of the condition in movies, documentaries, literature and health care communications. The results demonstrate that the most dominant frame postulates that a human being is composed of two distinct parts: a material body and an immaterial mind. If this frame is used, the person with dementia ends up with no identity, which is in opposition to the Western ideals of personal self-fulfilment and individualism. For each dominant frame an alternative counter-frame is defined. It is concluded that the relative absence of counter-frames confirms the negative image of dementia. The inventory might be a help for caregivers and other professionals who want to evaluate their communication strategy. It is discussed that a more resolute use of counter-frames in communication about dementia might mitigate the stigma that surrounds dementia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. A Circleless "2D/3D Total Station": A Low-Cost Instrument for Surveying, Recording Point Clouds, Documentation, Image Acquisition and Visualisation

    Science.gov (United States)

    Scherer, M.

    2013-07-01

    Hardware and software of the universally applicable instrument, referred to as a 2D/3D total station, are described here, as well as its practical use. At its core it consists of a 3D camera, often also called a ToF camera, a PMD camera or a RIM camera, combined with a common industrial 2D camera. The cameras are rigidly coupled with their optical axes in parallel. A new type of instrument was created by mounting this 2D/3D system on a tripod in a specific way. Because it shares certain characteristics with a total station and a tacheometer, respectively, the new device was called a 2D/3D total station. It may effectively replace a common total station or a laser scanner in some respects. After a brief overview of the prototype's features, this paper focuses on the methodological characteristics relevant to practical application. Its usability as a universally applicable stand-alone instrument is demonstrated for surveying and recording RGB-coloured point clouds, as well as for delivering images for documentation and visualisation. Because of its limited range (10 m without a reflector and 150 m to reflector prisms) and low range accuracy (ca. 2 cm to 3 cm) compared to present-day total stations and laser scanners, the practical usage of the 2D/3D total station is currently limited to recording accident scenes, forensic purposes, speleology or facility management, as well as architectural recordings with low accuracy requirements. However, the author is convinced that in the near future advancements in 3D camera technology will allow this type of comparatively low-cost instrument to replace the total station as well as the laser scanner in an increasing number of areas.

  18. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay rate of the illuminating light with distance, due to the divergence of the light, is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high resolution, real-time operation, simplicity, compactness, light weight, portability, and low fabrication cost. The feasibility of various potential applications is also discussed.
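The distance-from-decay principle can be illustrated with a toy model. Assuming two ideal point sources at different offsets behind a reference plane (a simplification of ours; the actual Divcam exploits the differing divergence of its illuminators), the unknown surface reflectance cancels in the intensity ratio and the distance follows in closed form:

```python
import math

def distance_from_ratio(p1, p2, a1, a2):
    """Toy inverse-square model of the Divcam principle (an assumption,
    not the paper's exact optics): two point sources sit at offsets a1
    and a2 behind the reference plane, so the reflected powers are
        p_i = r / (d + a_i)**2
    with the same unknown reflectance r. The ratio removes r:
        p1 / p2 = ((d + a2) / (d + a1))**2
    and solving for the distance d gives the return value."""
    k = math.sqrt(p1 / p2)          # k = (d + a2) / (d + a1)
    return (a2 - k * a1) / (k - 1)

# Round trip at a known distance; the reflectance r = 0.3 cancels out
d_true, a1, a2, r = 2.0, 0.0, 0.5, 0.3
p1 = r / (d_true + a1) ** 2
p2 = r / (d_true + a2) ** 2
d_est = distance_from_ratio(p1, p2, a1, a2)
```

Because only the ratio enters, the recovered distance is independent of surface brightness, which is what makes a divergence-ratio scheme attractive for real-time range mapping.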

  19. Optical Comb Generation for Streak Camera Calibration for Inertial Confinement Fusion Experiments

    International Nuclear Information System (INIS)

    Ronald Justin; Terence Davies; Frans Janson; Bruce Marshall; Perry Bell; Daniel Kalantar; Joseph Kimbrough; Stephen Vernon; Oliver Sweningsen

    2008-01-01

    The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) is coming on-line to support physics experimentation for the U.S. Department of Energy (DOE) programs in Inertial Confinement Fusion (ICF) and Stockpile Stewardship (SS). Optical streak cameras are an integral part of the experimental diagnostics instrumentation at NIF. To reduce streak camera data accurately, a highly accurate temporal calibration is required. This article describes a technique for simultaneously generating a precise ±2 ps optical marker pulse (fiducial reference) and trains of precisely timed, short-duration optical pulses (so-called 'comb' pulse trains) that are suitable for the timing calibrations. These optical pulse generators are used with the LLNL optical streak cameras. They are small, portable light sources that, in the comb mode, produce a series of temporally short, uniformly spaced optical pulses, using a laser diode source. Comb generators have been produced with pulse-train repetition rates up to 10 GHz at 780 nm, and somewhat lower frequencies at 664 nm. Individual pulses can be as short as 25 ps FWHM. Signal output is via a fiber-optic connector on the front panel of the generator box. The optical signal is transported from the comb generator to the streak camera through multi-mode, graded-index optical fiber.

  20. ProtoDESI: First On-Sky Technology Demonstration for the Dark Energy Spectroscopic Instrument

    Science.gov (United States)

    Fagrelius, Parker; Abareshi, Behzad; Allen, Lori; Ballester, Otger; Baltay, Charles; Besuner, Robert; Buckley-Geer, Elizabeth; Butler, Karen; Cardiel, Laia; Dey, Arjun; Duan, Yutong; Elliott, Ann; Emmet, William; Gershkovich, Irena; Honscheid, Klaus; Illa, Jose M.; Jimenez, Jorge; Joyce, Richard; Karcher, Armin; Kent, Stephen; Lambert, Andrew; Lampton, Michael; Levi, Michael; Manser, Christopher; Marshall, Robert; Martini, Paul; Paat, Anthony; Probst, Ronald; Rabinowitz, David; Reil, Kevin; Robertson, Amy; Rockosi, Connie; Schlegel, David; Schubnell, Michael; Serrano, Santiago; Silber, Joseph; Soto, Christian; Sprayberry, David; Summers, David; Tarlé, Greg; Weaver, Benjamin A.

    2018-02-01

    The Dark Energy Spectroscopic Instrument (DESI) is under construction to measure the expansion history of the universe using the baryon acoustic oscillations technique. The spectra of 35 million galaxies and quasars over 14,000 square degrees will be measured during a 5-year survey. A new prime focus corrector for the Mayall telescope at Kitt Peak National Observatory will deliver light to 5,000 individually targeted fiber-fed robotic positioners. The fibers in turn feed ten broadband multi-object spectrographs. We describe the ProtoDESI experiment, which was installed and commissioned on the 4-m Mayall telescope from 2016 August 14 to September 30. ProtoDESI was an on-sky technology demonstration with the goal of reducing the technical risks associated with aligning optical fibers with targets using robotic fiber positioners and maintaining the stability required to operate DESI. The ProtoDESI prime focus instrument, consisting of three fiber positioners, illuminated fiducials, and a guide camera, was installed behind the existing Mosaic corrector on the Mayall telescope. A fiber view camera was mounted in the Cassegrain cage of the telescope and provided feedback metrology for positioning the fibers. ProtoDESI also provided a platform for early integration of hardware with the DESI Instrument Control System that controls the subsystems, provides communication with the Telescope Control System, and collects instrument telemetry data. Lacking a spectrograph, ProtoDESI monitored the output of the fibers using a fiber photometry camera mounted on the prime focus instrument. ProtoDESI was successful in acquiring targets with the robotically positioned fibers and demonstrated that the DESI guiding requirements can be met.

  1. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  2. Body frames and frame singularities for three-atom systems

    International Nuclear Information System (INIS)

    Littlejohn, R.G.; Mitchell, K.A.; Aquilanti, V.; Cavalli, S.

    1998-01-01

    The subject of body frames and their singularities for three-particle systems is important not only for large-amplitude rovibrational coupling in molecular spectroscopy, but also for reactive scattering calculations. This paper presents a geometrical analysis of the meaning of body frame conventions and their singularities in three-particle systems. Special attention is devoted to the principal axis frame, a certain version of the Eckart frame, and the topological inevitability of frame singularities. The emphasis is on a geometrical picture, which is intended as a preliminary study for the more difficult case of four-particle systems, where one must work in higher-dimensional spaces. The analysis makes extensive use of kinematic rotations.

  3. A FRAMING OF FUTURE EUROPEAN PARLIAMENT ELECTIONS 2014 IN A SOCIAL MEDIA CONTEXT

    Directory of Open Access Journals (Sweden)

    Dorian Pocovnicu

    2013-12-01

    Communication in marketing has always been a continuous conceptual hybrid of input from various domains: marketing, PR, communication, and sociology. With the constant transformation of the Web 2.0 phenomenon, the demarcation lines between these domains and their influence have become more blurred and difficult to pinpoint. As a result, specific research methods and theories have become adaptable instruments, laying the path for grounded theory approaches or new research methods. Framing theory, based on the premise that the media focus attention on certain events and then place them within a field of meaning, has shifted towards organisations and, further on, to institutions. Framing is a quality of communication that leads others to accept one meaning over another. Framing theory suggests that how something is presented (the "frame") influences the choices people make. In online communicative contexts, their own personal framings allow the communicative actors to make use of language and forethought so that specific embodiments of future evolutions may be depicted. In our case, we focus on the topic of the European Parliament elections, which are to take place in 2014, and on the manner in which it has been framed in two online chat sessions with three MEPs. It is our intention to identify the framing techniques used, the framing links and the framing alignments.

  4. Star camera aspect system suitable for use in balloon experiments

    International Nuclear Information System (INIS)

    Hunter, S.D.; Baker, R.G.

    1985-01-01

    A balloon-borne experiment containing a star camera aspect system was designed, built, and flown. This system was designed to provide offset corrections to the magnetometer and inclinometer readings used to control an azimuth- and elevation-pointed experiment. The camera is controlled by a microprocessor, which provides commandable exposure and noise-rejection threshold settings and formats the data for telemetry to the ground. As a background program, the microprocessor runs the aspect program to analyze a fraction of the pictures taken, so that aspect information and offset corrections are available to the experiment in near real time. The analysis consists of pattern recognition of the star field against a star catalog in ROM memory and a least-squares calculation. The performance of this system in ground-based tests is described. It is part of the NASA/GSFC High Energy Gamma-Ray Balloon Instrument (2)

  5. Development of plenoptic infrared camera using low dimensional material based photodetectors

    Science.gov (United States)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and have been widely used for military and civilian applications. Conventional bulk-semiconductor-based IR cameras suffer from low frame rate, low resolution, temperature dependence and high cost, while carbon nanotube (CNT), low-dimensional-material-based nanotechnology has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing; this matters not only for the fundamental understanding of CNT photoresponse-induced processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, the sandwich-structured sensor was fabricated within two polymer layers. The polyimide substrate provided the sensor with isolation from background noise, and the top parylene packing blocked humid environmental factors. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized by digital microscopy and a precise linear stage in order to understand it fully. Besides, a low-noise, high-gain readout system was designed together with the CNT photodetector to make the nano-sensor IR camera feasible. To explore more of the infrared light, we employ compressive sensing algorithms for light field sampling, 3-D cameras and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, is extracted and

  6. Frame on frames: an annotated bibliography

    International Nuclear Information System (INIS)

    Wright, T.; Tsao, H.J.

    1983-01-01

    The success or failure of any sample survey of a finite population is largely dependent upon the condition and adequacy of the list or frame from which the probability sample is selected. Much of the published survey sampling related work has focused on the measurement of sampling errors and, more recently, on nonsampling errors to a lesser extent. Recent studies on data quality for various types of data collection systems have revealed that the extent of the nonsampling errors far exceeds that of the sampling errors in many cases. While much of this nonsampling error, which is difficult to measure, can be attributed to poor frames, relatively little effort or theoretical work has focused on this contribution to total error. The objective of this paper is to present an annotated bibliography on frames with the hope that it will bring together, for experimenters, a number of suggestions for action when sampling from imperfect frames and that more attention will be given to this area of survey methods research

  7. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image
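The control principle described in this record (terminate the exposure once a preset radiation quantity has accumulated in a predetermined area) can be sketched as follows. This is our illustration of the logic, not the patented implementation; the array names and limits are hypothetical:

```python
import numpy as np

def exposure_should_terminate(accumulated, roi_mask, quantity_limit):
    """Terminate the exposure once the radiation accumulated inside the
    predetermined region of interest reaches the preset quantity, i.e.
    once the target radiation density has been deposited there."""
    return float(accumulated[roi_mask].sum()) >= quantity_limit

# Hypothetical 8x8 accumulator with a central 2x2 region of interest
acc = np.zeros((8, 8))
roi = np.zeros((8, 8), dtype=bool)
roi[3:5, 3:5] = True

acc[3:5, 3:5] += 30.0          # radiation arriving in the ROI
early = exposure_should_terminate(acc, roi, quantity_limit=200.0)
acc[3:5, 3:5] += 30.0          # more radiation accumulates
done = exposure_should_terminate(acc, roi, quantity_limit=200.0)
```

The quantity at which termination fired would then serve as the index the record describes, so the image intensity can be calibrated against the actual exposure delivered.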

  8. Stellar aberration correction and thermoelastic compensation of Swarm μASC attitude observations: A comment to the Express Letter “Mysterious misalignments between geomagnetic and stellar reference frames seen in CHAMP and Swarm satellite measurements”, by Stefan Maus

    DEFF Research Database (Denmark)

    Herceg, M.; Jørgensen, P. S.; Jørgensen, J. L.

    2017-01-01

    … However, comparison of the Inter Boresight Angle shows a relative attitude variation between the μASC Camera Head Units. These misalignments between Camera Head Units and a geomagnetic reference frame cannot be explained by incorrect aberration correction (as theorized by Maus). Herceg et al. found them … and clean from any variation caused by thermoelastic effects.

  9. Uav Photogrammetric Solution Using a Raspberry pi Camera Module and Smart Devices: Test and Results

    Science.gov (United States)

    Piras, M.; Grasso, N.; Jabbar, A. Abdul

    2017-08-01

    Nowadays, smart technologies are an important part of our actions and lives, both in indoor and outdoor environments. There are several smart devices that are very easy to set up, can be integrated and embedded with other sensors, and have a very low cost. The Raspberry Pi allows the installation of an internal camera, called the Raspberry Pi Camera Module, in both the RGB band and the NIR band. The advantages of this system are its limited cost, light weight and simplicity of use and embedding. This paper describes a research project in which a Raspberry Pi with the Camera Module was installed onto a UAV hexacopter based on the ArduCopter system, with the purpose of collecting pictures for photogrammetric use. Firstly, the system was tested with the aim of verifying the performance of the RPi camera in terms of frames per second/resolution and power requirements. Moreover, a GNSS receiver Ublox M8T was installed and connected to the Raspberry platform in order to collect the real-time position and the raw data, for data processing and to define the time reference. The IMU was also tested to see the impact of UAV rotor noise on different sensors such as the accelerometer, gyroscope and magnetometer. A comparison of the achieved accuracy on some check points of the point clouds obtained by the camera is reported as well, in order to analyse in more depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described; in particular, the dataset acquired and the results obtained are analysed.

  10. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction completely depend on the camera, since the camera defines the player's point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gener…

  11. OVERVIEW OF THE ATACAMA COSMOLOGY TELESCOPE: RECEIVER, INSTRUMENTATION, AND TELESCOPE SYSTEMS

    International Nuclear Information System (INIS)

    Swetz, D. S.; Devlin, M. J.; Dicker, S. R.; Ade, P. A. R.; Amiri, M.; Battistelli, E. S.; Burger, B.; Halpern, M.; Hasselfield, M.; Appel, J. W.; Essinger-Hileman, T.; Fisher, R. P.; Fowler, J. W.; Hincks, A. D.; Jarosik, N.; Chervenak, J.; Doriese, W. B.; Hilton, G. C.; Irwin, K. D.; Duenner, R.

    2011-01-01

    The Atacama Cosmology Telescope was designed to measure small-scale anisotropies in the cosmic microwave background and detect galaxy clusters through the Sunyaev-Zel'dovich effect. The instrument is located on Cerro Toco in the Atacama Desert, at an altitude of 5190 m. A 6 m off-axis Gregorian telescope feeds a new type of cryogenic receiver, the Millimeter Bolometer Array Camera. The receiver features three 1000-element arrays of transition-edge sensor bolometers for observations at 148 GHz, 218 GHz, and 277 GHz. Each detector array is fed by free space millimeter-wave optics. Each frequency band has a field of view of approximately 22' x 26'. The telescope was commissioned in 2007 and has completed its third year of operations. We discuss the major components of the telescope, camera, and related systems, and summarize the instrument performance.

  12. Overview of the Atacama Cosmology Telescope: Receiver, Instrumentation, and Telescope Systems

    Science.gov (United States)

    Swetz, D. S.; Ade, P. A. R.; Amiri, M.; Appel, J. W.; Battistelli, E. S.; Burger, B.; Chervenak, J.; Devlin, M. J.; Dicker, S. R.; Doriese, W. B.; Dünner, R.; Essinger-Hileman, T.; Fisher, R. P.; Fowler, J. W.; Halpern, M.; Hasselfield, M.; Hilton, G. C.; Hincks, A. D.; Irwin, K. D.; Jarosik, N.; Kaul, M.; Klein, J.; Lau, J. M.; Limon, M.; Marriage, T. A.; Marsden, D.; Martocci, K.; Mauskopf, P.; Moseley, H.; Netterfield, C. B.; Niemack, M. D.; Nolta, M. R.; Page, L. A.; Parker, L.; Staggs, S. T.; Stryzak, O.; Switzer, E. R.; Thornton, R.; Tucker, C.; Wollack, E.; Zhao, Y.

    2011-06-01

    The Atacama Cosmology Telescope was designed to measure small-scale anisotropies in the cosmic microwave background and detect galaxy clusters through the Sunyaev-Zel'dovich effect. The instrument is located on Cerro Toco in the Atacama Desert, at an altitude of 5190 m. A 6 m off-axis Gregorian telescope feeds a new type of cryogenic receiver, the Millimeter Bolometer Array Camera. The receiver features three 1000-element arrays of transition-edge sensor bolometers for observations at 148 GHz, 218 GHz, and 277 GHz. Each detector array is fed by free space millimeter-wave optics. Each frequency band has a field of view of approximately 22' × 26'. The telescope was commissioned in 2007 and has completed its third year of operations. We discuss the major components of the telescope, camera, and related systems, and summarize the instrument performance.

  13. COMPARISON OF BACKGROUND SUBTRACTION, SOBEL, ADAPTIVE MOTION DETECTION, FRAME DIFFERENCES, AND ACCUMULATIVE DIFFERENCES IMAGES ON MOTION DETECTION

    Directory of Open Access Journals (Sweden)

    Dara Incam Ramadhan

    2018-02-01

    Nowadays, digital image processing is used not only to recognize motionless objects, but also to recognize moving objects in video. One use of moving-object recognition in video is motion detection, which can be implemented in security cameras. Various methods used to detect motion have been developed, so this research compares several motion detection methods, namely Background Subtraction, Adaptive Motion Detection, Sobel, Frame Differences and Accumulative Differences Images (ADI). Each method has a different level of accuracy. The background subtraction method achieved 86.1% accuracy indoors and 88.3% outdoors. In the Sobel method, the result of motion detection depends on the lighting conditions of the room being supervised: when the room is bright, the accuracy of the system decreases, and when the room is dark, the accuracy increases, reaching 80%. In the adaptive motion detection method, motion can be detected on the condition that no easily moved object is in the camera's field of view. In the frame difference method, testing on RGB images using average computation with a threshold of 35 gave the best results. In the ADI method, the accuracy of motion detection reached 95.12%.
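The frame difference method compared in this record reduces to a per-pixel absolute difference against a threshold. A minimal sketch, not the paper's implementation; the default threshold of 35 echoes the value the study found best, and the `min_pixels` noise floor is our own assumption:

```python
import numpy as np

def motion_mask(prev, curr, threshold=35):
    """Flag pixels whose absolute grey-level change exceeds the threshold.
    Frames are cast to a signed type so uint8 subtraction cannot wrap."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > threshold

def motion_detected(prev, curr, threshold=35, min_pixels=50):
    """Declare motion once enough pixels changed (suppresses pixel noise)."""
    return int(motion_mask(prev, curr, threshold).sum()) >= min_pixels

# A static frame and one with a bright 10x10 object moved into view
prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 10:20] = 200
```

The threshold trades sensitivity against false alarms, which is why the study's accuracy figures differ between bright and dark rooms.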

  14. The influence of head frame distortions on stereotactic localization and targeting

    Energy Technology Data Exchange (ETDEWEB)

    Treuer, H; Hunsche, S; Hoevels, M; Luyken, K; Maarouf, M; Voges, J; Sturm, V [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, 50924 Cologne (Germany)

    2004-09-07

    A strong attachment of a stereotactic head frame to the patient's skull may cause distortions of the head frame. The aim of this work was to identify possible distortions of the head frame, to measure the degree of distortion occurring in clinical practice and to investigate its influence on stereotactic localization and targeting. A model to describe and quantify the distortion of the Riechert-Mundinger (RM) head frame was developed. Distortions were classified as (a) bending and (b) changes from the circular ring shape. Ring shape changes were derived from stereotactic CT scans and frame bending was determined from intraoperative stereotactic x-ray images of patients with implanted ¹²⁵I-seeds acting as landmarks. From the examined patient data, frame bending was determined to be 0.74 mm ± 0.32 mm and 1.30 mm in maximum. If a CT-localizer with a top ring is used, frame bending has no influence on stereotactic CT-localization. In stereotactic x-ray localization, frame bending leads to an overestimation of the z-coordinate by 0.37 mm ± 0.16 mm on average and by 0.65 mm in maximum. The accuracy of patient positioning in radiosurgery is not affected by frame bending. But in stereotactic surgery with an RM aiming bow, trajectory displacements are expected. These displacements were estimated to be 0.36 mm ± 0.16 mm (max. 0.74 mm) at the target point and 0.65 mm ± 0.30 mm (max. 1.31 mm) at the entry point level. Changes from the circular ring shape are small and do not compromise the accuracy of stereotactic targeting and localization. The accuracy of CT-localization was found to be close to the resolution limit due to voxel size. Our findings for frame bending of the RM frame could be validated by statistical analysis and by comparison with an independent patient examination. The results depend on the stereotactic system and details of the localizers and instruments and also reflect our clinical practice. Therefore, a generalization is not possible.

  15. Frequency identification of vibration signals using video camera image data.

    Science.gov (United States)

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  16. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  17. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  18. Opto-mechanical design of the G-CLEF flexure control camera system

    Science.gov (United States)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first light instrument for the Giant Magellan Telescope (GMT). G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera which monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator, including triple lenses, for producing a pupil; neutral density filters allowing the use of a much brighter star as a target or a guide; a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror; a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane; and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified since the PDR in April 2015.

  19. Behaviour of Strengthened RC Frames with Eccentric Steel Braced Frames

    Science.gov (United States)

    Kamanli, Mehmet; Unal, Alptug

    2017-10-01

    After the devastating earthquakes of recent years, strengthening of reinforced concrete buildings became an important research topic. Reinforced concrete buildings can be strengthened with steel braced frames. These steel braced frames may be configured concentrically or eccentrically, as indicated in the Turkish Earthquake Code 2007. In this study, pushover analyses of one 1/3-scaled reinforced concrete frame and four 1/3-scaled reinforced concrete frames strengthened with internal eccentric steel braced frames were conducted with the SAP2000 program. According to the results of the analyses, the load-displacement curves of the specimens were compared and evaluated. Adding eccentric steel braces to the bare frame decreased the story drift and significantly increased strength, stiffness and energy dissipation capacity. With this strengthening method, the lateral load carrying capacity, stiffness and dissipated energy of the structure can be increased.

  20. Space Infrared Telescope Facility (SIRTF) science instruments

    International Nuclear Information System (INIS)

    Ramos, R.; Hing, S.M.; Leidich, C.A.; Fazio, G.; Houck, J.R.

    1989-01-01

    Concepts of scientific instruments designed to perform infrared astronomical tasks such as imaging, photometry, and spectroscopy are discussed as part of the Space Infrared Telescope Facility (SIRTF) project under definition study at NASA/Ames Research Center. The instruments are: the multiband imaging photometer, the infrared array camera, and the infrared spectrograph. SIRTF, a cryogenically cooled infrared telescope in the 1-meter class, operating at wavelengths as short as 2.5 microns and carrying multiple instruments with high sensitivity and low background performance, provides the capability to carry out basic astronomical investigations such as the deep search for very distant protogalaxies, quasi-stellar objects, and missing mass; infrared emission from galaxies; star formation and the interstellar medium; and the composition and structure of the atmospheres of the outer planets in the solar system. 8 refs

  1. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  2. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparison of events between the cameras and the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom high-speed camera (version 9.1) and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the precise GPS position of the triangulated reference object and the result of the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
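    The visual triangulation of a flash position from two camera sightings reduces, in the simplest case, to intersecting two bearing rays in a plane. The sketch below is a minimal illustration of that geometry, not the RAMMER pipeline itself; the flat east/north plane, the camera positions and the azimuth convention are simplifying assumptions.

```python
import math

def triangulate(p1, az1, p2, az2):
    """Intersect two bearing rays in a local east/north plane.
    p1, p2: camera (x, y) positions in meters; az1, az2: azimuths in
    radians, measured clockwise from north toward the flash."""
    d1 = (math.sin(az1), math.cos(az1))   # azimuth 0 points north (+y)
    d2 = (math.sin(az2), math.cos(az2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None                       # parallel lines of sight
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two cameras 13 km apart sighting the same return stroke.
print(triangulate((0.0, 0.0), math.radians(45), (13000.0, 0.0), math.radians(-45)))
```

    In practice the intersection would be computed per camera pair and per stroke, and the resulting position compared with the LLS solution.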

  3. Design and development of the 2m resolution camera for ROCSAT-2

    Science.gov (United States)

    Uguen, Gilbert; Luquet, Philippe; Chassat, François

    2017-11-01

    EADS-Astrium has recently completed the development of a 2m-resolution camera, the so-called RSI (Remote Sensing Instrument), for the small satellite ROCSAT-2, the second component of the long-term space program of the Republic of China. The National Space Program Office of Taiwan selected EADS-Astrium as the Prime Contractor for the development of the spacecraft, including the bus and the main instrument, RSI. The main challenges for the RSI development were: to introduce innovative technologies in order to meet the high performance requirements while achieving the design simplicity necessary for the mission (low mass, low power); and to adopt a development and verification approach compatible with the very tight development schedule. This paper describes the instrument design together with the development and verification logic that were implemented to successfully meet these objectives.

  4. Gated SPECT evaluation of left ventricular function using a CZT camera and a fast low-dose clinical protocol: comparison to cardiac magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Giorgetti, Assuero; Masci, Pier Giorgio; Marras, Gavino; Gimelli, Alessia; Genovesi, Dario; Lombardi, Massimo [Fondazione CNR/Regione Toscana ' ' G. Monasterio' ' , Pisa (Italy); Rustamova, Yasmine K. [Azerbaijan Medical University, Department of internal medicine Central Customs Hospital, Baku (Azerbaijan); Marzullo, Paolo [Istituto di Fisiologia Clinica del CNR, Pisa (Italy)

    2013-12-15

    CZT technology allows ultrafast low-dose myocardial scintigraphy, but its accuracy in assessing left ventricular function is still to be defined. The study group comprised 55 patients (23 women, mean age 63 ± 9 years) referred for myocardial perfusion scintigraphy. The patients were studied at rest using a CZT camera (Discovery NM530c; GE Healthcare) and a low-dose 99mTc-tetrofosmin clinical protocol (mean dose 264 ± 38 MBq). Gated SPECT imaging was performed as a 6-min list-mode acquisition, 15 min after radiotracer injection. Images were reformatted (8-frame to 16-frame) using Lister software on a Xeleris workstation (GE Healthcare) and then reconstructed with a dedicated iterative algorithm. Analysis was performed using Quantitative Gated SPECT (QGS) software. Within 2 weeks patients underwent cardiac magnetic resonance imaging (cMRI, 1.5-T unit CVi; GE Healthcare) using a 30-frame acquisition protocol and dedicated software for analysis (MASS 6.1; Medis). The ventricular volumes obtained with 8-frame QGS showed excellent correlations with the cMRI volumes (end-diastolic volume (EDV), r = 0.90; end-systolic volume (ESV), r = 0.94; p < 0.001). However, QGS significantly underestimated the ventricular volumes (mean differences: EDV, -39.5 ± 29 mL; ESV, -15.4 ± 22 mL; p < 0.001). Similarly, the ventricular volumes obtained with 16-frame QGS showed excellent correlations with the cMRI volumes (EDV, r = 0.92; ESV, r = 0.95; p < 0.001) but with significant underestimation (mean differences: EDV, -33.2 ± 26 mL; ESV, -17.9 ± 20 mL; p < 0.001). Despite significantly lower values (47.9 ± 16 % vs. 51.2 ± 15 %, p < 0.008), the 8-frame QGS mean ejection fraction (EF) was closely correlated with the cMRI values (r = 0.84, p < 0.001). The mean EF with 16-frame QGS showed the best correlation with the cMRI values (r = 0.91, p < 0.001) and was similar to the mean cMRI value (49.6 ± 16 %, p not significant). Regional analysis showed a good
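    The volume comparison above rests on two statistics: a Pearson correlation coefficient and a mean paired difference (the reported bias, e.g. EDV -39.5 mL). A minimal sketch of both, using made-up toy volumes rather than the study data:

```python
import math

def compare_methods(a, b):
    """Pearson correlation and mean paired difference (a - b): the two
    statistics used to compare QGS and cMRI volume estimates."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    r = cov / math.sqrt(var_a * var_b)
    bias = sum(x - y for x, y in zip(a, b)) / n
    return r, bias

# Toy end-diastolic volumes (mL): strongly correlated, but the first
# method systematically underestimates by roughly 40 mL.
qgs = [100, 120, 140, 160, 180]
cmri = [140, 158, 181, 199, 222]
r, bias = compare_methods(qgs, cmri)
print(round(r, 3), bias)   # high correlation, negative bias
```

    This pattern - high r with a significant negative bias - is exactly what the abstract reports for the 8-frame and 16-frame QGS volumes.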

  5. Camac interface for digitally recording infrared camera images

    International Nuclear Information System (INIS)

    Dyer, G.R.

    1986-01-01

    An instrument has been built to store the digital signals from a modified imaging infrared scanner directly in a digital memory. This procedure avoids the signal-to-noise degradation and dynamic range limitations associated with successive analog-to-digital and digital-to-analog conversions and the analog recording method normally used to store data from the scanner. This technique also allows digital data processing methods to be applied directly to recorded data and permits processing and image reconstruction to be done using either a mainframe or a microcomputer. If a suitable computer and CAMAC-based data collection system are already available, digital storage of up to 12 scanner images can be implemented for less than $1750 in materials cost. Each image is stored as a frame of 60 x 80 eight-bit pixels, with an acquisition rate of one frame every 16.7 ms. The number of frames stored is limited only by the available memory. Initially, data processing for this equipment was done on a VAX 11-780, but images may also be displayed on the screen of a microcomputer. Software for setting the displayed gray scale, generating contour plots and false-color displays, and subtracting one image from another (e.g., background suppression) has been developed for IBM-compatible personal computers
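    The background-suppression step (subtracting one image from another) can be illustrated on frames of the stated 60 x 80 eight-bit format. The sketch below uses NumPy, which is an assumption for illustration only; the original software ran on a VAX and on IBM-compatible PCs.

```python
import numpy as np

def subtract_background(frame, background):
    """Background suppression for 60x80 eight-bit frames: subtract in a
    wider type, then clip at zero so the result stays valid uint8."""
    diff = frame.astype(np.int16) - background.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
background = rng.integers(0, 50, size=(60, 80), dtype=np.uint8)
frame = background.copy()
frame[20:30, 30:50] += 100          # a hot region on top of the background
result = subtract_background(frame, background)
print(result.max(), result[0, 0])   # hot region survives, background is zeroed
```

    Casting to a signed type before subtracting avoids the wrap-around that direct uint8 arithmetic would produce.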

  6. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms that integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully explore the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS: the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, is introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame, in the case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step procedure and the two-step procedure, which makes use of the traditional bundle adjustment, is also presented.
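    The mounting parameters being estimated (lever-arm offsets and boresight angles) relate the poses of the two cameras. How such parameters would be applied once estimated can be sketched as a rigid-body composition; the yaw-only boresight and the numbers below are hypothetical, chosen only to make the geometry easy to check.

```python
import numpy as np

def rot_z(yaw):
    """Rotation matrix for a yaw-only boresight angle (radians)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def apply_mounting(R_a, t_a, R_boresight, lever_arm):
    """Given camera A's pose (R_a, t_a) in the mapping frame and the
    inter-camera mounting parameters (boresight rotation and lever-arm
    offset expressed in camera A's frame), return camera B's pose."""
    R_b = R_a @ R_boresight
    t_b = t_a + R_a @ lever_arm
    return R_b, t_b

# Camera A at the origin with identity orientation; camera B mounted
# 0.5 m to A's right with a 90-degree yaw boresight.
R_a, t_a = np.eye(3), np.zeros(3)
R_b, t_b = apply_mounting(R_a, t_a, rot_z(np.pi / 2), np.array([0.0, -0.5, 0.0]))
print(np.round(t_b, 3), np.round(R_b @ np.array([1.0, 0.0, 0.0]), 3))
```

    The single-step calibration in the paper estimates R_boresight and lever_arm directly, instead of deriving them from two separately adjusted camera poses.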

  7. Approximately dual frames in Hilbert spaces and applications to Gabor frames

    OpenAIRE

    Christensen, Ole; Laugesen, Richard S.

    2011-01-01

    Approximately dual frames are studied in the Hilbert space setting. Approximate duals are easier to construct than classical dual frames, and can be tailored to yield almost perfect reconstruction. Bounds on the deviation from perfect reconstruction are obtained for approximately dual frames constructed via perturbation theory. An alternative bound is derived for the rich class of Gabor frames, by using the Walnut representation of the frame operator to estimate the deviation from equality in...
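    In finite dimensions, the classical dual frame that approximate duals are compared against is easy to compute explicitly: apply the inverse of the frame operator to each frame vector. The sketch below illustrates exact reconstruction with the canonical dual for a small frame in R^2; an approximately dual frame would replace the exact inverse with an approximation, trading exactness for ease of construction.

```python
import numpy as np

def canonical_dual(F):
    """Rows of F are frame vectors f_k for R^n. The canonical dual is
    g_k = S^{-1} f_k, where S = sum_k f_k f_k^T is the frame operator."""
    S = F.T @ F                  # frame operator (n x n)
    return F @ np.linalg.inv(S)  # rows are the dual frame vectors

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree spacing.
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
F = np.column_stack([np.cos(angles), np.sin(angles)])
G = canonical_dual(F)

x = np.array([0.7, -1.3])
coeffs = F @ x                   # analysis: <x, f_k>
x_rec = G.T @ coeffs             # synthesis with the dual frame
print(np.allclose(x_rec, x))     # True: perfect reconstruction
```

    For this frame the frame operator is a multiple of the identity (a tight frame), so the canonical dual is just a rescaling of the original vectors.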

  8. Framing the frame: How task goals determine the likelihood and direction of framing effects

    OpenAIRE

    Todd McElroy; John J. Seta

    2007-01-01

    We examined how the goal of a decision task influences the perceived positive or negative valence of the alternatives, and thereby the likelihood and direction of framing effects. In Study 1 we manipulated the goal to increase, decrease or maintain the commodity in question and found that when the goal of the task was to increase the commodity, a framing effect consistent with those typically observed in the literature was found. When the goal was to decrease, a framing effect opposite to the ty...

  9. Novel driver method to improve ordinary CCD frame rate for high-speed imaging diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Tong-Ding, E-mail: snuohui@126.com; Li, Bin-Kang; Yang, Shao-Hua; Guo, Ming-An; Yan, Ming

    2016-06-21

    The use of ordinary charge-coupled device (CCD) imagers for the analysis of fast physical phenomena is restricted by the low-speed performance resulting from their long output times. Even though the Intensified CCD (ICCD) form, coupled with a gated image intensifier, has extended their use to high-speed imaging, one deficiency remains: an ICCD can record only one image in a single shot. This paper presents a novel driver method designed to significantly improve the burst frame rate of an ordinary interline CCD for high-speed photography. The method is based on the use of the vertical registers as storage, so that a small number of additional frames, comprising reduced-spatial-resolution images obtained via a specific sampling operation, can be buffered. Hence, the interval time of the recorded series of images depends only on the exposure and vertical transfer times, and the burst frame rate can thus be increased significantly. A prototype camera based on this method was designed as part of this study, exhibiting a burst rate of up to 250,000 frames per second (fps) and a capacity to record three continuous images. This device exhibits a speed enhancement of approximately 16,000 times compared with the conventional output speed, with a spatial resolution reduction of only 1/4.
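    Since the burst interval is set by the exposure and vertical transfer times alone, the achievable rate follows directly from those two numbers. The split of the 4 µs interval below is hypothetical, chosen only to reproduce the reported 250,000 fps.

```python
def burst_frame_rate(exposure_s, vertical_transfer_s):
    """Burst frame rate in the vertical-register-storage scheme: the
    interval is exposure plus vertical transfer, with no full readout."""
    return 1.0 / (exposure_s + vertical_transfer_s)

# Hypothetical 2 us exposure + 2 us vertical transfer = 4 us interval.
print(round(burst_frame_rate(2e-6, 2e-6)))
```

    A conventional full-frame readout, by contrast, would put the entire sensor output time into every frame interval, which is the bottleneck the method removes.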

  10. A trillion frames per second: the techniques and applications of light-in-flight photography.

    Science.gov (United States)

    Faccio, Daniele; Velten, Andreas

    2018-06-14

    Cameras capable of capturing videos at a trillion frames per second make it possible to freeze light in motion, a very counterintuitive capability when related to our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes that are hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media, and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on `light-in-flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow one to freeze its motion and thereby extract information that would otherwise be blurred out and lost. © 2018 IOP Publishing Ltd.

  11. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition almost instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
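    The reported spooling limit can be sanity-checked from the stated figures (1100×1100 16-bit frames, 75 fps, three cameras, 128 GB of RAM). The sketch below is an upper bound that ignores frame headers and OS overhead, which plausibly accounts for the gap between it and the reported 2 min per camera; the decimal interpretation of "128 GB" is also an assumption.

```python
def recording_seconds(ram_bytes, width, height, bytes_per_pixel, fps, n_cameras):
    """Seconds of spooling a RAM buffer sustains when all cameras
    stream frames of the given size at the given rate."""
    frame_bytes = width * height * bytes_per_pixel
    rate = frame_bytes * fps * n_cameras   # total bytes per second
    return ram_bytes / rate

# 128 GB of RAM, three cameras, 1100x1100 16-bit frames at 75 fps.
t = recording_seconds(128e9, 1100, 1100, 2, 75, 3)
print(round(t / 60, 1), "minutes")   # upper bound, roughly 4 minutes
```

    The same arithmetic shows why truncating the FoV from 2560×2160 to 1100×1100 matters: it cuts the data rate by more than a factor of four.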

  12. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    International Nuclear Information System (INIS)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S

    2016-01-01

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition almost instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  13. Behaviour of Strengthened RC Frames with Eccentric Steel Braced Frames

    Directory of Open Access Journals (Sweden)

    Kamanli Mehmet

    2017-01-01

    After the devastating earthquakes of recent years, strengthening of reinforced concrete buildings became an important research topic. Reinforced concrete buildings can be strengthened by steel braced frames, which may be arranged concentrically or eccentrically, as indicated in the Turkish Earthquake Code 2007. In this study, pushover analyses of one 1/3-scale reinforced concrete frame and four 1/3-scale reinforced concrete frames strengthened with internal eccentric steel braced frames were conducted using the SAP2000 program. According to the results of the analyses, the load-displacement curves of the specimens were compared and evaluated. Adding eccentric steel braces to the bare frame decreased the story drift and significantly increased strength, stiffness and energy dissipation capacity. With this strengthening method, the lateral load carrying capacity, stiffness and dissipated energy of the structure can be increased.

  14. Mobile device-based optical instruments for agriculture

    Science.gov (United States)

    Sumriddetchkajorn, Sarun

    2013-05-01

    Realizing that a current smart mobile device such as a cell phone or a tablet can be considered a pocket-size computer with a built-in digital camera, this paper reviews and demonstrates how a mobile device can function as a portable optical instrument for agricultural applications. The paper highlights several mobile device-based optical instruments designed for searching for small pests, measuring illumination level, analyzing the spectrum of light, identifying nitrogen status in the rice field, estimating chlorine in water, and determining the ripeness level of fruit. They are suitable for individual use as well as for small and medium enterprises.

  15. Mass media image of selected instruments of economic development

    Directory of Open Access Journals (Sweden)

    Kruliš Ladislav

    2016-07-01

    The goal of this paper is twofold. Firstly, two instruments of economic development - investment incentives and cluster initiatives - were compared according to the frequency of their occurrence in selected mass media sources in the Czech Republic in the periods 2004-2005 and 2011-2012. Secondly, the mass media image of these two instruments of economic development was evaluated with respect to frames deductively constructed from a literature review. The findings point to a higher occurrence of mass media articles/news dealing with investment incentives. These articles/news were, additionally, more controversial and covered a wider spectrum of frames. Politicians were a relatively more frequent type of actor creating the media message in these articles/news. In contrast, the mass media articles/news concerning cluster initiatives typically created a frame of positive effects of clusters. The messages were told either by economic experts or by public authority representatives closely connected with cluster initiatives. The spatial origin of these messages was rather limited. The definitional vagueness and the intangible, uncontroversial nature of cluster initiatives restrained their media appeal.

  16. Bio-inspired motion detection in an FPGA-based smart camera module

    International Nuclear Information System (INIS)

    Koehler, T; Roechter, F; Moeller, R; Lindemann, J P

    2009-01-01

    Flying insects, despite their relatively coarse vision and tiny nervous systems, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, so that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation, can be performed by the same compact device
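    A single correlation-type EMD of the kind tiled across the FPGA can be sketched as delay-and-multiply between two neighbouring receptors, with the two mirror-symmetric arms subtracted to give a direction-selective output. The toy discrete-time version below uses a one-sample delay; the hardware uses configurable temporal filters instead.

```python
def emd_response(left, right, delay=1):
    """Correlation-type (Hassenstein-Reichardt) EMD: each arm multiplies
    the delayed signal of one receptor with the undelayed signal of its
    neighbour; subtracting the arms yields a direction-selective output."""
    out = []
    for t in range(delay, len(left)):
        arm1 = left[t - delay] * right[t]   # preferred direction
        arm2 = right[t - delay] * left[t]   # null direction
        out.append(arm1 - arm2)
    return out

# A brightness pulse moving left-to-right: it hits the left receptor at
# t=1 and the right receptor one time step later.
left = [0, 1, 0, 0]
right = [0, 0, 1, 0]
print(sum(emd_response(left, right)))   # positive: preferred direction
```

    Summing many such detector outputs over the image approximates the global flow signals that the fly's wide-field cells integrate.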

  17. Quantifying geological processes on Mars - Results of the high resolution stereo camera (HRSC) on Mars express

    NARCIS (Netherlands)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; De Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K. D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-01-01

    This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevation models on a local to regional scale

  18. Comparison of monthly nighttime cloud fraction products from MODIS and AIRS and ground-based camera over Manila Observatory (14.64N, 121.07E)

    Science.gov (United States)

    Gacal, G. F. B.; Lagrosas, N.

    2017-12-01

    Cloud detection nowadays is primarily achieved using various sensors aboard satellites, including MODIS Aqua, MODIS Terra, and AIRS, with products that include nighttime cloud fraction. Ground-based instruments are, however, only secondary to these satellites when it comes to cloud detection. Nonetheless, these ground-based instruments (e.g., LIDARs, ceilometers, and sky cameras) offer significant datasets on a particular region's cloud cover. For nighttime cloud detection, satellite-based instruments are more reliably and prominently used than ground-based ones; if a ground-based instrument is operated at night, it ought to produce reliable scientific datasets. The objective of this study is to compare the results of a nighttime ground-based instrument (a sky camera) with those of MODIS Aqua and MODIS Terra. A Canon PowerShot A2300 is placed on top of Manila Observatory (14.64N, 121.07E) and is configured to take images of the night sky at 5-min intervals. To detect pixels with clouds, the pictures are converted to grayscale format and a thresholding technique is used to separate cloud pixels from non-cloud pixels: if the pixel value is greater than 17, it is considered cloud; otherwise, non-cloud (Gacal et al., 2016). This algorithm is applied to the data gathered from Oct 2015 to Oct 2016. A scatter plot between the satellite cloud fraction over the area bounded by 14.2877N, 120.9869E and 14.7711N, 121.4539E and the ground-measured cloud cover is graphed to find the monthly correlation. During the wet season (June - November), the satellite nighttime cloud fraction vs. ground-measured cloud cover produces acceptable R2 values (Aqua = 0.74, Terra = 0.71, AIRS = 0.76). During the dry season, however, poor R2 values are obtained (AIRS = 0.39, Aqua & Terra = 0.01). The high correlation during the wet season can be attributed to a high probability that the camera and satellite see the same clouds
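    The thresholding step is simple enough to state in a few lines. The sketch below assumes an already grayscale-converted frame and uses the threshold value 17 quoted from Gacal et al. (2016); the frame itself is synthetic.

```python
import numpy as np

CLOUD_THRESHOLD = 17  # grayscale value from Gacal et al. (2016)

def cloud_cover(gray_frame):
    """Fraction of pixels classified as cloud in a grayscale night image:
    a pixel brighter than the threshold counts as cloud."""
    return float(np.mean(gray_frame > CLOUD_THRESHOLD))

# Synthetic frame: upper half overcast, lower half clear sky.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[:240, :] = 120
print(cloud_cover(frame))   # 0.5
```

    The per-image cloud-cover fractions are then averaged per month before being plotted against the satellite nighttime cloud fraction.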

  19. Multivariate wavelet frames

    CERN Document Server

    Skopina, Maria; Protasov, Vladimir

    2016-01-01

    This book presents a systematic study of multivariate wavelet frames with matrix dilation, in particular, orthogonal and bi-orthogonal bases, which are a special case of frames. Further, it provides algorithmic methods for the construction of dual and tight wavelet frames with a desirable approximation order, namely compactly supported wavelet frames, which are commonly required by engineers. It particularly focuses on methods of constructing them. Wavelet bases and frames are actively used in numerous applications such as audio and graphic signal processing, compression and transmission of information. They are especially useful in image recovery from incomplete observed data due to the redundancy of frame systems. The construction of multivariate wavelet frames, especially bases, with desirable properties remains a challenging problem as although a general scheme of construction is well known, its practical implementation in the multidimensional setting is difficult. Another important feature of wavelet is ...

  20. Design of the high resolution optical instrument for the Pleiades HR Earth observation satellites

    Science.gov (United States)

    Lamard, Jean-Luc; Gaudin-Delrieu, Catherine; Valentini, David; Renard, Christophe; Tournier, Thierry; Laherrere, Jean-Marc

    2017-11-01

    As part of its contribution to Earth observation from space, ALCATEL SPACE designed, built and tested the high resolution cameras for the European intelligence satellites HELIOS I and II. Through these programmes, ALCATEL SPACE enjoys an international reputation; its capability and experience in high resolution instrumentation is recognised by most customers. Coming after the SPOT program, it was decided to go ahead with the PLEIADES HR program. PLEIADES HR is the optical high resolution component of a larger optical and radar multi-sensor system, ORFEO, which is developed in cooperation between France and Italy for dual civilian and defense use. ALCATEL SPACE has been entrusted by CNES with the development of the high resolution camera of the Earth observation satellites PLEIADES HR. The first optical satellite of the PLEIADES HR constellation will be launched in mid-2008, the second will follow in 2009. To minimize development costs, a mini-satellite approach has been selected, leading to a compact concept for the camera design. The paper describes the design and performance budgets of this novel high resolution, large field of view optical instrument, with emphasis on its technological features. This new generation of camera represents a breakthrough in comparison with the previous SPOT cameras owing to a significant step in on-ground resolution, which approaches the capabilities of aerial photography. Recent advances in detector technology, optical fabrication and electronics make it possible for the PLEIADES HR camera to achieve its image quality performance goals while staying within weight and size restrictions normally considered suitable only for much lower performance systems. This camera design delivers superior performance using an innovative low power, low mass, scalable architecture, which provides a versatile approach for a variety of imaging requirements and allows for a wide number of possibilities of accommodation with a mini

  1. Using XML and Java for Astronomical Instrumentation Control

    Science.gov (United States)

    Ames, Troy; Koons, Lisa; Sall, Ken; Warsaw, Craig

    2000-01-01

    Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests, increasing software maintenance costs, and the instrument description is too tightly coupled with the details of implementation. NASA Goddard Space Flight Center is developing a general and highly extensible framework that applies to any kind of instrument that can be controlled by a computer. The software architecture combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML), a human-readable and machine-understandable way to describe structured data. A key aspect of the object-oriented architecture is software that is driven by an instrument description written in the Instrument Markup Language (IML). IML is used to describe the graphical user interfaces used to control and monitor the instrument, the command sets and command formats, the data streams, and the communication mechanisms. Although the current effort is targeted at the High-resolution Airborne Wideband Camera, a first-light instrument of the Stratospheric Observatory for Infrared Astronomy, the framework is designed to be generic and extensible so that it can be applied to any instrument.
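    The source does not give the IML schema, so the element and attribute names below are hypothetical, but they illustrate the core idea: command definitions are read from a declarative XML instrument description rather than hard-coded. The sketch uses only the Python standard library.

```python
import xml.etree.ElementTree as ET

# Hypothetical IML-style description (the real Instrument Markup
# Language schema differs); the GUI, command set and data streams are
# all meant to be generated from such a document.
IML = """
<instrument name="ExampleCamera">
  <command name="SET_EXPOSURE">
    <argument name="milliseconds" type="int" min="1" max="5000"/>
  </command>
  <command name="TAKE_FRAME"/>
</instrument>
"""

root = ET.fromstring(IML)
# Build a command table: command name -> list of argument names.
commands = {c.get("name"): [a.get("name") for a in c.findall("argument")]
            for c in root.findall("command")}
print(root.get("name"), commands)
```

    A control system built this way can accept a new command simply by editing the XML document, which is the maintainability argument the paper makes.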

  2. System of technical vision for autonomous unmanned aerial vehicles

    Science.gov (United States)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in the frames from a camera mounted on a UAV. The trained classifier is invariant to changes in rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the technical vision system to determine more accurately the location of the objects of interest and their movement relative to the camera.
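    Particle (blob) analysis of the kind described reduces to labeling connected foreground regions and measuring each one. A minimal Python sketch, assuming a binary image as nested lists (the abstract's LabVIEW implementation is not reproduced here):

```python
from collections import deque

def particle_analysis(binary):
    """Label 4-connected foreground regions and return their pixel areas,
    in the spirit of particle analysis (illustrative sketch)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    areas, next_label = {}, 1
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                # Breadth-first flood fill of one particle.
                q = deque([(y, x)])
                labels[y][x] = next_label
                area = 0
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                areas[next_label] = area
                next_label += 1
    return areas

img = [[0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 1]]
print(particle_analysis(img))  # -> {1: 3, 2: 1}
```

    Region areas (and, with small extensions, centroids and bounding boxes) are what allow regions of different sizes to be classified.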

  3. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that were recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
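    The restoration step can be illustrated as a linear inverse problem: each sensor pixel records a known mixture of scene "mixels", and the scene is recovered by inverting the mixing matrix. A toy sketch with an assumed two-pixel geometry (the real camera's mixing model and noise handling are considerably more involved):

```python
def restore_mixels(recorded, weights):
    """Solve W @ s = recorded for scene values s by Gauss-Jordan elimination.
    weights[i][j] is the known fraction of scene mixel j that keystone
    shifts into sensor pixel i (an idealized, invertible mixing model)."""
    n = len(weights)
    # Augmented matrix [W | recorded].
    a = [row[:] + [recorded[i]] for i, row in enumerate(weights)]
    for col in range(n):
        # Partial pivoting for numerical stability.
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

# Two scene mixels with values 8 and 4; a keystone shift of a quarter
# pixel makes sensor pixel 0 record a known 75/25 mixture (toy geometry,
# not the instrument's actual model).
W = [[0.75, 0.25],
     [0.0,  1.0]]
recorded = [0.75 * 8 + 0.25 * 4, 4.0]
print(restore_mixels(recorded, W))  # -> [8.0, 4.0]
```

    The point of the light-mixing chambers is precisely to make the weights known and well-conditioned, so this inversion is stable.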

  4. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  5. Frames and outer frames for Hilbert C^*-modules

    OpenAIRE

    Arambašić, Ljiljana; Bakić, Damir

    2015-01-01

    The goal of the present paper is to extend the theory of frames for countably generated Hilbert $C^*$-modules over arbitrary $C^*$-algebras. In investigating the non-unital case we introduce the concept of outer frame as a sequence in the multiplier module $M(X)$ that has the standard frame property when applied to elements of the ambient module $X$. Given a Hilbert $\mathcal{A}$-module $X$, we prove that there is a bijective correspondence of the set of all adjointable surjections from the generalize...
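    For orientation, the standard frame condition in the familiar Hilbert-space case, and the module analogue this line of work builds on, can be written as follows (a textbook definition, not a result specific to this paper):

```latex
% A sequence (x_n) in a Hilbert space H is a frame if there exist
% constants 0 < C <= D such that, for every x in H,
C \,\|x\|^2 \;\le\; \sum_{n} |\langle x, x_n \rangle|^2 \;\le\; D \,\|x\|^2 .
% In a Hilbert C*-module X over A, the standard frame condition replaces
% |<x, x_n>|^2 by <x, x_n><x_n, x> and ||x||^2 by <x, x>:
C \,\langle x, x \rangle \;\le\; \sum_{n} \langle x, x_n \rangle \langle x_n, x \rangle \;\le\; D \,\langle x, x \rangle ,
% with the inequalities taken in the positive cone of A.
```

    An outer frame, as introduced in the abstract, allows the elements $x_n$ to live in the multiplier module $M(X)$ rather than in $X$ itself, while the condition above is still imposed for all $x$ in $X$.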

  6. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    Science.gov (United States)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors are trained on actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
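    The benefit of on-chip electron multiplication can be made concrete with a simple noise model: the gain M divides the read noise referred to the input, while the multiplication process costs an excess noise factor of about √2 on the shot noise. A sketch under those textbook assumptions (the numbers are illustrative, not this camera's measured figures):

```python
import math

def emccd_snr(signal_e, read_noise_e, em_gain, excess_noise=math.sqrt(2)):
    """Per-pixel SNR under a simple EMCCD noise model: the on-chip gain
    divides the input-referred read noise, at the cost of an excess
    noise factor of about sqrt(2) on the shot noise."""
    shot_var = (excess_noise ** 2) * signal_e    # multiplied shot noise
    read_var = (read_noise_e / em_gain) ** 2     # input-referred read noise
    return signal_e / math.sqrt(shot_var + read_var)

# With 1000:1 gain, a 10 e- read noise contributes only 0.01 e- effectively,
# which is the "less than 1 electron" regime the abstract describes.
print(round(emccd_snr(25, 10, 1000), 2))                    # -> 3.54
# Conventional CCD for comparison: no gain, no multiplication noise.
print(round(emccd_snr(25, 10, 1, excess_noise=1.0), 2))     # -> 2.24
```

    At low signal levels the read-noise term dominates a conventional CCD, which is why the EM gain wins despite the excess noise penalty.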

  7. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  8. Recent developments in X-ray and neutron small-angle scattering instrumentation and data analysis

    International Nuclear Information System (INIS)

    Schelten, J.

    1978-01-01

    The developments in instrumentation and data analysis that have occurred in the field of small-angle X-ray and neutron scattering since 1973 are reviewed. For X-rays, the cone camera collimation was invented, synchrotrons and storage rings were demonstrated to be intense sources of X-radiation, and one- and two-dimensional position-sensitive detectors were interfaced to cameras with both point and line collimation. For neutrons, the collimators and detectors on the Juelich and Grenoble machines were improved, new D11-type instruments were built or are under construction at several sites, double-crystal instruments were set up, and various new machines have been proposed. Significant progress in data analysis and evaluation has been made through application of mathematical techniques such as the use of spline functions, error minimization with constraints, and linear programming. Several special experiments, unusual in respect to the anisotropy of the scattering pattern, gravitational effects, moving scatterers, and dynamic fast time slicing, are discussed. (Auth.)

  9. Two dimensional spatial distortion correction algorithm for scintillation GAMMA cameras

    International Nuclear Information System (INIS)

    Chaney, R.; Gray, E.; Jih, F.; King, S.E.; Lim, C.B.

    1985-01-01

    Spatial distortion in an Anger gamma camera originates fundamentally from the discrete nature of scintillation light sampling with an array of PMT's. Historically, digital distortion correction started with the method based on distortion measurement using a 1-D slit pattern and subsequent on-line bi-linear approximation with 64 x 64 look-up tables for X and Y. However, the X, Y distortions are inherently two-dimensional in nature, and thus the validity of this 1-D calibration method becomes questionable with the increasing distortion amplitude in association with the effort to get better spatial and energy resolutions. The authors have developed a new, accurate 2-D correction algorithm. This method involves the steps of: data collection from a 2-D orthogonal hole pattern, 2-D distortion vector measurement, 2-D Lagrangian polynomial interpolation, and transformation to the X, Y ADC frame. The impact of the numerical precision used in correction and the accuracy of bilinear approximation with varying look-up table size have been carefully examined through computer simulation by using the measured single-PMT light response function together with Anger positioning logic. Also the accuracy level of different-order Lagrangian polynomial interpolations for correction table expansion from hole centroids was investigated. The detailed algorithm and computer simulation are presented along with camera test results.
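    The on-line bilinear step described above can be sketched as follows: correction vectors stored at coarse grid nodes are interpolated to each event position and added to it. The grid size and table values below are illustrative, not the paper's calibration data:

```python
def correct_event(x, y, dx_lut, dy_lut, cell):
    """Correct an event position using distortion look-up tables sampled
    on a coarse grid (cell = grid spacing in ADC units), with bilinear
    interpolation between the four surrounding nodes (illustrative
    sketch of the on-line correction scheme)."""
    gx, gy = x / cell, y / cell
    i, j = int(gx), int(gy)
    fx, fy = gx - i, gy - j
    def bilerp(lut):
        return (lut[j][i]       * (1 - fx) * (1 - fy) +
                lut[j][i + 1]   * fx       * (1 - fy) +
                lut[j + 1][i]   * (1 - fx) * fy       +
                lut[j + 1][i + 1] * fx     * fy)
    return x + bilerp(dx_lut), y + bilerp(dy_lut)

# Toy 2x2-node tables: distortion grows linearly in x, is zero in y.
dx = [[0.0, 1.0],
      [0.0, 1.0]]
dy = [[0.0, 0.0],
      [0.0, 0.0]]
print(correct_event(32.0, 32.0, dx, dy, cell=64))  # -> (32.5, 32.0)
```

    The paper's point is that the correction vectors themselves must be measured on a genuinely 2-D pattern (orthogonal holes) rather than inferred from 1-D slit scans.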

  10. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
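    A combined benchmarking score of this kind can be sketched as a weighted average of pre-normalized metrics. The paper's actual metric set and weighting are not given in the abstract, so the names, scales, and weights below are illustrative:

```python
def benchmark_score(metrics, weights):
    """Combine normalized quality and speed metrics into a single score
    as a weighted average (illustrative stand-in for the paper's
    combined benchmarking metric)."""
    assert set(metrics) == set(weights), "every metric needs a weight"
    total_w = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in metrics) / total_w

# Hypothetical phone, each metric pre-normalized to a 0-100 scale.
phone = {
    "sharpness": 80.0,
    "visual_noise": 70.0,        # higher = less visible noise
    "shot_to_shot_speed": 60.0,
    "autofocus_speed": 90.0,
}
weights = {"sharpness": 2, "visual_noise": 2,
           "shot_to_shot_speed": 1, "autofocus_speed": 1}
print(benchmark_score(phone, weights))  # -> 75.0
```

    Normalizing each raw measurement to a common scale before weighting is what makes image-quality and speed metrics commensurable in a single score.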

  11. On-Line High Dose-Rate Gamma Ray Irradiation Test of the CCD/CMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In this paper, test results of gamma ray irradiation of CCD/CMOS cameras are described. From the CAMS (containment atmospheric monitoring system) data of the Fukushima Dai-ichi nuclear power plant station, we found that the gamma ray dose rate when the hydrogen explosions occurred in reactors 1-3 was about 160 Gy/h. If an emergency response robot for the management of a severe accident at a nuclear power plant is sent into the reactor area to grasp the situation inside the reactor building and to take precautionary measures against the release of radioactive materials, the CCD/CMOS cameras carried by the robot serve as the eyes of the emergency response robot. In the case of the Japanese Quince robot system, which was sent to investigate the situation on the refueling floor of the unit 2 reactor building, 7 CCD/CMOS cameras are used. 2 CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation. And 2 CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics such as flippers and crawlers. A CCD camera with wide field of view optics is used for monitoring the status of the communication (VDSL) cable reel. And another 2 CCD cameras are assigned to reading the indications of the radiation dosimeter and the instrument. Under the preceding assumptions, a major problem which arises when dealing with CCD/CMOS cameras in severe accident situations at a nuclear power plant is the presence of high dose-rate gamma irradiation fields. In the case of DBA (design basis accident) situations at a nuclear power plant, in order to use a CCD/CMOS camera as an ad-hoc monitoring unit in the vicinity of high-radioactivity structures and components of the nuclear reactor area, a robust survivability of this camera in such intense gamma-radiation fields should therefore be verified. 
The CCD/CMOS cameras of various types were gamma irradiated at a

  12. The Wide Field Imager instrument for Athena

    Science.gov (United States)

    Meidinger, Norbert; Barbera, Marco; Emberger, Valentin; Fürmetz, Maria; Manhart, Markus; Müller-Seidlitz, Johannes; Nandra, Kirpal; Plattner, Markus; Rau, Arne; Treberspurg, Wolfgang

    2017-08-01

    ESA's next large X-ray mission ATHENA is designed to address the Cosmic Vision science theme 'The Hot and Energetic Universe'. It will provide answers to the two key astrophysical questions of how ordinary matter assembles into the large-scale structures we see today and how black holes grow and shape the Universe. The ATHENA spacecraft will be equipped with two focal plane cameras, a Wide Field Imager (WFI) and an X-ray Integral Field Unit (X-IFU). The WFI instrument is optimized for state-of-the-art resolution spectroscopy over a large field of view of 40 arcmin x 40 arcmin and high count rates up to and beyond 1 Crab source intensity. The cryogenic X-IFU camera is designed for high-spectral-resolution imaging. Both cameras alternately share a mirror system based on silicon pore optics with a focal length of 12 m and a large effective area of about 2 m² at an energy of 1 keV. Although the mission is still in phase A, i.e. studying the feasibility and developing the necessary technology, the definition and development of the instrumentation has already made significant progress. The herein described WFI focal plane camera covers the energy band from 0.2 keV to 15 keV with 450 μm thick fully depleted back-illuminated silicon active pixel sensors of DEPFET type. The spatial resolution will be provided by one million pixels, each with a size of 130 μm x 130 μm. The time resolution requirement for the WFI large detector array is 5 ms and for the WFI fast detector 80 μs. The large effective area of the mirror system will be complemented by a high quantum efficiency above 90% for medium and higher energies. The status of the various WFI subsystems to achieve this performance will be described and recent changes will be explained here.

  13. TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope

    Science.gov (United States)

    Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.

    Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle Helium cryo-cooled imaging camera equipped with a Raytheon 512×512 pixels InSb Aladdin III Quadrant focal plane array (FPA) having sensitivity to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on the calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera has been released to the Indian and Belgian astronomical community for science observations since 2017 May. The camera offers a field-of-view (FoV) of ~86.5″ × 86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond, with the best seeing of ~0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands comparable to other 4 m class telescopes available world-wide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λcen ~ 3.59 μm). Hence, the camera could be a good complementary instrument to observe the bright nbL-band sources that are saturated in the Spitzer-Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.

  14. On frame multiresolution analysis

    DEFF Research Database (Denmark)

    Christensen, Ole

    2003-01-01

    We use the freedom in frame multiresolution analysis to construct tight wavelet frames (even in the case where the refinable function does not generate a tight frame). In cases where a frame multiresolution does not lead to a construction of a wavelet frame we show how one can nevertheless...

  15. The Effect of Adverse Selection and Negative Framing on the Tendency toward Escalation of Commitment

    Directory of Open Access Journals (Sweden)

    Gede Wira Kusuma

    2017-02-01

    Full Text Available Escalation of commitment is a decision to increase or expand the commitment to a project or investment even though the project or investment indicates failure. This research aims to obtain empirical evidence of the effects of adverse selection and negative framing on the tendency toward escalation of commitment. The experimental design used in this research is a 2 x 2 factorial design with instruments in the form of cases. Participants in this research were Magister of Accounting and Magister of Management students, as proxies for managers, chosen by purposive sampling, totalling 196 participants. This research uses two-way ANOVA analysis. This research proves that adverse selection and negative framing have an influence on the tendency toward escalation of commitment.

  16. Multi-capability color night vision HD camera for defense, surveillance, and security

    Science.gov (United States)

    Pang, Francis; Powell, Gareth; Fereyre, Pierre

    2015-05-01

    e2v has developed a family of high performance cameras based on our next generation CMOS imagers that provide multiple features and capabilities to meet the range of challenging imaging applications in defense, surveillance, and security markets. Two resolution sizes are available: 1920x1080 with 5.3 μm pixels, and an ultra-low light level version at 1280x1024 with 10 μm pixels. Each type is available in either monochrome or e2v's unique Bayer-pattern color version. The camera is well suited to accommodate many of the high demands for defense, surveillance, and security applications: compact form factor (SWAP+C), color night vision performance (down to 10⁻² lux), ruggedized housing, global shutter, low read noise (<6e- in global shutter mode and <2.5e- in rolling shutter mode), 60 Hz frame rate, and high QE, especially in the enhanced NIR range (up to 1100 nm). Other capabilities include active illumination and range gating. This paper will describe all the features of the sensor and the camera. It will be followed with a presentation of the latest test data from the current developments. Then, it will conclude with a description of how these features can be easily configured to meet many different applications. With this development, we can tune the design rather than create a full customization, making it more beneficial for many of our customers and their custom applications.

  17. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  18. German activities in optical space instrumentation

    Science.gov (United States)

    Hartmann, G.

    2018-04-01

    In the years of space exploration since the mid-sixties, a wide experience in optical space instrumentation has developed in Germany. This experience ranges from large telescopes in the 1 m and larger category, with the accompanying focal plane detectors and spectrometers for all regimes of the electromagnetic spectrum (infrared, visible, ultraviolet, x-rays), to miniature cameras for cometary and planetary explorations. The technologies originally developed for space science are now also utilized in the fields of earth observation and even optical telecommunication. The presentation will cover all these areas, with examples for specific technological or scientific highlights. Special emphasis will be given to the current state-of-the-art instrumentation technologies in scientific institutions and industry, and to the future perspective in approved and planned projects.

  19. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires higher fringe density of the projected patterns, which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
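    The conventional three-step PSP formula the system builds on recovers the wrapped phase from three fringe images shifted by 120°; the quad-camera stereo unwrapping stage itself is not reproduced in this sketch:

```python
import math

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from a standard three-step phase-shifting sequence
    with shifts of -2*pi/3, 0, +2*pi/3 (the conventional formula)."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthesize the three fringe intensities for a known phase and recover it.
A, B, phi = 100.0, 50.0, 1.0           # bias, modulation, true phase
i1 = A + B * math.cos(phi - 2 * math.pi / 3)
i2 = A + B * math.cos(phi)
i3 = A + B * math.cos(phi + 2 * math.pi / 3)
print(round(wrapped_phase(i1, i2, i3), 6))  # -> 1.0
```

    The recovered value is only defined modulo 2π; resolving that ambiguity for dense fringes is exactly what the paper's quad-camera phase consistency checks provide.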

  20. Polyphonic pitch detection and instrument separation

    Science.gov (United States)

    Bay, Mert; Beauchamp, James W.

    2005-09-01

    An algorithm for polyphonic pitch detection and musical instrument separation is presented. Each instrument is represented as a time-varying harmonic series. Spectral information is obtained from a monaural input signal using a spectral peak tracking method. Fundamental frequencies (F0s) for each time frame are estimated from the spectral data using an Expectation Maximization (EM) algorithm with a Gaussian mixture model representing the harmonic series. The method first estimates the most predominant F0, suppresses its series in the input, and then the EM algorithm is run iteratively to estimate each next F0. Collisions between instrument harmonics, which frequently occur, are predicted from the estimated F0s, and the resulting corrupted harmonics are ignored. The amplitudes of these corrupted harmonics are replaced by harmonics taken from a library of spectral envelopes for different instruments, where the spectrum which most closely matches the important characteristics of each extracted spectrum is chosen. Finally, each voice is separately resynthesized by additive synthesis. This algorithm is demonstrated for a trio piece that consists of 3 different instruments.
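    The iterative estimate-and-suppress loop can be sketched with a simple harmonic-summation score standing in for the EM step (the Gaussian-mixture model and spectral-envelope library of the actual algorithm are omitted; the 1/h weighting is a common guard against octave errors):

```python
def estimate_f0s(peaks, candidates, n_voices, n_harm=5, tol=0.03):
    """Iteratively estimate predominant fundamentals from spectral peaks
    by harmonic summation, suppressing each found voice before the next
    pass -- a simplified stand-in for the EM procedure in the abstract."""
    peaks = dict(peaks)                 # {frequency_hz: amplitude}
    f0s = []
    for _ in range(n_voices):
        def score(f0):
            s = 0.0
            for h in range(1, n_harm + 1):
                for f, a in peaks.items():
                    if abs(f - h * f0) <= tol * h * f0:
                        s += a / h      # weight down high harmonics
            return s
        best = max(candidates, key=score)
        f0s.append(best)
        # Suppress the matched harmonics of the found voice.
        for h in range(1, n_harm + 1):
            for f in list(peaks):
                if abs(f - h * best) <= tol * h * best:
                    peaks[f] = 0.0
    return f0s

# Two voices at 220 Hz and 330 Hz; note 660 Hz is a collision between
# harmonic 3 of 220 and harmonic 2 of 330, as the abstract discusses.
spectrum = [(220, 1.0), (440, 0.5), (660, 0.8),
            (330, 0.9), (990, 0.3)]
print(estimate_f0s(spectrum, candidates=[110, 220, 330, 440], n_voices=2))
# -> [220, 330]
```

    Because suppression zeroes the colliding 660 Hz peak, the second voice's spectrum at that harmonic is corrupted; this is where the real algorithm substitutes a harmonic from its instrument spectral-envelope library before resynthesis.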

  1. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  2. Framed School--Frame Factors, Frames and the Dynamics of Social Interaction in School

    Science.gov (United States)

    Persson, Anders

    2015-01-01

    This paper aims to show how the Goffman frame perspective can be used in an analysis of school and education and how it can be combined, in such analysis, with the frame factor perspective. The latter emphasizes factors that are determined outside the teaching process, while the former stresses how actors organize their experiences and define…

  3. Application of ultra-fast high-resolution gated-image intensifiers to laser fusion studies

    International Nuclear Information System (INIS)

    Lieber, A.J.; Benjamin, R.F.; Sutphin, H.D.; McCall, G.H.

    1975-01-01

    Gated-image intensifiers for fast framing have found high utility in laser-target interaction studies. X-ray pinhole camera photographs, which can record asymmetries of laser-target interactions, have been instrumental in further system design. High-resolution, high-speed x-ray images of laser-irradiated targets are formed using pinhole optics and electronically amplified by proximity-focused channelplate intensifiers before being recorded on film. Spectral resolution is obtained by filtering. In these applications shutter duration is determined by source duration. Electronic gating serves to reduce background, thereby enhancing the signal-to-noise ratio. Cameras are used to view the self-light of the interaction but may also be used for shadowgraphs. Sources for shadowgraphs may be sequenced to obtain a series of pictures with effective rates of 10^10 frames/s. Multiple apertures have been used to obtain stereo x-ray views, yielding three-dimensional information about the interactions. (author)

  4. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    International Nuclear Information System (INIS)

    Benitez, D; Gaydecki, P; Quek, S; Torres, V

    2007-01-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory system comprises a 2D array of 33 x 33 solid-state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  5. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    Science.gov (United States)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2007-07-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory system comprises a 2D array of 33 x 33 solid-state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  6. An overview of instrumentation for the Large Binocular Telescope

    Science.gov (United States)

    Wagner, R. Mark

    2012-09-01

    An overview of instrumentation for the Large Binocular Telescope (LBT) is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' x 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the left and right direct F/15 Gregorian foci incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 2000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCI), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at the left and right front bent F/15 Gregorian foci and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction-limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development that can utilize the full 23-m baseline of the LBT include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). LBTI is currently undergoing commissioning on the LBT and utilizing the installed adaptive secondary mirrors in both single-sided and two-sided beam combination modes. In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra-high-resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. Over the past four years the LBC pair, LUCI1, and MODS1 have been commissioned and are now scheduled for routine partner science observations. The delivery of both LUCI2 and MODS2 is anticipated before the end of 2012. The

  7. Evaluating the effectiveness of impact assessment instruments

    DEFF Research Database (Denmark)

    Cashmore, Matthew; Richardson, Tim; Hilding-Ryedvik, Tuija

    2010-01-01

    The central role of impact assessment instruments globally in policy integration initiatives has been cemented in recent years. Associated with this trend, but also reflecting political emphasis on greater accountability in certain policy sectors and a renewed focus on economic competitiveness... to sharpen effectiveness evaluation theory for impact assessment instruments this article critically examines the neglected issue of their political constitution. Analytical examples are used to concretely explore the nature and significance of the politicisation of impact assessment. It is argued that raising awareness about the political character of impact assessment instruments, in itself, is a vital step in advancing effectiveness evaluation theory. Broader theoretical lessons on the framing of evaluation research are also drawn from the political analysis. We conclude that, at least within...

  8. Safeguarding on-power fuelled reactors - instrumentation and techniques

    International Nuclear Information System (INIS)

    Waligura, A.; Konnov, Y.; Smith, R.M.; Head, D.A.

    1977-05-01

    Instrumentation and techniques applicable to safeguarding reactors that are fuelled on-power, particularly the CANDU type, have been developed. A demonstration is being carried out at the Douglas Point Nuclear Generating Station in Canada. Irradiated nuclear materials in certain areas - the reactor and spent fuel storage bays - are monitored using photographic and television cameras, and seals. Item accounting is applied by counting spent-fuel bundles during transfer from the reactor to the storage bay and by placing these spent-fuel bundles in a sealed enclosure. Provision is made for inspection and verification of the bundles before sealing. The reactor's power history is recorded by a Track-Etch power monitor. Redundancy is provided so that the failure of any single piece of equipment does not invalidate the entire safeguards system. Several safeguards instruments and devices have been developed and evaluated. These include a super-8-mm surveillance camera system, a television surveillance system, a spent-fuel bundle counter, a device to detect dummy fuel bundles, a cover for enclosing a stack of spent-fuel bundles, and a seal suitable for underwater installation and ultrasonic interrogation. (author)

  9. A new X-ray pinhole camera for energy dispersive X-ray fluorescence imaging with high-energy and high-spatial resolution

    Energy Technology Data Exchange (ETDEWEB)

    Romano, F.P., E-mail: romanop@lns.infn.it [IBAM, CNR, Via Biblioteca 4, 95124 Catania (Italy); INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Altana, C. [INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Dipartimento di Fisica e Astronomia, Università di Catania, Via S. Sofia 64, 95123 Catania (Italy); Cosentino, L.; Celona, L.; Gammino, S.; Mascali, D. [INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Pappalardo, L. [IBAM, CNR, Via Biblioteca 4, 95124 Catania (Italy); INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Rizzo, F. [INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Dipartimento di Fisica e Astronomia, Università di Catania, Via S. Sofia 64, 95123 Catania (Italy)

    2013-08-01

    A new X-ray pinhole camera for the Energy Dispersive X-ray Fluorescence (ED-XRF) imaging of materials with high energy and spatial resolution was designed and developed. It consists of a back-illuminated, deep-depleted CCD detector (composed of 1024 × 1024 pixels with a lateral size of 13 μm) coupled to a 70 μm laser-drilled pinhole collimator positioned between the sample under analysis and the CCD. The X-ray pinhole camera works in a coaxial geometry allowing a wide range of magnification values. The characteristic X-ray fluorescence is induced on the samples by irradiation with an external X-ray tube working at a maximum power of 100 W (50 kV and 2 mA operating conditions). The spectroscopic capabilities of the X-ray pinhole camera were accurately investigated. The energy response and energy calibration of the CCD detector were determined by irradiating pure target materials emitting characteristic X-rays in the energy working domain of the system (between 3 keV and 30 keV). Measurements were performed using multi-frame acquisition in single-photon counting mode. The characteristic X-ray spectra were obtained by automated processing of the acquired images. The energy resolution measured at the Fe–Kα line is 157 eV. The use of the X-ray pinhole camera for 2D-resolved elemental analysis was investigated using reference patterns of different materials and geometries. The possibility of elemental mapping of samples over an area of up to 3 × 3 cm² was demonstrated. Finally, the spatial resolution of the pinhole camera was measured by analyzing the profile function of a sharp edge. The spatial resolution determined at magnification values of 3.2× and 0.8× (used as testing values) is about 90 μm and 190 μm, respectively. - Highlights: • We developed an X-ray pinhole camera for 2D X-ray fluorescence imaging. • X-ray spectra are obtained by multi-frame acquisition in single-photon mode. • The energy resolution measured at the Fe–Kα line is 157 eV.
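
    The single-photon-counting acquisition described above can be sketched as follows: the flux is kept low enough that each pixel records at most one photon per frame, so a pixel's value (in ADU) is proportional to the photon energy, and a spectrum is accumulated by histogramming hits across many frames. All names, the event threshold, and the ADU-per-keV gain below are illustrative assumptions, not the authors' actual processing pipeline:

    ```python
    import numpy as np

    def spc_spectrum(frames, adu_per_kev, threshold_adu=10.0,
                     n_bins=512, e_max_kev=32.0):
        """Accumulate an X-ray energy spectrum from sparse CCD frames.

        Each pixel above threshold is treated as one isolated photon
        event whose ADU value maps linearly to energy (charge sharing
        between neighbouring pixels is ignored in this sketch).
        """
        edges = np.linspace(0.0, e_max_kev, n_bins + 1)
        hist = np.zeros(n_bins, dtype=np.int64)
        for frame in frames:
            hits = frame[frame > threshold_adu]   # isolated photon events
            energies = hits / adu_per_kev         # ADU -> keV calibration
            hist += np.histogram(energies, bins=edges)[0]
        return edges, hist
    ```

    With a calibrated gain, the peak positions of such a histogram give the energy calibration check (e.g. the Fe–Kα line), and its peak width gives the quoted energy resolution.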

  10. A charged-particle manipulator utilizing a co-axial tube electrodynamic trap with an integrated camera

    International Nuclear Information System (INIS)

    Jiang, L; Pau, S; Whitten, W B

    2011-01-01

    A charged-particle manipulator was designed and fabricated with an integrated imaging camera allowing real-time in-situ monitoring of trapped-particle motion even when the trap device is in motion or rotation. The trap device was made of two co-axial electrically conductive tubes with diameters of 5.5 mm and 7 mm for the inner tube and outer tube, respectively; the imaging camera with its optical fiber bundle was integrated within the tubular trap device to realize a single instrument functioning as a manipulator. The motion of suspended microparticles of 3 μm to 50 μm in diameter can be monitored using the integrated camera regardless of the trap device's orientation. This manipulator provides the capability of controlled manipulation of trapped particles by tuning the operating conditions while monitoring the feedback of real-time particle motion. Imaging of suspended particles was not interrupted while the manipulator was translated and/or rotated. This integrated manipulator can be used for charged-particle transport and repositioning.
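
    The confinement behind such an electrodynamic (Paul-type) trap is commonly summarized by the dimensionless Mathieu stability parameter. The sketch below computes it in the ideal-quadrupole approximation; the co-axial tube geometry of this device deviates from a quadrupole (absorbed here into a geometric factor), and the drive voltage, frequency, charge, and particle size are illustrative assumptions only:

    ```python
    import math

    def mathieu_q(charge_c, mass_kg, v_ac, omega_rad_s, r0_m, geom=1.0):
        """Dimensionless Mathieu q for an ideal quadrupole trap with AC
        amplitude v_ac and characteristic radius r0_m:
            q = 4*Q*V / (m * Omega^2 * r0^2)
        Deviations from quadrupole geometry are folded into `geom`.
        """
        return geom * 4.0 * charge_c * v_ac / (mass_kg * omega_rad_s**2 * r0_m**2)

    def is_stable(q):
        """First stability region for a pure AC drive (a = 0): 0 < q < ~0.908."""
        return 0.0 < q < 0.908

    # Illustrative numbers: a 10 um diameter particle (density ~1000 kg/m^3)
    # carrying 1e4 elementary charges in a 50 Hz, 500 V drive, r0 = 3.5 mm.
    radius = 5e-6                                    # particle radius, m
    mass = 1000.0 * (4.0 / 3.0) * math.pi * radius**3
    charge = 1e4 * 1.602e-19                         # C
    q = mathieu_q(charge, mass, 500.0, 2 * math.pi * 50.0, 3.5e-3)
    ```

    Heavier or less-charged particles lower q, and raising the drive frequency lowers it quadratically; this is the tuning knob that lets such a trap hold the stated 3-50 μm range of particle sizes.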