WorldWideScience

Sample records for vidicon camera system

  1. Pseudo real-time coded aperture imaging system with intensified vidicon cameras

    International Nuclear Information System (INIS)

    Han, K.S.; Berzins, G.J.

    1977-01-01

    A coded image displayed on a TV monitor was used to directly reconstruct a decoded image. Both the coded and the decoded images were viewed with intensified vidicon cameras. The coded aperture was a 15-element nonredundant pinhole array. The coding and decoding were accomplished simultaneously during the scanning of a single 16-msec TV frame
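
The decoding the abstract describes can be illustrated numerically: for a nonredundant pinhole array, correlating the coded image with the aperture pattern recovers the source, because the array's autocorrelation approximates a delta function. The array geometry, sizes, and periodic-boundary assumption below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative 15-element pinhole array on a 32x32 grid (positions invented)
N = 32
rng = np.random.default_rng(0)
mask = np.zeros((N, N))
idx = rng.choice(N * N, size=15, replace=False)
mask[np.unravel_index(idx, (N, N))] = 1.0

scene = np.zeros((N, N))
scene[5, 7] = 1.0                      # a single point source

# Encoding: each pinhole projects a shifted copy of the scene
# (circular convolution, assuming periodic boundaries)
coded = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(mask)))

# Decoding: correlate the coded image with the mask; the nonredundant
# autocorrelation peaks sharply at zero lag, so the source is recovered
decoded = np.real(np.fft.ifft2(np.fft.fft2(coded) * np.conj(np.fft.fft2(mask))))
peak = np.unravel_index(np.argmax(decoded), decoded.shape)
```

The peak of the decoded image lands back at the source position, with low sidelobes set by the array's pairwise-difference counts.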

  2. Dual beam vidicon digitizer

    International Nuclear Information System (INIS)

    Evans, T.L.

    1976-01-01

    A vidicon waveform digitizer which can simultaneously digitize two independent signals has been developed. Either transient or repetitive waveforms can be digitized with this system. A dual beam oscilloscope is used as the signal input device. The light from the oscilloscope traces is optically coupled to a television camera, where the signals are temporarily stored prior to digitizing

  3. Extreme Ultraviolet Solar Images Televised In-Flight with a Rocket-Borne SEC Vidicon System.

    Science.gov (United States)

    Tousey, R; Limansky, I

    1972-05-01

    A TV image of the entire sun while an importance 2N solar flare was in progress was recorded in the extreme ultraviolet (XUV) radiation band 171-630 A and transmitted to ground from an Aerobee-150 rocket on 4 November 1969 using S-band telemetry. The camera tube was a Westinghouse Electric Corporation SEC vidicon, with its fiber optic faceplate coated with an XUV-to-visible conversion layer of p-quaterphenyl. The XUV passband was produced by three 1000-A-thick aluminum filters in series, together with the platinized reflecting surface of the off-axis paraboloid that imaged the sun. A number of images were recorded with integration times between 1/30 sec and 2 sec. Reconstruction of pictures was enhanced by combining several to reduce the noise.

  4. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10^6 - 10^8 rad was developed. To develop the camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was chosen as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, and pan/tilt control) was designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  5. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10^6 - 10^8 rad was developed. To develop the camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was chosen as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, and pan/tilt control) was designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  6. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in geometrical or coordinate arrangement defining a detector matrix to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further treating components. The latter components include coordinate and energy channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle

  7. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce superior images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  8. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  9. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures
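
The resistor weighting circuit mentioned above implements the classic Anger arithmetic: each photomultiplier signal is weighted by that tube's position, and the sums are normalized by total collected light. The 3x3 PMT layout and signal values below are invented for illustration.

```python
import numpy as np

# Hypothetical 3x3 photomultiplier grid, tube centers at -1, 0, +1
pmt_x = np.array([-1.0, 0.0, 1.0, -1.0, 0.0, 1.0, -1.0, 0.0, 1.0])
pmt_y = np.array([-1.0, -1.0, -1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

def anger_position(signals):
    # Energy-normalized, position-weighted sums of the PMT signals
    energy = signals.sum()
    return (pmt_x @ signals) / energy, (pmt_y @ signals) / energy

# A scintillation flash splitting its light evenly between the center
# tube and its right-hand neighbor reads out halfway between them:
signals = np.array([0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 0.0, 0.0, 0.0])
x, y = anger_position(signals)
```

The linearizing circuit in the patent then corrects the residual nonlinearity of this estimate near the detector edges.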

  10. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid state detector formed of high-purity germanium. The central arrangement of the camera operates to carry out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera

  11. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
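
The sub-pixel interpolation step can be sketched as follows: mapping a retina-like (log-polar style) pixel onto Cartesian sensor coordinates generally lands between pixels, so the value is taken by bilinear interpolation. The ring/spoke geometry and growth factor below are assumptions for illustration, not the paper's actual sensor layout.

```python
import numpy as np

def bilinear(img, x, y):
    # Sub-pixel lookup: weighted average of the four surrounding pixels
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def retina_sample(img, rings=8, spokes=16, r0=1.0, growth=1.3):
    # Sample the Cartesian image on concentric rings whose spacing grows
    # outward, mimicking a retina-like pixel distribution (geometry assumed)
    cy, cx = (np.array(img.shape) - 1) / 2.0
    out = np.zeros((rings, spokes))
    for r in range(rings):
        radius = r0 * growth ** r
        for s in range(spokes):
            th = 2 * np.pi * s / spokes
            out[r, s] = bilinear(img, cx + radius * np.cos(th),
                                 cy + radius * np.sin(th))
    return out
```

On a constant image the resampling is exact, which is a quick sanity check on the interpolation weights.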

  12. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras of the device stand on a hexapod mount that is fully capable of achieving sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod allows smooth operations even if one or two of the legs are stuck. In addition, it can calibrate itself by observed stars independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed at our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  13. FPS-vidicon television cameras for ultrafast-scan data acquisition

    International Nuclear Information System (INIS)

    Noel, B.W.; Yates, G.J.

    1980-06-01

    Two ultrafast-scan FPS-vidicon television cameras have been developed with limiting resolutions of >500 TV lines per picture height and a corresponding dynamic range (to light input) of more than 100. The cameras use the unique properties of FPS vidicons and specially designed electronics to achieve their performance levels and versatility. The advantages and disadvantages of FPS vidicons and of antimony trisulfide and silicon target materials in such applications are discussed in detail. All of the electronics circuits are discussed. The sweep generator designs are treated at length because they are the key to the cameras' versatility. Emphasis is placed on remotely controllable analog and digital sweep generators. The latter is a complete CAMAC-compatible subsystem containing a 16-function master arithmetic logic unit. Pulsed and cw methods of obtaining transfer characteristics are described and compared. The effects of generation rates, tube types, and target types on the resolution, determined from contrast-transfer-function curves, are discussed. Several applications are described, including neutron TV pinhole, TREAT, and barium-release experiments

  14. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  15. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
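
A fuzzy pan controller of the kind described can be sketched in a few lines: triangular memberships over the target's pixel error, a three-rule base, and weighted-average defuzzification. The half-field width of 160 px, the membership breakpoints, and the ±5 deg/s output singletons are all invented numbers, not values from the NASA system.

```python
def tri(x, a, b, c):
    # Triangular membership function with feet at a and c, peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pan(err_px):
    """Map horizontal pixel error of the target to a pan rate (deg/s)."""
    err = max(-160.0, min(160.0, err_px))    # saturate at the assumed half-field
    mu_neg = tri(err, -320.0, -160.0, 0.0)   # target left of center
    mu_zero = tri(err, -160.0, 0.0, 160.0)   # target centered
    mu_pos = tri(err, 0.0, 160.0, 320.0)     # target right of center
    w = mu_neg + mu_zero + mu_pos
    # Weighted average of singleton pan rates: -5, 0, +5 deg/s
    return (mu_neg * -5.0 + mu_zero * 0.0 + mu_pos * 5.0) / w if w else 0.0
```

The same pattern extends to tilt, and the smooth blending between rules is what gives fuzzy tracking its gentle response near the center of the field.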

  16. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services has many years of experience in visual examination and measurement of fuel assemblies and associated core components, using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New, enhanced technologies for fuel services include, for example, two shielded color camera systems for underwater use and close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (smaller than a tenth of a millimeter) or cracks, and for analyzing surface appearance on irradiated fuel rod cladding or fuel assembly structural parts, have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  17. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to make them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer generated information with it. In this paper we present a camera based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  18. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Microcomputer integrated systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam in a semiautomatic procedure, and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, and developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain optimal clinical quality nuclear medicine images, to increase the acquisition and processing efficiency, and to reduce the steps involved in each exam.

  19. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, different worldwide companies produce medical equipment for partial or total Gamma Camera modernization. The present work has demonstrated the possibility of substituting almost the entire signal-processing electronics inside a Gamma Camera detector head with a digitizer PCI card. This card includes four 12-bit, 50-MHz analog-to-digital converters. It was installed in a PC and controlled through software developed in LabVIEW. Besides, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). Also, a new electronic design was added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for individual photomultiplier tubes. The images, obtained by measurement of a 99m Tc point radioactive source using the modernized camera head, demonstrate its overall performance. The system was developed and tested on an old ORBITER II SIEMENS GAMMASONIC Gamma Camera at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project, supported by the National Program PNOULU and the IAEA. (Author)

  20. Real-time extraction of bubble chamber tracks using a single vidicon

    International Nuclear Information System (INIS)

    Roos, C.E.

    1978-01-01

    Bubble chamber pictures show many undesired tracks and background in addition to the tracks of the desired significant event. Settles et al. have described a technique for optical tagging of an event by adding a darkfield photograph taken before significant bubble growth to a later brightfield photograph. The authors describe a system to cancel out all picture detail except for the wanted tracks by using a single vidicon tube as the storage device. In the first exposure, polarized light is imaged on the vidicon after passing through a Ronchi grating placed at a focal plane. Thus half of the target is exposed in a series of vertical stripes. The second exposure uses light polarized orthogonally to the first exposure and is deflected after passing through the Ronchi grating so as to expose the previously occluded stripes on the target. The target is then scanned orthogonally to the stripes; by subtracting the picture contained in one set of stripes from that contained in the other set, only the differences between the two images remain. A simulation was conducted using continuously presented background of one polarization and background plus tracks of the other polarization. The test showed that the added tracks were easily resolved, even though they were not readily discernible by visual inspection prior to subtraction. (Auth.)
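
The stripe-storage subtraction can be mimicked numerically: the two exposures are stored in alternating vertical stripes on one target, and reading out across the stripes cancels everything common to both. The toy frames below are invented, and the background is taken as equal within each stripe pair (a smooth-background assumption the real optics approximates).

```python
import numpy as np

rng = np.random.default_rng(1)
# Background seen by both exposures; duplicated columns make each
# stripe pair see the same background value (smoothness assumption)
bg = np.repeat(rng.random((8, 4)), 2, axis=1)
tracks = np.zeros((8, 8))
tracks[3, :] = 1.0                         # wanted tracks, second exposure only

stored = np.empty((8, 8))
stored[:, 0::2] = bg[:, 0::2]              # 1st exposure fills even stripes
stored[:, 1::2] = (bg + tracks)[:, 1::2]   # 2nd exposure fills odd stripes

# Scan orthogonally to the stripes and subtract paired stripes:
# common background cancels, only the added tracks remain
diff = stored[:, 1::2] - stored[:, 0::2]
```

Despite the random background dominating both stored images, the differenced readout isolates the track cleanly, which is the effect the paper's simulation demonstrated.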

  1. Quality assessment of gamma camera systems

    International Nuclear Information System (INIS)

    Kindler, M.

    1985-01-01

    There are methods and equipment in nuclear medical diagnostics that allow selective visualisation of the functioning of organs or organ systems, using radioactive substances for labelling and demonstration of metabolic processes. Following a previous contribution on fundamentals and systems components of a gamma camera system, the article in hand deals with the quality characteristics of such a system and with practical quality control and its significance for clinical applications. [de

  2. New nuclear medicine gamma camera systems

    International Nuclear Information System (INIS)

    Villacorta, Edmundo V.

    1997-01-01

    The acquisition of the Open E.CAM and DIACAM gamma cameras by Makati Medical Center is expected to enhance the capabilities of its nuclear medicine facilities. When used as an aid to diagnosis, nuclear medicine entails the introduction of a minute amount of radioactive material into the patient; thus, no reaction or side-effect is expected. When it reaches the particular target organ, depending on the radiopharmaceutical, a lesion will appear as a decreased (cold) area or increased (hot) area in the radioactive distribution as recorded by the gamma cameras. Gamma camera images in slices, or SPECT (Single Photon Emission Computed Tomography), increase the sensitivity and accuracy in detecting smaller and deeply seated lesions, which otherwise may not be detected in the regular single planar images. Due to the 'open' design of the equipment, claustrophobic patients will no longer feel enclosed during the procedure. These new gamma cameras yield improved resolution and superb image quality, and the higher photon sensitivity shortens imaging acquisition time. The E.CAM, the latest-generation gamma camera, features a variable-angle dual-head system, the only one available in the Philippines, and is an excellent choice for Myocardial Perfusion Imaging (MPI). From the usual 45 minutes, the acquisition time for gated SPECT imaging of the heart has now been remarkably reduced to 12 minutes. 'Gated' refers to snapshots of the heart in selected phases of its contraction and relaxation as triggered by ECG. The DIACAM is installed in a room with access outside the main entrance of the department, intended specially for bed-borne patients. Both systems are equipped with a network of high-performance Macintosh ICON acquisition and processing computers. Added to the hardware is the ICON processing software which allows total simultaneous acquisition and processing capabilities in the same operator's terminal. Video film and color printers are also provided. Together

  3. Process simulation in digital camera system

    Science.gov (United States)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
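
Two of the color-processing blocks named above can be sketched with made-up numbers: white balancing from a known neutral patch, then a power-law gamma encode. The patch values and the simplified gamma curve are assumptions for illustration, not the paper's pipeline parameters.

```python
import numpy as np

def white_balance(img, neutral_rgb):
    # Scale each channel so a known neutral (gray) patch comes out
    # equal in R, G and B, removing the illuminant's color cast
    gains = neutral_rgb.mean() / neutral_rgb
    return img * gains

def gamma_encode(linear, gamma=2.2):
    # Simple power-law companding (real sRGB adds a linear toe segment)
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# A measured gray patch with a color cast (values invented):
neutral = np.array([0.5, 0.4, 0.6])
balanced = white_balance(neutral, neutral)   # the patch itself becomes neutral
```

Applying the gains derived from the patch to the whole frame neutralizes the cast everywhere, under the usual assumption that the illuminant is uniform across the scene.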

  4. A real-time networked camera system : a scheduled distributed camera system reduces the latency

    NARCIS (Netherlands)

    Karatoy, H.

    2012-01-01

    This report presents the results of a Real-time Networked Camera System, commissioned by the SAN Group in TU/e. Distributed systems are motivated by two reasons: the first reason is the physical environment as a requirement and the second reason is to provide a better Quality of Service (QoS). This

  5. Notes on the IMACON 500 streak camera system

    International Nuclear Information System (INIS)

    Clendenin, J.E.

    1985-01-01

    The notes provided are intended to supplement the instruction manual for the IMACON 500 streak camera system. The notes cover the streak analyzer, instructions for timing the streak camera, and calibration

  6. Optical camera system for radiation field

    International Nuclear Information System (INIS)

    Maki, Koichi; Senoo, Makoto; Takahashi, Fuminobu; Shibata, Keiichiro; Honda, Takuro.

    1995-01-01

    An infrared-ray camera comprises a transmitting filter used exclusively for infrared-rays at a specific wavelength, such as far infrared-rays and a lens used exclusively for infrared rays. An infrared ray emitter-incorporated photoelectric image converter comprising an infrared ray emitting device, a focusing lens and a semiconductor image pick-up plate is disposed at a place of low gamma-ray dose rate. Infrared rays emitted from an objective member are passed through the lens system of the camera, and real images are formed by way of the filter. They are transferred by image fibers, introduced to the photoelectric image converter and focused on the image pick-up plate by the image-forming lens. Further, they are converted into electric signals and introduced to a display and monitored. With such a constitution, an optical material used exclusively for infrared rays, for example, ZnSe can be used for the lens system and the optical transmission system. Accordingly, it can be used in a radiation field of high gamma ray dose rate around the periphery of the reactor container. (I.N.)

  7. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested using the traditional infrared camera test system and the visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position when the environmental temperature changes, and the image quality of the large-field-of-view collimator and the test accuracy are also improved. Its performance matches that of foreign counterparts at a much lower cost. It will have a good market.
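
The multiple-frame averaging step relies on a standard statistical fact: averaging N frames of the same target suppresses zero-mean random noise by roughly 1/sqrt(N). The frame count and noise level below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 0.5)                         # noiseless target
frames = truth + rng.normal(0.0, 0.1, size=(100, 64, 64))  # 100 noisy grabs

single_noise = np.std(frames[0] - truth)               # about 0.1
avg_noise = np.std(frames.mean(axis=0) - truth)        # about 0.1/sqrt(100)
```

With 100 frames the residual noise drops by about a factor of ten, which is why averaging is cheap insurance when the target and collimator are static during a test.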

  8. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested for its safety interlock which shuts down the camera and pan-and-tilt inside the tank vapor space during loss of purge pressure and that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system

  9. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
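
The essential-matrix machinery underlying the calibration can be checked on synthetic data: for an intrinsically calibrated pair with relative pose (R, t), the matrix E = [t]x R satisfies the epipolar constraint x2' E x1 = 0 for every corresponding point pair, which is exactly the relation the 5-point method inverts. The pose and point cloud below are invented for the demonstration.

```python
import numpy as np

def skew(t):
    # Cross-product matrix: skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Assumed relative pose between the camera pair (10 deg yaw, small baseline)
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R                      # essential matrix for X2 = R @ X1 + t

# Synthetic 3-D points in front of camera 1, projected into both views
rng = np.random.default_rng(3)
X1 = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))
x1 = X1 / X1[:, 2:3]                 # normalized image coordinates, camera 1
X2 = X1 @ R.T + t
x2 = X2 / X2[:, 2:3]                 # normalized image coordinates, camera 2

# Epipolar residuals x2' E x1 vanish for a consistent pose
residuals = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
```

In the automated method, E is estimated from point correspondences instead, and minimizing these residuals (or the related reprojection error) is what ties the camera pair's extrinsics together.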

  10. Camera System MTF: combining optic with detector

    Science.gov (United States)

    Andersen, Torben B.; Granger, Zachary A.

    2017-08-01

    MTF is one of the most common metrics used to quantify the resolving power of an optical component. Extensive literature is dedicated to describing methods to calculate the Modulation Transfer Function (MTF) for stand-alone optical components such as a camera lens or telescope, and some literature addresses approaches to determine an MTF for the combination of an optic with a detector. The formulations pertaining to a combined electro-optical system MTF are mostly based on theory and on the assumption that the detector MTF is described only by the pixel pitch, which does not account for wavelength dependencies. When working with real hardware, detectors are often characterized by testing MTF at discrete wavelengths. This paper presents a method to simplify the calculation of a polychromatic system MTF when it is permissible to consider the detector MTF to be independent of wavelength.
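
The baseline model the abstract refers to, in which the system MTF is the product of the optic's MTF and a pixel-pitch-only detector MTF, can be written out directly. The diffraction-limited optic and sinc-shaped pixel-aperture terms below are the textbook forms, used here as a sketch rather than the paper's refined method.

```python
import numpy as np

def detector_mtf(f, pitch):
    # Pixel-aperture MTF: |sinc(p*f)|; np.sinc already includes the pi factor
    return np.abs(np.sinc(pitch * f))

def optics_mtf(f, fc):
    # Diffraction-limited incoherent MTF for a circular pupil, cutoff fc
    u = np.minimum(np.abs(f) / fc, 1.0)
    return (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u * u))

def system_mtf(f, pitch, fc):
    # Cascade model: component MTFs multiply in the spatial-frequency domain
    return detector_mtf(f, pitch) * optics_mtf(f, fc)
```

At zero frequency both factors are unity, and at the detector's Nyquist frequency (f = 1/2p) the pixel term alone already limits contrast to 2/pi, about 64%, before any optical blur is included.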

  11. Permanent automatic recalibration system for scintillation camera

    International Nuclear Information System (INIS)

    Auphan, Michel.

    1974-01-01

    A permanent automatic recalibration system is described for a scintillation camera of the type consisting chiefly of an optional collimator, a scintillator, a light guide and a network of n photomultipliers coupled to a display system. It uses a device to form a single reference light signal common to all the photomultiplier lines, integrated into the latter and associated with a periodic calibration control generator. By means of associated circuits governed by the control generator, the gain in each line is brought to and/or maintained at a value between fixed upper and lower limits. Steps are taken so that any gain variation in a given line is adjusted with respect to the reference light signal common to all the lines. The light signal falls preferably in the same part of the spectrum as the scintillations formed in the scintillator [fr

  12. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences, owing to its wide and promising biomedical applications: bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera-system considerations for HMA specifically in biomedical applications, and it is important because it offers guidelines and recommendations for researchers and practitioners selecting a camera system for such applications.

  13. LAMOST CCD camera-control system based on RTS2

    Science.gov (United States)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
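The master-slave layer that maps one virtual camera module onto many real cameras can be sketched as below. The class names and the `expose` interface are hypothetical stand-ins for illustration, not the RTS2 API or LAMOST's actual drivers:

```python
from concurrent.futures import ThreadPoolExecutor

class RealCamera:
    """Stand-in for one physical CCD controller (hypothetical interface)."""
    def __init__(self, cam_id):
        self.cam_id = cam_id

    def expose(self, seconds):
        # A real driver would trigger the shutter and read out the CCD here.
        return f"cam{self.cam_id:02d}: {seconds}s frame"

class VirtualCamera:
    """Master that fans a single command out to all slave cameras."""
    def __init__(self, n_cameras=32):
        self.slaves = [RealCamera(i) for i in range(n_cameras)]

    def expose(self, seconds):
        # Dispatch the same exposure command to every slave in parallel and
        # collect per-camera status, mirroring the master-slave layer.
        with ThreadPoolExecutor(max_workers=len(self.slaves)) as pool:
            return list(pool.map(lambda cam: cam.expose(seconds), self.slaves))

frames = VirtualCamera().expose(1.5)
```

The observatory control framework then sees one camera device, while failures or status reports remain attributable to individual CCDs.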

  14. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1980-01-01

    A detailed description is given of a novel gamma camera designed to produce better images than the conventional cameras used in nuclear medicine. The detector is a solid-state detector (e.g. germanium) formed with a plurality of discrete components to enable two-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  15. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  16. Camera systems for crash and hyge testing

    Science.gov (United States)

    Schreppers, Frederik

    1995-05-01

    Since the beginning of the use of high-speed cameras for crash and hyge testing, substantial changes have taken place: both the high-speed cameras and the electronic control equipment are far more sophisticated nowadays. A short historical retrospective shows that improvements in the high-speed cameras themselves are mainly concentrated in design details, whereas the electronic control equipment has taken full advantage of the rapid progress in electronics and computer technology over the last decades. Nowadays many companies and institutes involved in crash and hyge testing wish to perform it, as far as possible, as an automatic computer-controlled routine in order to maintain and improve safety and quality. Several solutions realized in practice show how these requirements can be met.

  17. CCD camera system for use with a streamer chamber

    International Nuclear Information System (INIS)

    Angius, S.A.; Au, R.; Crawley, G.C.; Djalali, C.; Fox, R.; Maier, M.; Ogilvie, C.A.; Molen, A. van der; Westfall, G.D.; Tickle, R.S.

    1988-01-01

    A system based on three charge-coupled-device (CCD) cameras is described here. It has been used to acquire images from a streamer chamber and consists of three identical subsystems, one for each camera. Each subsystem contains an optical lens, CCD camera head, camera controller, an interface between the CCD and a microprocessor, and a link to a minicomputer for data recording and on-line analysis. Image analysis techniques have been developed to enhance the quality of the particle tracks. Some steps have been made to automatically identify tracks and reconstruct the event. (orig.)

  18. An integrated port camera and display system for laparoscopy.

    Science.gov (United States)

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  19. Whole body scan system based on γ camera

    International Nuclear Information System (INIS)

    Ma Tianyu; Jin Yongjie

    2001-01-01

    Most existing domestic γ cameras cannot perform a whole-body scan protocol, which is of important clinical use. The authors designed a whole-body scan system made up of a scan bed, an ISA interface card controlling the scan bed, and data acquisition software based on a data acquisition and image processing system for γ cameras. The images obtained in clinical experiments meet the needs of clinical diagnosis. Applying this system to γ cameras can provide a whole-body scan function at low cost

  20. Control system for several rotating mirror camera synchronization operation

    Science.gov (United States)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization, precise-measurement and time-delay parts), the shutter control unit, the motor driving unit and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating-mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at the same or different speeds.

  1. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  2. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, from the principles of geometrical optics we derive the relationship between the prism/single-camera system and a dual-camera system, and from the principles of binocular vision we derive the relationship between binocular viewing and a dual-camera system. We can thus relate the prism/single-camera system to binocular vision and obtain the positions of prism, camera, and object that give the best stereo display. Finally, using the active-shutter stereo glasses of NVIDIA Company, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism/single-camera system to simulate the various ways the eyes observe a scene. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
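Under a thin-prism assumption (which the abstract does not state; the deviation formula, geometry and numbers below are illustrative only), the link from a biprism/single-camera system to an equivalent dual-camera baseline can be sketched as:

```python
import math

def biprism_deviation(n_glass, apex_angle_rad):
    # Thin-prism approximation: deviation delta ~ (n - 1) * alpha
    return (n_glass - 1.0) * apex_angle_rad

def equivalent_baseline(prism_distance, n_glass, apex_angle_rad):
    # Each half of the biprism tilts the optical axis by delta, so the two
    # virtual cameras sit roughly 2 * d * tan(delta) apart (hypothetical model).
    delta = biprism_deviation(n_glass, apex_angle_rad)
    return 2.0 * prism_distance * math.tan(delta)

def depth_from_disparity(focal_len, baseline, disparity):
    # Standard binocular relation Z = f * b / disparity, reused for the
    # prism/single-camera system via its equivalent baseline.
    return focal_len * baseline / disparity

# Example: BK7-like glass (n ~ 1.5), 10 degree apex, prism 10 cm before the lens
baseline = equivalent_baseline(prism_distance=0.1, n_glass=1.5,
                               apex_angle_rad=math.radians(10.0))
```

Once the equivalent baseline is known, the dual-camera and binocular-vision relations mentioned in the abstract apply unchanged.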

  3. BENCHMARKING THE OPTICAL RESOLVING POWER OF UAV BASED CAMERA SYSTEMS

    Directory of Open Access Journals (Sweden)

    H. Meißner

    2017-08-01

    Full Text Available UAV based imaging and 3D object point generation is an established technology. Some UAV users try to address (very high) accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such level of detail and accuracy, high-resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie-point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to standard (geometric) calibration, which normally is covered primarily. Within this paper the resolving power of ten different camera/lens installations has been investigated. Selected systems represent different camera classes, like DSLRs, system cameras, larger-format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on the radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; the image pre-processing, i.e. the use of a specific debayering method, also significantly influences the final resolving power.

  4. Applications of a shadow camera system for energy meteorology

    Science.gov (United States)

    Kuhn, Pascal; Wilbert, Stefan; Prahl, Christoph; Garsche, Dominik; Schüler, David; Haase, Thomas; Ramirez, Lourdes; Zarzalejo, Luis; Meyer, Angela; Blanc, Philippe; Pitz-Paal, Robert

    2018-02-01

    Downward-facing shadow cameras might play a major role in future energy meteorology. Shadow cameras directly image shadows on the ground from an elevated position. They are used to validate other systems (e.g. all-sky imager based nowcasting systems, cloud speed sensors or satellite forecasts) and can potentially provide short term forecasts for solar power plants. Such forecasts are needed for electricity grids with high penetrations of renewable energy and can help to optimize plant operations. In this publication, two key applications of shadow cameras are briefly presented.

  5. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
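The coupling of the camera controller with a Kalman filter can be illustrated with a minimal constant-velocity filter tracking one coordinate of one object. This is a generic textbook sketch, not the paper's formulation; the state model, noise values and dimensions are all assumed:

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.5):
    """One predict/update cycle of a constant-velocity Kalman filter.
    x is the [position; velocity] estimate, P its covariance, z the
    scalar position measurement from a camera."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([[0.0], [0.0]]), 10.0 * np.eye(2)
for k in range(1, 21):                      # object moving at unit speed
    x, P = kalman_step(x, P, np.array([[float(k)]]))
```

The trace of P is a per-object uncertainty measure of exactly the kind the controller can use to decide which outlier a camera should revisit: objects whose covariance has grown get priority in the next receding-horizon plan.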

  6. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  7. Gamma camera image processing and graphical analysis mutual software system

    International Nuclear Information System (INIS)

    Wang Zhiqian; Chen Yongming; Ding Ailian; Ling Zhiye; Jin Yongjie

    1992-01-01

    GCCS, a gamma camera image processing and graphical analysis system, is a dedicated interactive software system. It is mainly used to analyse patient data acquired from a gamma camera. The system runs on an IBM PC, PC/XT or PC/AT and consists of several parts: system management, data management, device management, the program package and user programs. The system provides two kinds of user interface: command menus and command characters. It is easy to modify and extend this system because it is highly modularized. The user programs include almost all the clinical protocols in current use

  8. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    Science.gov (United States)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional models for visualizations, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuation assessments, and supports better, more timely decisions when buying and selling residential or commercial property. Oblique imagery is also used for infrastructure monitoring, ensuring safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The five digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique, 28 mm/50 mm and 50 mm/50 mm) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing workflow, system calibration and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an expected accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for the vertical can be achieved.
Remaining systematic
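As a quick check of the reported figures, the ground sample distance follows directly from flying height, focal length and pixel pitch:

```python
def ground_sample_distance(height_m, focal_len_mm, pixel_um):
    # GSD = flying height * pixel pitch / focal length (all converted to metres)
    return height_m * (pixel_um * 1e-6) / (focal_len_mm * 1e-3)

# Figures taken from the abstract: 6.4 um pixels, 28 mm nadir lens, 600 m altitude
gsd = ground_sample_distance(600, 28, 6.4)
```

This gives a GSD of roughly 0.14 m, so the quoted 1 to 1.5 GSD planimetric accuracy corresponds to about 14 to 21 cm on the ground for that configuration.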


  10. NUKAB system use with the PICKER DYNA CAMERA II

    International Nuclear Information System (INIS)

    Collet, H.; Faurous, P.; Lehn, A.; Suquet, P.

    Present-day data processing units connected to scintillation gamma cameras can use either hardwired-program or stored-program systems; the NUKAB system uses the latter technique. The central element of the data processing unit, connected to the PICKER DYNA CAMERA II output, is a DIGITAL PDP 8E computer with 12-bit words. The 12-bit format restricts the possibilities for digitization, 64x64 images being the practical limit. However, the NUKAB system appears well suited to processing data from the gamma cameras at present in service. The addition of output terminals of the plotting-table type should widen the possibilities of the system. The 64x64 format does not seem to be a handicap in view of the resolving power of the detectors [fr

  11. The multi-camera optical surveillance system (MOS)

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.; Richter, B.; Gaertner, K.J.; Laszlo, G.; Neumann, G.

    1991-01-01

    The transition from film cameras to video surveillance systems, in particular the implementation of high-capacity multi-camera video systems, results in a large increase in the number of recorded scenes and consequently in the manpower required for review. Moreover, modern microprocessor-controlled equipment facilitates the collection of additional data associated with each scene; both the scene and the annotated information have to be evaluated by the inspector. The design of video surveillance systems for safeguards therefore has to account for both appropriate recording and appropriate reviewing techniques. An aspect of principal importance is that the video information is stored on tape. Under the German Support Programme to the Agency, a technical concept has been developed which aims at optimizing the capabilities of a multi-camera optical surveillance (MOS) system, including the reviewing technique. This concept is presented in the following paper, including a discussion of reviewing and reliability

  12. Delay line clipping in a scintillation camera system

    International Nuclear Information System (INIS)

    Hatch, K.F.

    1979-01-01

    The present invention provides a novel base-line restoring circuit and a novel delay-line clipping circuit in a scintillation camera system. Single and double delay-line clipped signal waveforms are generated to increase the operational frequency and the fidelity of data detection of the camera system, which would otherwise be degraded by base-line distortion such as undershoot, overshoot, and capacitive build-up. The camera system includes a set of photomultiplier tubes and associated amplifiers which generate sequences of pulses. These pulses are pulse-height analyzed to detect a scintillation whose energy falls within a predetermined range. Data pulses are combined to provide the coordinates and energy of photopeak events. The amplifiers are biased out of saturation over all ranges of pulse energy level and count rate. Single delay-line clipping circuitry narrows the pulse width of the decaying electrical data pulses, which increases operating speed without the occurrence of data loss. (JTA)
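The effect of single delay-line clipping on a scintillation-like pulse can be shown numerically. The discrete model below, and the choice of attenuation exp(-delay/tau) that cancels the exponential tail without undershoot, are illustrative assumptions rather than the patent's circuit:

```python
import numpy as np

def delay_line_clip(pulse, delay, attenuation):
    """Single delay-line clipping: subtract a delayed, attenuated copy of
    the pulse from itself, which shortens the pulse tail."""
    delayed = np.zeros_like(pulse)
    delayed[delay:] = pulse[:-delay]
    return pulse - attenuation * delayed

# Exponential tail typical of a scintillation pulse (decay constant tau)
tau, delay = 40.0, 20
t = np.arange(200)
pulse = np.exp(-t / tau)

# Matching the attenuator to exp(-delay/tau) cancels the tail exactly,
# leaving a pulse only `delay` samples wide with no residual undershoot.
clipped = delay_line_clip(pulse, delay, attenuation=np.exp(-delay / tau))
```

The clipped pulse returns to the base line after `delay` samples instead of decaying for the full record, which is exactly what lets the system accept a higher count rate without pile-up.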

  13. Users' guide to the positron camera DDP516 computer system

    International Nuclear Information System (INIS)

    Bracher, B.H.

    1979-08-01

    This publication is a guide to the operation, use and software for a DDP516 computer system provided by the Data Handling Group primarily for the development of a Positron Camera. The various sections of the publication fall roughly into three parts. (1) Sections forming the Operators Guide cover the basic operation of the machine, system utilities and back-up procedures. Copies of these sections are kept in a 'Nyrex' folder with the computer. (2) Sections referring to the software written particularly for Positron Camera Data Collection describe the system in outline and lead to details of file formats and program source files. (3) The remainder of the guide, describes General-Purpose Software. Much of this has been written over some years by various members of the Data Handling Group, and is available for use in other applications besides the positron camera. (UK)

  14. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays increasingly useful, because the worldwide level of industrial development requires considerably higher standards of quality for manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Because they are easily obtained and discriminate very well between the materials they penetrate, thermal neutrons are the most widely used probe. The methods involved have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and on the type of neutron imaging system. Real-time investigations involve tube-type cameras, CCD cameras and, recently, CID cameras that capture the image from an appropriate scintillator via a mirror. The analog signal of the camera is converted into a digital signal by the signal processing technology included in the camera; an image acquisition card (frame grabber) in a PC converts the digital signal into an image, which is then formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electrical motors that move the object table horizontally and vertically and rotate it. Based on this system, many static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects are done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  15. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Full Text Available Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  16. Development and evaluation of a Gamma Camera tuning system

    International Nuclear Information System (INIS)

    Arista Romeu, E. J.; Diaz Garcia, A.; Osorio Deliz, J. F.

    2015-01-01

    Correct operation of conventional analogue Gamma Cameras requires good conformation of the position signals that correspond to a specific photo-peak of the radionuclide of interest. To achieve this, the energy spectrum from each photomultiplier tube (PMT) has to be set within the same energy window; for this reason a reliable tuning system is an important part of every gamma camera processing system. In this work a new prototype of tuning card, developed and set up for this purpose, is tested and evaluated. The hardware and software of the circuit allow the regulation of each PMT's high voltage, by which means a proper gain control for each of them is accomplished. The tuning card prototype was simulated in a virtual model and its satisfactory operation was proven in a Siemens Orbiter Gamma Camera. (Author)

  17. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    Science.gov (United States)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  18. Design of a Day/Night Star Camera System

    Science.gov (United States)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

    This paper describes the design of a camera system capable of acquiring stars during both the day and night portions of a high-altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using the MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2 x 10(exp -5) W/sq cm/sr/micron in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars/sq degree at star magnitude limits of 8.25-9.00. The system will have an f/2.8, 64.3 mm diameter lens and a 1340 x 1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 x 6.8 micron pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 sq degrees and will provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.
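    The quoted optics and field of view can be cross-checked from first principles. The sketch below assumes the effective focal length is the f-number times the aperture diameter (an assumption; the abstract does not state the focal length directly) and uses the small-angle plate-scale relation:

```python
# Parameters quoted in the abstract
f_number = 2.8
aperture_mm = 64.3
pixel_um = 6.8
n_x, n_y = 1340, 1037

# Assumed: effective focal length = f-number x aperture diameter
focal_mm = f_number * aperture_mm                          # ~180 mm

# Plate scale: 206265 arcsec/radian times pixel pitch over focal length
scale_arcsec = 206265.0 * (pixel_um * 1e-3) / focal_mm     # arcsec per pixel

# Field of view as the product of the two angular extents
fov_deg2 = (n_x * scale_arcsec / 3600.0) * (n_y * scale_arcsec / 3600.0)
print(f"plate scale: {scale_arcsec:.2f} arcsec/px, FOV: {fov_deg2:.2f} deg^2")
```

This gives roughly 7.8 arcsec/pixel and about 6.5 deg², close to the 6.33 deg² quoted, so the published numbers are mutually consistent to within a few percent.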

  19. Target-Tracking Camera for a Metrology System

    Science.gov (United States)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
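    The position readout of a one-dimensional PSD follows from the lateral photoeffect: the spot position is proportional to the normalized difference of the two end-contact photocurrents. A minimal sketch (function name and units are illustrative, not from the article):

```python
def psd_position(i_a, i_b, length_mm):
    """Estimate the light-spot position on a 1-D position-sensitive detector.

    i_a, i_b: photocurrents at the two end contacts (any consistent unit).
    length_mm: active length of the PSD.
    Returns the spot position relative to the detector centre, in mm, using
    the standard lateral-photoeffect formula x = (L/2) * (I_b - I_a) / (I_a + I_b).
    """
    return 0.5 * length_mm * (i_b - i_a) / (i_a + i_b)

# Spot centred: equal currents give zero offset
print(psd_position(1.0, 1.0, 10.0))   # 0.0
# Spot nearer contact B: positive offset
print(psd_position(0.5, 1.5, 10.0))   # 2.5
```

Because this centroid is formed directly by the analog current ratio, no per-pixel readout or digital computation is needed, which is what allows the update rate of hundreds of hertz described above.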

  20. Design of microcontroller based system for automation of streak camera

    International Nuclear Information System (INIS)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-01-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW based graphical user interface enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  3. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
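    One standard compensation for the lighting and sensor nonuniformities discussed above is dark-frame and flat-field correction. The sketch below is a generic illustration with synthetic data, not a method prescribed by this survey; the rescaling by the mean flat level is one common convention.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Radiometric compensation for non-uniform illumination and fixed-pattern
    sensor response: (raw - dark) / (flat - dark), rescaled to the mean flat
    level so the output stays in roughly the original intensity range."""
    num = raw.astype(float) - dark
    den = np.clip(flat.astype(float) - dark, 1e-6, None)   # avoid divide-by-zero
    return num / den * den.mean()

# Synthetic uniform scene viewed under a left-to-right illumination gradient
truth = np.full((4, 4), 100.0)
gradient = np.linspace(0.5, 1.0, 4)[None, :] * np.ones((4, 1))
dark = np.full((4, 4), 10.0)                 # sensor offset
raw = truth * gradient + dark                # what the camera records
flat = 200.0 * gradient + dark               # image of a uniform white target

corrected = flat_field_correct(raw, dark, flat)
print(f"residual nonuniformity after correction: {corrected.std():.2e}")
```

The corrected image is flat again (zero spread), whereas the raw image carries the full illumination gradient.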

  4. The design of software system of intelligentized γ-camera

    International Nuclear Information System (INIS)

    Zhao Shujun; Li Suxiao; Wang Jing

    2006-01-01

    The software system of the γ-camera adopts a visual, interactive human-computer interface, collecting and displaying patient data in real time. After processing the collected data, it outputs the medical record in Chinese. The system can also retrieve and back up patient data. In addition, its clinical quantitative analysis functions can assist the doctor in diagnosing the illness. (authors)

  5. AMIE Camera System on board SMART-1

    Science.gov (United States)

    Josset, J. L.; Beauvivre, S.; Amie Team

    The Advanced Moon micro-Imager Experiment (AMIE), on board ESA SMART-1, the first European mission to the Moon (launched on 27th September 2003), is an imaging system with scientific, technical and public-outreach-oriented objectives. The science objectives are to image the lunar poles (permanent shadow areas, ice deposits), eternal-light crater rims, ancient lunar non-mare volcanism, local spectro-photometry and the physical state of the lunar surface, and to map high-latitude regions, mainly the far-side south (South Pole-Aitken basin). The technical objectives are to perform a laser-link experiment (detection of a laser beam emitted by the ESA Tenerife ground station), flight demonstration of new technologies and on-board autonomy (navigation). The public outreach and educational objectives are to promote planetary exploration. We present the AMIE instrument and its performance with respect to the first results.

  6. Programmable electronic system for analog and digital gamma cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Omeu, E. J.

    2013-01-01

    At present the use of analog and digital gamma cameras is continuously increasing in developing countries. Many of them still largely rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. For this reason, different medical equipment manufacturers worldwide are engaged in partial or total gamma camera modernization. Nevertheless, on several occasions acquisition prices are not affordable for developing countries. This work describes the basic features of a programmable electronic system that improves the acquisition functions and processing of analog and digital gamma cameras. The system is based on an electronic board for the acquisition and digitization of the nuclear pulses generated by the gamma camera detector. It comprises a hardware interface with a PC and the associated software for full signal processing, including signal shaping and image processing. The extensive use of reference tables in the processing and imaging software allowed the processing speed to be optimized, and also decreased design time and system cost. (Author)

  7. Fully integrated digital GAMMA camera-computer system

    International Nuclear Information System (INIS)

    Berger, H.J.; Eisner, R.L.; Gober, A.; Plankey, M.; Fajman, W.

    1985-01-01

    Although most of the new non-nuclear imaging techniques are fully digital, there has been a reluctance in nuclear medicine to abandon traditional analog planar imaging in favor of digital acquisition and display. The authors evaluated a prototype digital camera system (GE STARCAM) in which all of the analog acquisition components are replaced by microprocessor controls and digital circuitry. To compare the relative effects of acquisition matrix size on image quality and to ascertain whether digital techniques could be used in place of analog imaging, Tc-99m bone scans were obtained on this digital system and on a comparable analog camera in 10 patients. The dedicated computer is used for camera setup, including definition of the energy window, spatial energy correction, and spatial distortion correction. The display monitor, which is used for patient positioning and image analysis, is 512 x 512 non-interlaced, allowing high-resolution imaging. Data acquisition and processing can be performed simultaneously. Thus, the development of a fully integrated digital camera-computer system with optimized display should allow routine utilization of non-analog studies in nuclear medicine and the ultimate establishment of fully digital nuclear imaging laboratories

  8. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first step, conversion from analogue to digital mode, is not within the control of the operator, but the second kind of manipulation is. Such manipulations should be done carefully, without sacrificing the integrity of the incoming information
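    A typical example of the operator-controlled manipulation mentioned above is statistical smoothing of the acquired count matrix. The sketch below uses the classic 9-point weighted kernel often applied to scintillation camera images; it is an illustration of the technique, not the algorithm of any particular system discussed here.

```python
import numpy as np

# Classic 9-point weighted smoothing kernel (weights sum to 1, so counts are preserved)
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

def smooth9(image):
    """Apply 9-point weighted smoothing with edge-replicated borders."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(3):          # accumulate the nine shifted, weighted copies
        for dx in range(3):
            out += KERNEL[dy, dx] * padded[dy:dy + image.shape[0],
                                           dx:dx + image.shape[1]]
    return out

# A single noisy count spike is spread over its neighbours
img = np.zeros((5, 5))
img[2, 2] = 16
print(smooth9(img)[2, 2])   # 4.0 (centre keeps 4/16 of the original counts)
```

Because the kernel weights sum to one, total counts in the interior are preserved, which is the sense in which such smoothing avoids "sacrificing the integrity of the incoming information".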

  9. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    OpenAIRE

    Orts-Escolano, Sergio; Garcia-Rodriguez, Jose; Morell, Vicente; Cazorla, Miguel; Azorin-Lopez, Jorge; García-Chamizo, Juan Manuel

    2014-01-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mob...

  10. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    Science.gov (United States)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market, with UAVs specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while the payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir-looking camera is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used, controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of the test flights.

  12. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1994-01-01

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock which shuts down all the electronics inside the 101-AW vapor space, during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak checking all fittings on the 101-AW Purge Panel for leakage using a Snoop solution and resolving the leakage. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition the green light (PRESSURIZED) (located on the Purge Control Panel) is verified to turn on above 10 in. w.g. and after the time delay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss of purge operation the illumination of the amber light (PURGE FAILED) will be verified

  13. Gamma camera investigations using an on-line computer system

    International Nuclear Information System (INIS)

    Vikterloef, K.J.; Beckman, K.-W.; Berne, E.; Liljenfors, B.

    1974-01-01

    A computer system for use with a gamma camera has been developed by Oerebro Regional Hospital and Nukab AB using a PDP 8/e with a 12K core memory connected to a Selektronik gamma camera. It is possible to register, without loss, pictures of high (5kcps) pulse frequency, two separate channels with identical coordinates, fast dynamic functions down to 5 pictures/second, and to perform statistical smoothing and subtraction of two separate pictures. Experience has shown these possibilities to be so valuable that one has difficulty in thinking of a scanning system without them. This applies not only to sophisticated investigations, e.g. dual isotope registration, but also in conventional scanning for avoiding false positive interpretations and increasing the precision. It is possible at relatively low cost to add a dosage planning system. (JIW)

  14. Oblique Multi-Camera Systems - Orientation and Dense Matching Issues

    Science.gov (United States)

    Rupnik, E.; Nex, F.; Remondino, F.

    2014-03-01

    The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users allowing their use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of the actual oblique commercial systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  15. Star camera aspect system suitable for use in balloon experiments

    International Nuclear Information System (INIS)

    Hunter, S.D.; Baker, R.G.

    1985-01-01

    A balloon-borne experiment containing a star camera aspect system was designed, built, and flown. This system was designed to provide offset corrections to the magnetometer and inclinometer readings used to control an azimuth- and elevation-pointed experiment. The camera is controlled by a microprocessor, including commandable exposure and noise rejection threshold, as well as formatting of the data for telemetry to the ground. As a background program, the microprocessor runs the aspect program to analyze a fraction of the pictures taken, so that aspect information and offset corrections are available to the experiment in near real time. The analysis consists of pattern recognition of the star field against a star catalog in ROM and a least-squares calculation. The performance of this system in ground-based tests is described. It is part of the NASA/GSFC High Energy Gamma-Ray Balloon Instrument (2)

  16. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY Tank Camera Purge System (CPS) and the 101-SY In-Tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure

  17. A new high-speed IR camera system

    Science.gov (United States)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec. It consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  18. Mechanically assisted liquid lens zoom system for mobile phone cameras

    Science.gov (United States)

    Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.

    2006-08-01

    Camera systems with small form factor are an integral part of today's mobile phones which recently feature auto focus functionality. Ready to market solutions without moving parts have been developed by using the electrowetting technology. Besides virtually no deterioration, easy control electronics and simple and therefore cost-effective fabrication, this type of liquid lenses enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step mobile phone cameras will be equipped with zoom functionality. We present first order considerations for the optical design of a miniaturized zoom system based on liquid-lenses and compare it to its mechanical counterpart. We propose a design of a zoom lens with a zoom factor of 2.5 considering state-of-the-art commercially available liquid lens products. The lens possesses auto focus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20ms for the auto focus and a simplified mechanical system design leading to lower production cost and longer life time. The camera system has a mechanical outline of 24mm in length and 8mm in diameter. The lens with f/# 3.5 provides market relevant optical performance and is designed for an image circle of 6.25mm (1/2.8" format sensor).

  19. A simple data loss model for positron camera systems

    International Nuclear Information System (INIS)

    Eriksson, L.; Dahlbom, M.

    1994-01-01

    A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead time corrections in existing scanners, although this is possible. Instead, the model is intended to be used for data simulations in order to determine the figures of merit of future camera systems based on state-of-the-art data handling solutions. The model assumes the data loss to be factorized into two components, one describing the detector or block-detector performance and the other the remaining data handling, such as coincidence determination, data transfer and data storage. Two modern positron camera systems have been investigated in terms of this model: the Siemens-CTI ECAT EXACT and ECAT EXACT HR systems, which both have an axial field-of-view (FOV) of about 15 cm. Both have retractable septa, can acquire data from the whole volume within the FOV, and can reconstruct volume image data. An example is given of how to use the model for live time calculation in a futuristic large axial FOV cylindrical system
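    The factorized loss model described above can be sketched as the product of two live-time factors, one for the detector/block stage and one for the data-handling stage. Modelling both stages as paralyzable dead times (exp(-r*tau)) is an assumption made here for illustration; the abstract does not specify the functional form of each factor, and the dead-time constants below are invented.

```python
import math

def live_fraction(rate, tau_detector, tau_dataflow):
    """Factorized live-time model: overall throughput fraction is the product
    of a detector/block component and a data-handling component. Both stages
    are treated as paralyzable dead times here (an illustrative assumption)."""
    return math.exp(-rate * tau_detector) * math.exp(-rate * tau_dataflow)

# Hypothetical dead times: 2 us per block event, 0.5 us per data-handling event
for r in (1e5, 5e5, 1e6):   # singles rate in counts/s
    print(f"{r:.0e} cps -> live fraction {live_fraction(r, 2e-6, 0.5e-6):.3f}")
```

Because the two factors multiply, each subsystem can be characterized independently and the model then predicts the combined losses of a future camera design.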

  20. Distributing functionality in the Drift Scan Camera System

    International Nuclear Information System (INIS)

    Nicinski, T.; Constanta-Fanourakis, P.; MacKinnon, B.; Petravick, D.; Pluquet, C.; Rechenmacher, R.; Sergey, G.

    1993-11-01

    The Drift Scan Camera (DSC) System acquires image data from a CCD camera. The DSC is divided physically into two subsystems which are tightly coupled to each other. Functionality is split between them: the front-end performs data acquisition while the host subsystem performs near real-time data analysis and control. Yet, through the use of backplane-based Remote Procedure Calls, the feel of one coherent system is preserved. Observers can control data acquisition, archiving to tape, and other functions from the host, but the front-end can accept these same commands and operate independently. This split allows the DSC to meet the needs for robustness and cost-effective computing

  1. System Architecture of the Dark Energy Survey Camera Readout Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Theresa; /FERMILAB; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; /Barcelona, IFAE; Chappa, Steve; /Fermilab; de Vicente, Juan; /Madrid, CIEMAT; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; /Fermilab; Martinez, Gustavo; /Madrid, CIEMAT; Moore, Todd; /Madrid, CIEMAT /Fermilab /Illinois U., Urbana /Fermilab

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4-m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K x 4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2K x 2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  2. Biomedical image acquisition system using a gamma camera

    International Nuclear Information System (INIS)

    Jara B, A.T.; Sevillano, J.; Del Carpio S, J.A.

    2003-01-01

A PC acquisition board for gamma camera images has been developed. The digital system has been described using VHDL and has been synthesized and implemented in an Altera Max7128S CPLD and two PALs 16L8. The use of programmable-logic technologies has afforded a higher scale of integration and a reduction of the digital delays, and has also allowed us to modify and update the entire digital design easily. (orig.)

  3. A luminescence imaging system based on a CCD camera

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.; Markey, B.G.

    1997-01-01

Stimulated luminescence arising from naturally occurring minerals is likely to be spatially heterogeneous. Standard luminescence detection systems are unable to resolve this variability. Several research groups have attempted to use imaging photon detectors, or image intensifiers linked to photographic systems, in order to obtain spatially resolved data. However, the former option is extremely expensive and it is difficult to obtain quantitative data from the latter. This paper describes the use of a CCD camera for imaging both thermoluminescence and optically stimulated luminescence. The system...

  4. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on WiFi, which consists of camera, mobile phone and PC server. This platform can receive wireless signal from the camera and show the live video on the mobile phone captured by camera. In addition, it is able to send commands to camera and control the camera's holder to rotate. The platform can be applied to interactive teaching and dangerous area's monitoring and so on. Testing results show that the platform can share ...

  5. SFR test fixture for hemispherical and hyperhemispherical camera systems

    Science.gov (United States)

    Tamkin, John M.

    2017-08-01

    Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.

  6. Upgrading of analogue gamma cameras with PC based computer system

    International Nuclear Information System (INIS)

    Fidler, V.; Prepadnik, M.

    2002-01-01

Full text: Dedicated nuclear medicine computers for acquisition and processing of images from analogue gamma cameras in developing countries are in many cases faulty and technologically obsolete. The aim of the upgrading project of the International Atomic Energy Agency (IAEA) was to support the development of a PC based computer system which would cost $5,000 in total. Several research institutions from different countries (China, Cuba, India and Slovenia) were financially supported in this development. The basic demands for the system were: one acquisition card on an ISA bus, image resolution up to 256x256, SVGA graphics, low count loss at high count rates, standard acquisition and clinical protocols incorporated in PIP (Portable Image Processing), on-line energy and uniformity correction, graphic printing and networking. The most functionally stable acquisition system, tested at several international workshops and university clinics, was the Slovenian one, with a complete set of acquisition and clinical protocols, transfer of scintigraphic data from acquisition card to PC through PORT, count loss less than 1% at a count rate of 120 kc/s, improvement of the integral uniformity index by a factor of 3-5, and reporting, networking and archiving solutions for simple MS network or server oriented network systems (NT server, etc.). More than 300 gamma cameras in 52 countries were digitized and put into routine use. The project of upgrading the analogue gamma cameras greatly promoted nuclear medicine in the developing countries by replacing the old computer systems, improving the technological knowledge of end users through workshops and training courses, and lowering the maintenance cost of the departments. (author)

  7. Pothole Detection System Using a Black-box Camera

    Directory of Open Access Journals (Sweden)

    Youngtae Jo

    2015-11-01

Aging roads and poor road-maintenance systems result in a large number of potholes, whose numbers increase over time. Potholes jeopardize road safety and transportation efficiency. Moreover, they are often a contributing factor to car accidents. To address the problems associated with potholes, the locations and sizes of potholes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a specific pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection efforts. Recent automatic detection systems, such as those based on vibrations or laser scanning, are insufficient to detect potholes correctly and inexpensively owing to the unstable detection of vibration-based methods and the high costs of laser scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work with the embedded computing environments of black-box cameras. Experimental results are presented with our proposed system, showing that potholes can be detected accurately in real-time.
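The paper's own embedded algorithm is not reproduced in the abstract, but the general idea of vision-based pothole candidate detection can be illustrated with a crude intensity test: potholes tend to appear as unusually dark regions of the road surface. Everything below (threshold, constant `k`, the synthetic "road" image) is an invented toy, not the authors' method.

```python
import numpy as np

# Toy candidate detector: flag pixels much darker than the road average.
# The 1.5-sigma threshold is an arbitrary assumption for illustration.

def pothole_candidates(gray, k=1.5):
    """Return a boolean mask of unusually dark road pixels."""
    mu, sigma = gray.mean(), gray.std()
    return gray < mu - k * sigma

road = np.full((40, 40), 180.0)   # bright, uniform synthetic road surface
road[10:15, 20:26] = 40.0         # one dark patch standing in for a pothole
mask = pothole_candidates(road)
print(mask.sum())                 # number of flagged pixels
```

A real detector running on a black-box camera would add geometric filtering, perspective correction, and temporal consistency checks on top of such a per-frame test.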

  8. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must satisfy a set of requirements, such as: time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  9. A quality control atlas for scintillation camera systems

    International Nuclear Information System (INIS)

    Busemann Sokole, E.; Graham, L.S.; Todd-Pokropek, A.; Wegst, A.; Robilotta, C.C.

    2002-01-01

Full text: The accurate interpretation of quality control and clinical nuclear medicine image data is coupled to an understanding of image patterns and quantitative results. Understanding is gained by learning from different examples, and knowledge of underlying principles of image production. An Atlas of examples has been created to assist with interpreting quality control tests and recognizing artifacts in clinical examples. The project was initiated and supported by the International Atomic Energy Agency (IAEA). The Atlas was developed and written by Busemann Sokole from image examples submitted by nuclear medicine users from around the world. The descriptive text was written in a consistent format to accompany each image or image set. Each example in the atlas finally consisted of the images; a brief description of the data acquisition, radionuclide/radiopharmaceutical, and specific circumstances under which the image was produced; results describing the images and subsequent conclusions; comments, where appropriate, giving guidelines for follow-up strategies and trouble shooting; and occasional literature references. Hardcopy images required digitizing into JPEG format for inclusion into a digital document. Where possible, an example was contained on one page. The atlas was reviewed by an international group of experts. A total of about 250 examples were compiled into 6 sections: planar, SPECT, whole body, camera/computer interface, environment/radioactivity, and display/hardcopy. Subtle loss of image quality may be difficult to detect. SPECT examples, therefore, include simulations demonstrating effects of deterioration in camera performance (e.g. center-of-rotation offset, non-uniformity) or suboptimal clinical performance. The atlas includes normal results, results from poor adjustment of the camera system, poor results obtained at acceptance testing, artifacts due to system malfunction, and artifacts due to environmental situations. Some image patterns are

  10. Usability of a Wearable Camera System for Dementia Family Caregivers

    Directory of Open Access Journals (Sweden)

    Judith T. Matthews

    2015-01-01

Health care providers typically rely on family caregivers (CG) of persons with dementia (PWD) to describe difficult behaviors manifested by their underlying disease. Although invaluable, such reports may be selective or biased during brief medical encounters. Our team explored the usability of a wearable camera system with 9 caregiving dyads (CGs: 3 males, 6 females, 67.00 ± 14.95 years; PWDs: 2 males, 7 females, 80.00 ± 3.81 years, MMSE 17.33 ± 8.86) who recorded 79 salient events over a combined total of 140 hours of data capture, from 3 to 7 days of wear per CG. Prior to using the system, CGs assessed its benefits to be worth the invasion of privacy; post-wear privacy concerns did not differ significantly. CGs rated the system easy to learn to use, although cumbersome and obtrusive. Few negative reactions by PWDs were reported or evident in resulting video. Our findings suggest that CGs can and will wear a camera system to reveal their daily caregiving challenges to health care providers.

  11. IR-camera methods for automotive brake system studies

    Science.gov (United States)

    Dinwiddie, Ralph B.; Lee, Kwangjin

    1998-03-01

Automotive brake systems are energy conversion devices that convert kinetic energy into heat energy. Several mechanisms, mostly related to noise and vibration problems, can occur during brake operation and are often related to non-uniform temperature distribution on the brake disk. These problems are of significant cost to the industry and are a quality concern to automotive companies and brake system vendors. One such problem is thermo-elastic instability in brake systems. During the occurrence of these instabilities, several localized hot spots form around the circumference of the brake disk. The temperature distribution and the time dependence of these hot spots, a critical factor in analyzing this problem and in developing a fundamental understanding of this phenomenon, were recorded. Other modes of non-uniform temperature distribution, including hot banding and extreme localized heating, were also observed. All of these modes were observed on automotive brake systems using a high speed IR camera operating in snap-shot mode. The camera was synchronized with the rotation of the brake disk so that the time evolution of hot regions could be studied. This paper discusses the experimental approach in detail.

  12. Energy independent uniformity improvement for gamma camera systems

    International Nuclear Information System (INIS)

    Lange, K.

    1979-01-01

    In a gamma camera system having an array of photomultiplier tubes for detecting scintillation events and preamplifiers connecting each tube to a weighting resistor matrix for determining the position coordinates of the events, means are provided for summing the signals from all photomultipliers to obtain the total energy of each event. In one embodiment, at least two different percentages of the summed voltage are developed and used to change the gain of the preamplifiers as a function of total energy when energies exceed specific levels to thereby obtain more accurate correspondence between the true coordinates of the event and its coordinates in a display
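The gain-switching idea in this record can be shown numerically: the summed photomultiplier signal (total event energy) selects the gain applied to the position-channel preamplifiers once it exceeds certain levels. The threshold/gain pairs below are invented for illustration and are not taken from the patent.

```python
# Invented (level, gain) pairs, checked from highest level down:
# events above 200 summed units get gain 0.90, above 100 get 0.95.
THRESHOLDS = [(200.0, 0.90), (100.0, 0.95)]

def preamp_gain(pmt_signals):
    """Gain applied to the position-channel preamps for one event,
    chosen from the total energy (sum of all PMT signals)."""
    total = sum(pmt_signals)
    for level, gain in THRESHOLDS:
        if total > level:
            return gain
    return 1.0

print(preamp_gain([30.0, 40.0, 20.0]))   # low-energy event: unity gain
print(preamp_gain([90.0, 80.0, 70.0]))   # high-energy event: reduced gain
```

Reducing gain for high-energy events compensates for the energy-dependent spread of scintillation light, tightening the correspondence between true and displayed event coordinates.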

  13. Development and application of an automatic system for measuring the laser camera

    International Nuclear Information System (INIS)

    Feng Shuli; Peng Mingchen; Li Kuncheng

    2004-01-01

Objective: To provide an automatic system for measuring the imaging quality of laser cameras. Methods: On a dedicated imaging workstation (SGI 540), the procedure was written in the Matlab language. An automatic measurement and analysis system for laser camera imaging quality was developed according to the laser camera imaging-quality measurement standard of the International Electrotechnical Commission (IEC). The measurement system used the theories of digital signal processing, was based on the characteristics of digital images, and performed automatic measurement and analysis of the laser camera using the camera's accompanying sample pictures. Results: All the imaging-quality parameters of the laser camera, including the H-D and MTF curves, resolution at low, middle, and high optical density, various geometric distortions, maximum and minimum density, and the dynamic range of the gray scale, could be measured by this system. The system was applied to measure the laser cameras in 20 hospitals in Beijing. The measuring results showed that the system could provide objective and quantitative data, accurately evaluate the imaging quality of a laser camera, and correct results obtained by manual measurement based on the camera's accompanying sample pictures. Conclusion: The automatic measuring system is an effective and objective tool for testing laser camera quality, and lays a foundation for future research

  14. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

Human detection and tracking has been a prominent research area for scientists around the globe. State of the art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system, examining how camera calibration and pose can affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state of the art single camera pose estimators were evaluated to check how well the poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi camera configuration. Results have shown that single camera estimators provide high-accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories from each sensor.
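The sub-pixel accuracy figure quoted above is a reprojection error: known 3D points (chessboard corners) are projected through the estimated camera model and compared against their detected image positions. A minimal pinhole-model sketch of that measure follows; the intrinsic matrix, grid geometry, and 0.3 px offset are all invented numbers, not values from the paper.

```python
import numpy as np

# Invented pinhole intrinsics: 600 px focal length, principal point (320, 240).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d):
    """Project Nx3 camera-frame points through the pinhole model K."""
    p = (K @ points_3d.T).T
    return p[:, :2] / p[:, 2:3]

# A flat 3x3 corner grid, 3 cm pitch, 1 m in front of the camera
# (a stand-in for detected chessboard corners).
grid = np.array([[x * 0.03, y * 0.03, 1.0] for y in range(3) for x in range(3)])
observed = project(grid)          # ideal, noise-free corner detections
noisy = observed + 0.3            # detections offset by 0.3 px in each axis
rmse = np.sqrt(((noisy - observed) ** 2).mean())
print(round(rmse, 3))             # 0.3 px reprojection RMSE
```

A pose estimator is "accurate to less than half a pixel" when this RMSE, computed with the estimated rather than true pose, stays below 0.5 px.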

  15. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  16. An electronic pan/tilt/magnify and rotate camera system

    International Nuclear Information System (INIS)

    Zimmermann, S.; Martin, H.L.

    1992-01-01

    A new camera system has been developed for omnidirectional image-viewing applications that provides pan, tilt, magnify, and rotational orientation within a hemispherical field of view (FOV) without any moving parts. The imaging device is based on the fact that the image from a fish-eye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fish-eye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment. As a result, this device can accomplish the functions of pan, tilt, rotation, and magnification throughout a hemispherical FOV without the need for any mechanical devices. Multiple images, each with different image magnifications and pan-tilt-rotate parameters, can be obtained from a single camera

  17. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. One benefit is a reduced need for on-site security and operating personnel; a key feature is its patented analytics product, known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006 when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare and shadows were a huge problem with analytics in the past, but that problem has been solved by IntelliView, which has its own source code and re-programmed code. 1 fig.

  18. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment... contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves...

  19. Digital subtraction angiography with an Isocon camera system: clinical applications

    International Nuclear Information System (INIS)

    Barbaric, Z.L.; Gomes, A.S.; Deckard, M.E.; Nelson, R.S.; Moler, C.L.

    1984-01-01

A new imaging system for digital subtraction angiography (DSA) was evaluated in 30 clinical studies. The image receptor is a 25 X 25 cm, 12 par gadolinium oxysulfide rare-earth screen whose light output is focused onto a low-light-level Isocon camera. The video signal is digitized and processed by an image-array processor containing 31 512 X 512 memories 8 bits deep. In most patients, intraarterial DSA studies were done in conjunction with conventional arteriography. In these arterial studies, images adequate to make a specific diagnosis were obtained using half the radiation dose and half the amount of contrast material needed for conventional angiography. In eight intravenous studies, performed either to identify renal artery stenosis or to evaluate congenital heart anomalies, the images were diagnostic but objectionably noisy

  20. Gamma camera system with improved means for correcting nonuniformity

    International Nuclear Information System (INIS)

    Lange, K.; Jeppesen, J.

    1979-01-01

In a gamma camera system, means are provided for correcting nonuniformity, or lack of correspondence between the positions of scintillations and their calculated and displayed x-y coordinates. In an accumulation mode, pulse counts corresponding with scintillations in various areas of the radiation field are stored in memory locations corresponding with their locations in the radiation field. A uniform radiation source is presented to the detector during the accumulation mode; when accumulation is interrupted, some memory locations have fewer counts in them than others. In the run mode, counts are stored in corresponding locations of a memory and these counts are compared continuously with those stored in the accumulation mode. Means are provided for injecting, during the run mode, a number of counts proportional to the difference between the counts accumulated during the accumulation mode in a given area increment and the counts that should have been obtained from a uniform source
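The count-injection principle in this record reduces to simple arithmetic: each region's deficit under the flood source fixes how many extra counts it receives during the run mode. The 2x2 "field" and the count values below are invented for illustration.

```python
import numpy as np

# Accumulation mode: counts recorded under a uniform flood source.
flood = np.array([[100.0, 90.0], [80.0, 100.0]])
target = flood.max()    # counts a perfectly uniform detector would have given

# Run mode: clinical counts showing the same spatial nonuniformity.
run = np.array([[50.0, 45.0], [40.0, 50.0]])

# Inject counts proportional to each region's flood-mode deficit.
injected = run * (target - flood) / flood
corrected = run + injected
print(corrected)
```

After injection every region of this toy field reads the same, which is exactly the condition a flood source should produce on a correctly uniform camera.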

  1. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
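The two calibration steps the abstract describes, a linearity correction on raw camera intensities followed by a zero-point fit against reference-star magnitudes, can be sketched numerically. The power-law form of the correction, its exponent, and the flux/magnitude values below are all invented stand-ins, not the MEO's measured coefficients.

```python
import numpy as np

def linearize(raw, gamma=0.9):
    """Invented power-law linearity correction for low intensity levels."""
    return raw ** (1.0 / gamma)

def fit_zero_point(instrumental_flux, ref_mags):
    """Zero point Z in the standard relation m = Z - 2.5 log10(flux),
    averaged over the reference stars."""
    return np.mean(ref_mags + 2.5 * np.log10(instrumental_flux))

flux = np.array([100.0, 250.0, 1000.0])   # invented raw star fluxes
mags = np.array([10.0, 9.0, 7.5])         # invented synthetic-bandpass magnitudes
zp = fit_zero_point(linearize(flux), mags)
print(round(zp, 2))
```

The scatter of the per-star terms around this mean is what the abstract's 0.05-0.10 mag zero-point accuracy quantifies.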

  2. Development and characterization of a CCD camera system for use on six-inch manipulator systems

    International Nuclear Information System (INIS)

    Logory, L.M.; Bell, P.M.; Conder, A.D.; Lee, F.D.

    1996-01-01

    The Lawrence Livermore National Laboratory has designed, constructed, and fielded a compact CCD camera system for use on the Six Inch Manipulator (SIM) at the Nova laser facility. The camera system has been designed to directly replace the 35 mm film packages on all active SIM-based diagnostics. The unit's electronic package is constructed for small size and high thermal conductivity using proprietary printed circuit board technology, thus reducing the size of the overall camera and improving its performance when operated within the vacuum environment of the Nova laser target chamber. The camera has been calibrated and found to yield a linear response, with superior dynamic range and signal-to-noise levels as compared to T-Max 3200 optic film, while providing real-time access to the data. Limiting factors related to fielding such devices on Nova will be discussed, in addition to planned improvements of the current design

  3. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.
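The central quantity such a calibration recovers is the camera's intrinsic geometry, of which the simplest element is focal length in pixels. Under a plain pinhole assumption it follows from one known-size target at a known distance; the numbers below are invented for illustration and this one-target shortcut is far cruder than a full OpenCV chessboard calibration.

```python
# Pinhole relation: a target of physical width X at distance Z spans
# d pixels in the image, so the focal length is f = d * Z / X (pixels).

def focal_length_px(d_pixels, distance_m, width_m):
    return d_pixels * distance_m / width_m

# Invented measurement: a 0.8 m target at 2 m spans 240 px.
f = focal_length_px(d_pixels=240, distance_m=2.0, width_m=0.8)
print(f)
```

A full calibration generalizes this idea, jointly estimating focal lengths, principal point, and distortion coefficients from many observed target points.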

  4. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  5. The readout system for the ArTeMis camera

    Science.gov (United States)

    Doumayrou, E.; Lortholary, M.; Dumaye, L.; Hamon, G.

    2014-07-01

During ArTeMiS observations at the APEX telescope (Chajnantor, Chile), 5760 bolometric pixels from 20 arrays at 300 mK, corresponding to 3 submillimeter focal planes at 450 μm, 350 μm and 200 μm, have to be read out simultaneously at 40 Hz. The readout system, made of electronics and software, spans the full chain from the cryostat to the telescope. The readout electronics consists of cryogenic buffers at 4 K (NABU), based on CMOS technology, and of warm electronic acquisition systems called BOLERO. The bolometric signal given by each pixel has to be amplified, sampled, converted, time stamped and formatted into data packets by the BOLERO electronics. The time stamping is obtained by decoding an IRIG-B signal supplied by APEX and is key to ensuring the synchronization of the data with the telescope. Specifically developed for ArTeMiS, BOLERO is an assembly of analogue and digital FPGA boards connected directly on top of the cryostat. Two detector arrays (18*16 pixels), one NABU and one BOLERO, interconnected by ribbon cables, constitute the unit of the electronic architecture of ArTeMiS. In total, the 20 detectors for the three focal planes are read by 10 BOLEROs. The software runs on a Linux operating system on 2 back-end computers (called BEAR), which are small and robust PCs with solid state disks. They gather the 10 BOLERO data fluxes and reconstruct the focal plane images. When the telescope scans the sky, the acquisitions are triggered by a specific network protocol. This interface with APEX makes it possible to synchronize the acquisition with the observations on sky: the time stamped data packets are sent during the scans to the APEX software that builds the observation FITS files. A graphical user interface enables the setting of the camera and the real time display of the focal plane images, which is essential in laboratory and commissioning phases. The software is a set of C++, Labview and Python, the qualities of which are respectively used
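The packet formatting step described above, each sample block carrying a timestamp (decoded from IRIG-B in the real electronics) plus pixel values, can be sketched with a fixed binary layout. The layout itself (big-endian double timestamp, 16-bit sample count, then unsigned 16-bit samples) is an invented assumption, not the BOLERO wire format.

```python
import struct

# Invented frame layout: ">dH" header (8-byte timestamp, 2-byte count)
# followed by n big-endian unsigned 16-bit samples.

def pack_frame(timestamp, samples):
    header = struct.pack(">dH", timestamp, len(samples))
    return header + struct.pack(">%dH" % len(samples), *samples)

def unpack_frame(blob):
    timestamp, n = struct.unpack(">dH", blob[:10])
    samples = struct.unpack(">%dH" % n, blob[10:])
    return timestamp, list(samples)

blob = pack_frame(1404720000.025, [101, 99, 250])
ts, data = unpack_frame(blob)
print(ts, data)
```

Carrying the timestamp inside every packet is what lets the downstream software align bolometer samples with the telescope's scan trajectory when building the observation files.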

  6. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    Science.gov (United States)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximum deviations of 3 cm over typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.
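The 3D distances checked above come from stereo triangulation. For a rectified stereo pair the core relation is a one-liner: depth Z = f·B/d, with focal length f (pixels), baseline B (metres), and disparity d (pixels). The values below are invented, and this sketch ignores the fisheye distortion the real system must model before rectification.

```python
# Rectified-stereo depth: the same scene point appears d pixels apart
# in the two images; larger disparity means a closer point.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

z = depth_from_disparity(f_px=800.0, baseline_m=0.12, disparity_px=24.0)
print(z)   # metres
```

The relation also explains the accuracy figures: at fixed f and B, a half-pixel disparity error grows into a larger depth error as the measured distance increases, which is why indoor ranges of 2-8 m are a favourable regime.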

  7. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
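The time-lapse mode described above, custom Python scripts capturing frames and stamping each with an accurate time, can be sketched with a stand-in capture function so the naming logic runs anywhere. The `capture` placeholder, the filename pattern, and the fixed start time below are invented; the real system reads the Raspberry Pi camera and a GPS-disciplined clock.

```python
from datetime import datetime, timezone

def capture():
    """Stand-in for grabbing one frame from the camera hardware."""
    return b"jpeg-bytes"

def timelapse_filenames(n, t0):
    """Capture n frames and return UTC-stamped, sequence-numbered names."""
    names = []
    stamp = t0.strftime("%Y%m%d-%H%M%S")
    for i in range(n):
        capture()
        names.append("kilauea-%s-%03d.jpg" % (stamp, i))
    return names

names = timelapse_filenames(3, datetime(2015, 1, 1, 12, 0, 0, tzinfo=timezone.utc))
print(names[0])
```

Embedding the GPS-derived timestamp in the filename is what keeps the image record usable even when the camera has no network connection for weeks between field visits.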

  8. RAMI analysis for ITER radial X-ray camera system

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Shijun, E-mail: sjqin@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Hu, Liqun; Chen, Kaiyun [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Barnsley, Robin; Sirinelli, Antoine [ITER Organization, Route Vinon sur Verdon, CS 90046, 13067, St. Paul lez Durance, Cedex (France); Song, Yuntao; Lu, Kun; Yao, Damao; Chen, Yebin; Li, Shi; Cao, Hongrui; Yu, Hong; Sheng, Xiuli [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China)

    2016-11-15

    Highlights: • The functional analysis of the ITER RXC system was performed. • A failure modes, effects and criticality analysis of the ITER RXC system was performed. • The reliability and availability of the ITER RXC system and its main functions were calculated. • The ITER RAMI approach was applied to the ITER RXC system for technical risk control in the preliminary design phase. - Abstract: ITER is the first international experimental nuclear fusion device. In the project, the RAMI approach (reliability, availability, maintainability and inspectability) has been adopted for technical risk control to mitigate all possible component failures in preparation for operation and maintenance. A RAMI analysis of the ITER Radial X-ray Camera (RXC) diagnostic system was required during the preliminary design phase to ensure that the system can measure the X-ray emission and study the MHD of the plasma with high accuracy on the ITER machine. A functional breakdown was prepared in a bottom-up approach, resulting in the system being divided into 3 main functions, 6 intermediate functions and 28 basic functions, which are described using the IDEFØ method. Reliability block diagrams (RBDs) were prepared to calculate the reliability and availability of each function under assumptions about operating conditions and failure data. Initial and expected scenarios were analyzed to define risk-mitigation actions. The initial availability of the RXC system was 92.93%, while after optimization the expected availability was 95.23% over 11,520 h (approx. 16 months), which corresponds to a typical ITER operation cycle. A Failure Modes, Effects and Criticality Analysis (FMECA) was performed on the system's initial risk. Criticality charts highlight the risks of the different failure modes with regard to the probability of their occurrence and impact on operations. There are 28 risks for the initial state, including 8 major risks. No major risk remains after taking into

  9. Video monitoring system for enriched uranium casting furnaces

    International Nuclear Information System (INIS)

    Turner, P.C.

    1978-03-01

    A closed-circuit television (CCTV) system was developed to upgrade the remote-viewing capability on two oralloy (highly enriched uranium) casting furnaces in the Y-12 Plant. A silicon vidicon CCTV camera with a remotely controlled lens and infrared filtering was provided to yield a good-quality video presentation of the furnace crucible as the oralloy material is heated from 25 to 1300 °C. Existing tube-type CCTV monochrome monitors were replaced with solid-state monitors to increase the system reliability.

  10. Single camera photogrammetry system for EEG electrode identification and localization.

    Science.gov (United States)

    Baysal, Uğur; Sengül, Gökhan

    2010-04-01

    In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are simultaneously implemented. A rotating 2 MP digital camera positioned about 20 cm above the subject's head is used, and the images are acquired at predefined stop points separated azimuthally at equal angular displacements. In order to realize full automation, the electrodes have been labeled by colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested by using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined using three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine with about 6.5 μm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.
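The multi-view coordinate measurement underlying records like this one can be illustrated with a minimal direct-linear-transform (DLT) triangulation sketch. The camera matrices and point below are toy values for illustration, not the calibration from the paper:

```python
import numpy as np

def triangulate(projections, image_points):
    """Triangulate one 3D point from N views by linear least squares (DLT).

    projections  : list of 3x4 camera projection matrices
    image_points : list of (u, v) pixel observations, one per view
    """
    rows = []
    for P, (u, v) in zip(projections, image_points):
        # Each observation contributes two linear constraints on X_homogeneous
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Homogeneous solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras (one at the origin, one shifted 0.2 along x) observe a point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.1, 0.2, 1.0])
observations = [project(P1, X_true), project(P2, X_true)]
X_est = triangulate([P1, P2], observations)
```

With noise-free observations the DLT recovers the point exactly; with real pixel noise the least-squares solution averages the rays from all camera stops.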

  11. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    International Nuclear Information System (INIS)

    WERRY, S.M.

    2000-01-01

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  12. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    Energy Technology Data Exchange (ETDEWEB)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  13. Design and realization of an AEC&AGC system for the CCD aerial camera

    Science.gov (United States)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. The usual AEC and AGC algorithms are not suitable for an aerial camera, since the camera must take high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is applied before the image is output, so that the image is easier for humans to view and analyze. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
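The automatic Gamma correction mentioned in this abstract can be sketched as follows. This is a generic approach (choose the exponent so the frame's mean brightness lands on a target value), not necessarily the authors' exact algorithm; the target value is an assumption:

```python
import numpy as np

def auto_gamma(image, target_mean=0.5):
    """Gamma-correct a frame so its mean brightness approaches target_mean.

    image : float array with values in [0, 1]
    Solves mean ** gamma = target_mean  =>  gamma = log(target) / log(mean).
    """
    eps = 1e-6
    mean = float(np.clip(image.mean(), eps, 1.0 - eps))
    gamma = np.log(target_mean) / np.log(mean)
    return np.clip(image, 0.0, 1.0) ** gamma

# A uniformly dark test frame (mean 0.2) is lifted toward mid-gray
dark = np.full((4, 4), 0.2)
corrected = auto_gamma(dark)
```

Because gamma is computed per frame, the correction adapts automatically as scene brightness changes, which suits the continuously varying exposure conditions of aerial imaging.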

  14. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the subject tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  15. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.

  16. A television/still camera with common optical system for reactor inspection

    International Nuclear Information System (INIS)

    Hughes, G.; McBane, P.

    1976-01-01

    One of the problems of reactor inspection is to obtain permanent high quality records. Video recordings provide a record of poor quality but known content. Still cameras can be used but the frame content is not predictable. Efforts have been made to combine T.V. viewing to align a still camera but a simple combination does not provide the same frame size. The necessity to preset the still camera controls severely restricts the flexibility of operation. A camera has, therefore, been designed which allows a search operation using the T.V. system. When an anomaly is found the still camera controls can be remotely set, an exact record obtained and the search operation continued without removal from the reactor. An application of this camera in the environment of the blanket gas region above the sodium region in PFR at 150 °C is described.

  17. Camera System Deployment for Speeding Control in Australia

    Directory of Open Access Journals (Sweden)

    Zuhair Ebrahim

    2014-12-01

    Full Text Available In Australia, the Auditor-General plays the role of checking on system fiscal efficiency, performance and effective communications between safety professionals and the public road users. The focus of this paper is to evaluate the possibility of public approval of the information that is to be released, e.g. camera strategic initiatives assessed through mail-out questionnaires. Two visual- and policy-related attributes were investigated in these questionnaires. Each attribute had 5 initiatives. A multi-logistic regression is performed on the approval level of the drivers for the strategic initiative of running a speed-awareness course. This initiative is determined to be statistically significant using the independent variables age, years of experience, status, gender, and driver environment. Our analysis shows that the driver environment/background is a significant independent variable for approving speed awareness courses. Road users from non-industrial areas are more likely to approve the idea of speed awareness courses than road users from industrial areas. They also welcome tougher demerit rules and police enforcement. Our study suggests the speed awareness course, an educational initiative, should incorporate the tougher demerit rules to change the repetitive offender's driving behaviour. It is foreseeable that once these drivers are enrolled into the course, safer driving practices would be achieved for mitigating the dangers, risk and trauma that result from speeding. Our study may benefit professionals involved with improving traffic safety, such as those in Asia, Africa, the Middle East and the Arab Gulf countries, particularly the Kingdom of Saudi Arabia, where a high number of fatalities and serious injuries involve speeding. Our study confirms that positive, transparent and satisfying initiatives should be executed with care to maintain sustainable and safer roads and to enhance national partnership between road users.

  18. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  19. DC drive system for cine/pulse cameras

    Science.gov (United States)

    Gerlach, R. H.; Sharpsteen, J. T.; Solheim, C. D.; Stoap, L. J.

    1977-01-01

    Camera-drive functions are separated mechanically into two groups which are driven by two separate dc brushless motors. The first motor, a 90° stepper, drives the rotating shutter; the second, an electronically commutated motor, drives the claw and film transport. The shutter is made of one piece but has two openings for slow and fast exposures.

  20. Towards the Influence of a CAR Windshield on Depth Calculation with a Stereo Camera System

    Science.gov (United States)

    Hanel, A.; Hoegner, L.; Stilla, U.

    2016-06-01

    Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important to improve road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target. In a standard bundle adjustment procedure, the relative orientation of the cameras is estimated. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths are derived from the relative orientation in both cases and are compared. Distance values are calculated and analyzed. It can be shown that the difference of the base length values in the two cases is highly significant. Resulting effects on the distance calculation of up to half a meter occur.

  1. Inexpensive camera systems for detecting martens, fishers, and other animals: guidelines for use and standardization.

    Science.gov (United States)

    Lawrence L.C. Jones; Martin G. Raphael

    1993-01-01

    Inexpensive camera systems have been successfully used to detect the occurrence of martens, fishers, and other wildlife species. The use of cameras is becoming widespread, and we give suggestions for standardizing techniques so that comparisons of data can occur across the geographic range of the target species. Details are given on equipment needs, setting up the...

  2. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Science.gov (United States)

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  3. Development of a coded aperture fuel motion diagnostics system for the ACPR (UPGRADE)

    International Nuclear Information System (INIS)

    Kelly, J.G.; Stalker, K.T.

    1979-01-01

    As part of Sandia Laboratories' program to study simulated core disruptive accidents in reactor safety research, a fuel motion detection system based on coded aperture imaging is being developed for the Annular Core Pulsed Reactor (ACPR). Although fuel motion has been observed at the TREAT by the fast neutron hodoscope and with a Vidicon pinhole camera technique, the coded aperture system offers a potential for lower cost, higher spatial resolution, three dimensional imaging, and higher frame rates at lower fluences than either of the other techniques

  4. Localization and Mapping Using a Non-Central Catadioptric Camera System

    Science.gov (United States)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low cost system which consists of a mirror and a camera. Any perspective camera can be used. A platform was constructed in order to combine the mirror and a camera to build a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved location and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.

  5. Design of gamma camera data acquisition system based on PCI9810

    International Nuclear Information System (INIS)

    Zhao Yuanyuan; Zhao Shujun; Liu Yang

    2004-01-01

    This paper describes the design of a gamma camera data acquisition system based on the PCI9810 data acquisition card from ADLink Technology Inc. The main functions of the PCI9810 card and the data acquisition software are described. (authors)

  6. Decision Support System to Choose Digital Single Lens Camera with Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Tri Pina Putri

    2016-11-01

    Full Text Available One of the technologies that evolve today is the Digital Single Lens Reflex (DSLR) camera. The number of products makes it difficult for users to choose the appropriate camera based on their criteria. Users may utilize several ways to help them choose the intended camera, such as magazines, the internet, and other media. This paper discusses a web based decision support system to choose cameras by using the SAW (Simple Additive Weighting) method in order to make the decision process more effective and efficient. This system is expected to give recommendations about the camera which is appropriate to the user’s needs and criteria based on the cost, the resolution, the features, the ISO, and the sensor. The system was implemented by using PHP and MySQL. Based on the result of a questionnaire distributed to 20 respondents, 60% of respondents agree that this decision support system can help users to choose the appropriate DSLR camera in accordance with the user’s needs, 60% of respondents agree that this decision support system is more effective for choosing a DSLR camera, and 75% of respondents agree that this system is more efficient. In addition, 60.55% of respondents agree that this system has met the 5 Es Usability Framework.
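The SAW (Simple Additive Weighting) method used by this system can be sketched in a few lines: normalize each criterion across alternatives (benefit criteria divided by the column maximum, cost criteria as the column minimum divided by the value), then rank by the weighted sum. The camera names, criteria, and weights below are made-up illustration data, not the paper's:

```python
def saw_rank(alternatives, weights, benefit):
    """Rank alternatives by Simple Additive Weighting.

    alternatives : dict name -> list of criterion values
    weights      : list of criterion weights (should sum to 1)
    benefit      : list of bools, True if higher is better, False for cost criteria
    """
    cols = list(zip(*alternatives.values()))  # values per criterion
    scores = {}
    for name, vals in alternatives.items():
        s = 0.0
        for j, w in enumerate(weights):
            if benefit[j]:
                r = vals[j] / max(cols[j])    # benefit: normalize by column max
            else:
                r = min(cols[j]) / vals[j]    # cost: normalize by column min
            s += w * r
        scores[name] = s
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical cameras scored on (price [cost], resolution in MP, max ISO)
cams = {"A": [700, 24, 25600], "B": [500, 18, 12800], "C": [900, 30, 51200]}
ranking = saw_rank(cams, [0.4, 0.35, 0.25], [False, True, True])
```

Here camera "C" wins despite the highest price because the resolution and ISO criteria dominate under these weights; changing the weight vector changes the recommendation, which is exactly the user-criteria flexibility the system provides.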

  7. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

    One of the earth observing instruments on the HY-1 Satellite, which will be launched in 2001, is the multi-spectral CCD camera system developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). In a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coast zone dynamic mapping and oceanic water color monitoring, which include the pollution of offshore and coast zones, plant cover, water color, ice, terrain underwater, suspended sediment, mudflat, soil and vapor gross. The multi-spectral camera system is composed of four monocolor CCD cameras, which are line array-based, 'push-broom' scanning cameras, each responding to one of four spectral bands. The camera system adopts view field registration; that is, each camera scans the same region at the same moment. Each of them contains optics, a focal plane assembly, electrical circuits, installation structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) Offset of the central wavelength is better than 5 nm; (2) Degree of polarization is less than 0.5%; (3) Signal-to-noise ratio is about 1000; (4) Dynamic range is better than 2000:1; (5) Registration precision is better than 0.3 pixel; (6) Quantization value is 12 bit.

  8. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The one camera remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include but are not limited to core sampling, auger activities, crust layer examination, monitoring of equipment installation/removal, and any other activities. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program

  9. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we have developed a feature-based automatic color calibration using an area-based detection and adaptive nonlinear regression method. Simple chartless color matching is achieved by using the characteristic of the overlapping image area of each camera. Accurate detection of a common object is achieved by area-based detection that combines MSER with SIFT. Adaptive color calibration using the color of the detected object is calculated by a nonlinear regression method. This method can indicate the contribution of an object's color to the color calibration, and automatic selection notification for the user is performed by this function. Experimental results show that the accuracy of the calibration improves gradually. This method is suitable for practical multi-camera color calibration if enough samples are obtained.
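The regression step in this kind of calibration can be illustrated with a simplified sketch: fit a transform that maps one camera's RGB values onto a reference camera's values over matched samples from the overlapping image area. For brevity this sketch uses a linear (affine) least-squares fit rather than the paper's adaptive nonlinear regression, and the matched color samples are synthetic:

```python
import numpy as np

def fit_color_transform(src, ref):
    """Fit a 3x4 affine color transform mapping src RGB to ref RGB by least
    squares over matched samples (rows of src/ref, values in [0, 1])."""
    A = np.hstack([src, np.ones((len(src), 1))])  # append bias column
    M, *_ = np.linalg.lstsq(A, ref, rcond=None)
    return M                                       # shape (4, 3)

def apply_color_transform(M, rgb):
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M

# Synthetic "two cameras": the reference differs by a gain of 0.9 and offset 0.05
rng = np.random.default_rng(0)
src = rng.uniform(0.0, 1.0, (50, 3))
ref = 0.9 * src + 0.05
M = fit_color_transform(src, ref)
out = apply_color_transform(M, src)
```

With real cameras the samples come from detected common objects rather than a chart, and a nonlinear model (as in the paper) handles channel cross-talk and non-linear sensor responses that an affine fit cannot.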

  10. Development and Performance of Bechtel Nevada's Nine-Frame Camera System

    International Nuclear Information System (INIS)

    S. A. Baker; M. J. Griffith; J. L. Tybo

    2002-01-01

    Bechtel Nevada, Los Alamos Operations, has developed a high-speed, nine-frame camera system that records a sequence from a changing or dynamic scene. The system incorporates an electrostatic image tube with custom gating and deflection electrodes. The framing tube is shuttered with high-speed gating electronics, yielding frame rates of up to 5 MHz. Dynamic scenes are lens-coupled to the camera, which contains a single photocathode gated on and off to control each exposure time. Deflection plates and drive electronics move the frames to different locations on the framing tube output. A single charge-coupled device (CCD) camera then records the phosphor image of all nine frames. This paper discusses setup techniques to optimize system performance. It examines two alternate philosophies for system configuration and respective performance results. We also present performance metrics for system evaluation, experimental results, and applications to four-frame cameras.

  11. The infrared camera system on the HL-2A tokamak device

    International Nuclear Information System (INIS)

    Li Wei; Lu Jie; Yi Ping

    2009-04-01

    In order to measure and analyze the heat flux on the divertor plate under different discharge conditions, an infrared camera diagnostic system for the HL-2A Device has been developed. The infrared camera diagnostic system mainly includes the thermograph with an uncooled microbolometer Focal Plane Array detector, a Zinc Selenide window, Firewire Fiber Repeaters, 50 m long fibers, a magnetic shielding box and a data acquisition card. The diagnostic system provides high spatial resolution, long distance control and real-time data acquisition. Based on the surface temperature measured by the infrared camera diagnostic system and knowledge of the thermal properties of copper, the heat flux can be derived with a heat conduction model. The infrared camera diagnostic system and preliminary results are presented in detail. (authors)

  12. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    Science.gov (United States)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  13. Quality assurance procedures for the IAEA Department of Safeguards Twin Minolta Camera Surveillance System

    International Nuclear Information System (INIS)

    Geoffrion, R.R.; Bussolini, P.L.; Stark, W.A.; Ahlquist, A.J.; Sanders, K.E.; Rubinstein, G.

    1986-01-01

    The International Atomic Energy Agency (IAEA) safeguards program provides assurance to the international community that nations are complying with nuclear safeguards treaties. In one aspect of the program, the Department of Safeguards has developed a twin Minolta camera photo surveillance systems program to assure itself and the international community that material handling is accomplished according to safeguards treaty regulations. The camera systems are positioned in strategic locations in facilities such that objective evidence can be obtained for material transactions. The films are then processed, reviewed, and used to substantiate the conclusions that nuclear material has not been diverted. Procedures have been developed to document and aid in: 1) the performance of activities involved in positioning of the camera system; 2) installation of the systems; 3) review and use of the film taken from the cameras

  14. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  15. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog to digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual speed readout at 1 X 106 or 5 X 106 pixels per second, 12 bit digital gray scale, high performance thermoelectric cooling, and built in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remote controlled submersible vehicle. The oceanographic version achieves 16 bit dynamic range at 1.5 X 105 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real time fiber optic link.

  16. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect on uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that the panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitching methods that achieve realism only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.

  17. Gamma camera system with composite solid state detector

    International Nuclear Information System (INIS)

    Gerber, M.S.; Miller, D.W.

    1977-01-01

    A composite solid-state detector is described for utilization within gamma cameras. The detector is formed of an array of detector crystals, the opposed surfaces of each of which incorporate an impedance-derived configuration for determining one coordinate of the location of discrete photons impinging upon the detector. A combined read-out for all detectors within the composite array is achieved through a row and column interconnection of the impedance configurations. Utilizing the read-outs for the respective sides of the discrete crystals, the resultant time-constant characteristic of the composite detector crystal array remains essentially that of the individual crystal detectors.

  18. Automated Meteor Detection by All-Sky Digital Camera Systems

    Czech Academy of Sciences Publication Activity Database

    Suk, Tomáš; Šimberová, Stanislava

    2017-01-01

    Roč. 120, č. 3 (2017), s. 189-215 ISSN 0167-9295 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985815 ; RVO:67985556 Keywords : meteor detection * autonomous fireball observatories * fish-eye camera * Hough transformation Subject RIV: IN - Informatics, Computer Science; BN - Astronomy, Celestial Mechanics, Astrophysics (ASU-R) OBOR OECD: Computer sciences, information science, bioinformatics (ASU-R); Astronomy (including astrophysics, space science) (ASU-R) Impact factor: 0.875, year: 2016
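The Hough transformation named in this record's keywords is the standard way to pick out a meteor trail, which appears as a straight bright streak in an all-sky frame. A minimal voting-accumulator sketch under illustrative assumptions (the toy "trail", grid sizes, and function names are not from the paper):

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Vote in (theta, rho) space for each bright pixel; a straight meteor
    trail concentrates its votes into one strong accumulator peak."""
    diag = int(np.ceil(np.hypot(*shape)))            # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for y, x in points:
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rho + diag] += 1     # one vote per angle
    return acc, thetas

# toy frame: 20 collinear bright pixels along the diagonal y = x
pts = [(i, i) for i in range(20)]
acc, thetas = hough_lines(pts, (32, 32))
i, j = np.unravel_index(acc.argmax(), acc.shape)     # peak = detected line
```

All 20 trail pixels vote into the same accumulator cell, so the peak height equals the trail length in pixels; isolated stars would spread their votes.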

  19. Superficial vessel reconstruction with a multiview camera system

    Science.gov (United States)

    Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

    Abstract. We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide deformation methods that compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three mono-complementary metal-oxide semiconductor (CMOS) cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering in which the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation on virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are ∼1 mm. PMID:26759814

  20. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios, where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 × 2048 and 6144 × 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 × 2048 pixels (12 MByte), 8 seconds for an image of 4096 × 4096 pixels (48 MByte) and 40 seconds for an image of 6144 × 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. Alternatively, the eyelike can be used as a back to most commercial 4 in × 5 in view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  1. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  2. Detection of unmanned aerial vehicles using a visible camera system.

    Science.gov (United States)

    Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C

    2017-01-20

    Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
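The variant using difference-image intensity as the motion feature, followed by blob analysis, can be sketched compactly. This is an illustrative reconstruction of those two stages only (the thresholds, frame sizes, and function names are assumptions; horizon finding and coherence analysis are omitted):

```python
import numpy as np

def motion_blobs(prev, curr, thresh):
    """Difference-image motion mask, then simple blob analysis via
    4-connected flood fill. Returns the blob count and a label image."""
    mask = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    labels = np.zeros(mask.shape, dtype=int)
    blobs = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                       # pixel already assigned to a blob
        blobs += 1
        stack = [seed]
        while stack:
            y, x = stack.pop()
            if labels[y, x] or not mask[y, x]:
                continue
            labels[y, x] = blobs
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]:
                    stack.append((ny, nx))
    return blobs, labels

# toy frames: two separated moving targets against a static background
prev = np.zeros((6, 6), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = curr[1, 2] = 255
curr[4, 4] = 255
n, _ = motion_blobs(prev, curr, thresh=50)   # two blobs detected
```

In the paper's pipeline each blob would then be scored by coherence analysis before being declared a UAV track.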

  3. Report on the Radiation Effects Testing of the Infrared and Optical Transition Radiation Camera Systems

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-04-20

    Presented in this report are the results of tests performed at Argonne National Lab in collaboration with Los Alamos National Lab to assess the reliability of the critical 99Mo production facility beam monitoring diagnostics. The main components of the beam monitoring systems are two cameras that will be exposed to radiation during accelerator operation. The purpose of this test is to assess the reliability of the cameras and related optical components when exposed to operational radiation levels. Both X-ray and neutron radiation could potentially damage camera electronics as well as optical components such as lenses and windows. This report covers results of the testing of component reliability when exposed to X-ray radiation. With the information from this study we provide recommendations for implementing protective measures for the camera systems in order to minimize the occurrence of radiation-induced failure within a ten-month production run cycle.

  4. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    Full Text Available This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files in CompactFlash cards. The second-order transformation was used to align the color and NIR images to achieve subpixel alignment in four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
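The second-order transformation used above to align the NIR band to the color band is a least-squares polynomial fit on matched control points. A minimal sketch of that fit and its application, under the assumption of the usual six-term second-order model (the synthetic points and names are illustrative):

```python
import numpy as np

def fit_second_order(src, dst):
    """Least-squares fit of a second-order 2-D polynomial mapping
    src -> dst, from (N, 2) arrays of matched control points."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (6, 2) coefficients
    return coef

def apply_second_order(coef, pts):
    """Warp points with the fitted polynomial coefficients."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return A @ coef

# synthetic control points generated by an exact second-order warp,
# so the fit should recover the mapping to numerical precision
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (12, 2))
dst = np.column_stack([2 + 1.01 * src[:, 0] + 1e-4 * src[:, 0] ** 2,
                       3 + 0.99 * src[:, 1] + 1e-4 * src[:, 0] * src[:, 1]])
coef = fit_second_order(src, dst)
err = np.abs(apply_second_order(coef, src) - dst).max()
```

With at least six well-spread control points the residual reflects only pointing error, which is how subpixel band-to-band alignment is judged.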

  5. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  6. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  7. Radiation damage of the PCO Pixelfly VGA CCD camera of the BES system on KSTAR tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Náfrádi, Gábor, E-mail: nafradi@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Kovácsik, Ákos, E-mail: kovacsik.akos@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Pór, Gábor, E-mail: por@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Lampert, Máté, E-mail: lampert.mate@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Un Nam, Yong, E-mail: yunam@nfri.re.kr [NFRI, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon 305-806 (Korea, Republic of); Zoletnik, Sándor, E-mail: zoletnik.sandor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary)

    2015-01-11

    A PCO Pixelfly VGA CCD camera, which is part of the Beam Emission Spectroscopy (BES) diagnostic system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device used for spatial calibrations, suffered serious radiation damage: white pixel defects were generated in it. The main goal of this work was to identify the origin of the radiation damage and to give solutions to avoid it. A Monte Carlo N-Particle eXtended (MCNPX) model was built using the Monte Carlo Modeling Interface Program (MCAM), and calculations were carried out to predict the neutron and gamma-ray fields at the camera position. Besides the MCNPX calculations, pure gamma-ray irradiations of the CCD camera were carried out in the Training Reactor of BME. Before, during and after the irradiations, numerous frames were taken with the camera with 5 s long exposure times. The evaluation of these frames showed that at the applied high gamma-ray dose (1.7 Gy) and dose-rate levels (up to 2 Gy/h) the number of white pixels did not increase. We found that the origin of the white pixel generation was the neutron-induced thermal hopping of the electrons, which means that in the future only neutron shielding is necessary around the CCD camera. Another solution could be to replace the CCD camera with a more radiation-tolerant one, for example a suitable CMOS camera, or to apply both solutions simultaneously.

  8. NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. P. Kersting

    2012-07-01

    Full Text Available Mobile Mapping Systems (MMS allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS, which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP: the lever arm offsets and the boresight angles relating the cameras and the IMU body frame and the ROP among the cameras (in the absence of GPS/INS data. In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multicamera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.

  9. New Method for the Calibration of Multi-Camera Mobile Mapping Systems

    Science.gov (United States)

    Kersting, A. P.; Habib, A.; Rau, J.

    2012-07-01

    Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multicamera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
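The mounting parameters described above (lever-arm offsets and boresight angles) enter the modified collinearity equations by chaining the camera pose onto the GPS/INS pose. A minimal sketch of that chaining under standard conventions; the frame definitions and names are assumptions, not the authors' implementation:

```python
import numpy as np

def camera_pose(r_ins, R_ins, lever_arm, R_boresight):
    """GPS/INS-assisted camera pose: the camera position is the INS position
    plus the lever-arm offset rotated into the mapping frame, and the camera
    attitude chains the boresight rotation onto the INS attitude."""
    r_cam = r_ins + R_ins @ lever_arm    # mapping-frame camera position
    R_cam = R_ins @ R_boresight          # mapping-frame camera rotation
    return r_cam, R_cam

# toy check: with an identity INS attitude the lever arm adds directly
r, R = camera_pose(np.array([10.0, 20.0, 30.0]), np.eye(3),
                   np.array([1.0, 0.0, 0.0]), np.eye(3))
```

In the single-step calibration, the lever arm and boresight matrix (and the ROP among the cameras) are the unknowns estimated in the integrated sensor orientation rather than fixed inputs.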

  10. Self-luminous event photography with the Marco M-4 image converter camera system

    International Nuclear Information System (INIS)

    Meyer, T.O.

    1980-02-01

    The camera system is shown to be applicable to self-luminous events, such as the flasher-gap-enhanced shock waves which are depicted. Successive photographs of the detonation wavefront progressing across a disc of high explosive (PBX 9404) are used for the determination of detonation velocity. Time intervals between film exposures for the four individual camera heads are easily measured and may extend from a few nanoseconds to ten milliseconds.

  11. Deflection control system for prestressed concrete bridges by CCD camera. CCD camera ni yoru prestressed concrete kyo no tawami kanri system

    Energy Technology Data Exchange (ETDEWEB)

    Noda, Y.; Nakayama, Y.; Arai, T. (Kawada Construction Co. Ltd., Tokyo (Japan))

    1994-03-15

    For long-span prestressed concrete bridges (continuous box girder and cable-stayed bridges), design and construction control become increasingly complicated as construction proceeds because of the cyclic nature of the work. This paper describes the method and operation of an automatic levelling module using a CCD camera and the experimental results obtained with this system. In this automatic levelling system, the elevation is measured automatically by locating the center of gravity of the target on the bridge surface with the CCD camera. The deflection control system developed here compares the value measured by the automatic levelling system with the design value obtained from the design calculation system, and manages them. Long-term, real-time continuous measurement, with the CCD camera set on the bridge surface, showed that stable measurement accuracy can be obtained. Successful application of this system demonstrates that it is an effective and efficient construction aid. 11 refs., 19 figs., 1 tab.
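The levelling principle above reduces to an intensity-weighted centroid ("center of gravity") of the target in each CCD frame; the deflection is the vertical shift of that centroid between frames. A minimal sketch of the centroid step (the toy frame and names are illustrative, not the actual system code):

```python
import numpy as np

def target_centroid(img):
    """Intensity-weighted centre of gravity (x, y) of a target image."""
    img = img.astype(float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# toy frame: a single bright target pixel at x = 3, y = 2
frame = np.zeros((5, 5))
frame[2, 3] = 10.0
cx, cy = target_centroid(frame)
```

Because the centroid is a weighted average over many pixels, it can resolve target motion to a fraction of a pixel, which is what makes long-term deflection monitoring with a fixed camera practical.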

  12. Marking system for scintillation camera and computer, and its clinical application

    Energy Technology Data Exchange (ETDEWEB)

    Narabayashi, I [Kinki Univ., Higashi-Osaka, Osaka (Japan); Ito, K; Yoshida, S; Yamaguchi, H; Kahata, S

    1976-03-01

    In routine clinical studies, we have noted that in some cases anatomical marks have been transferred to scintigrams through the scintillation camera. It was thought that radioactive point sources could be used as anatomical marks; however, this method resulted in an ill-defined area of brightness. The marking equipment for the scintillation camera used in this study consisted of a linear potentiometer and a sine-cosine potentiometer. We designed a marking method for a data processing system using the marking equipment for a scintillation camera. The data processing system of the scintillation camera is composed of an EDR-4000 (8K core memories), an MT recorder, a CRT display, and a graphic display unit, and it is connected on-line to the scintillation camera. A marking program was written in order to record marking addresses on a processed image. Using this program and the graph pen of the graphic display unit, we transferred marks to the processed image as points and lines. Subtraction scintigraphy using ¹⁹⁸Au-colloid and ⁷⁵Se-selenomethionine was performed on cases of pancreas carcinoma. After the marking addresses were recorded on a processed image by this marking method, the signals from the scintillation camera were fed into the input controller. These signals, with the marking points, were then transferred from the computer to the MT recorder. Subtraction scintigraphy by this system made it possible to examine each picture with ¹⁹⁸Au-colloid or ⁷⁵Se-selenomethionine at different times.

  13. Compton camera study for high efficiency SPECT and benchmark with Anger system

    Science.gov (United States)

    Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.

    2017-12-01

    Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application

  14. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    Science.gov (United States)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

    In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to operate a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain

  15. Camera Concepts for the Advanced Gamma-Ray Imaging System (AGIS)

    Science.gov (United States)

    Nepomuk Otte, Adam

    2009-05-01

    The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very-high-energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. The incorporation of trigger electronics and signal digitization into the camera is under study. Given the size of AGIS, the camera must be reliable, robust, and cost-effective. We are investigating several directions, including innovative technologies such as Geiger-mode avalanche photodiodes as a possible detector and switched-capacitor arrays for the digitization.

  16. The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, Gordon B.; Langton, Brian J.; /SLAC; Little, William A.; /MMR-Technologies, Mountain View, CA; Powers, Jacob R; Schindler, Rafe H.; /SLAC; Spektor, Sam; /MMR-Technologies, Mountain View, CA

    2014-05-28

    The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 Giga pixels) and its close coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies are considered for this telescope/camera environment. MMR-Technology’s Mixed Refrigerant technology was chosen. A collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants is described. Both the advantages and problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported. Current and future stages of the development program are described. (auth)

  17. Bring your own camera to the trap: An inexpensive, versatile, and portable triggering system tested on wild hummingbirds.

    Science.gov (United States)

    Rico-Guevara, Alejandro; Mickley, James

    2017-07-01

    The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high-speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low-cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high-speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost-prohibitive, allowing camera trap use in more research avenues and by more researchers.

  18. Gamma camera correction system and method for using the same

    International Nuclear Information System (INIS)

    Inbar, D.; Gafni, G.; Grimberg, E.; Bialick, K.; Koren, J.

    1986-01-01

    A gamma camera is described which consists of: (a) a detector head that includes photodetectors for producing output signals in response to radiation stimuli which are emitted by a radiation field and which interact with the detector head and produce an event; (b) signal processing circuitry responsive to the output signals of the photodetectors for producing a sum signal that is a measure of the total energy of the event; (c) an energy discriminator having a relatively wide window for comparison with the sum signal; (d) the signal processing circuitry including coordinate computation circuitry for operating on the output signals and calculating an X,Y coordinate of an event when the sum signal lies within the window of the energy discriminator; (e) an energy correction table containing spatially dependent energy windows for producing a validation signal if the total energy of an event lies within the window associated with the X,Y coordinates of the event; (f) the signal processing circuitry including a dislocation correction table containing spatially dependent correction factors for converting the X,Y coordinates of an event to relocated coordinates in accordance with correction factors determined by the X,Y coordinates; (g) a digital memory for storing a map of the radiation field; and (h) means for recording an event at its relocated coordinates in the memory if the energy correction table produces a validation signal.
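Steps (e) and (f) above amount to two spatially indexed table lookups per event: a validation test against the local energy window, then a coordinate relocation. A minimal sketch of that logic; the table layouts, units, and values are illustrative assumptions, not the patent's actual data structures:

```python
def validate_and_relocate(x, y, energy, window_table, correction_table):
    """Look up the spatially dependent energy window at (x, y); if the event
    energy lies inside it (validation signal), apply the spatially dependent
    dislocation correction and return the relocated coordinates to record."""
    lo, hi = window_table[(x, y)]
    if not (lo <= energy <= hi):
        return None                      # no validation signal: event rejected
    dx, dy = correction_table[(x, y)]
    return x + dx, y + dy                # relocated coordinates for the memory map

# toy tables for a single detector bin (energies in keV, offsets in pixels)
windows = {(1, 2): (120.0, 160.0)}
shifts = {(1, 2): (0.5, -0.25)}
ok = validate_and_relocate(1, 2, 140.0, windows, shifts)    # accepted event
bad = validate_and_relocate(1, 2, 200.0, windows, shifts)   # outside window
```

Making both tables functions of position is what lets the camera use a relatively wide global discriminator window while still rejecting events that are out of band locally.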

  19. Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Hardware

    Directory of Open Access Journals (Sweden)

    Y.-W. Kang

    2007-12-01

Full Text Available We designed and developed a multi-purpose CCD camera system for three kinds of CCDs made by KODAK Co.: KAF-0401E (768×512), KAF-1602E (1536×1024), and KAF-3200E (2184×1472). The system supports a fast USB port as well as a parallel port for data I/O and control signals. The packaging is based on two-stage circuit boards for size reduction and contains a built-in filter wheel. Basic hardware components include a clock pattern circuit, an A/D conversion circuit, a CCD data flow control circuit, and a CCD temperature control unit. The CCD temperature can be controlled with an accuracy of approximately 0.4°C over a maximum temperature range of Δ33°C. This CCD camera system has a readout noise of 6 e^{-} and a system gain of 5 e^{-}/ADU. A total of 10 CCD camera systems were produced, and our tests show that all of them perform acceptably.
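Given the reported system gain of 5 e-/ADU and readout noise of 6 e-, the per-pixel signal-to-noise ratio can be estimated with the usual shot-noise-plus-read-noise model (the model choice is ours, not the paper's):

```python
import math

GAIN_E_PER_ADU = 5.0      # system gain reported for the camera
READ_NOISE_E = 6.0        # readout noise in electrons

def snr_from_adu(signal_adu):
    """Estimate per-pixel SNR: shot noise (sqrt of electrons) added in
    quadrature with the fixed readout noise."""
    electrons = signal_adu * GAIN_E_PER_ADU
    noise = math.sqrt(electrons + READ_NOISE_E ** 2)
    return electrons / noise

# At 2000 ADU the pixel holds 10000 e-, and read noise is nearly negligible:
print(round(snr_from_adu(2000.0), 1))   # 99.8
```

At high signal the SNR approaches sqrt(electrons), which is why the small 6 e- read noise matters mainly for faint sources.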

  20. Design and performance of an acquisition and control system for a positron camera with novel detectors

    International Nuclear Information System (INIS)

    Symonds-Tayler, J.R.N.; Reader, A.J.; Flower, M.A.

    1996-01-01

A Sun-based data acquisition and control (DAQ) system has been designed for PETRRA, a whole-body positron camera using large-area BaF2-TMAE detectors. The DAQ system uses a high-speed digital I/O card (S16D) installed on the S-bus of a SPARC10 and a specially-designed Positron Camera Interface (PCI), which also controls both the gantry and horizontal couch motion. Data in the form of different types of 6-byte packets are acquired in list mode. Tests with a signal generator show that the DAQ system should be able to cater for coincidence count-rates up to 100 kcps. The predicted count loss due to the DAQ system is ∼13% at this count rate, provided asynchronous-read based software is used. The list-mode data acquisition system designed for PETRRA could be adapted for other 3D PET cameras with similar data rates

  1. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feedthrough for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition

  2. High performance CCD camera system for digitalisation of 2D DIGE gels.

    Science.gov (United States)

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as alternative to a traditionally employed, high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from linear range and limit of detection. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Edge turbulence measurement in Heliotron J using a combination of hybrid probe system and fast cameras

    International Nuclear Information System (INIS)

    Nishino, N.; Zang, L.; Takeuchi, M.; Mizuuchi, T.; Ohshima, S.; Kasajima, K.; Sha, M.; Mukai, K.; Lee, H.Y.; Nagasaki, K.; Okada, H.; Minami, T.; Kobayashi, S.; Yamamoto, S.; Konoshima, S.; Nakamura, Y.; Sano, F.

    2013-01-01

A hybrid probe system (a combination of Langmuir probes and magnetic probes), a fast camera, and a gas puffing system were installed at the same toroidal section to study edge plasma turbulence/fluctuations in Heliotron J, especially blobs (intermittent filaments). The fast camera views the location of the probe head, so that the probe system yields the time evolution of the turbulence/fluctuations while the camera images their spatial profile. Gas puffing at the same toroidal section was used both to control the plasma density and for the simultaneous gas puff imaging technique. Using this combined system, a filamentary structure associated with magnetic fluctuation was found in Heliotron J for the first time. Another kind of fluctuation was also observed in a separate experiment. This combined measurement enables MHD activity and electrostatic activity to be distinguished

  4. Design of comprehensive general maintenance service system of aerial reconnaissance camera

    Directory of Open Access Journals (Sweden)

    Li Xu

    2016-01-01

Full Text Available To address the lack of maintenance support equipment for airborne reconnaissance cameras and the poor commonality across camera models and between depot and field use, a design scheme for a comprehensive general-purpose maintenance system based on a PC-104 bus architecture and an ARM wireless test module is proposed, following ATE design practice. The scheme uses embedded technology to design the system, which meets these requirements. By using the technique of classified switching, the hardware resources are reasonably extended, realizing general maintenance support for various types of aerial reconnaissance cameras. Using the concept of "wireless test", the test interface is extended to provide comprehensive support for the aerial reconnaissance camera both in the depot and in the field. Application has proven that the system works stably, has good generality and practicability, and has broad application prospects.

  5. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

Full Text Available A submersible video camera system, which aimed to record the growth of aquatic vegetation in Antarctic lakes over one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (to a High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images than the previous submersible video system, without increasing power consumption. This system was set on the lake floor in Lake Naga Ike (a tentative name) in Skarvsnes on the Soya Coast, during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater images over one year has been started by our diving operation.

  6. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented

  7. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-01-01

An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect the vacuum vessel internal structures in both the visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diam fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35-mm Nikon F3 still camera, or (5) a 16-mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented

  8. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  9. Method used to test the imaging consistency of binocular camera's left-right optical system

    Science.gov (United States)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing overall imaging consistency. Conventional optical-system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table, and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a constraint on gray level based on the corresponding coordinates of the left and right images is established, and imaging consistency is evaluated through the standard deviation σ of the imaging grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ distribution of the imaging gray difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
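The evaluation statistic, the standard deviation σ of the grayscale difference D(x, y), is straightforward to compute. A minimal sketch, interpreting the 5% criterion relative to an 8-bit full scale of 255 (that interpretation is our assumption, not stated in the abstract):

```python
from statistics import pstdev

def consistency_sigma(left, right):
    """Standard deviation of the grayscale difference D(x, y) between
    corresponding pixels of the left and right images (flattened lists)."""
    diffs = [l - r for l, r in zip(left, right)]
    return pstdev(diffs)

left  = [100, 102, 98, 101, 99]
right = [101, 100, 99, 100, 100]
sigma = consistency_sigma(left, right)
# acceptance rule: 3*sigma of D(x, y) must not exceed 5% (here, of full scale 255)
print(3 * sigma <= 0.05 * 255)   # True
```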

  10. Product Plan of New Generation System Camera "OLYMPUS PEN E-P1"

    Science.gov (United States)

    Ogawa, Haruo

"OLYMPUS PEN E-P1", a new-generation system camera, is Olympus's first product for the new "Micro Four-thirds System" standard for high-resolution mirror-less cameras. It has continued to sell well since its release on July 3, 2009, on the concept of "small and stylish design, easy operation and SLR image quality". The half-size film camera "OLYMPUS PEN", for its part, was popular for its concept of "small and stylish design and original mechanism" from its first model in 1959, recording sales of more than 17 million units across 17 models. Thanks to the 50th-anniversary interest and the emotional value of the Olympus PEN, the PEN E-P1 achieved strong sales. I explain the approach to product planning, which considered not only simple functional value but also emotional value in planning the first product of the "Micro Four-thirds System".

  11. Registration of an on-axis see-through head-mounted display and camera system

    Science.gov (United States)

    Luo, Gang; Rensing, Noa M.; Weststrate, Evan; Peli, Eli

    2005-02-01

An optical see-through head-mounted display (HMD) system integrating a miniature camera that is aligned with the user's pupil is developed and tested. Such an HMD system has potential value in many augmented reality applications, in which registration of the virtual display to the real scene is one of the critical aspects. The camera's alignment to the user's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depths. In reality, a small camera-eye misalignment may still occur in such a system due to the inevitable variations of HMD wearing position with respect to the eye. The effects of such errors are measured. Calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances, except for a shift. The impact of the prismatic effect of the display lens on registration is also discussed.
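The depth dependence of registration error for a camera displaced from the pupil can be illustrated with simple geometry; the 5 mm offset below is an invented example value, not one from the study:

```python
import math

def registration_error_deg(offset_mm, distance_mm):
    """Angular misregistration when the camera is displaced laterally by
    offset_mm from the pupil and the scene lies at distance_mm."""
    return math.degrees(math.atan2(offset_mm, distance_mm))

# A fixed 5 mm camera-eye offset matters less as viewing distance grows:
for d in (500.0, 1000.0, 4000.0):
    print(round(registration_error_deg(5.0, d), 3))   # 0.573, 0.286, 0.072
```

This is the intuition behind the paper's point that aligning the camera with the pupil keeps registration error low across a wide range of depths: the residual error comes only from small wearing-position offsets, and it shrinks with distance.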

  12. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.

  13. Proposal of secure camera-based radiation warning system for nuclear detection

    International Nuclear Information System (INIS)

    Tsuchiya, Ken'ichi; Kurosawa, Kenji; Akiba, Norimitsu; Kakuda, Hidetoshi; Imoto, Daisuke; Hirabayashi, Manato; Kuroki, Kenro

    2016-01-01

Counter-terrorism measures against radiological and nuclear threats are a significant issue ahead of the Tokyo 2020 Olympic and Paralympic Games. In terms of cost benefit, it is not easy to build a dedicated warning system for nuclear detection to prevent a Dirty Bomb attack (dispersion of radioactive materials using a conventional explosive) or a Silent Source attack (hidden radioactive materials). We propose a nuclear detection system using already installed secure cameras. We describe a method for estimating radiation dose from the noise pattern that radiation causes in CCD images. Images of dosimeters under neutron and gamma-ray irradiation (0.1 mSv-100 mSv) were recorded with a CCD video camera. We confirmed that the amount of noise in the CCD images increased with radiation exposure. Radiation detection using the CMOS sensors of secure cameras or cell phones has been implemented before. In this presentation, however, we propose a warning system that includes neutron detection to search for shielded nuclear materials or radiation exposure devices using criticality. (author)
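The dose-estimation idea, counting radiation-induced bright pixels in dark-subtracted frames, can be sketched as follows. The threshold and the hits-per-mSv calibration constant are invented placeholders, not values from the study:

```python
# Hedged sketch: count radiation-induced bright pixels in a CCD frame and
# map the count to an approximate dose through a calibration factor.
# Threshold and calibration constant are illustrative placeholders.

def noisy_pixel_count(frame, dark_frame, threshold=20):
    """Count pixels whose dark-subtracted value exceeds the threshold."""
    return sum(1 for p, d in zip(frame, dark_frame) if p - d > threshold)

PIXELS_PER_MSV = 400.0   # hypothetical calibration: hits per frame per mSv

def estimated_dose_msv(frame, dark_frame):
    return noisy_pixel_count(frame, dark_frame) / PIXELS_PER_MSV

dark  = [10] * 8
frame = [10, 250, 12, 300, 11, 10, 180, 10]   # three radiation hits
print(noisy_pixel_count(frame, dark))          # 3
print(estimated_dose_msv(frame, dark))         # 0.0075
```

In a deployed system the calibration would be measured per camera model and irradiation type, as the dosimeter experiments in the abstract suggest.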

  14. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2011-01-01

Full Text Available A new system to very precisely couple radiation from a moving source into a Fourier Transform Infrared (FTIR) spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera, and a homemade program to process the camera data and control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10 arc s for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system makes it possible to document residual pointing errors and to point onto the solar disk center even in the case of variable intensity distributions across the source due to cirrus or haze.

  15. Calibration of a dual-PTZ camera system for stereo vision

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2010-08-01

In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values to be solved by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.

  16. An inexpensive compact automatic camera system for wildlife research

    Science.gov (United States)

    William R. Danielson; Richard M. DeGraaf; Todd K. Fuller

    1996-01-01

    This paper describes the design, conversion, and deployment of a reliable, compact, automatic multiple-exposure photographic system that was used to photograph nest predation events. This system may be the most versatile yet described in the literature because of its simplicity, portability, and dependability. The system was very reliable because it was designed around...

  17. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

Photogrammetry has become an effective method for determining electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can acquire multi-angle head images simultaneously from a single camera position. With two planar mirrors aligned at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
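The choice of 51.4° follows from the classic two-mirror image count: mirrors at angle θ produce 360/θ − 1 reflected views, so θ = 360°/7 yields six mirror views plus the direct view, i.e. seven simultaneous views. A quick check (our derivation, consistent with the abstract):

```python
# Why 51.4 degrees: two plane mirrors at angle theta produce 360/theta - 1
# reflected images; theta = 360/7 deg gives six mirror views, which together
# with the direct line of sight yield the seven simultaneous head views.

def simultaneous_views(theta_deg):
    reflected = round(360.0 / theta_deg) - 1   # classic two-mirror image count
    return reflected + 1                        # plus the direct view

print(simultaneous_views(360.0 / 7.0))  # 7
print(round(360.0 / 7.0, 1))            # 51.4
```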

  18. Development of X-ray CCD camera system with high readout rate using ASIC

    International Nuclear Information System (INIS)

    Nakajima, Hiroshi; Matsuura, Daisuke; Anabuki, Naohisa; Miyata, Emi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Katayama, Haruyoshi

    2009-01-01

We report on the development of an X-ray charge-coupled device (CCD) camera system with high readout rate using application-specific integrated circuit (ASIC) and Camera Link standard. The distinctive ΔΣ type analog-to-digital converter is introduced into the chip to achieve effective noise shaping and to obtain a high resolution with relatively simple circuits. The unit test proved moderately low equivalent input noise of 70 μV with a high readout pixel rate of 625 kHz, while the entire chip consumes only 100 mW. The Camera Link standard was applied for the connectivity between the camera system and frame grabbers. In the initial test of the whole system, we adopted a P-channel CCD with a thick depletion layer developed for the X-ray CCD camera onboard the next Japanese X-ray astronomical satellite. The characteristic X-rays from 109Cd were successfully read out, resulting in an energy resolution of 379(±7) eV (FWHM) at 22.1 keV, that is, ΔE/E=1.7% with a readout rate of 44 kHz.

  19. A Framework for People Re-Identification in Multi-Camera Surveillance Systems

    Science.gov (United States)

    Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud

    2017-01-01

    People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…

  20. The hardware and software design for digital data acquisition system of γ-camera

    International Nuclear Information System (INIS)

    Zhang Chong; Jin Yongjie

    2006-01-01

The digital data acquisition system used to upgrade traditional γ-cameras is presented, including hardware and software. The system has many advantages, such as small volume, versatile functions, high-quality images, low cost, and extensibility. (authors)

  1. Euratom multi-camera optical surveillance system (EMOSS) - a digital solution

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.G.; Taillade, B.; Pryck, C. de.

    1991-01-01

In 1989 the Euratom Safeguards Directorate of the Commission of the European Communities drew up functional and draft technical specifications for a new fully digital multi-camera optical surveillance system. HYMATOM of Castries designed and built a prototype unit for laboratory and field tests. This paper reports the system design and first test results

  2. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish monitoring system consists of two parts: a waterproof box holding the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera, whose advantage is very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We chose a tablet PC because it is small, cheap, relatively fast, and has low power consumption. On the computer we run software with advanced motion-detection capabilities, so even small fish can be detected. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, to Google Drive. The fish migration monitoring system has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of these photographs has already been prepared, estimating the fish species passing the fish pass and their frequency.
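The motion-detection step that decides when to save a photograph can be sketched as simple frame differencing; the deployed system used off-the-shelf software, so the thresholds below are illustrative only:

```python
# Pure-Python sketch of motion detection by frame differencing: compare the
# current frame with the previous one and save a photo when enough pixels
# changed. Thresholds are illustrative, not from the deployed software.

def motion_detected(prev, curr, pixel_thresh=15, min_changed=2):
    """Both frames are flat lists of grayscale values of equal length."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_thresh)
    return changed >= min_changed

saved = []
frames = [
    [50, 50, 50, 50],     # empty fish pass
    [50, 51, 49, 50],     # sensor noise only
    [50, 90, 95, 50],     # a fish swims through
]
for prev, curr in zip(frames, frames[1:]):
    if motion_detected(prev, curr):
        saved.append(curr)        # in production: write a JPEG locally and to backup
print(len(saved))  # 1
```

The two-threshold design (per-pixel change plus a minimum count of changed pixels) is what lets such detectors ignore sensor noise while still catching small fish.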

  3. Integrated electric circuit CAD system in Minolta Camera Co. Ltd

    Energy Technology Data Exchange (ETDEWEB)

    Nakagami, Tsuyoshi; Hirata, Sumiaki; Matsumura, Fumihiko

    1988-08-26

The development background, fundamental concept, details, and future plans of the integrated electric circuit CAD system for OA equipment are presented. The central integrated database is basically intended to store experience and know-how, to cover the wide range of data required for design, and to provide a friendly interface. This easy-to-use integrated database covers drawing data, parts information, design standards, know-how, and system data. The system contains a circuit design function to support drawing circuit diagrams, a wiring design function to support the wiring and layout of printed circuit boards and various parts in an integrated manner, and functions to verify designs, to make full use of parts and technical information, and to maintain system security. In the future, once the system is fully in operation, this integrated design system is expected to shorten the design period, improve quality, and save cost. (19 figs, 2 tabs)

  4. Development of the geoCamera, a System for Mapping Ice from a Ship

    Science.gov (United States)

    Arsenault, R.; Clemente-Colon, P.

    2012-12-01

The geoCamera produces maps of the ice surrounding an ice-capable ship by combining images from one or more digital cameras with the ship's position and attitude data. Maps are produced along the ship's path, with the achievable width and resolution depending on camera mounting height as well as camera resolution and lens parameters. Our system has produced maps up to 2000 m wide at 1 m resolution. Once installed and calibrated, the system is designed to operate automatically, producing maps in near real-time and making them available to on-board users via existing information systems. The resulting small-scale maps complement existing satellite-based products as well as on-board observations. Development versions were temporarily deployed in Antarctica on the RV Nathaniel B. Palmer in 2010 and in the Arctic on the USCGC Healy in 2011. A permanent system was deployed during the summer of 2012 on the USCGC Healy. To make the system attractive to other ships of opportunity, design goals include using existing ship systems when practical, using low-cost commercial off-the-shelf components if additional hardware is necessary, automating the process to virtually eliminate adding to the workload of the ship's technicians, and making the software components modular and flexible enough to allow seamless integration with a ship's particular IT system.

  5. A Distributed Wireless Camera System for the Management of Parking Spaces.

    Science.gov (United States)

    Vítek, Stanislav; Melničuk, Petr

    2017-12-28

The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of the parking space based on the information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.
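The HOG feature underlying the classifier is a magnitude-weighted histogram of gradient orientations. A toy single-cell version (a simplification of the full block-normalized descriptor; parameters are illustrative), whose output vector would be fed to the SVM:

```python
import math

def orientation_histogram(img, bins=9):
    """Tiny HOG-flavoured feature: histogram of unsigned gradient
    orientations (0-180 deg), weighted by gradient magnitude, over one cell.
    img is a list of rows of grayscale values."""
    hist = [0.0] * bins
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * bins) % bins] += mag
    return hist

# A vertical edge: all gradient energy lands in the horizontal-gradient bin.
img = [[0, 0, 9, 9]] * 4
h = orientation_histogram(img)
print(h[0] > 0 and all(v == 0 for v in h[1:]))  # True
```

The real descriptor tiles the image into many cells and normalizes over blocks (as in OpenCV's `HOGDescriptor`), but the per-cell computation is essentially this.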

  6. A Distributed Wireless Camera System for the Management of Parking Spaces

    Directory of Open Access Journals (Sweden)

    Stanislav Vítek

    2017-12-01

Full Text Available The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of the parking space based on the information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.

  7. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
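
    One of the listed DSP stages, auto white balance, is commonly implemented with the gray-world heuristic; the sketch below is a generic illustration of that heuristic, not the authors' hardware pipeline.

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world auto white balance: scale each RGB channel so its mean
    matches the mean over all channels (assumes the scene averages to gray)."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # channel gains
    return np.clip(img * gains, 0.0, 255.0)

# A flat colour-cast image becomes neutral gray after balancing.
tinted = np.tile(np.array([100.0, 50.0, 200.0]), (4, 4, 1))
balanced = gray_world_wb(tinted)
```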

  8. The camera of the fifth H.E.S.S. telescope. Part I: System description

    Energy Technology Data Exchange (ETDEWEB)

    Bolmont, J., E-mail: bolmont@in2p3.fr [LPNHE, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, CNRS/IN2P3, 4 Place Jussieu, F-75252 Paris Cedex 5 (France); Corona, P.; Gauron, P.; Ghislain, P.; Goffin, C.; Guevara Riveros, L.; Huppert, J.-F.; Martineau-Huynh, O.; Nayman, P.; Parraud, J.-M.; Tavernet, J.-P.; Toussenel, F.; Vincent, D.; Vincent, P. [LPNHE, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, CNRS/IN2P3, 4 Place Jussieu, F-75252 Paris Cedex 5 (France); Bertoli, W.; Espigat, P.; Punch, M. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, F-75205 Paris Cedex 13 (France); Besin, D.; Delagnes, E.; Glicenstein, J.-F. [CEA Saclay, DSM/IRFU, F-91191 Gif-Sur-Yvette Cedex (France); and others

    2014-10-11

    In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S. (High Energy Stereoscopic System) array reached their tenth year of operation in the Khomas Highlands, Namibia, a fifth telescope took its first data as part of the system. This new Cherenkov detector, comprising a 614.5 m{sup 2} reflector with a highly pixelized camera in its focal plane, improves the sensitivity of the current array by a factor of two and extends its energy domain down to a few tens of GeV. The present part I of the paper gives a detailed description of the fifth H.E.S.S. telescope's camera, covering both the hardware and the software and emphasizing the main improvements as compared to previous H.E.S.S. camera technology.

  9. Development of Automated Tracking System with Active Cameras for Figure Skating

    Science.gov (United States)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
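
    The rink-extraction step above relies on region growing from a seed pixel. A generic 4-neighbour flood version (an illustration, not the authors' implementation) can be sketched as:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Region growing: flood outward from the seed, accepting 4-neighbours
    whose intensity is within tol of the seed's intensity."""
    h, w = img.shape
    grown = np.zeros((h, w), bool)
    seed_val = int(img[seed])
    queue = deque([seed])
    grown[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not grown[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown

# A bright 4x4 square on a dark background is recovered from one seed inside it.
img = np.zeros((10, 10), dtype=np.uint8)
img[2:6, 2:6] = 200
mask = region_grow(img, (3, 3))
```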

  10. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    Science.gov (United States)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras, compared with that of conventional cameras (with photographic film). For example, diffraction arising from the miniaturization of the optical modules tends to decrease image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is thus to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is provided, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm for computing the MTF. Finally, an investigation of the measurement accuracy against various methods, such as the bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
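
    For a sinusoidal target, the MTF at a given spatial frequency is simply the output modulation divided by the input modulation. The sketch below illustrates that definition, standing in for the whole measurement chain with a Gaussian lens blur (an assumption made purely for illustration):

```python
import numpy as np

def modulation(signal):
    """Michelson modulation (contrast) of a 1-D signal."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def mtf_at(freq, n=256, blur_sigma=2.0):
    """MTF at one spatial frequency (cycles per n samples): modulation of a
    blurred sine target relative to the input modulation. The 'lens' is a
    Gaussian blur applied in the Fourier domain (an LSI system model)."""
    x = np.arange(n)
    target = 0.5 + 0.4 * np.sin(2 * np.pi * freq * x / n)
    f = np.fft.rfftfreq(n)                          # cycles per sample
    otf = np.exp(-2.0 * (np.pi * f * blur_sigma) ** 2)
    blurred = np.fft.irfft(np.fft.rfft(target) * otf, n)
    return modulation(blurred) / modulation(target)
```

    For a pure sine the measured ratio equals the Gaussian OTF value at that frequency, which makes the sketch easy to sanity-check.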

  11. System for whole body imaging and count profiling with a scintillation camera

    International Nuclear Information System (INIS)

    Kaplan, E.; Cooke, M.B.D.

    1976-01-01

    The present invention relates to a method of and apparatus for the radionuclide imaging of the whole body of a patient using an unmodified scintillation camera which permits a patient to be continuously moved under or over the stationary camera face along one axis at a time, parallel passes being made to increase the dimension of the other axis. The system includes a unique electrical circuit which makes it possible to digitally generate new matrix coordinates by summing the coordinates of a first fixed reference frame and the coordinates of a second moving reference frame. 19 claims, 7 figures

  12. Radiometric calibration of wide-field camera system with an application in astronomy

    Science.gov (United States)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on the wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
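
    Given an estimated CRF, HDR radiance recovery inverts it, divides by exposure time, and merges across frames. The sketch below assumes a simple power-law CRF and a hat weighting favouring mid-tones; this is a generic illustration of the principle, not the estimation algorithms compared in the paper.

```python
import numpy as np

def radiance_from_exposures(images, exposures, gamma=2.2):
    """HDR radiance map under an assumed power-law CRF, I = (E*t)**(1/gamma):
    invert the CRF, divide by exposure time t, and average with a hat
    weighting that down-weights under- and over-exposed pixels."""
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        img = img.astype(float)
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0   # hat weighting
        E = (img / 255.0) ** gamma / t              # inverse CRF, per-second
        acc += w * E
        wsum += w
    return acc / np.maximum(wsum, 1e-9)

# Synthetic check: two exposures of a known radiance map are merged back.
E_true = np.array([[0.1, 0.2], [0.3, 0.4]])
times = [1.0, 2.0]
frames = [((E_true * t) ** (1 / 2.2)) * 255.0 for t in times]
E_rec = radiance_from_exposures(frames, times)
```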

  13. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.
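
    Once per-object depth is available, step (iii) reduces to: where two object regions overlap, the object farther from the camera is the occluded one. A toy bounding-box version of that decision (an illustration, not the paper's algorithm):

```python
def occluded_region(box_a, depth_a, box_b, depth_b):
    """Intersect two axis-aligned boxes (x1, y1, x2, y2); in the overlap,
    the object with the larger depth (farther away) is occluded.
    Returns (overlap_box, occluded_label) or None if the boxes are disjoint."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    if ix1 >= ix2 or iy1 >= iy2:
        return None                      # no overlap, no occlusion
    occluded = 'A' if depth_a > depth_b else 'B'
    return (ix1, iy1, ix2, iy2), occluded
```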

  14. Gamma camera computer system quality control for conventional and tomographic use

    International Nuclear Information System (INIS)

    Laird, E.E.; Allan, W.; Williams, E.D.

    1983-01-01

    The proposition that some of the proposed measurements of gamma camera performance parameters for routine quality control are redundant and that only the uniformity requires daily monitoring was examined. To test this proposition, measurements of gamma camera performance were carried out under normal operating conditions and also with the introduction of faults (offset window, offset PM tube). Results for the uniform flood field are presented for non-uniformity, intrinsic spatial resolution, linearity and relative system sensitivity. The response to introduced faults revealed that while the non-uniformity response pattern of the gamma camera was clearly affected, both measurements and qualitative indications of the other performance parameters did not necessarily show any deterioration. (U.K.)
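
    The daily flood-field check discussed above is usually quantified with the NEMA-style integral uniformity figure; a minimal version over a whole image (clinical QC restricts it to the useful field of view) is:

```python
import numpy as np

def integral_uniformity(flood):
    """NEMA-style integral uniformity (%) of a flood-field image:
    100 * (max - min) / (max + min) over the analysed pixels."""
    flood = flood.astype(float)
    return 100.0 * (flood.max() - flood.min()) / (flood.max() + flood.min())

# A perfectly flat flood with one cold pixel (90 vs 100 counts).
flood = np.full((64, 64), 100.0)
flood[10, 10] = 90.0
iu = integral_uniformity(flood)
```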

  15. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Full Text Available Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing or measurements in confined spaces with limited optical access.

  16. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing or measurements in confined spaces with limited optical access.
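
    PSP intensity is converted to pressure through a Stern-Volmer-type calibration, I_ref / I = A + B * (P / P_ref). The sketch below inverts that relation; the coefficients A and B are illustrative placeholders, not values from this study.

```python
def psp_pressure(I, I_ref, A=0.2, B=0.8, P_ref=101325.0):
    """Invert a Stern-Volmer PSP calibration, I_ref / I = A + B * (P / P_ref),
    to recover pressure P in Pa. A and B are hypothetical calibration
    constants (found per paint batch from a calibration chamber)."""
    return P_ref * ((I_ref / I) - A) / B

# At reference intensity the model returns the reference pressure;
# darker paint (lower I, more oxygen quenching) means higher pressure.
p_ref = psp_pressure(1.0, 1.0)
p_dark = psp_pressure(0.5, 1.0)
```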

  17. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    NARCIS (Netherlands)

    Simi, M.; Tolou, N.; Valdastri, P.; Herder, J.L.; Menciassi, A.; Dario, P.

    2012-01-01

    A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The

  18. Application of heterogeneous multiple camera system with panoramic capabilities in a harbor environment

    NARCIS (Netherlands)

    Schwering, P.B.W.; Lensen, H.A.; Broek, S.P. van den; Hollander, R.J.M. den; Mark, W. van der; Bouma, H.; Kemp, R.A.W.

    2009-01-01

    In a harbor environment threats like explosives-packed rubber boats, mine-carrying swimmers and divers must be detected in an early stage. This paper describes the integration and use of a heterogeneous multiple camera system with panoramic observation capabilities for detecting these small vessels

  19. Using Surveillance Camera Systems to Monitor Public Domains: Can Abuse Be Prevented

    Science.gov (United States)

    2006-03-01

    relationship with a 16-year old girl failed. The incident was captured by a New York City Police Department surveillance camera. Although the image...administrators stated that the images recorded were “…nothing more than images of a few bras and panties .”17 The use of CCTV surveillance systems for

  20. Adaptive Neural-Sliding Mode Control of Active Suspension System for Camera Stabilization

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-01-01

    Full Text Available The camera always suffers from image instability on the moving vehicle due to the unintentional vibrations caused by road roughness. This paper presents a novel adaptive neural network based sliding mode control strategy to stabilize the image captured area of the camera. The purpose is to suppress vertical displacement of the sprung mass with the application of an active suspension system. Since the active suspension system has nonlinear and time-varying characteristics, an adaptive neural network (ANN) is proposed to make the controller robust against systematic uncertainties, which releases the model-based requirement of sliding mode control, and the weighting matrix is adjusted online according to a Lyapunov function. The control system consists of two loops. The outer loop is a position controller designed with the sliding mode strategy, while the PID controller in the inner loop tracks the desired force. The closed-loop stability and asymptotic convergence performance can be guaranteed on the basis of Lyapunov stability theory. Finally, the simulation results show that the employed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
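
    The outer-loop sliding mode controller mentioned above is built around a sliding surface s = e_dot + lam*e and a switching law that drives s to zero. A toy scalar version with a boundary layer against chattering (gains are illustrative, not the paper's):

```python
def smc_control(e, e_dot, lam=2.0, k=5.0, phi=0.05):
    """Scalar sliding-mode control sketch: surface s = e_dot + lam*e and a
    saturated switching law u = -k * sat(s / phi). The boundary layer phi
    replaces the discontinuous sign function to soften chattering.
    Returns (s, u); lam, k and phi are illustrative gains."""
    s = e_dot + lam * e
    sat = max(-1.0, min(1.0, s / phi))
    return s, -k * sat
```

    Far from the surface the control saturates at -k (or +k); inside the boundary layer it degrades gracefully to a proportional law.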

  1. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera always suffers from image instability on the moving vehicle due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach mixed with linear quadratic regulator control for a quarter-car active suspension system to stabilize the image captured area of the camera. An active suspension system provides extra force through the actuator which allows it to suppress vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function neural network is proposed to construct the map between the state error and the compensation component, which can correct the optimal state-feedback control law. The weights matrix of the radial basis function neural network is adaptively tuned online. Then, the closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
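
    The compensation component described above is the output of a radial basis function network evaluated on the state error. A minimal forward pass (with fixed weights for illustration; in the paper the weights are tuned online) looks like:

```python
import numpy as np

def rbf_compensation(x, centers, width, weights):
    """RBF-network compensation term: Gaussian hidden units centred on
    'centers' (one row per unit), evaluated at state error x, then linearly
    combined by 'weights'. Weights are fixed here; adaptive tuning omitted."""
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * width ** 2))
    return float(weights @ phi)

# At a unit's centre that unit's activation is exactly 1.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
weights = np.array([2.0, 0.0])
u_comp = rbf_compensation(np.array([0.0, 0.0]), centers, 1.0, weights)
```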

  2. Development of a hardware-based registration system for the multimodal medical images by USB cameras

    International Nuclear Information System (INIS)

    Iwata, Michiaki; Minato, Kotaro; Watabe, Hiroshi; Koshino, Kazuhiro; Yamamoto, Akihide; Iida, Hidehiro

    2009-01-01

    There are several medical imaging scanners and each modality has a different aspect for visualizing the inside of the human body. By combining these images, diagnostic accuracy can be improved, and therefore several attempts at multimodal image registration have been implemented. One popular approach is to use hybrid image scanners such as positron emission tomography (PET)/CT and single photon emission computed tomography (SPECT)/CT. However, these hybrid scanners are expensive and not widely available. We developed a multimodal image registration system with universal serial bus (USB) cameras, which is inexpensive and applicable to any combination of existing conventional imaging scanners. The multiple USB cameras determine the three-dimensional position of a patient while scanning. Using these positions and a rigid body transformation, the acquired image is registered to the common coordinate system shared with the other scanner. For each scanner, a reference marker is attached to the gantry. Since the USB cameras observe the reference marker's position, their locations can be arbitrary. In order to validate the system, we scanned a cardiac phantom at different positions with PET and MRI scanners. Using this system, images from PET and MRI were visually aligned, and good correlations between PET and MRI images were obtained after the registration. The results suggest this system can be inexpensively used for multimodal image registrations. (author)
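
    The rigid body transformation at the heart of this registration can be estimated from matched marker points with the Kabsch/Procrustes method. A 2-D sketch (the ITER-independent, textbook algorithm, not the authors' code):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q_i ~= R @ P_i + t,
    estimated from matched point sets P, Q (n x d arrays) via the
    Kabsch/Procrustes SVD method."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    S = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ S @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known 90-degree rotation plus translation from 3 marker points.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])
t_true = np.array([1.0, 2.0])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
```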

  3. Characteristics of a single photon emission tomography system with a wide field gamma camera

    International Nuclear Information System (INIS)

    Mathonnat, F.; Soussaline, F.; Todd-Pokropek, A.E.; Kellershohn, C.

    1979-01-01

    This text summarizes a study describing the imaging capabilities of a single photon emission tomography system composed of a conventional wide-field gamma camera connected to a computer. The encouraging results achieved on the various phantoms studied suggest a significant development of this technique in clinical work in Nuclear Medicine departments [fr

  4. Motion Sickness When Driving With a Head-Slaved Camera System

    Science.gov (United States)

    2003-02-01

    YPR-765 under armour (Report TM-97-A026). Soesterberg, The Netherlands: TNO Human Factors Research Institute. Van Erp, J.B.F., Padmos, P. & Tenkink, E...Institute. Van Erp, J.B.F., Van den Dobbelsteen, J.J. & Padmos, P. (1998). Improved camera-monitor system for driving YPR-765 under armour (Report TM-98

  5. TENTACLE Multi-Camera Immersive Surveillance System Phase 2

    Science.gov (United States)

    2015-04-16

    variants available. For a video with 640x480 frame resolution the CPU CIE L*a*b* performs at 11 FPS (Frames Per Second) on a quad core Xeon processor ...running at 2.67 GHz. For the same video the GPU variant of CIE L*a*b* background model processing speed is 44 FPS on the same processor with an Nvidia...The model is also sometimes called HMAX (Hierarchical Model and X). The BIM tries to build a system that emulates object recognition in human cortex

  6. The development of a reliable multi-camera multiplexed CCTV system for safeguards surveillance

    International Nuclear Information System (INIS)

    Gilbert, R.S.; Chiang, K.S.

    1986-01-01

    The background, requirements and system details for a simple reliable Closed Circuit Television (CCTV) system are described. The design of the system presented allows up to 8 CCTV cameras of different makes to be multiplexed and their output recorded by three Time Lapse Recorders (TLRs) operating in parallel. This multiplex or MUX-CCTV system is intended to be used by the IAEA for surveillance at several nuclear facilities. The system is unique in that it allows all of the cameras to be operated asynchronously and it provides high quality video during replay. It also incorporates video event counting logic which enables IAEA inspectors to take a very quick inventory of the events recorded during unattended operation. This paper discusses the other phases of the development of the system and presents some speculation about future changes which may enhance performance

  7. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    OpenAIRE

    Thuy Tuong Nguyen; David C. Slaughter; Bradley D. Hanson; Andrew Barber; Amy Freitas; Daniel Robles; Erin Whelan

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a t...

  8. Motionless active depth from defocus system using smart optics for camera autofocus applications

    Science.gov (United States)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
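
    Depth from Defocus ultimately rests on the thin-lens relation between object distance and defocus blur size. A sketch of the forward model (an illustration of the geometry, not the paper's DFD algorithm):

```python
def blur_diameter(u, f, s, D):
    """Thin-lens defocus blur diameter for an object at distance u when the
    sensor is positioned to focus at distance s, with focal length f and
    aperture diameter D (all in metres, u > f and s > f assumed)."""
    v_obj = 1.0 / (1.0 / f - 1.0 / u)   # image distance for the object
    v_foc = 1.0 / (1.0 / f - 1.0 / s)   # sensor position (focused at s)
    return D * abs(v_foc - v_obj) / v_obj

# In focus: zero blur. Moving the object away from the focus distance
# grows the blur, which is what DFD inverts to recover depth.
b_in_focus = blur_diameter(2.0, 0.05, 2.0, 0.02)
b_defocused = blur_diameter(4.0, 0.05, 2.0, 0.02)
```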

  9. A System for Acquisition, Processing and Visualization of Image Time Series from Multiple Camera Networks

    Directory of Open Access Journals (Sweden)

    Cemal Melih Tanis

    2018-06-01

    Full Text Available A system for multiple camera networks is proposed for continuous monitoring of ecosystems by processing image time series. The system is built around the Finnish Meteorological Image PROcessing Toolbox (FMIPROT), which includes data acquisition, processing and visualization from multiple camera networks. The toolbox has a user-friendly graphical user interface (GUI) for which only minimal computer knowledge and skills are required to use it. Images from camera networks are acquired and handled automatically according to the common communication protocols, e.g., File Transfer Protocol (FTP). Processing features include GUI based selection of the region of interest (ROI), automatic analysis chain, extraction of ROI based indices such as the green fraction index (GF), red fraction index (RF), blue fraction index (BF), green-red vegetation index (GRVI), and green excess (GEI) index, as well as a custom index defined by a user-provided mathematical formula. Analysis results are visualized on interactive plots both on the GUI and hypertext markup language (HTML) reports. The users can implement their own developed algorithms to extract information from digital image series for any purpose. The toolbox can also be run in non-GUI mode, which allows running series of analyses in servers unattended and scheduled. The system is demonstrated using an environmental camera network in Finland.
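
    The ROI-based indices named above follow from standard definitions on the per-channel means of the region. A compact sketch of those formulas (the definitions are standard; their use here as FMIPROT's exact implementation is an assumption):

```python
import numpy as np

def rgb_indices(roi):
    """Colour-fraction indices over an ROI of an RGB image (H x W x 3):
    GF = G/(R+G+B), RF = R/(R+G+B), BF = B/(R+G+B),
    GRVI = (G-R)/(G+R), GEI = 2G - R - B, from ROI-mean channel values."""
    r, g, b = roi.reshape(-1, 3).mean(axis=0)
    s = r + g + b
    return {'GF': g / s, 'RF': r / s, 'BF': b / s,
            'GRVI': (g - r) / (g + r), 'GEI': 2 * g - r - b}

# A uniformly green-dominated ROI.
roi = np.tile(np.array([50.0, 100.0, 50.0]), (3, 3, 1))
idx = rgb_indices(roi)
```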

  10. A universal multiprocessor system for the fast acquisition and processing of positron camera data

    International Nuclear Information System (INIS)

    Deluigi, B.

    1982-01-01

    In this study the main components of a suitable detection system were worked out and their properties examined. For the measurement of the three-dimensional distribution of radiopharmaceuticals labelled with positron emitters in animal experiments, a positron camera was first constructed. The annihilation quanta are detected by two opposing position-sensitive gamma detectors operated in coincidence. Two commercial camera heads working according to the Anger principle were rebuilt for this purpose and combined into the positron camera by a special interface. With this arrangement a spatial resolution of 0.8 cm FWHM for a line source in the symmetry plane and a coincidence resolving time 2T of 16 ns FW0.1M were reached. For three-dimensional image reconstruction from positron camera data, a maximum-likelihood procedure was developed and tested by a Monte Carlo study. With a view to this application, a highly flexible multi-microprocessor system was developed. A high computing capacity is achieved because several subproblems are distributed to different processors and processed in parallel. The architecture was designed such that the system has high fault tolerance and the computing capacity can be extended without any limit in principle. (orig./HSI) [de

  11. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.
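
    With two fiducial markers mounted a known distance apart on the CPRHS, its 2-D position and orientation follow from their observed image/floor coordinates. A toy version of that geometry (an illustration of the idea, not the ITER localization system):

```python
import math

def pose_from_markers(p1, p2, d_model):
    """2-D pose of a vehicle from two observed fiducial markers mounted a
    known distance d_model apart along its axis. Returns the midpoint
    (x, y), the heading angle in radians, and the observed/model scale."""
    (x1, y1), (x2, y2) = p1, p2
    heading = math.atan2(y2 - y1, x2 - x1)
    scale = math.hypot(x2 - x1, y2 - y1) / d_model
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0, heading, scale)

# Markers observed 2 m apart along +y: vehicle centred at (0, 1), heading 90 deg.
x, y, heading, scale = pose_from_markers((0.0, 0.0), (0.0, 2.0), 2.0)
```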

  12. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building

  13. UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER

    Directory of Open Access Journals (Sweden)

    M. Hillemann

    2017-08-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveys. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which extends the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step that ensures an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8° and 6.4 mm.
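
    The bundle adjustment step minimizes reprojection error under a camera model; for the pinhole case that model is x ~ K (R X + t). A minimal forward projection (the standard model, shown for illustration; the paper additionally handles omnidirectional and fisheye models):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of a 3-D world point X: transform into the camera
    frame as Xc = R @ X + t, apply the intrinsic matrix K, and divide by
    depth to get pixel coordinates (u, v)."""
    Xc = R @ X + t
    x = K @ Xc
    return x[:2] / x[2]

# Identity pose: a point on the optical axis maps to the principal point.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
uv_center = project(K, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 2.0]))
uv_offset = project(K, np.eye(3), np.zeros(3), np.array([0.2, 0.0, 2.0]))
```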

  14. An ebCMOS camera system for marine bioluminescence observation: The LuSEApher prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dominjon, A., E-mail: a.dominjon@ipnl.in2p3.fr [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Ageron, M. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Billault, M.; Brunner, J. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Calabria, P. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Chabanat, E. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Chaize, D.; Doan, Q.T.; Guerin, C.; Houles, J.; Vagneron, L. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France)

    2012-12-11

    The ebCMOS camera, called LuSEApher, is a marine bioluminescence recorder adapted to extremely low light levels. This prototype is based on the skeleton of the LUSIPHER camera system originally developed for fluorescence imaging. It has been installed at 2500 m depth off the Mediterranean shore on the site of the ANTARES neutrino telescope. The LuSEApher camera is mounted on the Instrumented Interface Module connected to the ANTARES network for environmental science purposes (European Seas Observatory Network). The LuSEApher is a self-triggered photodetection system with photon-counting ability. The device is described and its performance, including single-photon reconstruction, noise characteristics and trigger strategy, is presented. The first recorded movies of bioluminescence are analyzed. To our knowledge, such events have never been obtained with this sensitivity and frame rate. We believe that this camera concept could open a new window on bioluminescence studies in the deep sea.

  15. Gamma camera with an original system of scintigraphic image printing incorporated

    International Nuclear Information System (INIS)

    Roux, G.

    A new gamma camera has been developed, using Anger's principle to localise the scintillations and incorporating the latest improvements, which give a standard of efficiency currently competitive for this kind of apparatus. In the general design of the system special care was devoted to ease of use and above all to the production of high-quality scintigraphic images, since recording the images obtained from the gamma camera poses a problem to which a solution is proposed. This consists in storing all the constituent data of an image in a cell matrix whose format matches the field of the object, the superficial information density of the image being represented by the cell contents. When the examination is finished, a special printer supplies a 35 × 43 cm² document in colour on paper, or in black and white on radiological film, at 2:1 or 1:1 magnification. The laws of contrast representation by the colours or shades of grey are chosen a posteriori according to the organ examined. Documents of the same quality as those so far supplied by a rectilinear scintigraph are thus obtained with the gamma camera, which offers its own advantages in addition. The first images acquired in vivo with the whole system, gamma camera plus printer, are presented [fr

  16. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    Full Text Available This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame rate and latency. Architecture 1 consists of hardware machine vision modules modeled at the Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from a performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power than the single FPGA chip with hardware modules and a soft-core processor.

  17. Low power multi-camera system and algorithms for automated threat detection

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all of the data and running the back-end detection algorithms consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
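    The sensor-cycling scheme described above can be sketched as a round-robin schedule in which only one camera is powered at a time, so average power approaches the single-camera figure (the ~N-fold saving claimed). The class and parameter names below are hypothetical, not from the CT2WS system.

    ```python
    from itertools import cycle

    class DutyCycledArray:
        """Round-robin over N cameras: only the active camera is powered,
        so each camera's duty cycle is 1/N and array power drops ~N-fold."""
        def __init__(self, camera_ids, frames_per_dwell=3):
            self.order = cycle(camera_ids)
            self.frames_per_dwell = frames_per_dwell
            self.n = len(camera_ids)

        def schedule(self, total_frames):
            """Return (camera_id, frame_index) pairs for one acquisition pass."""
            out = []
            active = next(self.order)
            for i in range(total_frames):
                if i and i % self.frames_per_dwell == 0:
                    active = next(self.order)  # power down old sensor, power up next
                out.append((active, i))
            return out

        def duty_cycle(self):
            """Fraction of time each camera is powered."""
            return 1.0 / self.n

    arr = DutyCycledArray(["cam0", "cam1", "cam2"], frames_per_dwell=2)
    plan = arr.schedule(6)
    ```

    The fetched frames for each dwell would then be passed to the detection algorithm, tolerating the coverage gaps the paper shows to be harmless.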

  18. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    Science.gov (United States)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-François; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-07-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
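    The derivation of emission rates from SO2 imagery described above reduces to integrating the column density along a transect perpendicular to transport and multiplying by plume speed. A minimal sketch with hypothetical values (not any of the seven groups' retrieval codes):

    ```python
    import numpy as np

    def emission_rate_kg_s(column_density, pixel_width_m, plume_speed_m_s):
        """Integrate SO2 column density (kg/m^2) across a plume transect and
        multiply by the plume speed to obtain an emission rate in kg/s."""
        cross_section = np.sum(column_density) * pixel_width_m  # kg per metre of plume length
        return float(cross_section * plume_speed_m_s)

    # Hypothetical transect: 5 pixels, each 2 m wide, plume moving at 4 m/s
    cd = np.array([0.0, 1e-3, 2e-3, 1e-3, 0.0])  # kg/m^2 per pixel
    rate = emission_rate_kg_s(cd, pixel_width_m=2.0, plume_speed_m_s=4.0)
    tons_per_day = rate * 86400 / 1000.0
    ```

    The intercomparison's spread in time-dependent emission rates comes precisely from differences in how groups choose this transect and estimate the plume speed.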

  20. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    Science.gov (United States)

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

    The quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions: to verify and to quantify dwell position and time by using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software based on a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. Verification of the system showed that the mean step-size error was 0.31±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions, without depending on the step size.

  1. A quality assurance (QA) system with a web camera for high-dose-rate brachytherapy

    International Nuclear Information System (INIS)

    Hirose, Asako; Ueda, Yoshihiro; Ohira, Shingo

    2016-01-01

    The quality assurance (QA) system that simultaneously quantifies the position and duration of an 192 Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions: to verify and to quantify dwell position and time by using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software based on a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. Verification of the system showed that the mean step-size error was 0.3±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions, without depending on the step size. (author)
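    The in-house software above locates the source by template matching. A simplified 1D normalized cross-correlation sketch of that idea (not the authors' code; the profile and template values are hypothetical):

    ```python
    import numpy as np

    def locate_source(profile, template):
        """Return the offset where `template` best matches `profile`,
        using normalized cross-correlation so the match is insensitive
        to overall brightness and contrast."""
        t = template - template.mean()
        best, best_score = 0, -np.inf
        for off in range(len(profile) - len(template) + 1):
            w = profile[off:off + len(template)]
            wz = w - w.mean()
            denom = np.linalg.norm(t) * np.linalg.norm(wz)
            score = float(t @ wz / denom) if denom else -np.inf
            if score > best_score:
                best, best_score = off, score
        return best

    # Hypothetical intensity profile along the applicator with a bright source blob
    template = np.array([1.0, 3.0, 1.0])
    profile = np.zeros(20)
    profile[7:10] = template * 2.0 + 0.5   # scaled, offset copy of the template at offset 7
    pos = locate_source(profile, template)
    ```

    In the real system the match runs on 2D web-camera frames at 30 fps, and the pixel offset maps to a dwell position in millimetres through a camera calibration.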

  2. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    Science.gov (United States)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high-resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common prerequisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high-quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system; the transformation from camera to laser then contains the accumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame, using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement is explored to collect more useful calibration data. This results in a better inter-sensor calibration, allowing better colorization of the point clouds and a more accurate depth mask for images, especially at the edges of objects in the scene.
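    The absolute orientation problem named above (finding the rigid transform aligning corresponding point sets) has a standard closed-form SVD solution (Kabsch/Horn). A minimal sketch of that solution, with synthetic data rather than the paper's ranging-pole measurements:

    ```python
    import numpy as np

    def absolute_orientation(P, Q):
        """Least-squares rigid transform (R, t) mapping point set P onto Q,
        via the SVD (Kabsch) solution of the absolute orientation problem."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)              # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection solution
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

    # Synthetic check: rotate/translate a point set and recover the transform
    rng = np.random.default_rng(0)
    P = rng.normal(size=(6, 3))
    a = np.deg2rad(30)
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
    t_true = np.array([0.5, -1.0, 2.0])
    Q = (R_true @ P.T).T + t_true
    R_est, t_est = absolute_orientation(P, Q)
    ```

    With noiseless correspondences the transform is recovered exactly; with real camera/laser observations the same formula gives the least-squares estimate.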

  3. Multi-dimensional diagnostics of high power ion beams by Arrayed Pinhole Camera System

    International Nuclear Information System (INIS)

    Yasuike, K.; Miyamoto, S.; Shirai, N.; Akiba, T.; Nakai, S.; Imasaki, K.; Yamanaka, C.

    1993-01-01

    The authors developed a multi-dimensional beam diagnostics system (with spatial and time resolution), using the newly developed Arrayed Pinhole Camera (APC) for this diagnosis. The APC can obtain the spatial distribution of divergence and flux density. Two types of particle detectors are used in this study. One is CR-39, which records time-integrated images. The other is a gated micro-channel plate (MCP) with a CCD camera, which enables time-resolved diagnostics. The diagnostics systems have a divergence resolution better than 10 mrad and a spatial resolution of 0.5 mm on the objects, respectively. The time-resolving system has 10 ns time resolution. The experiments were performed on the Reiden-IV and Reiden-SHVS induction linacs. The authors obtained time-integrated divergence distributions of the Reiden-IV proton beam, as well as time-resolved images on Reiden-SHVS

  4. OBLIQUE MULTI-CAMERA SYSTEMS – ORIENTATION AND DENSE MATCHING ISSUES

    Directory of Open Access Journals (Sweden)

    E. Rupnik

    2014-03-01

    Full Text Available The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing the use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of the current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  5. A data acquisition system for coincidence imaging using a conventional dual head gamma camera

    Science.gov (United States)

    Lewellen, T. K.; Miyaoka, R. S.; Jansen, F.; Kaplan, M. S.

    1997-06-01

    A low cost data acquisition system (DAS) was developed to acquire coincidence data from an unmodified General Electric Maxxus dual head scintillation camera. A high impedance pick-off circuit provides position and energy signals to the DAS without interfering with normal camera operation. The signals are pulse-clipped to reduce pileup effects. Coincidence is determined with fast timing signals derived from constant fraction discriminators. A charge-integrating FERA 16 channel ADC feeds position and energy data to two CAMAC FERA memories operated as ping-pong buffers. A Macintosh PowerPC running LabVIEW controls the system and reads the CAMAC memories. A CAMAC 12-channel scaler records singles and coincidence rate data. The system dead-time is approximately 10% at a coincidence rate of 4.0 kHz.
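    In this system coincidence is determined in hardware by constant fraction discriminators, but the underlying logic (pairing events from the two heads whose timestamps fall within a narrow window) can be sketched in software. The 20 ns window and the timestamps below are hypothetical:

    ```python
    def find_coincidences(times_a, times_b, window_ns=20.0):
        """Pair events from two detector heads whose timestamps fall within
        a coincidence window, scanning both sorted streams once."""
        pairs = []
        i = j = 0
        while i < len(times_a) and j < len(times_b):
            dt = times_a[i] - times_b[j]
            if abs(dt) <= window_ns:
                pairs.append((times_a[i], times_b[j]))
                i += 1
                j += 1
            elif dt > 0:
                j += 1   # head-B event too early; advance B
            else:
                i += 1   # head-A event too early; advance A
        return pairs

    # Hypothetical timestamps (ns): two true coincidences plus unpaired singles
    head_a = [100.0, 250.0, 400.0, 900.0]
    head_b = [105.0, 500.0, 905.0]
    coinc = find_coincidences(head_a, head_b, window_ns=20.0)
    ```

    Unpaired events (the 250, 400 and 500 ns singles) are exactly what the CAMAC scaler counts as singles rates alongside the coincidence rate.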

  6. A data acquisition system for coincidence imaging using a conventional dual head gamma camera

    International Nuclear Information System (INIS)

    Lewellen, T.K.; Miyaoka, R.S.; Kaplan, M.S.

    1996-01-01

    A low cost data acquisition system (DAS) was developed to acquire coincidence data from an unmodified General Electric Maxxus dual head scintillation camera. A high impedance pick-off circuit provides position and energy signals to the DAS without interfering with normal camera operation. The signals are pulse-clipped to reduce pileup effects. Coincidence is determined with fast timing signals derived from constant fraction discriminators. A charge-integrating FERA 16 channel ADC feeds position and energy data to two CAMAC FERA memories operated as ping-pong buffers. A Macintosh PowerPC running LabVIEW controls the system and reads the CAMAC memories. A CAMAC 12-channel scaler records singles and coincidence rate data. The system dead-time is approximately 10% at a coincidence rate of 4.0 kHz

  7. Multispectral calibration to enhance the metrology performance of C-mount camera systems

    Directory of Open Access Journals (Sweden)

    S. Robson

    2014-06-01

    Full Text Available Low cost monochrome camera systems based on CMOS sensors and C-mount lenses have been successfully applied to a wide variety of metrology tasks. For high accuracy work such cameras are typically equipped with ring lights to image retro-reflective targets as high contrast image features. Whilst algorithms for target image measurement and lens modelling are highly advanced, including separate RGB channel lens distortion correction, target image circularity compensation and a wide variety of detection and centroiding approaches, less effort has been directed towards optimising physical target image quality by considering optical performance in narrow wavelength bands. This paper describes an initial investigation to assess the effect of wavelength on camera calibration parameters for two different camera bodies and the same ‘C-mount’ wide angle lens. Results demonstrate the expected strong influence on principal distance, radial and tangential distortion, and also highlight possible trends in principal point, orthogonality and affinity parameters which are close to the parameter estimation noise level from the strong convergent self-calibrating image networks.

  8. A directional fast neutron detector using scintillating fibers and an intensified CCD camera system

    International Nuclear Information System (INIS)

    Holslin, Daniel; Armstrong, A.W.; Hagan, William; Shreve, David; Smith, Scott

    1994-01-01

    We have been developing and testing a scintillating fiber detector (SFD) for use as a fast neutron sensor which can discriminate against neutrons entering at angles non-parallel to the fiber axis ("directionality"). The detector/convertor component is a fiber bundle constructed of plastic scintillating fibers, each measuring 10 cm long and either 0.3 mm or 0.5 mm in diameter. Extensive Monte Carlo simulations were made to optimize the bundle response to a range of fast neutron energies and to intense fluxes of high-energy gamma-rays. The bundle is coupled to a set of gamma-ray-insensitive electro-optic intensifiers whose output is viewed by a CCD camera directly coupled to the intensifiers. Two types of CCD cameras were utilized: 1) a standard, interline RS-170 camera with electronic shuttering and 2) a high-speed (up to 850 frames/s) field-transfer camera. Measurements of the neutron detection efficiency and directionality were made using 14 MeV neutrons, and the gamma-ray response was measured using intense fluxes from radioisotopic sources (up to 20 R/h). Recently, the detector was constructed and tested using a large 10 cm by 10 cm square fiber bundle coupled to a 10 cm diameter GEN I intensifier tube. We present a description of the various detector systems and report the results of experimental tests. ((orig.))

  9. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Tian, Jinshou [Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-11-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, switching of the sweep voltage, acquisition of environment parameters, etc. The system consists of 16 A/D converters, 16 D/A converters, a 32-channel general-purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multiple outputs and a single-mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using a graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desired data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and a dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of multi-channel lasers on the Inertial Confinement Fusion Facility.

  10. A multi-camera system for real-time pose estimation

    Science.gov (United States)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
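    The geometric intuition behind the model above is that features on a rotating sphere foreshorten with the yaw angle. A deliberately simplified sketch, using only the interocular distance rather than the full eyes-mouth triangle; the cos(yaw) model and pixel values are assumptions, not the paper's exact equations:

    ```python
    import math

    def yaw_from_eye_distance(projected_px, frontal_px):
        """Estimate head yaw from foreshortening of the interocular distance:
        features on a sphere rotating about the vertical axis project with a
        cos(yaw) factor, so yaw = arccos(projected / frontal)."""
        ratio = max(-1.0, min(1.0, projected_px / frontal_px))  # clamp against noise
        return math.degrees(math.acos(ratio))

    # Frontal face: eyes 60 px apart; at some pose the measured distance is 30 px
    yaw = yaw_from_eye_distance(30.0, 60.0)
    ```

    The paper combines several such projected-angle relations from the eyes-mouth triangle, which resolves the sign ambiguity a single distance ratio leaves open.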

  11. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Tian, Jinshou; Liu, Zhen; Fang, Yuman; Gao, Guilong; Liang, Lingliang; Wen, Wenlong

    2015-01-01

    An intelligent control system for an X ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, switch of sweep voltage, acquiring environment parameters etc. The system consists of 16 A/D converters and 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multi-outputs and a single mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desirable data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and it shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of multi-channel laser on the Inertial Confinement Fusion Facility

  12. Development of low-cost high-performance multispectral camera system at Banpil

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512-pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity needing less than 100 electrons, a high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all of the features highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system and their forecast cost structure is presented.

  13. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  14. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For most elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If the rescue of a falling elder who may have fainted is delayed, more serious injury may result. Traditional security or video-surveillance systems require caregivers to monitor a centralized screen continuously, or require an elder to wear sensors that detect falling incidents, which wastes considerable human effort or causes inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video-surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored, then applies falling-pattern recognition to determine whether a falling incident has occurred. If so, the system sends short messages to the designated caregivers. The algorithm has been implemented on a DSP-based hardware-acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system is improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
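
    A minimal sketch of one common falling-pattern cue (not necessarily the authors' algorithm, whose details are not given in the abstract): a standing person's silhouette bounding box is tall and narrow, so a fall can be flagged when the height/width ratio collapses and stays low for several consecutive frames. The threshold and hold count below are illustrative assumptions.

```python
# Illustrative fall-detection cue: flag a fall when the tracked person's
# bounding-box height/width ratio drops below a threshold and stays there.
def detect_fall(aspect_ratios, threshold=0.8, hold_frames=5):
    """aspect_ratios: per-frame height/width of the tracked silhouette."""
    low = 0
    for r in aspect_ratios:
        low = low + 1 if r < threshold else 0   # count consecutive low frames
        if low >= hold_frames:
            return True
    return False

standing = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 2.0]
falling = [2.1, 1.8, 1.2, 0.7, 0.5, 0.5, 0.6, 0.5, 0.5]
print(detect_fall(standing))  # False
print(detect_fall(falling))   # True
```

    The hold count suppresses spurious single-frame detections (e.g. a person bending down briefly).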

  15. Sensing system with USB camera for experiments of polarization of the light

    Directory of Open Access Journals (Sweden)

    José Luís Fabris

    2017-08-01

    Full Text Available This work presents a sensor system for educational experiments, composed of a USB camera and software developed and provided by the authors. The sensor system is suitable for studying phenomena related to the polarization of light, and was tested in experiments that verify Malus's law and the spectral efficiency of polarizers. Details of the experimental setup are shown. The camera captures light in the visible spectral range from an LED that illuminates a white screen after passing through two polarizers. The software uses the image captured by the camera to provide the relative intensity of the light. With two rotating H-sheet linear polarizers, a linear fit of Malus's law to the transmitted-intensity data yielded correlation coefficients R larger than 0.9988. The efficiency of the polarizers in different visible spectral regions was verified with the aid of color filters added to the experimental setup. The system was also used to evaluate the intensity stability of a white LED over time.
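
    The Malus's-law check described above can be sketched as a linear fit: transmitted intensity should follow I = I0·cos²θ, so intensity plotted against cos²θ should be linear with a correlation coefficient R near 1. The data below are synthetic, not the authors' measurements.

```python
import math

# Verify Malus's law I = I0 * cos^2(theta) by correlating measured intensity
# against cos^2(theta); for good polarizers R should approach 1.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

angles_deg = range(0, 91, 10)
cos2 = [math.cos(math.radians(a)) ** 2 for a in angles_deg]
intensity = [100.0 * c + 0.5 for c in cos2]   # synthetic: I0 = 100, small offset
r = pearson_r(cos2, intensity)
print(round(r, 4))  # 1.0 for noise-free data
```

    With real camera data the fit degrades with sensor noise; the abstract's R > 0.9988 corresponds to a very clean linear relationship.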

  16. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    Science.gov (United States)

    Yudin, Alexey N.

    2011-11-01

    This work studies the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 um spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60- and 90-degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits of curved detector arrays in wide-field systems for the 8-14 um spectral range. Coupling of the photodetector to the solid Schmidt camera by means of frustrated total internal reflection is considered, with a corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit cooled by hydrogen to improve bulk transmission.

  17. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where it will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, some of whom move behind the flying shuttlecock; they constitute background noise that makes it difficult to detect the shuttlecock's motion. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
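
    The stereo principle behind the two-camera measurement can be sketched with the standard rectified-camera model, where depth is z = f·B/d for focal length f (in pixels), baseline B, and disparity d. The camera parameters below are illustrative assumptions, not the system's actual values.

```python
# Rectified two-camera triangulation: recover a 3D point from matched pixels.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def triangulate(f_px, baseline_m, xl, xr, y):
    """Return (X, Y, Z) in metres from matched pixel columns xl, xr and row y."""
    d = xl - xr                                  # disparity between the two views
    z = depth_from_disparity(f_px, baseline_m, d)
    return (xl * z / f_px, y * z / f_px, z)      # back-project through the pinhole

X, Y, Z = triangulate(f_px=1200.0, baseline_m=0.5, xl=300.0, xr=250.0, y=100.0)
print(round(Z, 2))  # 12.0 (metres)
```

    Tracking the triangulated point over successive frames gives the velocity used to extrapolate the landing position.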

  18. Deflection system of a high-speed streak camera in the form of a delay line

    International Nuclear Information System (INIS)

    Korzhenevich, I.M.; Fel'dman, G.G.

    1993-01-01

    This paper presents an analysis of the operation of a meander deflection system, well-known in oscillography, when it is used to scan the image in a streak-camera tube. Effects that are specific to high-speed photography are considered. It is shown that such a deflection system imposes reduced requirements both on the steepness and on the duration of the linear leading edges of the pulses of the spark gaps that generate the sweep voltage. An example of the design of a meander deflection system whose sensitivity is a factor of two higher than for a conventional system is considered. 5 refs., 3 figs

  19. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    Science.gov (United States)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position-detecting system for a see-through 3D viewer. 3D display systems are useful for virtual reality, mixed reality, and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world by watching the screen of a see-through 3D viewer. The goal of our research is a display system in which, when users see the real world through the mobile viewer, virtual 3D images float in the air, and observers can touch and interact with these floating images, for example so that children can play with virtual clay. The key technologies of this system are the position-recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the method of measuring the mobile viewer's position using infrared LED point markers and a camera in the 3D workspace (augmented-reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method as it uses a single camera rather than a stereo camera, and the results of our viewer system.
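
    A minimal sketch of the single-camera range-measurement idea: with two infrared LED markers a known distance apart, the pinhole model relates their pixel separation to the viewer's range. The focal length and marker spacing below are hypothetical, not the authors' values.

```python
# Pinhole-model range from two point markers with known physical spacing:
# pixel_separation = f * spacing / range  =>  range = f * spacing / pixel_separation
def range_from_markers(f_px, marker_spacing_m, pixel_separation_px):
    return f_px * marker_spacing_m / pixel_separation_px

r = range_from_markers(f_px=800.0, marker_spacing_m=0.25, pixel_separation_px=40.0)
print(r)  # 5.0 (metres)
```

    With three or more non-collinear markers, the full 3D pose (not just range) can be recovered from a single camera by the same geometric reasoning.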

  20. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    Science.gov (United States)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron-multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and in monochrome under quarter-moonlight to overcast-starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and to reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars, and trucks. Detection and tracking of targets too small for template-based detection is also achieved. For face and vehicle targets, the detection results are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  1. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    Science.gov (United States)

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-position accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of senior photogrammetric products. PMID:25835187

  2. Implementation of an image acquisition and processing system based on FlexRIO, CameraLink and areaDetector

    Energy Technology Data Exchange (ETDEWEB)

    Esquembri, S.; Ruiz, M. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Barrera, E., E-mail: eduardo.barrera@upm.es [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Sanz, D.; Bustos, A. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Castro, R.; Vega, J. [National Fusion Laboratory, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • The system presented acquires and processes images from any CameraLink-compliant camera. • The frame grabber, implemented with FlexRIO technology, has image time-stamping and preprocessing capabilities. • The system is integrated into EPICS using areaDetector for flexible configuration of the image acquisition and processing chain. • It is fully compatible with the architecture of the ITER Fast Controllers. - Abstract: Image processing systems are commonly used in current physics experiments, such as nuclear fusion experiments. These experiments usually require multiple cameras with different resolutions, frame rates and, frequently, different software drivers. The integration of heterogeneous types of cameras without a unified hardware and software interface increases the complexity of the acquisition system. This paper presents the implementation of a distributed image acquisition and processing system for CameraLink cameras. The system implements a camera frame grabber using Field Programmable Gate Arrays (FPGAs), a reconfigurable hardware platform that allows for image acquisition and real-time preprocessing. The frame grabber is integrated into the Experimental Physics and Industrial Control System (EPICS) using the areaDetector EPICS software module, which offers a common interface shared among tens of cameras to configure the image acquisition and to process the images in a distributed control system. The use of areaDetector also allows the image processing to be parallelized and concatenated using multiple computers, areaDetector plugins, and the areaDetector standard data type, NDArrays. The architecture developed is fully compatible with ITER Fast Controllers, and the entire system has been validated using a camera hardware simulator that streams videos from fusion experiment databases.

  3. The electronics system for the LBNL positron emission mammography (PEM) camera

    CERN Document Server

    Moses, W W; Baker, K; Jones, W; Lenox, M; Ho, M H; Weng, M

    2001-01-01

    Describes the electronics for a high-performance positron emission mammography (PEM) camera. It is based on the electronics for a human-brain positron emission tomography (PET) camera (the Siemens/CTI HRRT), modified to use a detector module that incorporates a photodiode (PD) array. An application-specific integrated circuit (ASIC) services the PD array, amplifying its signal and identifying the crystal of interaction. Another ASIC services the photomultiplier tube (PMT), measuring its output and providing a timing signal. Field-programmable gate arrays (FPGAs) and lookup RAMs are used to apply crystal-by-crystal correction factors and to measure the energy deposit and the interaction depth (based on the PD/PMT ratio). Additional FPGAs provide event multiplexing, derandomization, coincidence detection, and real-time rebinning. Embedded PC/104 microprocessors provide communication and real-time control, and configure the system. Extensive use of FPGAs makes the overall design extremely flexible, all...

  4. Experimental Characterization of Close-Emitter Interference in an Optical Camera Communication System.

    Science.gov (United States)

    Chavez-Burbano, Patricia; Guerra, Victor; Rabadan, Jose; Rodríguez-Esparragón, Dionisio; Perez-Jimenez, Rafael

    2017-07-04

    Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart-city applications. The influence of mobility, weather conditions, solar-radiation interference, and external light sources on Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the transmitted wavelength. In this work, this interference is experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR) is proposed for easily determining the interference in other implementations, independently of the selected system devices. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general-purpose camera, was performed to obtain the NPSIR values and to validate the deduced equations for the 2D pixel representation of real distances. These parameters were used in the simulation of a wireless-sensor-network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of close emitters, in terms of distance and wavelength, can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results to real scenarios.

  5. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Full Text Available Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost, and the ability to view an environment as it is in reality. This paper investigates the applicability of stereo CCD cameras for 3D reconstruction and presentation of the environment, and for geometric measurement within it. For this purpose, a rotating stereo panorama system was established using two CCDs with a base length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although these cameras produce noisy images and lack appropriate geometric stability, they can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 m distance from the camera) can be achieved.
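
    A back-of-envelope check of the reported accuracy, using the standard rectified-stereo error model dz ≈ z²·dd/(f·B): only the 350 mm baseline comes from the abstract; the focal length and disparity-matching error below are plausible assumptions.

```python
# Stereo depth-error model: error grows quadratically with range z for a
# rig with focal length f (pixels), baseline B (m), and disparity error dd (pixels).
def depth_error(z_m, f_px, baseline_m, disparity_err_px):
    return z_m ** 2 / (f_px * baseline_m) * disparity_err_px

dz = depth_error(z_m=12.0, f_px=2000.0, baseline_m=0.35, disparity_err_px=0.2)
print(round(dz * 1000, 1))  # 41.1 (millimetres) with these assumptions
```

    With these assumed values the predicted error at 12 m is about 41 mm, consistent in magnitude with the abstract's reported ~40 mm.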

  6. Experimental Characterization of Close-Emitter Interference in an Optical Camera Communication System

    Science.gov (United States)

    Chavez-Burbano, Patricia; Rabadan, Jose; Perez-Jimenez, Rafael

    2017-01-01

    Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart-city applications. The influence of mobility, weather conditions, solar-radiation interference, and external light sources on Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the transmitted wavelength. In this work, this interference is experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR) is proposed for easily determining the interference in other implementations, independently of the selected system devices. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general-purpose camera, was performed to obtain the NPSIR values and to validate the deduced equations for the 2D pixel representation of real distances. These parameters were used in the simulation of a wireless-sensor-network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of close emitters, in terms of distance and wavelength, can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results to real scenarios. PMID:28677613

  7. Calibration method for projector-camera-based telecentric fringe projection profilometry system.

    Science.gov (United States)

    Liu, Haibo; Lin, Huijing; Yao, Linshen

    2017-12-11

    By combining a fringe-projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of a fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, the projector intrinsic matrix, and the coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm over multiple simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points of the 3D target. Because the control points are determined during the calibration, there is no need for an accurate 3D reference target, which is costly and extremely difficult to fabricate, particularly for the tiny objects used to calibrate a telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is very accurate and reliable.
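
    The defining property exploited here, constant magnification under a telecentric lens, can be sketched as an affine (scaled-orthographic) projection in which image coordinates are independent of object depth. The magnification and principal-point values below are illustrative, not the paper's calibration results.

```python
# Ideal telecentric projection: u = m*X + u0, v = m*Y + v0.
# Z is deliberately unused: magnification is constant across the depth of field,
# which is why a telecentric FPP rig needs an affine rather than pinhole model.
def telecentric_project(m_px_per_mm, u0, v0, X, Y, Z):
    return (m_px_per_mm * X + u0, m_px_per_mm * Y + v0)

near = telecentric_project(50.0, 320.0, 240.0, 1.0, 2.0, Z=10.0)
far = telecentric_project(50.0, 320.0, 240.0, 1.0, 2.0, Z=20.0)
print(near == far)  # True: same pixel regardless of depth
```

    Because depth drops out of the projection, depth must be recovered from the fringe phase rather than from perspective cues, which is what the calibrated projector-camera rig provides.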

  8. Vibration control of a camera mount system for an unmanned aerial vehicle using piezostack actuators

    International Nuclear Information System (INIS)

    Oh, Jong-Seok; Choi, Seung-Bok; Han, Young-Min

    2011-01-01

    This work proposes an active mount for the camera systems of unmanned aerial vehicles (UAVs) in order to control unwanted vibrations. The active actuator of the proposed mount is devised as an inertial type, in which a piezostack actuator is directly connected to the inertial mass. After evaluating the actuating force of the actuator, it is combined with the rubber element of the mount, whose natural frequency is determined based on the measured vibration characteristics of the UAV. Based on the governing equations of motion of the active camera mount, a robust sliding mode controller (SMC) is then formulated with consideration of parameter uncertainties and the hysteresis behavior of the actuator. Subsequently, the vibration control performance of the proposed active mount is experimentally evaluated in the time and frequency domains. In addition, a full UAV camera-mount system supported by four active mounts is considered, and its vibration control performance is evaluated in the frequency domain using a hardware-in-the-loop simulation (HILS) method.
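
    A sliding mode controller of the general kind named above can be sketched on a 1-DOF mass-spring-damper stand-in for the mount (not the authors' UAV model; all gains and parameters are assumptions). The sliding surface is s = ẋ + λx, and the control is a smoothed switching law u = -k·tanh(s/ε) to limit chattering.

```python
import math

# Sliding-mode regulation of a 1-DOF mass-spring-damper (illustrative values).
m, c, k_spring = 1.0, 0.5, 100.0        # mass (kg), damping, stiffness
lam, k_gain, eps = 20.0, 50.0, 0.05     # surface slope, switching gain, boundary layer
dt, steps = 1e-4, 50000                 # explicit integration, 5 s of simulated time

x, v = 0.01, 0.0                        # initial 10 mm displacement
for _ in range(steps):
    s = v + lam * x                     # sliding surface s = x_dot + lam*x
    u = -k_gain * math.tanh(s / eps)    # smoothed sign(s) to avoid chattering
    a = (u - c * v - k_spring * x) / m  # plant dynamics
    v += a * dt
    x += v * dt

print(abs(x) < 1e-4)  # True: displacement regulated to below 0.1 mm
```

    Once the state reaches the surface s = 0, it slides toward the origin with time constant 1/λ regardless of moderate parameter errors, which is the robustness property that motivates SMC for an uncertain, hysteretic actuator.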

  9. Super-resolution processing for pulsed neutron imaging system using a high-speed camera

    International Nuclear Information System (INIS)

    Ishizuka, Ken; Kai, Tetsuya; Shinohara, Takenao; Segawa, Mariko; Mochiki, Koichi

    2015-01-01

    Super-resolution and center-of-gravity processing improve the resolution of neutron-transmitted images. These methods calculate the pixel, or sub-pixel, center of gravity of each neutron event converted into light by a scintillator. A conventional neutron-transmitted image is acquired with a high-speed camera by integrating many frames when a single frame does not provide a usable transmitted image; this succeeds in acquiring the transmitted image and calculating a spectrum by integrating frames of the same energy. However, because a high frame rate is required for neutron resonance absorption imaging, the number of pixels of the transmitted image decreases, and the resolution falls to the limit of the camera performance. Therefore, we attempt to improve the resolution by integrating the frames after applying super-resolution or center-of-gravity processing. The results indicate that center-of-gravity processing can be effective in pulsed-neutron imaging with a high-speed camera, and that super-resolution processing is indirectly effective. A project to develop a real-time image-data processing system has begun, and this system will be used at J-PARC in JAEA. (author)
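
    The center-of-gravity step can be sketched directly: each neutron event lights up a small cluster of camera pixels, and the intensity-weighted centroid localizes the event to sub-pixel precision before frames are integrated. The pixel patch below is synthetic.

```python
# Intensity-weighted centroid of a small pixel patch around one scintillation event.
def centroid(patch):
    total = sum(v for row in patch for v in row)
    cy = sum(y * v for y, row in enumerate(patch) for v in row)
    cx = sum(x * v for row in patch for x, v in enumerate(row))
    return cx / total, cy / total

# A 3x3 light spot whose true centre lies between pixel columns 1 and 2.
patch = [[1, 2, 1],
         [2, 8, 6],
         [1, 2, 1]]
cx, cy = centroid(patch)
print(round(cx, 3), round(cy, 3))  # 1.167 1.0
```

    Accumulating events at these fractional coordinates on a finer grid is what lets the integrated image exceed the camera's native pixel resolution.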

  10. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  11. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

    Following the previous work, the counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the "LIST" mode, in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the "HISTOGRAM" mode, in which image data were stored in a core memory as digital images and then transferred to magnetic disk or tape on the frame-timing signal. Firstly, the counting-rates stored in the buffer memory were measured as a function of the display event-rates of the scintillation camera for the two modes. For both modes, the stored counting-rates (M) were expressed by the formula M = N(1 - Nτ), where N is the display event-rate of the camera and τ is the resolving time, including the analog-to-digital conversion time and the memory cycle time. The resolving time for each mode may have differed, but it was about 10 μsec for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory, such as magnetic disk or tape, was considered for the two modes. For the "LIST" mode, the maximum stored counting-rate from the camera was expressed in terms of the size of the buffer memory and the access time and data transfer-rate of the external memory. For the "HISTOGRAM" mode, the minimum frame time was determined by the same quantities. In our system, the maximum stored counting-rate was about 17,000 counts/sec with a buffer size of 2,000 words, and the minimum frame time was about 130 msec with a buffer size of 1,024 words. These values agree well with the calculated ones. From the present analysis, design of camera-computer systems for quantitative dynamic imaging becomes possible, and future improvements are suggested. (author)
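
    The abstract's counting-loss model can be worked through directly: M = N(1 - Nτ) peaks at N = 1/(2τ), giving a model ceiling of 1/(4τ) stored counts per second. For τ = 10 μs that ceiling is 25,000 counts/sec; the measured 17,000 counts/sec maximum sits below it because the stored rate is further limited by buffer size and transfer rate.

```python
# Counting-loss model from the abstract: stored rate M = N(1 - N*tau),
# where N is the camera display event-rate and tau the resolving time.
def stored_rate(n, tau):
    return n * (1.0 - n * tau)

tau = 10e-6                                   # ~10 microseconds resolving time
n_peak = 1.0 / (2.0 * tau)                    # rate maximizing M: 50,000 events/s
print(round(stored_rate(n_peak, tau)))        # 25000 counts/s model ceiling
```

    Differentiating M with respect to N and setting dM/dN = 1 - 2Nτ = 0 gives the peak at N = 1/(2τ); beyond that, additional display events only increase losses.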

  12. A NEW AUTOMATIC SYSTEM CALIBRATION OF MULTI-CAMERAS AND LIDAR SENSORS

    Directory of Open Access Journals (Sweden)

    M. Hassanein

    2016-06-01

    Full Text Available In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially with the different types of measurements provided by the LIDAR and the cameras. System calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially disaster-monitoring applications. Also, many present system-calibration techniques are constrained by the need for special lab arrangements for the calibration procedure. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for the special labs used in standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse images-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the images-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique, a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates, a geometry chosen to ensure enough conditions for the convergence of the registration between the 3D point clouds constructed from the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated calibration.

  13. A new omni-directional multi-camera system for high resolution surveillance

    Science.gov (United States)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high-resolution surveillance has a wide application range in the defense and security fields. Early systems used for this purpose were based on a parabolic mirror or a fisheye lens, where distortion due to the nature of the optical elements cannot be avoided; moreover, in such systems the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged, wide-FOV plenoptic imaging system in which the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided, and new results are presented for a very-high-resolution visible-spectrum imaging and recording system inspired by the Panoptic approach. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capture is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth-map estimation, and high-dynamic-range imaging, which are beyond standard stitching and panorama-generation methods.
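
    The quoted resolutions can be sanity-checked with simple arithmetic: frame sizes in megapixels, and the raw pixel throughput implied by the recording mode.

```python
# Frame size in megapixels and raw pixel throughput for the quoted modes.
def megapixels(w, h):
    return w * h / 1e6

full = megapixels(17700, 4650)        # omni-directional recording mode
rt = megapixels(9000, 2400)           # real-time capture mode
print(round(full, 1), round(rt, 1))   # 82.3 21.6
print(round(full * 9.5, 1))           # 781.9 Mpixel/s at 9.5 fps
```

    Nearly 800 Mpixel/s of raw throughput explains why 22 FPGAs and multiple solid-state storage paths are needed for continuous recording.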

  14. Study and Monitoring of Itinerant Tourism along the Francigena Route, by Camera Trapping System

    Directory of Open Access Journals (Sweden)

    Gianluca Bambi

    2017-01-01

    Full Text Available Tourism along the Via Francigena is a growing phenomenon. It is important to develop a direct survey of the path's users (pilgrims, tourists, day-trippers, etc.) able to define user profiles, the extent of the phenomenon, and its evolution over time, in order to develop possible actions to promote the socio-economic impact on the rural areas concerned. With this research, we propose the creation of a monitoring network based on a camera trapping system to estimate the number of tourists in a simple and expeditious way. Recently, camera trapping, beyond the faunal field, has found wide use in population surveys. An innovative application field is the tourist sector, where it can become the basis of statistical and planning analysis. To carry out a survey of the pilgrims/tourists, we applied this type of sampling method. It is an interesting method since it yields data on the type and number of users. The application of camera trapping along the Francigena provides information about user profiles, such as sex, age, average length of pilgrimage, and type of journey (by foot, by horseback, or by bike), over a continuous period distributed across the tourist months of 2014.

  15. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    Directory of Open Access Journals (Sweden)

    Idowu Ayoola

    2015-09-01

    Full Text Available A major problem related to chronic health is patients’ “compliance” with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation of the estimated results to ground truth produced a variation of 3% from the mean.
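    The geometry behind the volume estimate can be made concrete: once the water level and the glass's side-wall slope are known, the contained volume follows by integrating the circular cross-section of the truncated cone. A hypothetical sketch (parameter names are illustrative, not from the paper):

    ```python
    import math

    def water_volume(level, base_radius, wall_slope):
        """Water volume in a conical glass whose radius grows linearly with
        height: r(h) = base_radius + wall_slope * h.
        Integrates pi * r(z)^2 dz from 0 to `level`."""
        r0, k, h = base_radius, wall_slope, level
        return math.pi * (r0**2 * h + r0 * k * h**2 + (k**2 * h**3) / 3.0)

    def volume_change(level_before, level_after, base_radius, wall_slope):
        """Volume drunk (or added) between two observed water levels."""
        return (water_volume(level_after, base_radius, wall_slope)
                - water_volume(level_before, base_radius, wall_slope))
    ```

    With `wall_slope = 0` the formula reduces to the cylinder volume pi*r^2*h, which is a convenient sanity check.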

  16. Development of Camera Electronics for the Advanced Gamma-ray Imaging System (AGIS)

    Science.gov (United States)

    Tajima, Hiroyasu

    2009-05-01

    AGIS, a next generation of atmospheric Cherenkov telescope arrays, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in the energy band of 40 GeV to 100 TeV. Such improvement requires cost reduction of individual components, with high reliability, in order to equip the order of 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of camera electronics while improving their performance. We have developed test systems for some of these concepts and are evaluating their performance. Here we present results from these test systems.

  17. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities; these may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  18. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging.

    Science.gov (United States)

    Andreozzi, Jacqueline M; Zhang, Rongxiao; Glaser, Adam K; Jarvis, Lesley A; Pogue, Brian W; Gladstone, David J

    2015-02-01

    To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron-multiplying intensified charge-coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare the clinical advantages and limitations of each system. Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs the previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD.
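    The paper's adequacy criterion — a background-subtracted ROI mean above 0.5% of the 16-bit full scale, i.e. about 327 counts — can be written directly as a check on an image; the ROI handling below is an illustrative assumption, not the authors' code:

    ```python
    import numpy as np

    FULL_SCALE_16BIT = 2**16 - 1            # 65535 counts
    THRESHOLD = 0.005 * FULL_SCALE_16BIT    # ~327.7 counts, per the paper's 0.5% criterion

    def adequate_detection(frame, background, roi_mask):
        """True when the mean background-subtracted signal inside the
        boolean ROI mask clears the 0.5%-of-full-scale threshold."""
        signal = frame.astype(float) - background.astype(float)
        return signal[roi_mask].mean() > THRESHOLD
    ```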

  19. Development of the monitoring system of plasma behavior using a CCD camera in the GAMMA 10 tandem mirror

    International Nuclear Information System (INIS)

    Kawano, Hirokazu; Nakashima, Yousuke; Higashizono, Yuta

    2007-01-01

    In the central cell of the GAMMA 10 tandem mirror, a medium-speed camera (CCD camera, 400 frames per second, 216 x 640 pixels) has been installed for the observation of plasma behavior. This camera system is designed for monitoring the plasma position and movement throughout the whole discharge duration. The captured two-dimensional (2-D) images are automatically displayed just after the plasma shot and stored sequentially shot by shot. This system has been established as a helpful tool for optimizing the plasma production and heating systems by measuring the plasma behavior under several experimental conditions. The camera system shows that the intensity of the visible light emission on the central-cell limiter accompanied by central electron cyclotron heating (C-ECH) correlates with the wall conditioning and the immersion length of a movable limiter (iris limiter) in the central cell. (author)

  20. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mejia, J.; Galvis-Alonso, O.Y., E-mail: mejia_famerp@yahoo.com.b [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Biologia Molecular; Castro, A.A. de; Simoes, M.V. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Clinica Medica; Leite, J.P. [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). Fac. de Medicina. Dept. de Neurociencias e Ciencias do Comportamento; Braga, J. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Div. de Astrofisica

    2010-11-15

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and the heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and the pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and a spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility of radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology, and Oncology. (author)
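    The maximum-likelihood reconstruction mentioned in this record is commonly realized as the MLEM iteration, which updates the emission estimate multiplicatively from the ratio of measured to forward-projected data. A toy sketch with a dense system matrix (the authors' actual projector and implementation are not shown here):

    ```python
    import numpy as np

    def mlem(A, y, n_iter=100):
        """Maximum-likelihood EM for emission tomography.

        A: (n_bins, n_voxels) system matrix; y: measured projections.
        Update: x <- x / (A^T 1) * A^T (y / (A x)).
        """
        x = np.ones(A.shape[1])                  # flat initial estimate
        sens = A.T @ np.ones(A.shape[0])         # per-voxel sensitivity
        for _ in range(n_iter):
            proj = A @ x                         # forward projection
            ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
            x *= (A.T @ ratio) / sens            # multiplicative EM update
        return x
    ```

    On noiseless, consistent data the iterates drive the forward projection toward the measurements, which is an easy way to sanity-check an implementation.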

  1. A positioning system for forest diseases and pests based on GIS and PTZ camera

    International Nuclear Information System (INIS)

    Wang, Z B; Zhao, F F; Wang, C B; Wang, L L

    2014-01-01

    Forest diseases and pests cause enormous economic losses and ecological damage every year in China. To prevent and control forest diseases and pests, the key is to get accurate information in a timely manner. In order to improve the monitoring coverage rate and economize on manpower, a cooperative investigation model for forest diseases and pests is put forward. It is composed of a video positioning system and manual reconnaissance with mobile GIS embedded in a PDA. The video system is used to scan the disaster area, and is particularly effective where trees are withered. Forest disease prevention and control workers can check the disaster area with the PDA system. To support this investigation model, we developed a positioning algorithm and a positioning system. The positioning algorithm is based on a DEM and a PTZ camera, and its accuracy is validated. The software consists of a 3D GIS subsystem, a 2D GIS subsystem, a video control subsystem and a disaster positioning subsystem. The 3D GIS subsystem makes positioning visual and easy to operate. The 2D GIS subsystem can output disaster thematic maps. The video control subsystem can change the Pan/Tilt/Zoom of a digital camera remotely, to focus on a suspected area. The disaster positioning subsystem implements the positioning algorithm. Practical application for forest departments has shown that the positioning system can observe forest diseases and pests effectively.
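    The core of a DEM-and-PTZ positioning algorithm is intersecting the camera's view ray, given by its pan and tilt angles, with the terrain. A simplified sketch assuming flat ground at z = 0 (a real implementation would instead march the ray across the DEM and test elevations; names and conventions are illustrative):

    ```python
    import math

    def ground_target(cam_x, cam_y, cam_z, pan_deg, tilt_deg):
        """Ground hit point of a PTZ view ray over flat terrain (z = 0).

        pan_deg: azimuth clockwise from north (+y); tilt_deg: angle below
        horizontal. A DEM-based version would sample terrain heights along
        the ray instead of assuming z = 0.
        """
        pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
        if tilt <= 0:
            raise ValueError("ray pointing at or above the horizon never reaches the ground")
        dist = cam_z / math.tan(tilt)          # horizontal distance to the hit point
        return cam_x + dist * math.sin(pan), cam_y + dist * math.cos(pan)
    ```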

  2. A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests

    Directory of Open Access Journals (Sweden)

    Yousok Kim

    2013-09-01

    Full Text Available Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) can output 3D coordinates using two-dimensional image coordinates obtained from the cameras. Furthermore, this remote sensing system offers some flexibility in lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measurement reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.
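    Recovering 3D coordinates from two or more calibrated camera views, as the motion-capture system does, is classically solved by linear (DLT) triangulation: each observed image point contributes two homogeneous equations, and the 3D point is the null vector of the stacked system. A minimal two-camera sketch (generic method, not the commercial MCS's algorithm):

    ```python
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation of one 3D point from two views.

        P1, P2: (3, 4) camera projection matrices; uv1, uv2: observed
        normalized image coordinates (u, v) in each view.
        """
        A = np.vstack([
            uv1[0] * P1[2] - P1[0],
            uv1[1] * P1[2] - P1[1],
            uv2[0] * P2[2] - P2[0],
            uv2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                 # homogeneous solution = null vector of A
        return X[:3] / X[3]
    ```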

  3. Proposed patient motion monitoring system using feature point tracking with a web camera.

    Science.gov (United States)

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all subsequent frames. The software generates a text file that contains the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. The proposed patient motion monitoring system, based on a web camera, is simple and convenient to set up and increases the safety of treatment delivery.
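    At the heart of the Lucas-Kanade method is a small least-squares problem per feature point: within a window, the spatial gradients and the temporal difference between two frames are combined into the flow equations. A single-level, pure-numpy sketch (the paper's software uses OpenCV's pyramidal implementation; this illustrates only the core step):

    ```python
    import numpy as np

    def lucas_kanade_point(prev, curr, x, y, win=9):
        """Estimate the (dx, dy) motion of the feature at (x, y) between two
        grayscale frames by solving the Lucas-Kanade flow equations
        [Ix Iy] @ (dx, dy) = -It over a win x win window (least squares)."""
        half = win // 2
        Ix = np.gradient(prev, axis=1)               # spatial gradients
        Iy = np.gradient(prev, axis=0)
        It = curr - prev                             # temporal difference
        sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
        A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
        b = -It[sl].ravel()
        flow, *_ = np.linalg.lstsq(A, b, rcond=None)
        return flow                                  # (dx, dy) in pixels
    ```

    The pyramidal variant simply runs this step coarse-to-fine so that motions larger than the window can still be tracked.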

  4. IEEE 1394 CAMERA IMAGING SYSTEM FOR BROOKHAVEN'S BOOSTER APPLICATION FACILITY BEAM DIAGNOSTICS

    International Nuclear Information System (INIS)

    BROWN, K.A.; FRAK, B.; GASSNER, D.; HOFF, L.; OLSEN, R.H.; SATOGATA, T.; TEPIKIAN, S.

    2002-01-01

    Brookhaven's Booster Applications Facility (BAF) will deliver resonant extracted heavy ion beams from the AGS Booster to short-exposure fixed-target experiments located at the end of the BAF beam line. The facility is designed to deliver a wide range of heavy ion species over a range of intensities from 10^3 to over 10^8 ions/pulse, and over a range of energies from 0.1 to 3.0 GeV/nucleon. With these constraints we have designed instrumentation packages which can deliver the maximum amount of dynamic range at a reasonable cost. Through the use of high quality optics systems and neutral density light filters we will achieve 4 to 5 orders of magnitude in light collection. By using digital IEEE 1394 camera systems we are able to eliminate the frame-grabber stage in processing and directly transfer data at maximum rates of 400 Mb/sec. In this note we give a detailed description of the system design and discuss the parameters used to develop the system specifications. We also discuss the IEEE 1394 camera software interface and the high-level user interface.

  5. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    Science.gov (United States)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this end, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity than CCD/CMOS rangefinders, inherently better time resolution, higher accuracy, and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact, a good fit for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of the system.
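    Indirect time-of-flight maps the measured phase shift of the modulated illumination to distance via d = c·φ/(4π·f), with an unambiguous range of c/(2f). A sketch of this standard relation, evaluated at the 25 MHz maximum modulation frequency quoted in the abstract (function names are illustrative):

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def itof_distance(phase_rad, f_mod_hz):
        """Distance from an iTOF phase shift: d = c * phi / (4 * pi * f)."""
        return C * phase_rad / (4.0 * math.pi * f_mod_hz)

    def unambiguous_range(f_mod_hz):
        """Maximum distance before the phase wraps: c / (2 * f_mod)."""
        return C / (2.0 * f_mod_hz)
    ```

    At 25 MHz the unambiguous range is about 6 m; lower modulation frequencies trade depth precision for longer range.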

  6. A Versatile Time-Lapse Camera System Developed by the Hawaiian Volcano Observatory for Use at Kilauea Volcano, Hawaii

    Science.gov (United States)

    Orr, Tim R.; Hoblitt, Richard P.

    2008-01-01

    Volcanoes can be difficult to study up close. Because it may be days, weeks, or even years between important events, direct observation is often impractical. In addition, volcanoes are often inaccessible due to their remote location and (or) harsh environmental conditions. An eruption adds another level of complexity to what already may be a difficult and dangerous situation. For these reasons, scientists at the U.S. Geological Survey (USGS) Hawaiian Volcano Observatory (HVO) have, for years, built camera systems to act as surrogate eyes. With the recent advances in digital-camera technology, these eyes are rapidly improving. One type of photographic monitoring involves the use of near-real-time network-enabled cameras installed at permanent sites (Hoblitt and others, in press). Time-lapse camera systems, on the other hand, provide an inexpensive, easily transportable monitoring option that offers more versatility in site location. While time-lapse systems lack near-real-time capability, they provide higher image resolution and can be rapidly deployed in areas where the sophisticated telemetry required by the networked camera systems is not practical. This report describes the latest generation (as of 2008) of time-lapse camera systems used by HVO for photograph acquisition in remote and hazardous sites on Kilauea Volcano.

  7. 3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System

    Science.gov (United States)

    Kang, J.; Lee, I.

    2016-06-01

    Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to transfer into indoor spaces. Constant development of technology has a significant impact on people's knowledge of services, such as location-awareness services, in indoor spaces. Thus, it is necessary to develop a low-cost system to create 3D models of indoor spaces for services based on those models. In this paper, we introduce a rotating stereo frame camera system that has two cameras, and generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day, with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of the data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate 3D models using images acquired by the system. Through these experiments, we ensured that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  8. 3D MODELLING OF AN INDOOR SPACE USING A ROTATING STEREO FRAME CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    Full Text Available Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to transfer into indoor spaces. Constant development of technology has a significant impact on people's knowledge of services, such as location-awareness services, in indoor spaces. Thus, it is necessary to develop a low-cost system to create 3D models of indoor spaces for services based on those models. In this paper, we introduce a rotating stereo frame camera system that has two cameras, and generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day, with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of the data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate 3D models using images acquired by the system. Through these experiments, we ensured that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  9. Robot calibration with a photogrammetric on-line system using reseau scanning cameras

    Science.gov (United States)

    Diewald, Bernd; Godding, Robert; Henrich, Andreas

    1994-03-01

    Testing and calibration of industrial robots is becoming more and more important for manufacturers and users of such systems. Demanding applications in connection with off-line programming techniques, or the use of robots as measuring machines, are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of the 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA, a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows automatic measurement of a large number of robot poses with high accuracy.

  10. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  11. JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.

    Science.gov (United States)

    Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun

    2017-03-01

    Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.

  12. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    Directory of Open Access Journals (Sweden)

    Antonio Sánchez-Esguevillas

    2012-08-01

    Full Text Available This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  13. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time-stamped, allowing comparisons of events between the cameras and the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom version 9.1 high-speed camera, and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.

  14. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2018-05-01

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. Current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis, which may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by use of the system when compared to literature averages. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics and additional image-based features.

  15. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-Of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments large enough to accommodate a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other using images from both image sets that show the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras during a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between

  16. The Camera-Based Assessment Survey System (C-BASS): A towed camera platform for reef fish abundance surveys and benthic habitat characterization in the Gulf of Mexico

    Science.gov (United States)

    Lembke, Chad; Grasty, Sarah; Silverman, Alex; Broadbent, Heather; Butcher, Steven; Murawski, Steven

    2017-12-01

    An ongoing challenge for fisheries management is to provide cost-effective and timely estimates of habitat-stratified fish densities. Traditional approaches use modified commercial fishing gear (such as trawls and baited hooks) that have biases in species selectivity and may also be inappropriate for deployment in some habitat types. Underwater visual and optical approaches offer the promise of more precise and less biased assessments of relative fish abundance, as well as direct estimates of absolute fish abundance. A number of video-based approaches have been developed, and the technology for data acquisition, calibration, and synthesis has been developing rapidly. Beginning in 2012, our group of engineers and researchers at the University of South Florida has been working towards the goal of completing large-scale, video-based surveys in the eastern Gulf of Mexico. This paper discusses design considerations and development of a towed camera system for collection of video-based data on commercially and recreationally important reef fishes and benthic habitat on the West Florida Shelf. Factors considered during development included potential habitat types to be assessed, sea-floor bathymetry, vessel support requirements, personnel requirements, and cost-effectiveness of system components. This region-specific effort has resulted in a towed platform called the Camera-Based Assessment Survey System, or C-BASS, which has proven capable of surveying tens of kilometers of video transects per day and is able to produce cost-effective population estimates of reef fishes along with coincident benthic habitat classification.

  17. A Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Software

    Directory of Open Access Journals (Sweden)

    S. H. Oh

    2007-12-01

    Full Text Available We present software that we developed for the multi-purpose CCD camera. This software can be used with all three types of CCD chips - KAF-0401E (768×512), KAF-1602E (1536×1024) and KAF-3200E (2184×1472), made by KODAK Co. For efficient CCD camera control, the software runs as two independent processes: the CCD control program and the temperature/shutter operation program. The software is designed for fully automatic as well as manual operation under the LINUX system, and is controlled by the LINUX user signal procedure. We plan to use this software for an all-sky survey system and also for night-sky monitoring and sky observation. The measured read-out times are about 15 s, 64 s and 134 s for the KAF-0401E, KAF-1602E and KAF-3200E, respectively, because these times are limited by the data transmission speed of the parallel port. Larger-format CCDs require higher-speed data transmission, so we are considering adapting this control software to use the USB port for high-speed data transmission.
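The quoted read-out times are consistent with a link-limited transfer. As a rough back-of-the-envelope check (assuming 16-bit pixels, which the abstract does not state), the implied sustained throughput comes out similar for all three chips, pointing at the parallel port rather than the CCDs as the bottleneck:

```python
def implied_throughput_kbs(width, height, readout_s, bytes_per_pixel=2):
    """Implied sustained transfer rate (KB/s) for one full-frame read."""
    return width * height * bytes_per_pixel / readout_s / 1024.0

# All three chips land near ~50 KB/s, a plausible figure for a
# parallel-port link and far below what USB offers.
KAF_0401E = implied_throughput_kbs(768, 512, 15)     # ~51 KB/s
KAF_1602E = implied_throughput_kbs(1536, 1024, 64)   # 48 KB/s
KAF_3200E = implied_throughput_kbs(2184, 1472, 134)  # ~47 KB/s
```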

  18. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    Science.gov (United States)

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  19. Theoretical considerations on the possibility of using a television camera in scintigraphy

    International Nuclear Information System (INIS)

    Banget Mossaz, Gaston; Cezilly, Daniel; Paccard, Michel

    1969-04-01

    After a presentation of the principles of scintigraphy for the exploration of human organs, of the three main parts of scintigraphic apparatus (collimator, scintillator and detector) and of their characteristics (resolving power, sensitivity, contrast), this paper describes the properties of gamma radiations interacting with matter, their absorption and their detection, some statistical notions about gamma radiations (Poisson law), and the properties of the collimator, of the scintillator (sodium iodide) and of the detector. The use of a television camera is then introduced, with issues concerning the limitations of a camera tube, the electronic optics of the tube, camera tubes with brightness amplification, the case of a Vidicon tube, etc., and some considerations on the potential benefits of television cameras for resolution, contrast and sensitivity

  20. The Advanced Gamma-ray Imaging System (AGIS): Camera Electronics Designs

    Science.gov (United States)

    Tajima, H.; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Holder, J.; Horan, D.; Krawczynski, H.; Ong, R.; Swordy, S.; Wagner, R.; Williams, D.

    2008-04-01

    AGIS, a next-generation atmospheric Cherenkov telescope array, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in the energy band of 40 GeV to 100 TeV. Such improvement requires reducing the cost of individual components while maintaining high reliability, in order to equip the order of 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of camera electronics while improving their performance. These design concepts include systems based on a multi-channel waveform sampling ASIC optimized for AGIS, a system based on an IIT (image intensifier tube) for large channel counts (of order 1 million channels), as well as a multiplexed FADC system based on the current VERITAS readout design. Here we present trade-off studies of these design concepts.

  1. Handbook of camera monitor systems the automotive mirror-replacement technology based on ISO 16505

    CERN Document Server

    2016-01-01

    This handbook offers a comprehensive overview of Camera Monitor Systems (CMS), ranging from the ISO 16505-based development aspects to practical realization concepts. It offers readers a wide-ranging discussion of the science and technology of CMS as well as the human-interface factors of such systems. In addition, it serves as a single reference source with contributions from leading international CMS professionals and academic researchers. In combination with the latest version of UN Regulation No. 46, the normative framework of ISO 16505 permits CMS to replace mandatory rearview mirrors in series production vehicles. The handbook includes scientific and technical background information to further readers’ understanding of both of these regulatory and normative texts. It is a key reference in the field of automotive CMS for system designers, members of standardization and regulation committees, engineers, students and researchers.

  2. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of the pilots and planes. Although there are instruments available on the market to measure those parameters, their relatively high cost makes them unavailable to many local aerodromes. In this work we present a new prototype which has recently been developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new development consists of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry to measure the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
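The two measurement principles mentioned, parallax for cloud base height and contrast attenuation for visibility, can be sketched with a simplified pinhole/Koschmieder model. This is an illustration under idealized geometry (cameras pointing straight up, known baseline and focal length in pixels), not the prototype's actual processing chain:

```python
import math

def cloud_base_height(baseline_m, focal_px, disparity_px):
    """Height from stereo parallax: h = B * f / d for a pinhole pair
    separated by baseline B, both looking at the same cloud feature."""
    return baseline_m * focal_px / disparity_px

def visibility_koschmieder(apparent_contrast, distance_m,
                           inherent_contrast=1.0, threshold=0.05):
    """Meteorological visibility from the contrast of a dark object
    against the background sky (Lambert-Beer / Koschmieder relation)."""
    # Extinction coefficient from the measured contrast attenuation.
    sigma = -math.log(apparent_contrast / inherent_contrast) / distance_m
    # Visibility is the range at which contrast falls to the threshold.
    return -math.log(threshold) / sigma
```

The 5% contrast threshold is the conventional definition of meteorological optical range; the inherent contrast of 1.0 assumes an ideally black object.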

  3. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    Energy Technology Data Exchange (ETDEWEB)

    Saotome, Naoya, E-mail: naosao@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji [Department of Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan)

    2016-04-15

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors’ facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed in the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to, and communicates with, the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference-of-Gaussian (DOG) method and the 80% distal-dose point of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors’ range check system is capable of quick and easy range verification with sufficient accuracy.
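A difference-of-Gaussian edge detector of the kind described can be sketched on a 1D light-output profile: the distal fall-off shows up as the zero crossing between the positive and negative lobes of the DOG response. This is an illustrative reimplementation with made-up smoothing widths, not the authors' code:

```python
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    """Gaussian smoothing with edge values clamped at the borders."""
    r = max(1, int(3 * sigma))
    kernel = gaussian_kernel(sigma, r)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def detect_distal_edge(profile, sigma_fine=1.0, sigma_coarse=3.0):
    """Locate the distal fall-off as the zero crossing between the
    positive and negative lobes of the difference-of-Gaussian response."""
    fine = smooth(profile, sigma_fine)
    coarse = smooth(profile, sigma_coarse)
    dog = [a - b for a, b in zip(fine, coarse)]
    peak = max(range(len(dog)), key=lambda i: dog[i])
    for i in range(peak, len(dog)):
        if dog[i] <= 0.0:
            return i
    return peak
```

Compared with a fixed threshold, the DOG response depends on the local shape of the fall-off rather than on the absolute light level, which is one reason an edge detector can be more robust to intensity variations.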

  4. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    International Nuclear Information System (INIS)

    Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji

    2016-01-01

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors’ facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed in the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to, and communicates with, the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference-of-Gaussian (DOG) method and the 80% distal-dose point of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors’ range check system is capable of quick and easy range verification with sufficient accuracy.

  5. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
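The Mahalanobis weighting at the heart of the refinement step can be illustrated in isolation: residuals along uncertain directions of a feature's covariance are down-weighted relative to well-constrained ones, so noisy map features pull less on the pose. A minimal 2D sketch (not the paper's implementation; the covariance layout is assumed):

```python
def mahalanobis_2d(residual, cov):
    """Squared Mahalanobis distance of a 2D reprojection residual
    under a 2x2 feature covariance [[sxx, sxy], [sxy, syy]]."""
    (sxx, sxy), (_, syy) = cov
    det = sxx * syy - sxy * sxy
    inv = [[syy / det, -sxy / det],
           [-sxy / det, sxx / det]]
    rx, ry = residual
    return (rx * (inv[0][0] * rx + inv[0][1] * ry)
            + ry * (inv[1][0] * rx + inv[1][1] * ry))
```

With an identity covariance this reduces to the ordinary squared Euclidean error; inflating the variance along one axis shrinks the penalty for residuals along that axis, which is exactly the behaviour the probabilistic map exploits.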

  6. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Hyungjin Kim

    2015-08-01

    Full Text Available Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments

  7. Automated Degradation Diagnosis in Character Recognition System Subject to Camera Vibration

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2014-01-01

    Full Text Available Degradation diagnosis plays an important role in degraded character processing: it can tell the recognition difficulty of a given degraded character. In this paper, we present a framework for an automated degraded character recognition system based on a statistical syntactic approach using 3D primitive symbols, integrated with degradation diagnosis to provide accurate and reliable recognition results. Our contribution is to design the framework to build character recognition submodels corresponding to degradation caused by camera vibration or defocus. In each character recognition submodel, the statistical syntactic approach using 3D primitive symbols is applied to improve degraded character recognition performance. In the experiments, we show promising results, highlighting the system's efficiency and the recognition performance of the statistical syntactic approach using 3D primitive symbols on the degraded character dataset.

  8. Design and implementation of a dual-wavelength intrinsic fluorescence camera system

    Science.gov (United States)

    Ortega-Martinez, Antonio; Musacchia, Joseph J.; Gutierrez-Herrera, Enoch; Wang, Ying; Franco, Walfre

    2017-03-01

    Intrinsic UV fluorescence imaging is a technique that permits the observation of spatial differences in emitted fluorescence. It relies on the fluorescence produced by the innate fluorophores in the sample, and thus can be used for marker-less in-vivo assessment of tissue. It has been studied as a tool for the study of the skin, specifically for the classification of lesions, the delimitation of lesion borders and the study of wound healing, among others. In its most basic setup, a sample is excited with a narrow-band UV light source and the resulting fluorescence is imaged with a UV-sensitive camera filtered to the emission wavelength of interest. By carefully selecting the excitation/emission pair, we can observe changes in fluorescence associated with physiological processes. One of the main drawbacks of this simple setup is the inability to observe more than a single excitation/emission pair at the same time, as some phenomena are better studied when two or more different pairs are studied simultaneously. In this work, we describe the design and the hardware and software implementation of a dual-wavelength portable UV fluorescence imaging system. Its main components are a UV camera, a dual-wavelength UV LED illuminator (295 and 345 nm) and two different emission filters (345 and 390 nm) that can be swapped by a mechanical filter wheel. The system is operated using a laptop computer and custom software that performs basic pre-processing to improve the image. The system was designed to allow us to image the fluorescent peaks of tryptophan and collagen cross-links in order to study wound healing progression.

  9. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    Directory of Open Access Journals (Sweden)

    Hotaka Takizawa

    2017-02-01

    Full Text Available The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots.
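Spot-to-spot matching with SIFT descriptors typically relies on nearest-neighbour search plus Lowe's ratio test to reject ambiguous correspondences. The sketch below shows only that matching stage on plain feature vectors; the descriptor extraction itself and the 0.75 ratio are standard-practice assumptions, not details taken from the paper:

```python
def match_descriptors(query, reference, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test: accept a
    match only when the best distance is clearly smaller than the
    second best, rejecting ambiguous correspondences."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matches = []
    for qi, q in enumerate(query):
        ranked = sorted((dist2(q, r), ri) for ri, r in enumerate(reference))
        # Compare squared distances, hence ratio squared.
        if len(ranked) >= 2 and ranked[0][0] < (ratio ** 2) * ranked[1][0]:
            matches.append((qi, ranked[0][1]))
    return matches
```

A spot would then be declared recognized when the number of surviving matches against its stored image exceeds some threshold, which is what makes the approach workable indoors where GPS is unavailable.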

  10. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    International Nuclear Information System (INIS)

    Cho, Jai Wan; Jeong, Kyung Min

    2012-01-01

    In the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics, such as the flippers and crawlers. A CCD camera with wide-field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indicated values of the radiation dosimeter and the instruments. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the unit 2 reactor building refueling floor. The camera image with the gamma-ray dose-rate information is transmitted to the remote control site via a VDSL communication line, where the radiation level on the refueling floor can be perceived by monitoring the camera image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to a numerical value. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.
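The final conversion from OCR output to a numerical dose-rate value is essentially a parsing and normalization step. The unit handling and digit-confusion fixes below are illustrative assumptions; the paper does not specify its post-processing:

```python
import re

def parse_dose_rate(ocr_text):
    """Pull a numeric dose rate (normalized to mSv/h) out of raw OCR
    output, tolerating the common O/0 and l/1 digit confusions."""
    cleaned = ocr_text.replace('O', '0').replace('l', '1')
    m = re.search(r'(\d+(?:\.\d+)?)\s*(m|u|µ)?Sv/h', cleaned)
    if m is None:
        return None  # no recognizable reading in this frame
    value = float(m.group(1))
    if m.group(2) in ('u', 'µ'):
        value /= 1000.0   # µSv/h -> mSv/h
    elif m.group(2) is None:
        value *= 1000.0   # Sv/h -> mSv/h
    return value
```

Returning `None` for unreadable frames lets the profile-building step skip frames where glare or the fisheye distortion defeats the OCR, rather than recording a bogus value.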

  11. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics, such as the flippers and crawlers. A CCD camera with wide-field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indicated values of the radiation dosimeter and the instruments. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the unit 2 reactor building refueling floor. The camera image with the gamma-ray dose-rate information is transmitted to the remote control site via a VDSL communication line, where the radiation level on the refueling floor can be perceived by monitoring the camera image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to a numerical value. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.

  12. Full-parallax 3D display from stereo-hybrid 3D camera system

    Science.gov (United States)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

    In this paper, we propose an innovative approach for the production of microimages ready to display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system to pick up a 3D data pair and compose a denser point cloud. An intrinsic difficulty is that the hybrid sensors are dissimilar and therefore must be equalized. The processed data then facilitate generating an integral image by computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages of enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and viewing angle.

  13. Design of a smartphone-camera-based fluorescence imaging system for the detection of oral cancer

    Science.gov (United States)

    Uthoff, Ross

    Shown is the design of the Smartphone Oral Cancer Detection System (SOCeeDS). The SOCeeDS attaches to a smartphone and utilizes its embedded imaging optics and sensors to capture images of the oral cavity to detect oral cancer. Violet illumination sources excite the oral tissues to induce fluorescence. Images are captured with the smartphone's onboard camera. Areas where the tissues of the oral cavity appear darkened signify an absence of fluorescence signal, indicating a breakdown in tissue structure brought on by precancerous or cancerous conditions. With these data the patient can seek further testing and diagnosis as needed. Proliferation of this device will give communities with limited access to healthcare professionals a tool to detect cancer in its early stages, increasing the likelihood of cancer reversal.

  14. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    Science.gov (United States)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
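The classification step described, a nearest-neighbour classifier with a cosine distance over Gabor feature vectors, can be sketched independently of the face-processing pipeline. The labels and vector dimensions below are invented for illustration:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def classify(feature, models):
    """Return the label of the model vector nearest to `feature`
    under the cosine distance (nearest-neighbour rule)."""
    return min(models, key=lambda label: cosine_distance(feature, models[label]))
```

Because the cosine distance ignores vector magnitude, the classifier is insensitive to overall contrast changes in the Gabor responses, which is one practical reason for preferring it over Euclidean distance here.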

  15. System Configuration and Operation Plan of Hayabusa2 DCAM3-D Camera System for Scientific Observation During SCI Impact Experiment

    Science.gov (United States)

    Ogawa, Kazunori; Shirai, Kei; Sawada, Hirotaka; Arakawa, Masahiko; Honda, Rie; Wada, Koji; Ishibashi, Ko; Iijima, Yu-ichi; Sakatani, Naoya; Nakazawa, Satoru; Hayakawa, Hajime

    2017-07-01

    An artificial impact experiment is scheduled for 2018-2019, in which an impactor will collide with asteroid 162173 Ryugu (1999 JU3) during the asteroid rendezvous phase of the Hayabusa2 spacecraft. The small carry-on impactor (SCI) will shoot a 2-kg projectile at 2 km/s to create a crater 1-10 m in diameter, with an expected ejecta curtain on a 100-m scale on an ideal sandy surface. A miniaturized deployable camera (DCAM3) unit will separate from the spacecraft at about 1 km from the impact point and simultaneously conduct optical observations of the experiment. We designed and developed a camera system (DCAM3-D) in the DCAM3, specialized for scientific observations of impact phenomena, in order to clarify the subsurface structure, construct theories of impact applicable in a microgravity environment, and identify the impact point on the asteroid. The DCAM3-D system consists of a miniaturized wide-angle camera with high focusing performance, high-speed radio communication devices, and control units with large data storage on both the DCAM3 unit and the spacecraft. These components were successfully developed under severe constraints of size, mass and power, and the whole DCAM3-D system has passed all tests verifying its functions, performance, and environmental tolerance. Results indicated sufficient potential to conduct the scientific observations during the SCI impact experiment. An operation plan was carefully considered along with the configuration and time schedule of the impact experiment, and pre-programmed into the control unit before launch. In this paper, we describe details of the system design concept, specifications, and the operation plan of the DCAM3-D system, focusing on the feasibility of scientific observations.

  16. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    Science.gov (United States)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, so different approaches are required. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. We approach this problem by means of a multi-agent system (MAS), integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  17. Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Directory of Open Access Journals (Sweden)

    Alexander Wendel

    2017-10-01

    Full Text Available Line scanning cameras, which capture only a single line of pixels, have been increasingly used in ground-based mobile or robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way.
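    The core idea above, reprojecting triangulated calibration points and choosing the pose that maximises the likelihood (i.e., minimises reprojection error), can be illustrated with a much-simplified sketch. Here the pose is reduced to a translation offset and the search is a brute-force grid, purely to show the principle; the paper's method optimises the full 6D pose and uses MCMC for uncertainty.

```python
import numpy as np

def reprojection_rmse(offset, world_pts, pixels, focal=500.0):
    """RMS reprojection error of known world points for a candidate
    camera offset (translation only, toy pinhole model)."""
    err = []
    for (x, y, z), (u, v) in zip(world_pts, pixels):
        xc, yc, zc = x - offset[0], y - offset[1], z - offset[2]
        err.append((focal * xc / zc - u) ** 2 + (focal * yc / zc - v) ** 2)
    return np.sqrt(np.mean(err))

def grid_search_offset(world_pts, pixels, candidates):
    """Maximum-likelihood estimate = candidate offset with the
    smallest reprojection error over the labelled points."""
    return min(candidates, key=lambda c: reprojection_rmse(c, world_pts, pixels))
```

Given pixel observations synthesised with a known offset, the grid search recovers that offset exactly.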

  18. Modernizing and Upgrading the Astrogeodetic Camera System for Determining Vertical Deflections

    Science.gov (United States)

    Albayrak, M.; Halicioglu, K.; Basoglu, B.; Ulug, R.; Ozludemir, M. T.; Deniz, R.

    2017-12-01

    The geoid is an equipotential surface of the Earth's gravity field. Modeling the geoid with high accuracy is one of the critical issues in geodesy. Geoid modeling is based on geodetic, gravimetric and astrogeodetic techniques. In order to reach the intended accuracy, deflection of the vertical (VD) components, obtained by astrogeodetic techniques, can be used. VDs also provide valuable information on the structure of the Earth's gravity field; for this reason, astrogeodetic observations have been essential gravity field observables, used for astrogeodetic geoid determination. Scientists in several countries have developed modern instruments to measure vertical deflections. One such instrument, the Astrogeodetic Camera System (ACSYS), was developed in Turkey in 2015. The system components include a telescope, a charge-coupled device (CCD) camera, two tiltmeters with an accuracy of 0.01 milliradians, a focuser, a single-frequency GPS receiver and a substructure. The first version of the ACSYS is capable of determining astronomical coordinates with an accuracy of 0.2-0.3 arcsec, yet it has some limitations in observation duration: because of the semi-automated mechanical design, leveling the system towards the zenith was a time-consuming process. Since the beginning of 2016, the ACSYS has been modernized through an upgrade with new technological components, hardware and software, supported by the Scientific and Technological Research Council of Turkey. The upgrade includes the installation of a high-resolution tiltmeter with an accuracy of 1 nanoradian, the implementation of a temperature-compensating focuser, and a fully automated substructure system. The components of the system are controlled by specially designed, integrated software. Within the scope of the modernization studies, the project team has also been working on unified real-time processing and control software for the ACSYS.v2. This study introduces the modernized system.

  19. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    Science.gov (United States)

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113

  20. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    Science.gov (United States)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables to store the correction data at eight focal-length points on the blue and red channels. When focal-length data are input from the lens control unit, the relevant correction data are interpolated from two of the eight correction data tables. The system then performs a geometrical conversion on both channels using this correction data. This paper describes how the correction function reduces the lateral chromatic aberration to an amount small enough to ensure the desired image resolution over the entire range of the lens in real time.
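    The table-lookup step can be illustrated with a short sketch: given correction data stored at a set of focal-length points, the data for an intermediate focal length are obtained by linear interpolation between the two nearest tables. The table values below are hypothetical, not the system's actual correction data.

```python
def interpolate_correction(focal, table):
    """Linearly interpolate per-channel lateral-CA correction factors
    between the two nearest stored focal-length points.

    table: {focal_length: {"red": factor, "blue": factor}, ...}
    Values outside the stored range are clamped to the end tables.
    """
    pts = sorted(table)
    if focal <= pts[0]:
        return table[pts[0]]
    if focal >= pts[-1]:
        return table[pts[-1]]
    for lo, hi in zip(pts, pts[1:]):
        if lo <= focal <= hi:
            t = (focal - lo) / (hi - lo)  # interpolation weight in [0, 1]
            a, b = table[lo], table[hi]
            return {ch: (1 - t) * a[ch] + t * b[ch] for ch in a}
```

A real system would apply the interpolated factors as a geometric (magnification) correction per channel; here only the lookup is shown.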

  1. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-04-01

    Full Text Available Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.

  2. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    Science.gov (United States)

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.
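    The spirit of the circular edge detection (CED) step used for iris localization can be sketched as follows: among candidate radii, pick the one with the largest intensity jump across the circle boundary. This toy version assumes a known centre and a synthetic image; real CED searches over centre and radius using intensity gradients.

```python
import numpy as np

def circular_edge_radius(img, cx, cy, radii):
    """Toy circular edge detection: return the radius at which the mean
    intensity just outside a circle centred at (cx, cy) exceeds the mean
    intensity just inside it by the largest amount."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - cx, yy - cy)
    best_r, best_gap = None, -np.inf
    for r in radii:
        inner = img[(dist >= r - 1.5) & (dist < r)]       # ring just inside
        outer = img[(dist >= r) & (dist < r + 1.5)]       # ring just outside
        if inner.size and outer.size:
            gap = outer.mean() - inner.mean()
            if gap > best_gap:
                best_r, best_gap = r, gap
    return best_r
```

On a synthetic dark disk (a stand-in for the pupil/iris boundary) the detector recovers the true radius.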

  3. Camera system considerations for geomorphic applications of SfM photogrammetry

    Science.gov (United States)

    Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John

    2017-01-01

    The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community, despite the details of these instruments being largely overlooked in the current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating the accuracies of four SfM datasets acquired over multiple years on a gravel-bed river floodplain using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6-37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5-25% and decreased processing time by 10-30%.
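    The volumetric change analysis mentioned above, differencing repeat DTMs to obtain a morphological sediment budget, can be sketched briefly. The grids, cell area, and level-of-detection threshold below are hypothetical; a real workflow would also propagate the DTM error estimates.

```python
import numpy as np

def volumetric_change(dtm_before, dtm_after, cell_area, lod=0.0):
    """Morphological sediment budget from repeat DTM grids: sum elevation
    differences exceeding a level-of-detection (lod) threshold and convert
    to volume via the grid cell area.

    Returns (deposition, erosion, net) volumes; erosion is negative.
    """
    dz = dtm_after - dtm_before
    dz[np.abs(dz) < lod] = 0.0          # discard sub-detection change
    erosion = dz[dz < 0].sum() * cell_area
    deposition = dz[dz > 0].sum() * cell_area
    return deposition, erosion, deposition + erosion
```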

  4. Monitoring system for isolated limb perfusion based on a portable gamma camera

    International Nuclear Information System (INIS)

    Orero, A.; Muxi, A.; Rubi, S.; Duch, J.; Vidal-Sicart, S.; Pons, F.; Roe, N.; Rull, R.; Pavon, N.; Pavia, J.

    2009-01-01

    Background: The treatment of malignant melanoma or sarcomas on a limb using extremity perfusion with tumour necrosis factor (TNF-α) and melphalan can result in a high degree of systemic toxicity if there is any leakage from the isolated blood territory of the limb into the systemic vascular territory. Leakage is currently controlled by using radiotracers and heavy external probes in a procedure that requires continuous manual calculations. The aim of this work was to develop a light, easily transportable system to monitor limb perfusion leakage by controlling systemic blood pool radioactivity with a portable gamma camera adapted for intraoperative use as an external probe, and to initiate its application in the treatment of MM patients. Methods: A special collimator was built for maximal sensitivity. Software for acquisition and data processing in real time was developed. After testing the adequacy of the system, it was used to monitor limb perfusion leakage in 16 patients with malignant melanoma to be treated with perfusion of TNF-α and melphalan. Results: The field of view of the detector system was 13.8 cm, which is appropriate for the monitoring, since the area to be controlled was the precordial zone. The sensitivity of the system was 257 cps/MBq. When the percentage of leakage reaches 10% the associated absolute error is ±1%. After a mean follow-up period of 12 months, no patients have shown any significant or lasting side-effects. Partial or complete remission of lesions was seen in 9 out of 16 patients (56%) after HILP with TNF-α and melphalan. Conclusion: The detector system together with specially developed software provides a suitable automatic continuous monitoring system of any leakage that may occur during limb perfusion. This technique has been successfully implemented in patients for whom perfusion with TNF-α and melphalan has been indicated. (orig.)
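    The leakage monitoring computation reduces to converting the externally measured count rate into systemic activity via the system sensitivity (257 cps/MBq, as reported above) and expressing it as a percentage of the injected activity. A simplified sketch, omitting the background and decay corrections a real procedure would include:

```python
def leakage_percent(cps, injected_mbq, sensitivity=257.0):
    """Estimate systemic leakage during isolated limb perfusion:
    convert the external-probe count rate (cps) to activity using the
    system sensitivity (cps/MBq), then express it as a percentage of
    the injected activity. Simplified illustrative calculation."""
    systemic_mbq = cps / sensitivity
    return 100.0 * systemic_mbq / injected_mbq
```

In the monitoring software this value would be recomputed continuously, with treatment interrupted if it approaches the 10% safety threshold.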

  5. Monitoring system for isolated limb perfusion based on a portable gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Orero, A.; Muxi, A.; Rubi, S.; Duch, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Vidal-Sicart, S.; Pons, F. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d'Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); Red Tematica de Investigacion Cooperativa en Cancer (RTICC), Barcelona (Spain); Roe, N. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); Rull, R. [Servei de Cirurgia, Hospital Clinic, Barcelona (Spain); Pavon, N. [Inst. de Fisica Corpuscular, CSIC - UV, Valencia (Spain); Pavia, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d'Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain)

    2009-07-01

    Background: The treatment of malignant melanoma or sarcomas on a limb using extremity perfusion with tumour necrosis factor (TNF-α) and melphalan can result in a high degree of systemic toxicity if there is any leakage from the isolated blood territory of the limb into the systemic vascular territory. Leakage is currently controlled by using radiotracers and heavy external probes in a procedure that requires continuous manual calculations. The aim of this work was to develop a light, easily transportable system to monitor limb perfusion leakage by controlling systemic blood pool radioactivity with a portable gamma camera adapted for intraoperative use as an external probe, and to initiate its application in the treatment of MM patients. Methods: A special collimator was built for maximal sensitivity. Software for acquisition and data processing in real time was developed. After testing the adequacy of the system, it was used to monitor limb perfusion leakage in 16 patients with malignant melanoma to be treated with perfusion of TNF-α and melphalan. Results: The field of view of the detector system was 13.8 cm, which is appropriate for the monitoring, since the area to be controlled was the precordial zone. The sensitivity of the system was 257 cps/MBq. When the percentage of leakage reaches 10% the associated absolute error is ±1%. After a mean follow-up period of 12 months, no patients have shown any significant or lasting side-effects. Partial or complete remission of lesions was seen in 9 out of 16 patients (56%) after HILP with TNF-α and melphalan. Conclusion: The detector system together with specially developed software provides a suitable automatic continuous monitoring system of any leakage that may occur during limb perfusion. This technique has been successfully implemented in patients for whom perfusion with TNF-α and melphalan has been indicated. (orig.)

  6. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    International Nuclear Information System (INIS)

    Lee, Inho; Oh, Jaesung; Oh, Jun-Ho; Kim, Inhyeok

    2017-01-01

    This research aims to develop a vision sensor system and a recognition algorithm that enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify the objects in the environment specified by the challenge of the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrains, and stairs, among others. In order for a humanoid to undertake a number of tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.

  7. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Inho [Institute for Human and Machine Cognition (IHMC), Florida (United States); Oh, Jaesung; Oh, Jun-Ho [Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Inhyeok [NAVER Green Factory, Seongnam (Korea, Republic of)

    2017-06-15

    This research aims to develop a vision sensor system and a recognition algorithm that enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify the objects in the environment specified by the challenge of the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrains, and stairs, among others. In order for a humanoid to undertake a number of tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.

  8. Fusion Power Measurement Using a Combined Neutron Spectrometer-Camera System at ITER

    International Nuclear Information System (INIS)

    Sjoestrand, Henrik; Sunden, E. Andersson; Conroy, S.; Ericsson, G.; Johnson, M. Gatu; Giacomelli, L.; Hellesen, C.; Hjalmarsson, A.; Ronchi, E.; Weiszflog, M.; Kaellne, J.

    2008-01-01

    A central task for fusion plasma diagnostics is to measure the 2.5 and 14 MeV neutron emission rates in order to determine the fusion power. A new method for determining the neutron yield has been developed at JET. It makes use of the magnetic proton recoil (MPR) neutron spectrometer and a neutron camera, and provides the neutron yield with small systematic errors. At ITER a similar system could operate if a high-resolution, high-performance neutron spectrometer similar to the MPR were installed. In this paper, we present how such a system could be implemented and how well it would perform under different assumptions about plasma scenarios and diagnostic capabilities. It is found that the systematic uncertainty in using such a system as an absolute calibration reference is as low as 3%, and hence it would be an excellent candidate for the calibration of neutron monitors such as fission chambers. It is also shown that the system could provide a 1 ms time-resolved estimate of the neutron rate with a total uncertainty of 5%.

  9. Camera Calibration of Stereo Photogrammetric System with One-Dimensional Optical Reference Bar

    International Nuclear Information System (INIS)

    Xu, Q Y; Ye, D; Che, R S; Qi, X; Huang, Y

    2006-01-01

    To carry out precise measurements of large-scale complex workpieces, accurate calibration of the stereo photogrammetric system has become more and more important. This paper proposes a flexible and reliable camera calibration for a stereo photogrammetric system, based on quaternions, using a one-dimensional optical reference bar that has three small collinear infrared LED marks whose separations have been precisely calibrated. By moving the optical reference bar to a number of locations/orientations over the measurement volume, we calibrate the stereo photogrammetric system with the geometric constraint of the optical reference bar. The extrinsic parameter calibration process consists of linear parameter estimation based on quaternions and nonlinear refinement based on the maximum likelihood criterion. First, we linearly estimate the extrinsic parameters of the stereo photogrammetric system based on quaternions. Then, with the quaternion results as initial values, we refine the extrinsic parameters under the maximum likelihood criterion with the Levenberg-Marquardt algorithm. In the calibration process, we can automatically control the light intensity and optimize the exposure time to get a uniform intensity profile of the image points at different distances and obtain a higher S/N ratio. Experimental results show that the proposed calibration method is flexible and valid, and obtains good results in application.
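    The quaternion parameterisation used in the linear estimation step relies on the standard unit-quaternion-to-rotation-matrix conversion, which can be written compactly as follows (a generic textbook formula, not the authors' code):

```python
import numpy as np

def quat_to_rot(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix,
    normalising first so any non-zero quaternion is accepted."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

After the linear quaternion solution, each candidate rotation would be converted this way and refined, together with the translation, by Levenberg-Marquardt minimisation of the reprojection error.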

  10. Applications of a Ga-68/Ge-68 generator system to brain imaging using a multiwire proportional chamber positron camera

    International Nuclear Information System (INIS)

    Hattner, R.S.; Lim, C.B.; Swann, S.J.; Kaufman, L.; Chu, D.; Perez-Mendez, V.

    1976-01-01

    A Ge-68/Ga-68 generator system has been applied to brain imaging in conjunction with a novel coincidence-detection-based positron camera. The camera consists of two opposed large-area multiwire proportional chamber (MWPC) detectors interfaced to multichannel lead converter plates. Event localization is effected by means of delay lines. Ten patients with brain lesions have been studied 1-2 hours after the administration of Ga-68 formulated as DTPA. The images were compared to conventional brain scans and to x-ray section scans (CAT). The positron studies have shown significant mitigation of the confusing superficial activity resulting from craniotomy compared to conventional brain scans. Central necrosis of lesions observed in positron images, but not in the conventional scans, has been confirmed by CAT. The economy of MWPC positron cameras combined with the ideal characteristics of the Ge-68/Ga-68 generator promises a cost-efficient imaging system for the future.

  11. System of image package input and output for neutron radiography based on PC IBM XT/AT; Sistema paketnogo vvoda i vyvoda izobrazhenij dlya nejtronnoj radiografii na baze PEhVM IBM PC XCT/AT

    Energy Technology Data Exchange (ETDEWEB)

    Avarzad, O; Rikhvitskij, V S

    1996-12-31

    This system for the acquisition and analysis of both static and dynamic neutron radiography images includes an IBM PC XT/AT, a supervidicon telecamera-based video detector, a color monitor and an image input-output interface board. 2 refs.

  12. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image-sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of the quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in the radiation density of the exposure, maintaining the detectability of the image by the image-sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image-sensing camera, and for locating the index to maintain its detectability and to ensure proper centering of the radiation camera image.

  13. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    Directory of Open Access Journals (Sweden)

    M. Simi

    2012-01-01

    Full Text Available A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The head module incorporates two motorized donut-shaped magnets and a miniaturized vision system at the tip. The compliant MLS can exploit the static external magnetic field to induce a smooth bending of the robotic head (0-80°), guaranteeing a wide-span tilt motion of the point of view. A nonlinear mathematical model for the compliant beam was developed and solved analytically in order to describe and predict the trajectory behaviour of the system for different structural parameters. The entire device is 95 mm long and 12.7 mm in diameter. Use of such a robot in single-port or standard multiport laparoscopy could enable a reduction of the number or size of ancillary trocars, or increase the number of working devices that can be deployed, thus paving the way for multiple-viewpoint laparoscopy.

  14. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    Science.gov (United States)

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and then the 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and tactile distance feedbacks are perfectly identifiable to the blind.

  15. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    Directory of Open Access Journals (Sweden)

    Donghun Kim

    2014-06-01

    Full Text Available In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user’s pointing finger is automatically detected using color and disparity data from stereo images, and then the 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and tactile distance feedback is perfectly identifiable to the blind.

  16. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Thuy Tuong Nguyen

    2015-07-01

    Full Text Available This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.

  17. Lane Departure System Design Using an IR Camera for Night-time Road Conditions

    Directory of Open Access Journals (Sweden)

    Osman Onur Akırmak

    2015-02-01

    Full Text Available Today, one of the largest areas of research and development in the automobile industry is road safety. Many deaths and injuries occur every year on public roads from accidents caused by sleepy drivers, accidents that technology could have been used to prevent. Lane detection at night-time is an important issue in driving assistance systems. This paper deals with vision-based lane detection and tracking at night-time. The project consists of the research and development of an algorithm for automotive systems that detects the departure of a vehicle from its lane. Once the situation is detected, a warning is issued to the driver with a sound and a visual message through a “Head Up Display” (HUD) system. The lane departure is detected through the images obtained from a single IR camera, which identifies the departure with satisfactory accuracy via an improved-quality video stream. Our experimental results and accuracy evaluation show that our algorithm has good precision and our detection method is suitable for night-time road conditions.

  18. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
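Step (2) of the pipeline above counts plants from projection histograms. A minimal sketch of the idea, using a toy binary plant mask and an assumed minimum run width (both illustrative, not from the paper):

```python
import numpy as np

# Sketch of projection-histogram plant counting: sum a binary plant mask
# along image rows to get a column profile, then count contiguous runs of
# occupied columns. min_width is an illustrative noise threshold.
def count_plants(mask, min_width=2):
    profile = mask.sum(axis=0)          # column projection histogram
    occupied = profile > 0              # columns containing plant pixels
    count, run = 0, 0
    for col in occupied:
        run = run + 1 if col else 0
        if run == min_width:            # run just reached minimum width
            count += 1
    return count

mask = np.zeros((10, 20), dtype=np.uint8)
mask[2:8, 3:6] = 1                      # first "plant"
mask[1:9, 12:16] = 1                    # second "plant"
print(count_plants(mask))  # → 2
```

Real imagery would first need the orthographic projection of step (1) so that each plant occupies a distinct band of columns.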

  19. The effect of truncation on very small cardiac SPECT camera systems

    International Nuclear Information System (INIS)

    Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.

    2006-01-01

    Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has relative to increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is not worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in
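The sinogram-interpolation remedy mentioned in the Results can be sketched as a simple taper that extends each truncated projection row smoothly toward zero, softening the sharp cutoff that causes the boundary artifacts. The padding width and the linear taper are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

# Sketch: pad each truncated sinogram row with a linear taper from the
# edge value down to zero over `pad` bins. This reduces the sharp
# truncation edge that produces ring-like boundary artifacts in FBP.
def taper_truncated_sinogram(sino, pad=8):
    rows, cols = sino.shape
    out = np.zeros((rows, cols + 2 * pad))
    out[:, pad:pad + cols] = sino
    ramp = np.linspace(1.0, 0.0, pad + 2)[1:-1]   # strictly inside (1, 0)
    out[:, pad + cols:] = sino[:, -1:] * ramp      # right taper
    out[:, :pad] = sino[:, :1] * ramp[::-1]        # left taper
    return out

sino = np.full((4, 10), 5.0)                       # toy truncated sinogram
padded = taper_truncated_sinogram(sino)
print(padded.shape)  # → (4, 26)
```

The padded sinogram would then be fed to the usual filtered back-projection step in place of the raw truncated data.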

  20. An R-Shiny Based Phenology Analysis System and Case Study Using a Digital Camera Dataset

    Science.gov (United States)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and vast data source for phenological analysis. Data processing and mining of phenological data is still a big challenge, and there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny based web application for vegetation phenological parameter extraction and analysis. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, phenology parameter extraction, etc. The long-term observation photography data from the Freemanwood site in 2013 is processed by this system as an example. The results show that: (1) this system is capable of analyzing large data using a distributed framework; (2) the combination of multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters. Moreover, there are discrepancies between different combination methods in unique study areas. Vegetation with a single growth peak is suitable for fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
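A minimal sketch of the double-logistic trajectory fitting and phenology extraction described above. The model parameters and the derivative-extremum definition of season start/end are common conventions in this literature, not specifics from the paper:

```python
import numpy as np

# Double-logistic greenness model for a single-peak growing season
# (parameter names are generic, not taken from the paper):
#   g(t) = base + amp * (1/(1+exp(-k1*(t-sos))) - 1/(1+exp(-k2*(t-eos))))
def double_logistic(t, base, amp, k1, sos, k2, eos):
    return base + amp * (1.0 / (1.0 + np.exp(-k1 * (t - sos)))
                         - 1.0 / (1.0 + np.exp(-k2 * (t - eos))))

t = np.arange(1, 366)                       # day of year
g = double_logistic(t, 0.32, 0.12, 0.15, 120.0, 0.10, 280.0)

# Phenology extraction: start/end of season taken as the days of steepest
# green-up and green-down, i.e. the extrema of the numerical derivative.
dg = np.gradient(g, t)
sos_est = t[np.argmax(dg)]
eos_est = t[np.argmin(dg)]
print(sos_est, eos_est)  # recovers days close to 120 and 280
```

In practice the six parameters would first be fitted to the camera-derived greenness index by nonlinear least squares before the derivative extrema are read off.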

  1. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    Directory of Open Access Journals (Sweden)

    Rizwan Ali Naqvi

    2018-02-01

    Full Text Available A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  2. New system for linear accelerator radiosurgery with a gantry-mounted video camera

    International Nuclear Information System (INIS)

    Kunieda, Etsuo; Kitamura, Masayuki; Kawaguchi, Osamu; Ohira, Takayuki; Ogawa, Kouichi; Ando, Yutaka; Nakamura, Kayoko; Kubo, Atsushi

    1998-01-01

    Purpose: We developed a positioning method that does not depend on the positioning mechanism originally annexed to the linac and investigated the positioning errors of the system. Methods and Materials: A small video camera was placed at a location optically identical to the linac x-ray source. A target pointer comprising a convex lens and bull's eye was attached to the arc of the Leksell stereotactic system so that the lens would form a virtual image of the bull's eye (virtual target) at the position of the center of the arc. The linac gantry and target pointer were placed at the side and top to adjust the arc center to the isocenter by referring the virtual target. Coincidence of the target and the isocenter could be confirmed in any combination of the couch and gantry rotation. In order to evaluate the accuracy of the positioning, a tungsten ball was attached to the stereotactic frame as a simulated target, which was repeatedly localized and repositioned to estimate the magnitude of the error. The center of the circular field defined by the collimator was marked on the film. Results: The differences between the marked centers of the circular field and the centers of the shadow of the simulated target were less than 0.3 mm

  3. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  4. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-08-01

    Full Text Available One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.

  5. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    Full Text Available In this paper we propose a visible watermarking algorithm, in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method then enforces the rightful ownership of the watermarked image, since there is no other version of the image than the watermarked one. We also take into consideration the Human Visual System (HVS), so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only a binary watermark pattern is supported, the proposed watermarking algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.

  6. Small Field of View Scintimammography Gamma Camera Integrated to a Stereotactic Core Biopsy Digital X-ray System

    Energy Technology Data Exchange (ETDEWEB)

    Andrew Weisenberger; Fernando Barbosa; T. D. Green; R. Hoefer; Cynthia Keppel; Brian Kross; Stanislaw Majewski; Vladimir Popov; Randolph Wojcik

    2002-10-01

    A small field of view gamma camera has been developed for integration with a commercial stereotactic core biopsy system. The goal is to develop and implement a dual-modality imaging system utilizing scintimammography and digital radiography to evaluate the reliability of scintimammography in predicting the malignancy of suspected breast lesions from conventional X-ray mammography. The scintimammography gamma camera is a custom-built mini gamma camera with an active area of 5.3 cm × 5.3 cm and is based on a 2 × 2 array of Hamamatsu R7600-C8 position-sensitive photomultiplier tubes. The spatial resolution of the gamma camera at the collimator surface is < 4 mm full-width at half-maximum, with a sensitivity of ~4000 Hz/mCi. The system is also capable of acquiring dynamic scintimammographic data to allow for dynamic uptake studies. Sample images of preliminary clinical results are presented to demonstrate the performance of the system.

  7. A TV camera system for digitizing single shot oscillograms at sweep rate of 0.1 ns/cm

    International Nuclear Information System (INIS)

    Kienlen, M.; Knispel, G.; Miehe, J.A.; Sipp, B.

    1976-01-01

    A TV camera digitizing system associated with a 5 GHz photocell-oscilloscope apparatus allows the digitizing of single-shot oscillograms; with an oscilloscope sweep rate of 0.1 ns/cm, a time-measurement accuracy of 4 ps is obtained [fr

  8. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other.

  9. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses from a mode-locked Nd:glass laser acts as an ultra-fast periodic shutter with an opening time of a few ps. Associated with an S.T.L. camera, it forms a picosecond camera allowing very fast effects to be studied [fr

  10. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar; Heide, Felix; Heidrich, Wolfgang; Wetzstein, Gordon

    2016-01-01

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design

  11. Measurement system for high-sensitivity LIBS analysis using ICCD camera in LabVIEW environment

    International Nuclear Information System (INIS)

    Zaytsev, S M; Popov, A M; Zorov, N B; Labutin, T A

    2014-01-01

    A measurement system based on an ultrafast (up to 10 ns time resolution) intensified CCD detector "Nanogate-2V" (Nanoscan, Russia) was developed for high-sensitivity analysis by Laser-Induced Breakdown Spectrometry (LIBS). The LabVIEW environment provided a high level of compatibility with a variety of electronic instruments and easy development of the user interface, while the Visual Studio environment was used to create a LabVIEW-compatible dll library using the "Nanogate-2V" SDK. The program for camera management and laser-induced plasma spectra registration was created with the use of the Call Library Node in LabVIEW. An algorithm for integrating a second device, the ADC "PCI-9812" (ADLINK), into the measurement system was proposed and successfully implemented. This allowed simultaneous registration of emission and acoustic signals under laser ablation. The measured resolving power of the spectrometer-ICCD system was 12000 at 632 nm. The electron density of the laser plasma was estimated with the use of the H-α Balmer line. Steel spectra obtained at different delays were used to select the optimal conditions for manganese analytical signal registration. Accumulation of spectra from several laser pulses was also demonstrated; the accumulation allowed reliable observation of the silver signal at 328.07 nm in the LIBS spectra of soil (C_Ag = 4.5 ppm). Finally, a correlation between the acoustic and emission signals of the plasma was found. Thus, the technical possibilities of the developed LIBS system were demonstrated both for plasma diagnostics and analytical measurements.

  12. Scintillator-CCD camera system light output response to dosimetry parameters for proton beam range measurement

    Energy Technology Data Exchange (ETDEWEB)

    Daftari, Inder K., E-mail: idaftari@radonc.ucsf.edu [Department of Radiation Oncology, 1600 Divisadero Street, Suite H1031, University of California-San Francisco, San Francisco, CA 94143 (United States); Castaneda, Carlos M.; Essert, Timothy [Crocker Nuclear Laboratory,1 Shields Avenue, University of California-Davis, Davis, CA 95616 (United States); Phillips, Theodore L.; Mishra, Kavita K. [Department of Radiation Oncology, 1600 Divisadero Street, Suite H1031, University of California-San Francisco, San Francisco, CA 94143 (United States)

    2012-09-11

    The purpose of this study is to investigate the luminescence light output response in a plastic scintillator irradiated by a 67.5 MeV proton beam using various dosimetry parameters. The relationship of the visible scintillator light with the beam current or dose rate, aperture size, and the thickness of water in the water column was studied. The images captured on a CCD camera system were used to determine optimal dosimetry parameters for measuring the range of a clinical proton beam. The method was developed as a simple quality assurance tool to measure the range of the proton beam and compare it to (a) measurements using two segmented ionization chambers with a water column between them, and (b) measurements with an ionization chamber (IC-18) in water. We used a block of plastic scintillator that measured 5 × 5 × 5 cm³ to record visible light generated by a 67.5 MeV proton beam. A high-definition digital video camera, Moticam 2300, connected to a PC via a USB 2.0 communication channel was used to record images of the scintillation luminescence. The brightness of the visible light was measured while changing beam current and aperture size. The results were analyzed to obtain the range and were compared with Bragg peak measurements with an ionization chamber. The luminescence light from the scintillator increased linearly with proton beam current. The light output also increased linearly with aperture size. The relationship between the proton range in the scintillator and the thickness of the water column showed good linearity, with a precision of 0.33 mm (SD) in proton range measurement. For the 67.5 MeV proton beam utilized, the optimal parameters for scintillator light output response were found to be 15 nA (16 Gy/min) and an aperture size of 15 mm with an image integration time of 100 ms. The Bragg peak depth-brightness distribution was compared with the depth-dose distribution from ionization chamber measurements.
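A hedged sketch of extracting a beam range from a measured depth-brightness profile. The distal-80% definition of range and the toy Gaussian peak are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

# Sketch: estimate proton range from a depth-brightness profile as the
# distal depth where brightness falls to 80% of the Bragg-peak maximum.
# The 80% level is a common convention, assumed here for illustration.
def range_distal_80(depth_mm, brightness):
    peak = np.argmax(brightness)
    level = 0.8 * brightness[peak]
    # walk distally from the peak to the first crossing, then interpolate
    for i in range(peak, len(brightness) - 1):
        if brightness[i] >= level > brightness[i + 1]:
            frac = (brightness[i] - level) / (brightness[i] - brightness[i + 1])
            return depth_mm[i] + frac * (depth_mm[i + 1] - depth_mm[i])
    return depth_mm[-1]

depth = np.linspace(0, 40, 401)                       # mm, 0.1 mm steps
profile = np.exp(-0.5 * ((depth - 31.0) / 1.5) ** 2)  # toy Bragg-like peak
print(round(range_distal_80(depth, profile), 2))  # → 32.0
```

With the 0.1 mm depth sampling used here, linear interpolation at the crossing keeps the range estimate well below the 0.33 mm precision quoted in the abstract.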

  13. The reliability and validity of a three-camera foot image system for obtaining foot anthropometrics.

    Science.gov (United States)

    O'Meara, Damien; Vanwanseele, Benedicte; Hunt, Adrienne; Smith, Richard

    2010-08-01

    The purpose was to develop a foot image capture and measurement system with web cameras (the 3-FIS) to provide reliable and valid foot anthropometric measures with efficiency comparable to that of the conventional method of using a handheld anthropometer. Eleven foot measures were obtained from 10 subjects using both methods. Reliability of each method was determined over 3 consecutive days using the intraclass correlation coefficient and root mean square error (RMSE). Reliability was excellent for both the 3-FIS and the handheld anthropometer for the same 10 variables, and good for the fifth metatarsophalangeal joint height. The RMSE values over 3 days ranged from 0.9 to 2.2 mm for the handheld anthropometer, and from 0.8 to 3.6 mm for the 3-FIS. The RMSE values between the 3-FIS and the handheld anthropometer were between 2.3 and 7.4 mm. The 3-FIS required less time to collect and obtain the final variables than the handheld anthropometer. The 3-FIS provided accurate and reproducible results for each of the foot variables and in less time than the conventional approach of a handheld anthropometer.
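The day-to-day RMSE reported above can be computed per variable as follows. This is one common formulation (deviation from the three-day mean); the measurement values are illustrative, not from the paper:

```python
import math

# RMSE of a foot measure across repeated measurement days, one common
# way to quantify test-retest reliability. Values are illustrative (mm).
def rmse(values):
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

day_measurements = [245.0, 246.2, 244.6]   # foot length on 3 days, mm
print(round(rmse(day_measurements), 2))  # → 0.68
```

A value inside the 0.8-3.6 mm band quoted for the 3-FIS would indicate comparable day-to-day repeatability.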

  14. A Novel Indoor Mobile Localization System Based on Optical Camera Communication

    Directory of Open Access Journals (Sweden)

    Md. Tanvir Hossan

    2018-01-01

    Full Text Available Localizing smartphones in indoor environments offers excellent opportunities for e-commerce. In this paper, we propose a localization technique for smartphones in indoor environments. This technique can calculate the coordinates of a smartphone using existing illumination infrastructure with light-emitting diodes (LEDs). The system can locate smartphones without further modification of the existing LED light infrastructure. Smartphones do not have a fixed position and may move frequently anywhere in an environment. Our algorithm uses multiple (i.e., more than two) LED lights simultaneously. The smartphone gets the LED-IDs from the LED lights that are within the field of view (FOV) of the smartphone’s camera. These LED-IDs contain the coordinate information (e.g., the x- and y-coordinates) of the LED lights. Concurrently, the pixel area on the image sensor (IS) of the projected image changes with the relative motion between the smartphone and each LED light, which allows the algorithm to calculate the distance from the smartphone to that LED. At the end of this paper, we present simulated results for predicting the next possible location of the smartphone using a Kalman filter to minimize the time delay for coordinate calculation. These simulated results demonstrate that the position resolution can be maintained within 10 cm.
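The distance cue described above (the pixel area of the projected LED shrinking with range) can be sketched with a pinhole model: a circular LED of diameter D at distance Z projects to an image diameter d_img = f·D/Z. The focal length and LED diameter below are illustrative assumptions, not values from the paper:

```python
import math

# Pinhole-model distance from the pixel area of a circular LED's image:
#   d_img = 2*sqrt(A/pi)  (projected diameter from area A, in pixels)
#   Z     = f * D / d_img
# focal_px and led_diameter_m are illustrative, not from the paper.
def distance_from_area(area_px, focal_px=1000.0, led_diameter_m=0.05):
    d_img = 2.0 * math.sqrt(area_px / math.pi)   # projected diameter, px
    return focal_px * led_diameter_m / d_img

# A 400 px^2 blob → d_img ≈ 22.57 px → Z ≈ 2.22 m
print(round(distance_from_area(400.0), 2))  # → 2.22
```

Distances to several such LEDs of known coordinates can then be trilaterated into a smartphone position, with the Kalman filter smoothing successive estimates.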

  15. Intraoperative implant rod three-dimensional geometry measured by dual camera system during scoliosis surgery.

    Science.gov (United States)

    Salmingo, Remel Alingalan; Tadano, Shigeru; Abe, Yuichiro; Ito, Manabu

    2016-05-12

    Treatment for severe scoliosis is usually attained when the scoliotic spine is deformed and fixed by implant rods. Investigation of the intraoperative changes of implant rod shape in three dimensions is necessary to understand the biomechanics of scoliosis correction, establish consensus on the treatment, and achieve the optimal outcome. The objective of this study was to measure the intraoperative three-dimensional geometry and deformation of implant rods during scoliosis corrective surgery. A pair of images was obtained intraoperatively by the dual camera system before and after rotation of the rods during scoliosis surgery. The three-dimensional implant rod geometry before implantation was measured directly by the surgeon, and after surgery using a CT scanner. The images of the rods were reconstructed in three dimensions using quintic polynomial functions. The implant rod deformation was evaluated using the angle between the two three-dimensional tangent vectors measured at the ends of the implant rod. The implant rods at the concave side were significantly deformed during surgery. The highest rod deformation was found after the rotation of the rods. The implant curvature was regained after the surgical treatment. Careful intraoperative rod maneuvering is important to achieve a safe clinical outcome because the intraoperative forces could be higher than the postoperative forces. Continuous scoliosis correction was observed, as indicated by the regain of the implant rod curvature after surgery.
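The deformation metric described above, the angle between the 3D tangent vectors at the two ends of the reconstructed rod, reduces to a normalized dot product:

```python
import numpy as np

# Rod deformation metric: angle between the two 3D tangent vectors
# evaluated at the ends of the reconstructed implant rod.
def tangent_angle_deg(t1, t2):
    t1 = np.asarray(t1, float) / np.linalg.norm(t1)
    t2 = np.asarray(t2, float) / np.linalg.norm(t2)
    cosang = np.clip(np.dot(t1, t2), -1.0, 1.0)   # guard against rounding
    return np.degrees(np.arccos(cosang))

# Two end tangents 45 degrees apart:
print(tangent_angle_deg([0, 0, 1], [0, 1, 1]))  # ≈ 45.0
```

In the study the tangents would come from differentiating the fitted quintic polynomial at the rod ends; here they are supplied directly as toy vectors.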

  16. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-10-01

    Full Text Available Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared

  17. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    Science.gov (United States)

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN
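The PCA stage of the post-processing described above can be sketched with a plain SVD-based projection. The SVM classifier that follows is omitted here, and the feature dimensions are illustrative, not taken from the paper:

```python
import numpy as np

# Sketch of the PCA dimensionality-reduction stage applied to CNN
# features before SVM classification. PCA is computed via SVD of the
# mean-centered feature matrix; n_components is an illustrative choice.
def pca_project(features, n_components=8):
    mean = features.mean(axis=0)
    centered = features - mean
    # rows of vt are the principal axes, ordered by singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 512))     # 100 images x 512-d CNN features
reduced = pca_project(feats)
print(reduced.shape)  # → (100, 8)
```

The reduced vectors would then be passed to a (linear or RBF) SVM for the live-versus-attack decision.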

  18. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for accurate, quantitative, near real-time imaging of 3D dose distributions of proton beams. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm{sup 3}) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Fixed focal length objective lenses for these cameras were selected for their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for correction of image distortions arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition almost instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
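The quoted memory budget can be sanity-checked with a short back-of-the-envelope calculation; full-frame 16-bit readout at 75 fps is an assumption for this estimate, not a specification from the record.

```python
# Full-frame sCMOS spooling budget (assumed: 16-bit pixels, full 2560x2160 frame).
width, height = 2560, 2160       # sensor pixels
bytes_per_pixel = 2              # 16-bit dynamic range
fps = 75                         # sustained spooling rate per camera
record_seconds = 120             # the quoted 2 min recording time

frame_bytes = width * height * bytes_per_pixel
rate_gb_s = frame_bytes * fps / 1e9
total_gb = rate_gb_s * record_seconds
# ~0.83 GB/s per camera, ~100 GB for 2 min: consistent with a 128 GB buffer.
print(f"{rate_gb_s:.2f} GB/s, {total_gb:.0f} GB")
```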

  19. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    International Nuclear Information System (INIS)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S

    2016-01-01

    Purpose: The purpose of this project is to build a volumetric scintillation detector for accurate, quantitative, near real-time imaging of 3D dose distributions of proton beams. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Fixed focal length objective lenses for these cameras were selected for their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for correction of image distortions arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition almost instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  20. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, with brief descriptions of the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development continues in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  1. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  2. A double photomultiplier Compton camera and its readout system for mice imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, Cristiano Lino [Physics Department Galileo Galilei, University of Padua, Via Marzolo 8, Padova 35131 (Italy) and INFN Padova, Via Marzolo 8, Padova 35131 (Italy); Atroshchenko, Kostiantyn [Physics Department Galileo Galilei, University of Padua, Via Marzolo 8, Padova 35131 (Italy) and INFN Legnaro, Viale dell' Universita 2, Legnaro PD 35020 (Italy); Baldazzi, Giuseppe [Physics Department, University of Bologna, Viale Berti Pichat 6/2, Bologna 40127, Italy and INFN Bologna, Viale Berti Pichat 6/2, Bologna 40127 (Italy); Bello, Michele [INFN Legnaro, Viale dell' Universita 2, Legnaro PD 35020 (Italy); Uzunov, Nikolay [Department of Natural Sciences, Shumen University, 115 Universitetska str., Shumen 9712, Bulgaria and INFN Legnaro, Viale dell' Universita 2, Legnaro PD 35020 (Italy); Di Domenico, Giovanni [Physics Department, University of Ferrara, Via Saragat 1, Ferrara 44122 (Italy) and INFN Ferrara, Via Saragat 1, Ferrara 44122 (Italy)

    2013-04-19

    We have designed a Compton Camera (CC) to image the bio-distribution of gamma-emitting radiopharmaceuticals in mice. A CC employs 'electronic collimation', i.e. a technique that traces the gamma-rays instead of selecting them with physical lead or tungsten collimators. To perform such a task, a CC measures the parameters of the Compton interaction that occurs in the device itself. At least two detectors are required: one (the tracker), where the primary gamma undergoes a Compton interaction, and a second (the calorimeter), in which the scattered gamma is completely absorbed. From these measurements the polar angle, and hence a 'cone' of possible incident directions, is obtained (an event with 'incomplete geometry'). Different solutions for the two detectors have been proposed in the literature: our design foresees two similar Position Sensitive Photomultipliers (PMT, Hamamatsu H8500). Each PMT has 64 output channels that are reduced to 4 using a charge-multiplexed readout system, i.e. a series charge multiplexing net of resistors. Triggering of the system is provided by the coincidence of fast signals extracted at the last dynode of the PMTs. Its assets are low cost and simplicity of design and operation, having just one type of device; among the drawbacks is a lower resolution with respect to more sophisticated trackers with a full 64-channel readout. This paper compares our two-Hamamatsu CC design to other solutions and shows that its spatial and energy accuracy is suitable for the inspection of radioactivity in mice.
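For context, the 'cone' of incident directions follows from Compton kinematics: the opening angle is fixed by the incident and scattered photon energies. A minimal sketch (the energies are illustrative; 140.5 keV is the 99mTc line commonly used in small-animal imaging):

```python
import math

ME_C2_KEV = 511.0  # electron rest energy, keV

def compton_cone_angle_deg(e0_kev: float, e_scattered_kev: float) -> float:
    """Opening angle of the cone of possible incident directions, from the
    Compton relation cos(theta) = 1 - me*c^2 * (1/E' - 1/E0)."""
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_scattered_kev - 1.0 / e0_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically impossible energy pair")
    return math.degrees(math.acos(cos_theta))

# A 140.5 keV gamma (99mTc) leaving 10 keV in the tracker:
print(round(compton_cone_angle_deg(140.5, 130.5), 1))
```

In a real event, E0 would come from the summed tracker and calorimeter energies and E' from the calorimeter alone.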

  3. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    Directory of Open Access Journals (Sweden)

    Nicholas Schwabe

    2017-07-01

    The underground mining industry, and some above-ground operations, rely on heavy equipment that articulates to navigate corners in the tight confines of tunnels. Poor line of sight (LOS) has been identified as a problem for safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360-degree view around the machine, have been implemented to improve the LOS available to the operator. A four-camera system was modeled in a computer environment to assess LOS on a 3D CAD model of a typical articulated machine. When positioned without any articulation, the system is excellent at removing blind spots for a machine driving straight forward or backward in a straight tunnel. Further analysis reveals that when the machine articulates in a simulated corner section, some camera locations are no longer useful for improving LOS into the corner. In some cases, the operator has a superior view into the corner compared to the best available camera view. The work points to the need to integrate proximity detection systems at the design, build, and manufacture stage, and to consider proper policies and procedures that address the gains and limits of such systems prior to implementation.

  4. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other and can swing through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  5. Dual-head gamma camera system for intraoperative localization of radioactive seeds

    International Nuclear Information System (INIS)

    Arsenali, B; Viergever, M A; Gilhuijs, K G A; De Jong, H W A M; Beijst, C; Dickerscheid, D B M

    2015-01-01

    Breast-conserving surgery is a standard option for the treatment of patients with early-stage breast cancer. This form of surgery may result in incomplete excision of the tumor. Iodine-125 labeled titanium seeds are currently used in clinical practice to reduce the number of incomplete excisions. It seems likely that the number of incomplete excisions can be reduced even further if intraoperative information about the location of the radioactive seed is combined with preoperative information about the extent of the tumor. These can be combined if the location of the radioactive seed is established in a world coordinate system that can be linked to the (preoperative) image coordinate system. With this in mind, we propose a radioactive seed localization system composed of two static ceiling-suspended gamma camera heads and two parallel-hole collimators. Physical experiments and computer simulations mimicking realistic clinical situations were performed to estimate the localization accuracy (defined as trueness and precision) of the proposed system with respect to collimator-source distance (ranging between 50 cm and 100 cm) and imaging time (ranging between 1 s and 10 s). The goal of the study was to determine whether a trueness of 5 mm can be achieved if a collimator-source distance of 50 cm and an imaging time of 5 s are used (these specifications were defined by a group of dedicated breast cancer surgeons). The results from the experiments indicate that the location of the radioactive seed can be established with an accuracy of 1.6 mm  ±  0.6 mm if a collimator-source distance of 50 cm and an imaging time of 5 s are used (these experiments were performed with a 4.5 cm thick block phantom). Furthermore, the results from the simulations indicate that a trueness of 3.2 mm or less can be achieved if a collimator-source distance of 50 cm and an imaging time of 5 s are used (this trueness was achieved for all 14 breast phantoms which
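Trueness and precision as used above can be computed from repeated localizations of a seed at a known position: trueness is the distance of the mean estimate from the truth, precision the scatter of the estimates. A minimal sketch with simulated estimates (the error magnitudes are invented, not the paper's results):

```python
import numpy as np

def trueness_and_precision(estimates_mm: np.ndarray, truth_mm: np.ndarray):
    """Trueness: distance of the mean estimate from the known seed position.
    Precision: spread (standard deviation) of the estimates about their mean."""
    mean_est = estimates_mm.mean(axis=0)
    trueness = float(np.linalg.norm(mean_est - truth_mm))
    precision = float(np.linalg.norm(estimates_mm - mean_est, axis=1).std())
    return trueness, precision

rng = np.random.default_rng(1)
truth = np.zeros(3)
# Simulated localizations: 1 mm systematic offset, 0.5 mm random scatter.
estimates = rng.normal(loc=[1.0, 0.0, 0.0], scale=0.5, size=(100, 3))
t, p = trueness_and_precision(estimates, truth)
print(f"trueness {t:.2f} mm, precision {p:.2f} mm")
```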

  6. Calibration of gamma camera systems for a multicentre European {sup 123}I-FP-CIT SPECT normal database

    Energy Technology Data Exchange (ETDEWEB)

    Tossici-Bolt, Livia [Southampton Univ. Hospitals NHS Trust, Dept. of Medical Physics and Bioengineering, Southampton (United Kingdom); Dickson, John C. [UCLH NHS Foundation Trust and Univ. College London, Institute of Nuclear Medicine, London (United Kingdom); Sera, Terez [Univ. of Szeged, Dept. of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Nijs, Robin de [Rigshospitalet and Univ. of Copenhagen, Neurobiology Research Unit, Copenhagen (Denmark); Bagnara, Maria Claudia [Az. Ospedaliera Universitaria S. Martino, Medical Physics Unit, Genoa (Italy); Jonsson, Cathrine [Karolinska Univ. Hospital, Dept. of Nuclear Medicine, Medical Physics, Stockholm (Sweden); Scheepers, Egon [Univ. of Amsterdam, Dept. of Nuclear Medicine, Academic Medical Centre, Amsterdam (Netherlands); Zito, Felicia [Fondazione IRCCS Granda, Ospedale Maggiore Policlinico, Dept. of Nuclear Medicine, Milan (Italy); Seese, Anita [Univ. of Leipzig, Dept. of Nuclear Medicine, Leipzig (Germany); Koulibaly, Pierre Malick [Univ. of Nice-Sophia Antipolis, Nuclear Medicine Dept., Centre Antoine Lacassagne, Nice (France); Kapucu, Ozlem L. [Gazi Univ., Faculty of Medicine, Dept. of Nuclear Medicine, Ankara (Turkey); Koole, Michel [Univ. Hospital and K.U. Leuven, Nuclear Medicine, Leuven (Belgium); Raith, Maria [Medical Univ. of Vienna, Dept. of Nuclear Medicine, Vienna (Austria); George, Jean [Univ. Catholique Louvain, Nuclear Medicine Division, Mont-Godinne Medical Center, Mont-Godinne (Belgium); Lonsdale, Markus Nowak [Bispebjerg Univ. Hospital, Dept. of Clinical Physiology and Nuclear Medicine, Copenhagen (Denmark); Muenzing, Wolfgang [Univ. of Munich, Dept. of Nuclear Medicine, Munich (Germany); Tatsch, Klaus [Univ. of Munich, Dept. of Nuclear Medicine, Munich (Germany); Municipal Hospital of Karlsruhe Inc., Dept. of Nuclear Medicine, Karlsruhe (Germany); Varrone, Andrea [Center for Psychiatric Research, Karolinska Inst., Dept. of Clinical Neuroscience, Stockholm (Sweden)

    2011-08-15

    A joint initiative of the European Association of Nuclear Medicine (EANM) Neuroimaging Committee and EANM Research Ltd. aimed to generate a European database of [{sup 123}I]FP-CIT single photon emission computed tomography (SPECT) scans of healthy controls. This study describes the characterization and harmonization of the imaging equipment of the institutions involved. {sup 123}I SPECT images of a striatal phantom filled with striatal-to-background ratios between 10:1 and 1:1 were acquired on all the gamma cameras, with absolute ratios measured from aliquots. The images were reconstructed by a core lab using ordered subset expectation maximization (OSEM) without corrections (NC), with attenuation correction only (AC), and with additional scatter and septal penetration correction (ACSC) using the triple energy window method. A quantitative parameter, the simulated specific binding ratio (sSBR), was measured using the ''Southampton'' methodology, which accounts for the partial volume effect, and compared against the actual values obtained from the aliquots. Camera-specific recovery coefficients were derived from linear regression, and the error of the measurements was evaluated using the coefficient of variation (COV). The relationship between measured and actual sSBRs was linear across all systems. Variability was observed between different manufacturers and, to a lesser extent, between cameras of the same type. The NC and AC measurements were found to systematically underestimate the actual sSBRs, while the ACSC measurements resulted in recovery coefficients close to 100% for all cameras (AC range 69-89%, ACSC range 87-116%). The COV improved from 46% (NC) to 32% (AC) and to 14% (ACSC) (p < 0.001). A satisfactory linear response was observed across all cameras. Quantitative measurements depend upon the characteristics of the SPECT systems, and their calibration is a necessary prerequisite for data pooling. Together with accounting for partial volume, the
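A camera-specific recovery coefficient of the kind described, i.e. the slope of a linear regression of measured sSBR against the actual aliquot values, together with a COV of the per-phantom ratios, can be sketched as follows (the phantom numbers are invented for illustration):

```python
import numpy as np

# Invented phantom data: actual SBRs (from aliquots) vs. camera-measured sSBRs.
actual = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
measured = np.array([0.9, 1.7, 3.5, 5.3, 7.0, 8.8])  # systematic underestimate

# Recovery coefficient = slope of the measured-vs-actual linear regression.
slope, intercept = np.polyfit(actual, measured, 1)
recovery_pct = 100.0 * slope

# Coefficient of variation of the per-phantom recovery ratios.
ratios = measured / actual
cov_pct = 100.0 * ratios.std(ddof=1) / ratios.mean()
print(f"recovery {recovery_pct:.0f}%, COV {cov_pct:.1f}%")
```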

  7. SU-E-T-774: Use of a Scintillator-Mirror-Camera System for the Measurement of MLC Leakage Radiation with the CyberKnife M6 System

    Energy Technology Data Exchange (ETDEWEB)

    Goggin, L; Kilby, W; Noll, M; Maurer, C [Accuray Inc, Sunnyvale, CA (United States)

    2015-06-15

    Purpose: A technique using a scintillator-mirror-camera system to measure MLC leakage was developed to provide an efficient alternative to film dosimetry while maintaining high spatial resolution. This work describes the technique together with its measurement uncertainties. Methods: Leakage measurements were made for the InCise™ MLC using the Logos XRV-2020A device. For each measurement approximately 170 leakage and background images were acquired using optimized camera settings. The average background was subtracted from each leakage frame before filtering the integrated leakage image to replace anomalous pixels. Pixel value to dose conversion was performed using a calibration image. Mean leakage was calculated within an ROI corresponding to the primary beam, and maximum leakage was determined by binning the image into overlapping 1 mm × 1 mm ROIs. 48 measurements were performed using 3 cameras and multiple MLC-linac combinations in varying beam orientations, with each compared to film dosimetry. Optical and environmental influences were also investigated. Results: Measurement time with the XRV-2020A was 8 minutes vs. 50 minutes using radiochromic film, and results were available immediately. Camera radiation exposure degraded measurement accuracy. With a relatively undamaged camera, mean leakage agreed with film measurement to ≤0.02% in 92% of cases and ≤0.03% in 100% (for maximum leakage the values were 88% and 96%), relative to the reference open-field dose. The estimated camera lifetime over which this agreement is maintained is at least 150 measurements and can be monitored using reference field exposures. A dependency on camera temperature was identified, and a reduction in sensitivity with distance from image center due to optical distortion was characterized. Conclusion: With periodic monitoring of the degree of camera radiation damage, the XRV-2020A system can be used to measure MLC leakage. This represents a significant time saving when compared to the traditional
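The processing chain described (background subtraction, frame integration, pixel-to-dose calibration, and a search over overlapping ROIs for the maximum leakage) might look roughly like the sketch below; the array sizes, calibration factor, and bin size are placeholders, not the system's actual parameters.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mlc_leakage(frames, background, cal_factor, roi, bin_px=10):
    """Integrate background-subtracted frames, convert pixel values to dose,
    then report the ROI mean and the hottest overlapping bin_px x bin_px bin."""
    dose = np.clip((frames - background).sum(axis=0), 0.0, None) * cal_factor
    r0, r1, c0, c1 = roi
    region = dose[r0:r1, c0:c1]
    mean_leakage = region.mean()
    # Overlapping square bins as a stride-1 sliding window.
    windows = sliding_window_view(region, (bin_px, bin_px))
    max_leakage = windows.mean(axis=(-1, -2)).max()
    return mean_leakage, max_leakage

# Synthetic check: 5 flat frames, unit calibration -> mean = max = 5.
frames = np.ones((5, 40, 40))
background = np.zeros((40, 40))
mean_l, max_l = mlc_leakage(frames, background, 1.0, (0, 40, 0, 40))
print(mean_l, max_l)  # → 5.0 5.0
```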

  8. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    Science.gov (United States)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM), and lightning mapping arrays. These cameras provide significant spatial resolution advantages (~10 times or better) over ISS-LIS and GLM, but with lower temporal resolution; they can therefore serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features below the 4-km and 8-km resolutions of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in the geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and
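As a rough illustration of the spatial-resolution advantage over the 4-8 km lightning sensors, a pinhole model gives the nadir ground footprint of one camera pixel; the altitude, focal length, and pixel pitch below are assumed values for illustration, not the project's actual optics.

```python
def ground_sample_distance_m(altitude_m: float, focal_length_m: float,
                             pixel_pitch_m: float) -> float:
    """Nadir ground footprint of one pixel under a simple pinhole model."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Illustrative numbers only: ~400 km orbit, 400 mm lens, 8.4 um pixels.
print(ground_sample_distance_m(400e3, 0.4, 8.4e-6))  # → ~8.4 m per pixel
```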

  9. Integrating different tracking systems in football: multiple camera semi-automatic system, local position measurement and GPS technologies.

    Science.gov (United States)

    Buchheit, Martin; Allen, Adam; Poon, Tsz Kit; Modonutti, Mattia; Gregson, Warren; Di Salvo, Valter

    2014-12-01

    During the past decade substantial development of computer-aided tracking technology has occurred. We therefore aimed to provide calibration equations to allow the interchangeability of different tracking technologies used in soccer. Eighty-two highly trained soccer players (U14-U17) were monitored during training and one match. Player activity was collected simultaneously with a semi-automatic multiple-camera system (Prozone), local position measurement (LPM) technology (Inmotio), and two global positioning systems (GPSports and VX). Data were analysed with respect to three different field dimensions (small, medium, and large). The systems were compared, and calibration equations (linear regression models) between each system were calculated for each field dimension. Most metrics differed between the 4 systems, with the magnitude of the differences dependent on both pitch size and the variable of interest. Trivial-to-small between-system differences in total distance were noted. However, high-intensity running distance (>14.4 km·h⁻¹) was slightly-to-moderately greater when tracked with Prozone, and accelerations small-to-very-largely greater with LPM. For most of the equations, the typical error of the estimate was of a moderate magnitude. Interchangeability of the different tracking systems is possible with the provided equations, but care is required given their moderate typical error of the estimate.
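Interchangeability via a calibration equation amounts to fitting a linear regression between paired measurements from two systems and applying it to new values. A minimal sketch with invented distances (not the study's data):

```python
import numpy as np

# Invented paired high-intensity running distances (m) from two trackers.
system_a = np.array([420.0, 510.0, 380.0, 600.0, 450.0, 530.0])
system_b = np.array([455.0, 560.0, 410.0, 660.0, 490.0, 575.0])

# Calibration equation: linear regression mapping system A onto system B.
slope, intercept = np.polyfit(system_a, system_b, 1)

def to_system_b(a_value_m: float) -> float:
    """Convert a system-A distance into its system-B equivalent."""
    return slope * a_value_m + intercept

print(round(to_system_b(500.0), 1))
```

In practice one equation would be fitted per metric and per field dimension, and the typical error of the estimate reported alongside the coefficients.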

  10. The Advanced Gamma-ray Imaging System (AGIS) - Camera Electronics Development

    Science.gov (United States)

    Tajima, Hiroyasu; Bechtol, K.; Buehler, R.; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Hanna, D.; Horan, D.; Humensky, B.; Karlsson, N.; Kieda, D.; Konopelko, A.; Krawczynski, H.; Krennrich, F.; Mukherjee, R.; Ong, R.; Otte, N.; Quinn, J.; Schroedter, M.; Swordy, S.; Wagner, R.; Wakely, S.; Weinstein, A.; Williams, D.; Camera Working Group; AGIS Collaboration

    2010-03-01

    AGIS, a next-generation imaging atmospheric Cherenkov telescope (IACT) array, aims to achieve a sensitivity level of about one milliCrab for gamma-ray observations in the energy band of 50 GeV to 100 TeV. Achieving this level of performance will require on the order of 50 telescopes with perhaps as many as 1M total electronics channels. The larger scale of AGIS requires a very different approach from the currently operating IACTs, with lower-cost and lower-power electronics incorporated into camera modules designed for high reliability and easy maintenance. Here we present the concept and development status of the AGIS camera electronics.

  11. 2D turbulence structure observed by a fast framing camera system in linear magnetized device PANTA

    International Nuclear Information System (INIS)

    Ohdachi, Satoshi; Inagaki, S.; Kobayashi, T.; Goto, M.

    2015-01-01

    Mesoscale structures, such as the zonal flow and the streamer, play an important role in drift-wave turbulence. The interaction of mesoscale structures with the turbulence is not only an interesting phenomenon but also a key to understanding turbulence-driven transport in magnetically confined plasmas. In the cylindrical magnetized device PANTA, the interaction of the streamer and the drift wave has been found by bi-spectrum analysis of the turbulence. In order to study the mesoscale physics directly, the 2D turbulence is studied with a fast-framing visible camera system viewing through a window located at the end plate of the device. The plasma parameters are as follows: Te ∼ 3 eV, n ∼ 1×10¹⁹ m⁻³, Ti ∼ 0.3 eV, B = 900 G, neutral pressure Pn = 0.8 mTorr, a ∼ 6 cm, L = 4 m, helicon source (7 MHz, 3 kW). The fluctuating component of the visible image is decomposed by the Fourier-Bessel expansion method. Several rotating modes are observed simultaneously. From the images, the m = 1 (f ∼ 0.7 kHz) and m = 2, 3 (f ∼ -3.4 kHz) components, which rotate in opposite directions, can easily be distinguished. Though the modes rotate steadily most of the time, there are periods in which a radially complicated node structure forms (for example, the m = 3 component at t = 142.5∼6 in the figure) and the coherent mode structures are disturbed. A new rotation period then starts again with a phase different from that of the initial rotation, until the next event happens. The typical time interval between events is 0.5 to 1.0 times the period of one rotation of the slow m = 1 mode. The wave-wave interaction might be interrupted occasionally. Detailed analysis of the turbulence using the imaging technique will be discussed. (author)
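The azimuthal part of a Fourier-Bessel style decomposition, i.e. picking out the m = 1, 2, 3 rotating components on a ring of pixels, can be sketched with a plain FFT over the angle coordinate (synthetic data stand in for the camera image):

```python
import numpy as np

# Synthetic ring of pixel intensities carrying an m = 2 perturbation.
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
ring = 1.0 + 0.5 * np.cos(2.0 * theta + 0.3)

# Azimuthal Fourier decomposition; amplitude of each mode m = 1..3.
coeffs = np.fft.rfft(ring) / ring.size
amplitudes = 2.0 * np.abs(coeffs[1:4])
dominant_m = int(np.argmax(amplitudes)) + 1
print(dominant_m, round(float(amplitudes[dominant_m - 1]), 3))  # → 2 0.5
```

The full method would additionally expand the radial direction in Bessel functions and track the mode phases frame by frame to obtain rotation frequencies.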

  12. HiRes camera and LIDAR ranging system for the Clementine mission

    Energy Technology Data Exchange (ETDEWEB)

    Ledebuhr, A.G.; Kordas, J.F.; Lewis, I.T. [and others

    1995-04-01

    Lawrence Livermore National Laboratory developed a space-qualified High Resolution (HiRes) imaging LIDAR (Light Detection And Ranging) system for use on the DoD Clementine mission. The Clementine mission provided more than 1.7 million images of the moon, earth, and stars, including the first ever complete systematic surface mapping of the moon from the ultraviolet to near-infrared spectral regions. This article describes the Clementine HiRes/LIDAR system, discusses design goals and preliminary estimates of on-orbit performance, and summarizes lessons learned in building and using the sensor. The LIDAR receiver system consists of a High Resolution (HiRes) imaging channel, which incorporates an intensified multi-spectral visible camera, combined with a laser ranging channel, which uses an avalanche photo-diode for laser pulse detection and timing. The receiver was boresighted to a light-weight McDonnell-Douglas diode-pumped Nd:YAG laser transmitter that emitted 1.06 {micro}m wavelength pulses of 200 mJ/pulse and 10 ns pulse-width. The LIDAR receiver uses a common F/9.5 Cassegrain telescope assembly. The optical path of the telescope is split using a color-separating beamsplitter. The imaging channel incorporates a filter wheel assembly that spectrally selects the light imaged onto a custom 12 mm gated image intensifier fiber-optically coupled into a 384 x 276 pixel frame-transfer CCD FPA. The image intensifier was spectrally sensitive over the 0.4 to 0.8 {micro}m wavelength region. The six-position filter wheel contained 4 narrow spectral filters, one broadband filter, and one blocking filter. At periselene (400 km) the HiRes/LIDAR imaged a 2.8 km swath width at 20-meter resolution. The LIDAR function detected differential signal return with a 40-meter range accuracy, with a maximum range capability of 640 km, limited by the bit counter in the range-return counting clock.
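The ranging figures quoted above follow directly from pulse time-of-flight, R = c·t/2. A minimal sketch:

```python
C_M_S = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s: float) -> float:
    """One-way range from the pulse round-trip time: R = c * t / 2."""
    return C_M_S * round_trip_s / 2.0

def round_trip_s(range_m: float) -> float:
    """Round-trip time a pulse needs to reach a target at range_m."""
    return 2.0 * range_m / C_M_S

# The 640 km maximum range implies a ~4.27 ms round trip that the counting
# clock must span; a 40 m range accuracy corresponds to ~0.27 us of timing.
print(f"{round_trip_s(640e3) * 1e3:.2f} ms")  # → 4.27 ms
```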

  13. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    Science.gov (United States)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. That method, however, considered only the enhancement of texture images. In this study, we modified the texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that the fine detail of the low-resolution video can be reproduced, compared with bicubic interpolation, and that the required bandwidth of a video camera could be reduced to about 1/5. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the image processed using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman's patch-based super-resolution method. Compared with Freeman's patch-based super-resolution method, the computational time of our method was reduced to about 1/10.
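The PSNR figure of merit used above is defined from the mean squared error against a reference image; a minimal implementation:

```python
import numpy as np

def psnr_db(reference: np.ndarray, processed: np.ndarray,
            peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two images of the same shape."""
    diff = reference.astype(np.float64) - processed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 1 grey level against a 255 peak gives ~48.1 dB.
print(round(psnr_db(np.zeros((8, 8)), np.ones((8, 8))), 2))  # → 48.13
```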

  14. Dynamical scene analysis with a moving camera: mobile targets detection system

    International Nuclear Information System (INIS)

    Hennebert, Christine

    1996-01-01

    This thesis deals with the detection of moving objects in monocular image sequences acquired with a mobile camera. We propose a method able to detect small moving objects in visible or infrared images of real outdoor scenes. In order to detect objects with very low apparent motion, we consider analysis over a large temporal interval. We chose to compensate for the dominant motion due to the camera displacement over several consecutive images, forming a sub-sequence of images for which the camera appears virtually static. We also developed a new approach for extracting the different layers of a real scene, to handle cases where the 2D motion due to the camera displacement cannot be globally compensated for. To this end, we use a hierarchical model with two levels: a local merging step and a global one. An appropriate temporal filter is then applied to the registered image sub-sequence to enhance signals corresponding to moving objects. The detection issue is stated as a labeling problem within a statistical regularization framework based on Markov Random Fields. Our method has been validated on numerous real image sequences depicting complex outdoor scenes. Finally, the feasibility of an integrated circuit for moving object detection has been demonstrated; this circuit could lead to the creation of an ASIC. (author) [fr

  15. Home video monitoring system for neurodegenerative diseases based on commercial HD cameras

    NARCIS (Netherlands)

    Abramiuc, B.; Zinger, S.; De With, P.H.N.; De Vries-Farrouh, N.; Van Gilst, M.M.; Bloem, B.; Overeem, S.

    2016-01-01

    Neurodegenerative disease (ND) is an umbrella term for chronic disorders that are characterized by severe joint cognitive-motor impairments, which are difficult to evaluate on a frequent basis. HD cameras in the home environment could extend and enhance the diagnosis process and could lead to better

  16. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires a higher fringe density in the projected patterns, which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities of conventional high-fringe-density three-step PSP patterns without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be performed efficiently and reliably by flexible phase consistency checks. Furthermore, the redundant information of multiple phase consistency checks is fully exploited through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that, in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
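The wrapped phase of a three-step PSP sequence can be recovered per pixel with the standard textbook formula for equal 120° shifts (shown as a generic illustration; the authors' exact phase-shift convention is not stated in the abstract):

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with -120/0/+120 degree shifts:
    phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3), in (-pi, pi]."""
    i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i1, i2, i3))
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Resolving the 2π ambiguities of this wrapped phase is exactly the unwrapping problem that the quad-camera consistency checks address.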

  17. Tests of a new CCD-camera based neutron radiography detector system at the reactor stations in Munich and Vienna

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, E; Pleinert, H [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Schillinger, B [Technische Univ. Muenchen (Germany); Koerner, S [Atominstitut der Oesterreichischen Universitaeten, Vienna (Austria)

    1997-09-01

    The performance of the new neutron radiography detector designed at PSI, with a cooled, highly sensitive CCD camera, was investigated under real neutronic conditions at three beam ports of two reactor stations. Different converter screens were applied, for which the sensitivity and the modulation transfer function (MTF) were obtained. The results are very encouraging concerning the utilization of this detector system as a standard tool at the radiography stations of the spallation source SINQ. (author) 3 figs., 5 refs.

  18. A Projector-Camera System for Augmented Card Playing and a Case Study with the Pelmanism Game

    Directory of Open Access Journals (Sweden)

    Nozomu Tanaka

    2017-05-01

    Full Text Available In this article, we propose a system for augmented card playing with a projector and a camera, to add playfulness and increase communication among players of a traditional card game. The functionalities were derived on the basis of a user survey session with actual players. Playing cards are recognized using a video camera, on the basis of template matching without any artificial markers, with an accuracy greater than 0.96. Players are also tracked, to provide person-dependent services, using a video camera aimed at the direction from which their hands appear over the table. These functions are provided as an API; therefore, the user of our system, i.e., a developer, can easily augment playing card games. The Pelmanism game was augmented on top of the system to validate the concept of augmentation. The results showed the feasibility of the system's performance in an actual environment and its potential for enhancing playfulness and communication among players.
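Markerless card recognition by template matching, as mentioned above, can be sketched as a brute-force normalized cross-correlation search (an illustrative NumPy version; the system's actual matcher and thresholds are not detailed in the abstract):

```python
import numpy as np

def match_template(image, template):
    """Return ((row, col), score) of the best zero-mean normalized
    cross-correlation match of `template` inside `image`."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = tnorm * np.sqrt((wz ** 2).sum())
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

In practice one template per card face would be matched and the best score compared against an acceptance threshold such as the reported 0.96.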

  19. Measurement of the iodine uptake by the thyroid: comparative analysis between the gamma camera system with 'pinhole' collimator and 13S002 system

    International Nuclear Information System (INIS)

    Silva, Carlos Borges da; Mello, Rossana Corbo R. de; Rebelo, Ana Maria O.

    2002-01-01

    Thyroid uptake measurements are common in medical use and are considered a direct and precise form of diagnosis; however, different results have been observed when thyroid uptake is measured using distinct equipment. This study attempts to find the cause of the differences between a thyroid uptake probe and a gamma camera. These discrepancies can be associated with different patient samples, equipment problems, or errors in operator procedures. This work presents the results of comparative uptake measurements performed on a neck phantom, and of a 4-hour thyroid uptake study in 40 patients, using a Gamma Camera Ohio Nuclear model Sigma 410 with a pinhole collimator and the Nuclear Medicine System model 13S002, developed by Instituto de Engenharia Nuclear. The results show that, in spite of the unsatisfactory results reported in the literature, both the System 13S002 and the Ohio Gamma Camera can be used in thyroid uptake diagnosis with a statistical confidence level of 99%. (author)
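The quantity being compared between the two devices is the percent uptake. In its generic textbook form (background-corrected neck counts relative to the administered standard; decay correction and the specific 13S002 protocol are omitted here), it is:

```python
def thyroid_uptake_percent(neck_counts, thigh_counts,
                           standard_counts, standard_bkg=0.0):
    """Percent thyroid uptake:
    (neck - body background) / (standard - room background) * 100."""
    return 100.0 * (neck_counts - thigh_counts) / (standard_counts - standard_bkg)
```

Because the result is a ratio of counts measured on the same device, systematic sensitivity differences between a probe and a gamma camera largely cancel, which is why both instruments can in principle yield comparable uptake values.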

  20. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    Science.gov (United States)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied in many fields, including households and industrial sites. User interface technology with simple on-screen displays has been implemented more and more widely. User demands are increasing, and such systems have ever more fields of application due to the high penetration rate of the Internet; the demand for embedded systems therefore tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on an embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and each frame from the web camera is compared with the previous one to measure the displacement vector, using a block matching algorithm together with an edge detection algorithm for speed. The displacement vector is then used for pan/tilt motor control through an RS-232 serial cable. The embedded board uses the S3C2410 MPU, built around the ARM920T core from Samsung. The operating system was ported to an embedded Linux kernel, and the root file system was mounted. The stored images are sent to the client PC through a web browser, using the network functions of Linux and a program developed on the TCP/IP protocol.
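The displacement-vector measurement by block matching mentioned above can be sketched as an exhaustive sum-of-absolute-differences (SAD) search (an illustrative NumPy version; the embedded implementation and its edge-detection speed-up are not reproduced):

```python
import numpy as np

def block_match(prev, curr, top, left, size=8, radius=4):
    """Displacement (dy, dx) of the block at (top, left) in `prev` that
    minimizes the sum of absolute differences (SAD) within `curr`."""
    block = prev[top:top + size, left:left + size].astype(np.float64)
    best, best_d = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + size > curr.shape[0] or c + size > curr.shape[1]:
                continue  # candidate block outside the frame
            cand = curr[r:r + size, c:c + size].astype(np.float64)
            sad = np.abs(cand - block).sum()
            if sad < best:
                best, best_d = sad, (dy, dx)
    return best_d
```

The resulting (dy, dx) is exactly the kind of vector that would be converted into pan/tilt motor commands over the serial link.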

  1. Morphodynamic study of haemophilic joint diseases with the scintillation camera and a whole-body scintigraphy system

    International Nuclear Information System (INIS)

    Koutoulidis, C.; Papathanassiou, B.T.; Kambouroglou, G.; Louisou, C.; Mandalaki, T.

    1975-01-01

    Joint lesions in haemophiliacs were studied with a new whole-body scintigraphic system which combines the scintillation camera and an automatic travelling table. The results obtained are compared with clinical and radiographic data, and an attempt is made to explain the mechanism of tracer uptake (99mTc-tin-pyrophosphate) in the lesions. This system is found to offer great advantages over traditional systems for the study of haemophilic joint diseases, since all the affected joints can be assessed in a very short time, a very important point in following the progress of the lesions, preventing relapses, and checking the efficiency of the treatment.

  2. Calibration of robot tool centre point using camera-based system

    Directory of Open Access Journals (Sweden)

    Gordić Zaviša

    2016-01-01

    Full Text Available The robot Tool Centre Point (TCP) calibration problem is of great importance for a number of industrial applications, and it is well known both in theory and in practice. Although various techniques have been proposed for solving this problem, they mostly require tool jogging or long processing time, both of which affect process performance by extending cycle time. This paper presents an innovative way of TCP calibration using a set of two cameras. The robot tool is placed in an area where images in two orthogonal planes are acquired by the cameras. Using robust pattern recognition, even a deformed tool can be identified in the images, and information about its current position and orientation is forwarded to the control unit for calibration. Compared to other techniques, test results show a significant reduction in procedure complexity and calibration time. These improvements enable more frequent TCP checking and recalibration during production, thus improving product quality.

  3. Speed of sound and photoacoustic imaging with an optical camera based ultrasound detection system

    Science.gov (United States)

    Nuster, Robert; Paltauf, Guenther

    2017-07-01

    CCD camera based optical ultrasound detection is a promising alternative approach for high resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve high image resolution, the speed of sound (SOS) has to be taken into account in the image reconstruction algorithm. Hence, the proposed work presents the idea and a first implementation of how speed-of-sound imaging can be added to a previously developed camera based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.

  4. Opto-mechanical design of the G-CLEF flexure control camera system

    Science.gov (United States)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first light instrument of the Giant Magellan Telescope (GMT). G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera: it monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator, including triple lenses, for producing a pupil; neutral density filters, which allow a much brighter star to be used as a target or a guide; a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror; a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane; and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified after the PDR in April 2015.

  5. Experimental task-based optimization of a four-camera variable-pinhole small-animal SPECT system

    Science.gov (United States)

    Hesterman, Jacob Y.; Kupinski, Matthew A.; Furenlid, Lars R.; Wilson, Donald W.

    2005-04-01

    We have previously utilized lumpy object models and simulated imaging systems in conjunction with the ideal observer to compute figures of merit for hardware optimization. In this paper, we describe the development of the methods and phantoms necessary to validate or experimentally carry out these optimizations. Our study was conducted on a four-camera small-animal SPECT system that employs interchangeable pinhole plates to operate under a variety of pinhole configurations and magnifications (representing optimizable system parameters). We developed a small-animal phantom capable of producing random backgrounds for each image sequence. The task chosen for the study was the detection of a 2 mm diameter sphere within the phantom-generated random background. A total of 138 projection images were used, half of which included the signal. As our observer, we employed the channelized Hotelling observer (CHO) with Laguerre-Gauss channels. The signal-to-noise ratio (SNR) of this observer was used to compare different system configurations. Results indicate agreement between experimental and simulated data, with higher detectability rates found for multiple-camera, multiple-pinhole, and high-magnification systems, although it was found that mixtures of magnifications often outperform systems employing a single magnification. This work will serve as a basis for future studies pertaining to system hardware optimization.
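The observer SNR used as the figure of merit can be sketched as the generic channelized Hotelling computation on already-channelized samples (the Laguerre-Gauss channelization step itself is not shown, and this is an illustration of the standard CHO formula, not the authors' code):

```python
import numpy as np

def hotelling_snr(signal_samples, noise_samples):
    """CHO SNR: sqrt(dm^T K^-1 dm), where dm is the mean difference of the
    channel outputs between classes and K their class-averaged covariance.
    Rows are image samples, columns are channel outputs."""
    dm = signal_samples.mean(axis=0) - noise_samples.mean(axis=0)
    cov = 0.5 * (np.cov(signal_samples, rowvar=False)
                 + np.cov(noise_samples, rowvar=False))
    template = np.linalg.solve(cov, dm)  # Hotelling template w = K^-1 dm
    return float(np.sqrt(dm @ template))
```

Higher SNR corresponds to higher detectability of the 2 mm sphere, which is how the different pinhole/magnification configurations were ranked.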

  6. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

    OpenAIRE

    Mur-Artal, Raul; Tardos, Juan D.

    2016-01-01

    We present ORB-SLAM2, a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our syst...

  7. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    Science.gov (United States)

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. Many studies have used far-infrared (FIR) light cameras (i.e., thermal cameras) for CNN-based pedestrian detection to address these difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by feeding both the visible light and the FIR camera images into the CNN. This, however, takes longer to process, and makes the system structure more complex, as the CNN needs to process both camera images. This research adaptively selects the more appropriate candidate between the two pedestrian images from the visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than previously reported methods.

  8. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  9. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera, and in particular to an apparatus for determining the position coordinates of a light-pulse-emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising: at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes; circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum of the output voltages of the photomultipliers, for gating the output of the processing circuit when the amplitude of that sum voltage lies in a predetermined amplitude range; and means for compensating the distortion introduced in the image on the anode screen.

  10. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the state of the art is described, permitting not only good localization but also energy discrimination. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with bromotrifluoromethane is mounted. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU)

  11. Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam

    International Nuclear Information System (INIS)

    Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A.E.; Engelhardt, M.

    2005-01-01

    When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The basic set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10⁷ cm⁻² s⁻¹, which enables sequences to be observed in reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the effective range must be less due to the inherent noise level of the intensifier. The obtained results should be seen as the starting point for meeting the different requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operation principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves and camshaft with a micro-channel plate intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points.

  12. Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

    Science.gov (United States)

    Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2009-03-01

    Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement from a standing posture, and the capture of good-quality iris images in an acceptable time. The proposed system makes the following three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly thanks to 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast, using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.
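Given an estimated 3-D face position, the pan and tilt commands needed to aim a PTZ camera at it follow from simple geometry. The sketch below assumes a camera-centered frame (x right, y up, z forward) and is an illustration of that geometry, not the paper's actual control code:

```python
import math

def pan_tilt_to_target(x, y, z):
    """Pan/tilt angles (radians) that aim the optical axis at point (x, y, z)."""
    pan = math.atan2(x, z)                   # rotate about the vertical axis
    tilt = math.atan2(y, math.hypot(x, z))   # then elevate toward the point
    return pan, tilt
```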

  13. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering the multiple application needs and the limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability, which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data, which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X, and their capabilities were explored in the real environment. The data prove that the camera can be used for taking long-exposure (10-100 ms) overview images of the plasma, while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and for similar setups on future long-pulse fusion experiments such as ITER are discussed.

  14. Strategy for the Development of a Smart NDVI Camera System for Outdoor Plant Detection and Agricultural Embedded Systems

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2013-01-01

    Full Text Available The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background, enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed.

  15. Strategy for the development of a smart NDVI camera system for outdoor plant detection and agricultural embedded systems.

    Science.gov (United States)

    Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe

    2013-01-24

    The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed.
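The per-pixel NDVI described in both records above is straightforward to compute once registered red and NIR channels are available (a generic sketch; the epsilon guard against division by zero is an implementation convenience, not part of the index definition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index per pixel:
    NDVI = (NIR - red) / (NIR + red), in [-1, 1]."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```

Vegetation reflects strongly in the NIR band and absorbs red light, so plant pixels yield high NDVI values while soil background stays low, which is the discrimination principle the one-chip camera design exploits.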

  16. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier to the use of video analytics. Automating the calibration allows for a short configuration time, and for the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We present an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is the intra-camera geometry estimation, which leads to an estimate of the tilt angle, focal length and camera height; this is important for the conversion from pixels to meters and vice versa. The second is the inter-camera topology inference, which leads to an estimate of the distance between cameras; this is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
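One building block of such pedestrian-based calibration is the pinhole relation between a detection's pixel height and its distance. The zero-tilt sketch below is illustrative only, with an assumed average pedestrian height; the paper's full method also estimates the tilt angle and camera height:

```python
def pedestrian_distance(focal_px, bbox_height_px, person_height_m=1.75):
    """Pinhole estimate of the distance to a pedestrian: Z = f * H / h,
    with f the focal length in pixels, H the assumed real-world height,
    and h the detection box height in pixels."""
    return focal_px * person_height_m / bbox_height_px
```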

  17. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-06-01

    Full Text Available For plant breeding and growth monitoring, accurate measurements of plant structure parameters are very crucial. We have, therefore, developed a high efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracies of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions were the highest, and the number of reconstructed fine-scale 3D model shape surfaces of leaf and stem was the largest. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of the plant parameters. They also showed that our system is a good system for capturing high-resolution 3D images of nursery plants with high efficiency.

  18. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    Science.gov (United States)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS²) for real-time image processing. Truly standalone, μAVS² is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS² operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS² imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS² affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS² can easily be reconfigured for other prosthetic systems. Testing of μAVS² with actual retinal implant carriers is envisioned in the near future.
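The linear sequential-loop filter chain described above can be sketched as a simple fold over user-selected filter functions (the two filters shown are hypothetical examples for illustration, not part of the described system's actual filter set):

```python
import numpy as np

def run_pipeline(frame, filters):
    """Apply the user-defined filters in order, each output feeding the next,
    so only one frame buffer is live at a time (low memory footprint)."""
    for f in filters:
        frame = f(frame)
    return frame

def invert(img):
    """Example filter: photometric negative of an 8-bit image."""
    return 255 - img

def threshold(img, level=128):
    """Example filter: binarize an 8-bit image at the given level."""
    return np.where(img >= level, 255, 0).astype(img.dtype)
```

Because each filter is just a function from frame to frame, users can reorder or repeat filters freely, which mirrors the user-defined ordering the system offers.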

  19. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Science.gov (United States)

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are very crucial. We have, therefore, developed a high efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracies of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions were the highest, and these settings also yielded the largest number of reconstructed fine-scale 3D surface patches of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348
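The RMSE and R² figures reported above are standard goodness-of-fit measures; a generic sketch (with made-up sample values, not the paper's data) is:

```python
import math

def rmse(measured, estimated):
    """Root-mean-square error between paired measurements and estimates."""
    n = len(measured)
    return math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, estimated)) / n)

def r_squared(measured, estimated):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - e) ** 2 for m, e in zip(measured, estimated))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

# Made-up example values (mm), not the paper's data:
measured = [10.0, 20.0, 30.0]
estimated = [11.0, 19.0, 31.0]
print(rmse(measured, estimated))       # 1.0
print(r_squared(measured, estimated))  # 0.985
```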

  20. An Improved Indoor Positioning System Using RGB-D Cameras and Wireless Networks for Use in Complex Environments

    Directory of Open Access Journals (Sweden)

    Jaime Duque Domingo

    2017-10-01

    Full Text Available This work presents an Indoor Positioning System to estimate the location of people navigating in complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments, consisting of various connected rooms, where people are freely moving. This is a non-intrusive system in which personal information about subjects is not needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smart-phones. In this article, the methods developed to combine the above-mentioned technologies and the experiments performed to test the system are detailed. The obtained results show a significant improvement in terms of accuracy and performance with respect to previous WiFi-based solutions as well as an extension in the range of operation.

  1. ORIS: the Oak Ridge Imaging System program listings. [Nuclear medicine imaging with rectilinear scanner and gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Bell, P. R.; Dougherty, J. M.

    1978-04-01

    The Oak Ridge Imaging System (ORIS) is a general-purpose access, storage, processing and display system for nuclear medicine imaging with rectilinear scanners and gamma cameras. This volume contains listings of the PDP-8/E version of ORIS Version 2. The system is designed to run under the Digital Equipment Corporation's OS/8 monitor in 16K or more words of core. System and image file mass storage is on RK8E disk; longer-term image file storage is provided on DECtape. Another version of this program exists for use with the RF08 disk, and a more limited version runs from DECtape only; this latter version is intended for non-medical imaging.

  2. The development of a virtual camera system for astronaut-rover planetary exploration.

    Science.gov (United States)

    Platt, Donald W; Boy, Guy A

    2012-01-01

    A virtual assistant is being developed for use by astronauts as they use rovers to explore the surface of other planets. This interactive database, called the Virtual Camera (VC), gives the user better situational awareness during exploration. It can be used for training, data analysis, and augmentation of actual surface exploration. This paper describes the development efforts and the human-computer interaction (HCI) considerations involved in implementing a first-generation VC on a tablet mobile computer device. Scenarios for use are presented, along with evaluation and success criteria such as efficiency in terms of processing time and precision, situational awareness, learnability, usability, and robustness. Initial testing and the impact of HCI design considerations on manipulation and on improvement in situational awareness using a prototype VC are discussed.

  3. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  4. Adaptive strategies of remote systems operators exposed to perturbed camera-viewing conditions

    Science.gov (United States)

    Stuart, Mark A.; Manahan, Meera K.; Bierschwale, John M.; Sampaio, Carlos E.; Legendre, A. J.

    1991-01-01

    This report describes a preliminary investigation of the use of perturbed visual feedback during the performance of simulated space-based remote manipulation tasks. The primary objective of this NASA evaluation was to determine to what extent operators exhibit adaptive strategies that allow them to perform these specific types of remote manipulation tasks more efficiently while exposed to perturbed visual feedback. A secondary objective was to establish a set of preliminary guidelines for enhancing remote manipulation performance and reducing the adverse effects of such feedback. These objectives were accomplished by studying the remote manipulator performance of test subjects exposed to various perturbed camera-viewing conditions while performing a simulated space-based remote manipulation task. Statistical analysis of performance and subjective data revealed that remote manipulation performance was adversely affected by perturbed visual feedback, and performance tended to improve with successive trials in most perturbed viewing conditions.

  5. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operating principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs

  6. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes, each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to the phototube outputs develops the scintillation-event position-coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes. The phototubes can therefore be positioned as close to the scintillator as possible, yielding less distortion in the field of view and improved spatial resolution compared with conventional planar-photocathode gamma cameras.

  7. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distances between the individual photomultiplier tubes of the multiple-radioisotope camera on one hand, and between the tube configuration and the scintillator plate on the other. For this purpose, the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means, the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG)

  8. Invention and validation of an automated camera system that uses optical character recognition to identify patient name mislabeled samples.

    Science.gov (United States)

    Hawker, Charles D; McCarthy, William; Cleveland, David; Messinger, Bonnie L

    2014-03-01

    Mislabeled samples are a serious problem in most clinical laboratories. Published error rates range from 0.39/1000 to as high as 1.12%. Standardization of bar codes and label formats has not yet achieved the needed improvement. The mislabel rate in our laboratory, although low compared with published rates, prompted us to seek a solution to achieve zero errors. To reduce or eliminate our mislabeled samples, we invented an automated device using 4 cameras to photograph the outside of a sample tube. The system uses optical character recognition (OCR) to look for discrepancies between the patient name in our laboratory information system (LIS) vs the patient name on the customer label. All discrepancies detected by the system's software then require human inspection. The system was installed on our automated track and validated with production samples. We obtained 1 009 830 images during the validation period, and every image was reviewed. OCR passed approximately 75% of the samples, and no mislabeled samples were passed. The 25% failed by the system included 121 samples actually mislabeled by patient name and 148 samples with spelling discrepancies between the patient name on the customer label and the patient name in our LIS. Only 71 of the 121 mislabeled samples detected by OCR were found through our normal quality assurance process. We have invented an automated camera system that uses OCR technology to identify potential mislabeled samples. We have validated this system using samples transported on our automated track. Full implementation of this technology offers the possibility of zero mislabeled samples in the preanalytic stage.
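The screening logic, comparing the OCR-read label name against the LIS name and routing disagreements to human inspection, can be sketched generically; the normalization rule and names here are illustrative assumptions, not the validated system's actual matching code:

```python
def normalize(name):
    # Illustrative normalization: ignore case and runs of whitespace.
    return " ".join(name.upper().split())

def screen_sample(lis_name, ocr_name):
    """Pass a sample when the names agree; otherwise flag it for review."""
    if normalize(lis_name) == normalize(ocr_name):
        return "pass"
    return "review"  # a human inspects every discrepancy

print(screen_sample("Smith, John", "SMITH,  JOHN"))  # pass
print(screen_sample("Smith, John", "Smyth, John"))   # review
```

A scheme of this shape fails safe: OCR misreads produce spurious "review" results (extra human work) rather than passed mislabels, consistent with the validation outcome above in which no mislabeled sample was passed.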

  9. FPGA-Based HD Camera System for the Micropositioning of Biomedical Micro-Objects Using a Contactless Micro-Conveyor

    Directory of Open Access Journals (Sweden)

    Elmar Yusifli

    2017-03-01

    Full Text Available With recent advancements, micro-object contactless conveyors are becoming an essential part of the biomedical sector. They help avoid the infection and damage that can occur due to external contact. In this context, a smart micro-conveyor is devised. It is a Field Programmable Gate Array (FPGA)-based system that employs a smart surface for conveyance along with an OmniVision complementary metal-oxide-semiconductor (CMOS) HD camera for micro-object position detection and tracking. A specific FPGA-based hardware design and VHSIC Hardware Description Language (VHDL) implementation are realized without employing any Nios processor or System on a Programmable Chip (SOPC) builder based Central Processing Unit (CPU) core, which keeps the system efficient in terms of resource utilization and power consumption. The micro-object positioning status is captured with an embedded FPGA-based camera driver and communicated to the Image Processing, Decision Making and Command (IPDC) module. The IPDC is programmed in C++ and can run on a Personal Computer (PC) or on any appropriate embedded system. The IPDC decisions are sent back to the FPGA, which pilots the smart surface accordingly. In this way, an automated closed-loop system is employed to convey the micro-object towards a desired location. The devised system architecture and implementation principle are described and its functionality is verified. Results have confirmed the proper functionality of the developed system and that it outperforms other solutions.

  10. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    Directory of Open Access Journals (Sweden)

    Antonio Lagudi

    2016-04-01

    Full Text Available The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It compensates for the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of the rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimate of the unknown geometric transformation is obtained by registering the two 3D point clouds, but it turns out to be strongly affected by noise and data dispersion; a robust and optimal estimate is obtained by statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.

  11. Measurement of liquid film flow on nuclear rod bundle in micro-scale by using very high speed camera system

    Science.gov (United States)

    Pham, Son; Kawara, Zensaku; Yokomine, Takehiko; Kunugi, Tomoaki

    2012-11-01

    Playing an important role in mass and heat transfer as well as in the safety of boiling water reactors, the liquid film flow on nuclear fuel rods has been studied by different measurement techniques such as ultrasonic transmission and conductivity probes. The experimental data obtained for this annular two-phase flow, however, are still not sufficient to construct a physical model for critical heat flux analysis, especially at the micro-scale. The remaining problems are mainly caused by the complicated geometry of fuel rod bundles and by the high velocity and very unstable interface behavior of the liquid and gas flow. To overcome these difficulties, a new approach using a very high speed digital camera system has been introduced in this work. The test section, simulating a 3×3 rectangular rod bundle, was made of acrylic to allow full optical observation by the camera. Image data were taken through a Cassegrain optical system to maintain the spatiotemporal resolution up to 7 μm and 20 μs. The results included not only real-time visual information on flow patterns, but also quantitative data such as liquid film thickness, droplet size and speed distributions, and the tilt angle of wavy surfaces. These databases could contribute to the development of a new model for annular two-phase flow. Partly supported by the Global Center of Excellence (G-COE) program (J-051) of MEXT, Japan.

  12. A Proposal and Evaluation of Security Camera System at a Car Park in an Ad-Hoc Network

    Science.gov (United States)

    Uemura, Wataru; Murata, Masashi

    In recent years, ad-hoc network technology, in which the network consists not of access points and base stations but of wireless nodes, has gained attention. In such a network it is difficult to manage the overall data flow when nodes share data, because there are no access points to act as network administrators. This paper proposes a security camera system that consists only of nodes sharing the captured pictures and that is robust against data destruction. In broadcasting, the sender node cannot know whether its packets have been received by neighboring nodes, because the communication is unidirectional. In our proposed method, therefore, the sender node selects a receiver node from among its neighbors and communicates with it directly, while the other neighboring nodes listen to the packets exchanged between the sender and the receiver. This method thus guarantees that at least one node receives the data in each broadcast. We constructed the security camera system using wireless nodes conforming to the IEEE 802.15.4 specification and show its security performance. Finally, we use a simulator to show its efficiency in a large environment and conclude the paper.
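The delivery guarantee described, where the sender unicasts to one selected neighbor with acknowledgements while the others overhear, can be sketched as a toy simulation; the selection policy, loss model, and node names are assumptions for illustration:

```python
import random

def broadcast_round(neighbors, loss_prob, rng):
    """Return the set of neighbors that obtain the packet in one round.

    The designated receiver always gets it (the sender retransmits until
    acknowledged), so at least one node is guaranteed; the remaining
    neighbors overhear the exchange on a best-effort basis.
    """
    receiver = neighbors[0]           # illustrative selection policy
    got = {receiver}                  # guaranteed via ack/retransmit
    for node in neighbors[1:]:
        if rng.random() > loss_prob:  # overhearing may fail
            got.add(node)
    return got

rng = random.Random(42)
delivered = broadcast_round(["n1", "n2", "n3"], loss_prob=0.5, rng=rng)
print("n1" in delivered)  # True: the selected receiver is always covered
```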

  13. Diagnostic performance of a novel cadmium-zinc-telluride gamma camera system assessed using fractional flow reserve.

    Science.gov (United States)

    Tanaka, Hirokazu; Chikamori, Taishiro; Tanaka, Nobuhiro; Hida, Satoshi; Igarashi, Yuko; Yamashita, Jun; Ogawa, Masashi; Shiba, Chie; Usui, Yasuhiro; Yamashina, Akira

    2014-01-01

    Although the novel cadmium-zinc-telluride (CZT) camera system provides excellent image quality, its diagnostic value using thallium-201 as assessed on coronary angiography (CAG) and fractional flow reserve (FFR) has not been validated. METHODS AND RESULTS: To evaluate the diagnostic accuracy of the CZT ultrafast camera system (Discovery NM 530c), 95 patients underwent stress thallium-201 single-photon emission computed tomography (SPECT) and then CAG within 3 months. Image acquisition was performed in the supine and prone positions after stress for 5 and 3 min, respectively, and in the supine position at rest for 10 min. Significant stenosis was defined as ≥90% diameter narrowing on visual estimation, or a lesion with <90% and ≥50% stenosis and FFR ≤0.75. To detect individual coronary stenosis, the respective sensitivity, specificity, and accuracy were 90%, 64%, and 78% for left anterior descending coronary artery stenosis, 78%, 84%, and 81% for left circumflex stenosis, and 83%, 47%, and 60% for right coronary artery (RCA) stenosis. The combination of prone and supine imaging had a higher specificity for RCA disease than supine imaging alone (65% vs. 47%), with an improvement in accuracy from 60% to 72%. Using thallium-201 with short acquisition time, combined with prone imaging, CZT SPECT had a high diagnostic yield in detecting significant coronary stenosis as assessed using FFR.
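The per-vessel sensitivity, specificity, and accuracy quoted above follow the usual confusion-matrix definitions; a generic sketch (with illustrative counts, not the study's raw data) is:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)             # true positive rate
    specificity = tn / (tn + fp)             # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Illustrative counts only (chosen to reproduce 90% / 64% figures):
sens, spec, acc = diagnostic_metrics(tp=9, fn=1, tn=16, fp=9)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.9 0.64 0.71
```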

  14. Development and performance test of picosecond pulse x-ray excited streak camera system for scintillator characterization

    International Nuclear Information System (INIS)

    Yanagida, Takayuki; Fujimoto, Yutaka; Yoshikawa, Akira

    2010-01-01

    To observe time- and wavelength-resolved scintillation events, a picosecond pulse X-ray excited streak camera system was developed. The wavelength range spreads from the vacuum ultraviolet (VUV) to the near infrared region (110-900 nm) and the instrumental response function is around 80 ps. This work describes the principle of the newly developed instrument and the first performance test using a BaF2 single crystal scintillator. The core-valence luminescence of BaF2, peaking around 190 and 220 nm, is clearly detected by our system, and the decay time turned out to be 0.7 ns. These results are consistent with the literature and confirm that our system works properly. (author)

  15. SU-C-207A-03: Development of Proton CT Imaging System Using Thick Scintillator and CCD Camera

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, S; Uesaka, M [The University of Tokyo, Tokyo (Japan); Nishio, T; Tsuneda, M [Hiroshima University, Hiroshima (Japan); Matsushita, K [Rikkyo University, Tokyo (Japan); Kabuki, S [Tokai University, Isehara (Japan)

    2016-06-15

    Purpose: In the treatment planning of proton therapy, the Water Equivalent Length (WEL), which is the parameter used for calculating the dose and range of protons, is derived from the X-ray CT (xCT) image through an xCT-WEL conversion. However, an error of about a few percent in the accuracy of proton range calculation through this conversion has been reported. The purpose of this study is to construct a proton CT (pCT) imaging system for evaluating this error. Methods: The pCT imaging system was constructed with a thick scintillator and a cooled CCD camera, which acquires a two-dimensional image of the scintillation light integrated along the beam direction. The pCT image is reconstructed by the FBP method using a correction between the light intensity and the residual range of the proton beam. An experiment to demonstrate this system was performed with a 70-MeV proton beam provided by the NIRS cyclotron. The pCT images of several objects reconstructed from the experimental data were evaluated quantitatively. Results: Three-dimensional pCT images of several objects were reconstructed experimentally. A fine structure of approximately 1 mm was clearly observed. The position resolution of the pCT image was almost the same as that of the xCT image, and the error in the pCT pixel values was up to 4%. The deterioration of image quality was caused mainly by the effect of multiple Coulomb scattering. Conclusion: We designed and constructed a pCT imaging system using a thick scintillator and a CCD camera, and evaluated it in an experiment using a 70-MeV proton beam. Three-dimensional pCT images of several objects were acquired by the system. This work was supported by JST SENTAN Grant Number 13A1101 and JSPS KAKENHI Grant Number 15H04912.

  16. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms that integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully exploit the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationships. This paper tackles a specific component of the system calibration process of a multi-camera MMS: the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, is introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame in the case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data, and a comparative analysis between the proposed single-step procedure and the traditional two-step procedure, which makes use of the bundle adjustment, is presented.
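The quantity being estimated, the lever-arm offset and boresight rotation between two cameras, follows from a standard rigid-transform identity when each camera's pose in a common frame is known; the sketch below shows only that identity, not the paper's single-step estimator:

```python
def transpose(A):
    """Transpose of a 3x3 matrix."""
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    """Product of two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def relative_orientation(R1, t1, R2, t2):
    """Boresight rotation and lever-arm of camera 2 in camera 1's frame,
    given each camera's rotation R and position t in a common mapping frame."""
    R1t = transpose(R1)
    boresight = mat_mul(R1t, R2)
    lever_arm = mat_vec(R1t, [b - a for a, b in zip(t1, t2)])
    return boresight, lever_arm

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]  # 90 degrees about z
R, t = relative_orientation(I, [0, 0, 0], Rz90, [1, 2, 3])
print(R == Rz90, t == [1, 2, 3])  # True True
```

With identical rotations the boresight reduces to the identity and the lever-arm is simply the position difference expressed in the first camera's axes.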

  17. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  18. Fast image acquisition and processing on a TV camera-based portal imaging system

    International Nuclear Information System (INIS)

    Baier, K.; Meyer, J.

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus™). This approach employs not only the hardware and software included in the standard package installed by the manufacturer (in particular the frame grabber card and the Matrox™ Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox™ interpreter script that enables the image capturing process. With this method it is possible to obtain raw images directly from the frame grabber card at an acquisition rate of 15 images per second, whereas the original configuration by the manufacturer allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). (orig.)

  19. The design of visualization telemetry system based on camera module of the commercial smartphone

    Science.gov (United States)

    Wang, Chao; Ye, Zhao; Wu, Bin; Yin, Huan; Cao, Qipeng; Zhu, Jun

    2017-09-01

    Satellite telemetry provides the vital indicators for assessing the performance of a satellite. The telemetry data, threshold ranges, and variation trends collected during the whole operational life of the satellite can guide and inform the subsequent design of future satellites. The rotating parts of the satellite (e.g., solar arrays, antennas, and oscillating mirrors) affect the collection of solar energy and other functions of the satellite. Visualization telemetry (pictures, video) is captured to interpret the status of the satellite qualitatively in real time, as an important supplement for troubleshooting. Mature commercial off-the-shelf (COTS) products have obvious advantages in terms of construction, electronics, interfaces, and image processing; also considering weight, power consumption, and cost, they can be used directly in our application or adapted through secondary development. In this paper, simulations of the in-orbit radiation characteristics of solar arrays are presented, and a suitable camera module from a commercial smartphone is selected after precise calculation and a product selection process. Considering the advantages of COTS devices, which can address both fundamental and complicated satellite problems, the proposed technique is innovative and applicable to future project implementations.

  20. Action cameras and the Roter interaction analysis system to assess veterinarian-producer interactions in a dairy setting.

    Science.gov (United States)

    Ritter, Caroline; Barkema, Herman W; Adams, Cindy L

    2018-02-24

    Herd health and production management (HH&PM) are critical aspects of production animal veterinary practice; therefore, dairy veterinarians need to deliver these services effectively. However, limited research that can inform veterinary education has been conducted to characterise these farm visits. The aim of the present study was to assess the applicability of action cameras (eg, GoPro cameras) worn by veterinarians to provide on-farm recordings, and the suitability of these recordings for comprehensive communication analyses. Seven veterinarians each recorded three dairy HH&PM visits. Recordings were analysed using the Roter interaction analysis system (RIAS), which has been used to evaluate medical conversations in human and companion animal contexts and has provided insights regarding the importance of effective clinical communication; however, the RIAS has never been used in a production animal environment. Results of this pilot study indicate that on-farm recordings were suitable for RIAS coding, that dairy practitioners allocate a substantial amount of talk to relationship-building and farmer education, and that communication patterns of the same veterinarian vary considerably between farm visits. Consecutive studies using this method will provide observational data for research purposes and promise to aid in the improvement of veterinary education through the identification of communication priorities and gaps in dairy advisory discussions. © British Veterinary Association (unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. A fast framing camera system for observation of acceleration and ablation of cryogenic hydrogen pellet in ASDEX Upgrade plasmas

    International Nuclear Information System (INIS)

    Kocsis, G.; Kalvin, S.; Veres, G.; Cierpka, P.; Lang, P.T.; Neuhauser, J.; Wittman, C.; ASDEX Upgrade Team

    2004-01-01

    An observation system using fast digital cameras was developed to measure a cryogenic hydrogen pellet's cloud structure, trajectory, and velocity changes during its ablation in ASDEX Upgrade plasmas. In this article the system, the applied numerical methods, and the results are presented. The three-dimensional pellet trajectory and velocity components were reconstructed from images of observations from two different directions. Pellet acceleration both in the radial and toroidal directions was detected. The pellet cloud distribution was measured with high spatio-temporal resolution. The cloud surrounding the pellet was found to be elongated along the magnetic field lines. Its typical size is 5-7 cm along the field lines and 2 cm in the perpendicular directions. A cloud extension in the poloidal direction was also observed which may be related to the drift of the detached part of the cloud

  2. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  3. Spatial resolution limit study of a CCD camera and scintillator based neutron imaging system according to MTF determination and analysis

    International Nuclear Information System (INIS)

    Kharfi, F.; Denden, O.; Bourenane, A.; Bitam, T.; Ali, A.

    2012-01-01

    The spatial resolution limit is a very important parameter of an imaging system that should be taken into consideration before examination of any object. The objective of this work is to determine the response of a neutron imaging system in terms of spatial resolution. The proposed procedure is based on establishing the Modulation Transfer Function (MTF). The imaging system under study is based on a high-sensitivity CCD neutron camera (2×10⁻⁵ lx at f1.4). The neutron beam used is from the horizontal beam port (H.6) of the Algerian Es-Salam research reactor. Our contribution to the MTF determination is an accurate edge-identification method and a procedure for resolving the undersampling problem of the line spread function. These methods and procedures are integrated into a MatLab code. The methods, procedures and approaches proposed in this work are applicable to any other neutron imaging system and make it possible to judge the ability of a neutron imaging system to reproduce the spatial (internal-detail) properties of any object under examination. - Highlights: ► Determination of the spatial response of a neutron imaging system. ► Ability of a neutron imaging system to reproduce spatial properties of any object. ► Spatial resolution limit measurement using the MTF with the slanted-edge method. ► Accurate edge identification and line spread function sampling improvement. ► Development of a MatLab code to compute the MTF automatically.
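
The edge-based MTF chain the record describes can be sketched in a few lines: differentiate an edge spread function to obtain the line spread function, then take its normalised Fourier magnitude. This is a minimal illustration with a synthetic edge, not the authors' MatLab code; the slanted-edge oversampling step is omitted.

```python
import numpy as np

# Synthetic edge spread function (ESF): an ideal edge blurred by the
# imaging chain (a tanh profile stands in for the scintillator/CCD blur).
x = np.arange(256, dtype=float)
esf = 0.5 * (1.0 + np.tanh((x - 128.0) / 4.0))

# Line spread function (LSF) = derivative of the ESF.
lsf = np.gradient(esf)
lsf /= lsf.sum()                       # normalise to unit area

# MTF = magnitude of the Fourier transform of the LSF, with MTF(0) = 1.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(lsf.size)      # spatial frequency, cycles/pixel

# The spatial resolution limit is often quoted where the MTF falls to 10 %.
mtf10 = freqs[np.argmax(mtf < 0.1)]
print(f"MTF10 resolution limit: {mtf10:.3f} cycles/pixel")
```

In a real measurement the ESF would come from an oversampled slanted-edge image rather than an analytic profile.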

  4. EDUCATING THE PEOPLE AS A DIGITAL PHOTOGRAPHER AND CAMERA OPERATOR VIA OPEN EDUCATION SYSTEM STUDIES FROM TURKEY: Anadolu University Open Education Faculty Case

    Directory of Open Access Journals (Sweden)

    Huseyin ERYILMAZ

    2010-04-01

    Today, photography and the visual arts are very important in modern life. For mass communication in particular, visual images carry great weight, and people in modern societies need knowledge of visual material such as photographs, cartoons, drawings and typography; in short, they need education in visual literacy. Most people today own a digital still or video camera, but it is not possible to provide visual literacy education to them through the classic school system, and camera users need a teaching medium for using their cameras effectively. Many turn to the internet, using websites and pages as an information source, but, as is well known, not all websites provide correct information, and many contain mistakes. For these reasons, Anadolu University Open Education Faculty started a new programme in 2009 to educate people as digital photographers and camera operators, and this programme has considerable importance as a case study. The language of photography and digital technology is English, which not all camera users understand; through this programme, camera users, and especially people working as studio camera operators, will learn a great deal about photography, digital technology and camera systems, as well as composition and the history of the visual image. For these reasons the programme is especially important for developing countries. This paper discusses this subject.

  5. The simulated spectrum of the OGRE X-ray EM-CCD camera system

    Science.gov (United States)

    Lewis, M.; Soman, M.; Holland, A.; Lumb, D.; Tutt, J.; McEntaffer, R.; Schultz, T.; Holland, K.

    2017-12-01

    The X-ray astronomical telescopes in use today, such as Chandra and XMM-Newton, use X-ray grating spectrometers to probe the high-energy physics of the Universe. These instruments typically use reflective optics to focus light onto gratings that disperse incident X-rays across a detector, often a Charge-Coupled Device (CCD); the X-ray energy is determined from the position at which it is detected on the CCD. Improved technology for the next generation of X-ray grating spectrometers has been developed and will be tested on a sounding rocket experiment known as the Off-plane Grating Rocket Experiment (OGRE). OGRE aims to capture the highest-resolution soft X-ray spectrum of Capella, a well-known astronomical X-ray source, during an observation period lasting between 3 and 6 minutes, whilst proving the performance and suitability of three key components: a telescope made from silicon mirrors, gold-coated silicon X-ray diffraction gratings, and a camera comprising four Electron-Multiplying (EM)-CCDs arranged to observe the soft X-rays dispersed by the gratings. EM-CCDs have an architecture similar to standard CCDs, with the addition of an EM gain register in which the electron signal is amplified so that the effective signal-to-noise ratio of the imager is improved. The devices also have highly favourable quantum efficiency for detecting soft X-ray photons. On OGRE, this improved detector performance allows easier identification of low-energy X-rays and fast readout, because the amplified signal charge makes readout noise almost negligible. A simulation that applies the OGRE instrument performance to the Capella soft X-ray spectrum has been developed, allowing the distribution of X-rays onto the EM-CCDs to be predicted. A proposed optical model is also discussed which would give the mission's minimum success criterion for photon counts a high chance of being met in the shortest possible observation time.
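
The benefit of the EM gain register described above can be made concrete with a small signal-to-noise sketch. The numbers are illustrative, and the √2 excess-noise factor at high EM gain is standard EM-CCD theory rather than a figure from the OGRE camera.

```python
import math

def snr(signal_e, read_noise_e, em_gain=1.0, excess_noise=math.sqrt(2)):
    """SNR for a mean signal in electrons. EM gain divides the
    input-referred read noise by the gain, at the cost of inflating the
    shot noise by the excess-noise factor (~sqrt(2) at high gain)."""
    shot = (excess_noise if em_gain > 1 else 1.0) * math.sqrt(signal_e)
    read = read_noise_e / em_gain
    return signal_e / math.sqrt(shot ** 2 + read ** 2)

# A faint 5 e- soft X-ray event fragment against 10 e- rms read noise:
conventional = snr(5.0, 10.0)             # buried in read noise
emccd = snr(5.0, 10.0, em_gain=1000.0)    # read noise rendered negligible
print(conventional, emccd)
```

The faint event is undetectable in the conventional case but comfortably above noise once the gain register suppresses the effective read noise.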

  6. Development of the focal plane PNCCD camera system for the X-ray space telescope eROSITA

    International Nuclear Information System (INIS)

    Meidinger, Norbert; Andritschke, Robert; Ebermayer, Stefanie; Elbs, Johannes; Haelker, Olaf; Hartmann, Robert; Herrmann, Sven; Kimmel, Nils; Schaechner, Gabriele; Schopper, Florian; Soltau, Heike; Strueder, Lothar; Weidenspointner, Georg

    2010-01-01

    A so-called PNCCD, a special type of CCD, was developed twenty years ago as focal plane detector for the XMM-Newton X-ray astronomy mission of the European Space Agency ESA. Based on this detector concept and taking into account the experience of almost ten years of operation in space, a new X-ray CCD type was designed by the 'MPI semiconductor laboratory' for an upcoming X-ray space telescope, called eROSITA (extended Roentgen survey with an imaging telescope array). This space telescope will be equipped with seven X-ray mirror systems of Wolter-I type and seven CCD cameras, placed in their foci. The instrumentation permits the exploration of the X-ray universe in the energy band from 0.3 up to 10 keV by spectroscopic measurements with a time resolution of 50 ms for a full image comprising 384x384 pixels. Main scientific goals are an all-sky survey and investigation of the mysterious 'Dark Energy'. The eROSITA space telescope, which is developed under the responsibility of the 'Max-Planck-Institute for extraterrestrial physics', is a scientific payload on the new Russian satellite 'Spectrum-Roentgen-Gamma' (SRG). The mission is already approved by the responsible Russian and German space agencies. After launch in 2012 the destination of the satellite is Lagrange point L2. The planned observational program takes about seven years. We describe the design of the eROSITA camera system and present important test results achieved recently with the eROSITA prototype PNCCD detector. This includes a comparison of the eROSITA detector with the XMM-Newton detector.

  7. Design and test of optoelectronic system of alignment control based on CCD camera

    Science.gov (United States)

    Anisimov, A. G.; Gorbachyov, A. A.; Krasnyashchikh, A. V.; Pantushin, A. N.; Timofeev, A. N.

    2008-10-01

    In this work, the design, implementation and testing of a system intended for positioning the elements of turbine units relative to the shaft line with high precision are discussed. A procedure for converting coordinates from the instrument system into the system connected with the practical position of the turbine axis has been devised. It is shown that optoelectronic alignment systems built on an autoreflection scheme can be used for high-precision measurements.

  8. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …

  9. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which obviates the distortion of locating signals generally caused by the varied light-conductive capacities of the light conductors: the flow of light through each light conductor may be varied by means of a shutter. The flow of light through the individual light conductors may thus be balanced on the basis of their light-conductive capacities or properties, so as to preclude a distortion of the locating signals. Each light conductor has associated with it two shutters that are adjustable independently of each other, one forming a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be carried out on the basis of the output signals of the photoelectric transducer. (auth)

  10. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera makes pictures of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computer circuits that derive, from the photomultiplier outputs, an analytical function dependent on the position of the scintillation in the crystal at that time. The scintillation crystal is flat and spatially corresponds to the site of radiation production. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, and to each series group belongs a reference axis running perpendicular to it in the crystal plane. The computer circuits are each assigned to a reference axis. Each series of a series group assigned to one of the reference axes has, in the computer circuit, an adder to produce a scintillation-dependent series signal. Furthermore, the projection of the scintillation onto this reference axis is calculated, using the series signals originating from two neighbouring photomultiplier series of this group between which the scintillation must have appeared; these are termed the basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de]
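
The per-axis projection that the patent computes with adders over photomultiplier series is essentially Anger logic: a signal-weighted centroid of the tube positions. A minimal numpy sketch with a hypothetical 3×3 tube layout and made-up signals:

```python
import numpy as np

# Hypothetical 3x3 photomultiplier layout (positions in arbitrary units)
# and the light each tube collects from one scintillation event.
pmt_x = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1], dtype=float)
pmt_y = np.array([ 1, 1, 1,  0, 0, 0, -1, -1, -1], dtype=float)
signal = np.array([ 1, 3, 1,  3, 9, 3,  1, 3, 1], dtype=float)

# Anger logic: the event position is the signal-weighted centroid of the
# tube positions -- the projection onto each reference axis that the
# patent's adder circuits compute per photomultiplier series.
total = signal.sum()
x_hat = float((signal * pmt_x).sum() / total)
y_hat = float((signal * pmt_y).sum() / total)
energy = float(total)          # summed light is proportional to deposited energy
print(x_hat, y_hat, energy)    # -> 0.0 0.0 25.0
```

A symmetric light distribution, as here, yields an event located at the centre tube.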

  11. Radiodiagnostic system combining scintillation GKS-2 γ camera and SAORI-01 computerized unit

    International Nuclear Information System (INIS)

    Kalashnikov, S.D.; Mishchenko, S.V.; Chuprov, P.V.

    1986-01-01

    A medical radiodiagnostic system comprising the GKS-2 scintillation gamma camera and an on-line data processing system is described. The gamma camera consists of a detector, a two-channel control console, a microprocessor system for correction of distortions, a system for photographic recording of images from an oscilloscopic display, a digital display-monitor and 6 accessory collimators with trucks. GKS-2 has increased spatial resolution, fast response and image homogeneity. Application of GKS-2 together with the SAORI-01 specialized on-line system, comprising the measuring computer system, the colour-graphic display controller, the polarizer photographic recording device for scanning images from the display screen and a set of base software, substantially expands the diagnostic possibilities of the equipment

  12. A Design of Real-time Automatic Focusing System for Digital Still Camera Using the Passive Sensor Error Minimization

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.S. [Samsung Techwin Co., Ltd., Seoul (Korea); Kim, D.Y. [Bucheon College, Bucheon (Korea); Kim, S.H. [University of Seoul, Seoul (Korea)

    2002-05-01

    In this paper, the implementation of a new AF (Automatic Focusing) system for a digital still camera is introduced. The proposed system operates in real time while adjusting focus after measuring the distance to an object with a passive sensor, which differs from typical methods. In addition, measurement errors were minimized by using empirically acquired data, and the optimal measuring time was obtained using EV (Exposure Value), calculated from the CCD luminance signal. Moreover, this system adopted an auxiliary light source for focusing in absolutely dark conditions, which are very hard for CCD image processing. Since this is an open-loop system that adjusts focus immediately after the distance measurement, it guarantees real-time operation. The performance of this new AF system was verified by comparing the focusing value curve obtained from the AF experiment with the one measured by MF (Manual Focusing). In both cases, an edge detector was used for various objects and backgrounds. (author). 9 refs., 11 figs., 5 tabs.
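
The "focusing value" obtained from an edge detector can be illustrated with a Tenengrad-style measure, a common contrast metric assumed here for illustration; the paper does not specify its exact detector.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter3(img, k):
    """Valid-mode 3x3 filtering implemented with shifted slices."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def focus_value(img):
    """Tenengrad focus measure: sum of squared Sobel gradient magnitudes.
    Sharp focus -> strong, concentrated edges -> high value."""
    gx, gy = filter3(img, SOBEL_X), filter3(img, SOBEL_Y)
    return float((gx ** 2 + gy ** 2).sum())

sharp = np.zeros((16, 16)); sharp[:, 8:] = 1.0          # crisp in-focus edge
blurred = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))   # the same edge smeared
print(focus_value(sharp) > focus_value(blurred))  # -> True
```

Sweeping the lens and plotting this value against focus position yields the focusing value curve compared in the paper.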

  13. An Intelligent Automated Door Control System Based on a Smart Camera

    Directory of Open Access Journals (Sweden)

    Jiann-Jone Chen

    2013-05-01

    This paper presents an innovative access control system, based on human detection and path analysis, to reduce false automatic door actions while increasing the added value for security applications. The proposed system first identifies a person in the scene, then tracks his or her trajectory to predict the intention to access the entrance, and finally activates the door accordingly. The experimental results show that the proposed system has the advantages of high precision, safety, and reliability, and can be responsive to demands, while preserving the benefits of low cost and high added value.

  14. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    Science.gov (United States)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  15. Planetcam: A Visible And Near Infrared Lucky-imaging Camera To Study Planetary Atmospheres And Solar System Objects

    Science.gov (United States)

    Sanchez-Lavega, Agustin; Rojas, J.; Hueso, R.; Perez-Hoyos, S.; de Bilbao, L.; Murga, G.; Ariño, J.; Mendikoa, I.

    2012-10-01

    PlanetCam is a two-channel fast-acquisition and low-noise camera designed for multispectral study of the atmospheres of the planets (Venus, Mars, Jupiter, Saturn, Uranus and Neptune) and the satellite Titan at high temporal and spatial resolution, simultaneously in visible (0.4-1 μm) and NIR (1-2.5 μm) channels. This is accomplished by means of a dichroic beam splitter that separates the two beams, directing them onto two different detectors. Each detector has filter wheels with filters matched to the characteristic absorption bands of each planetary atmosphere. Images are acquired and processed using the "lucky imaging" technique, in which several thousand images of the same object are obtained in a short time interval, coregistered, and ordered by image quality to reconstruct a high-resolution, ideally diffraction-limited image of the object. The images will also be calibrated in terms of intensity and absolute reflectivity. The camera will be tested at the 50.2 cm telescope of the Aula EspaZio Gela (Bilbao) and then commissioned at the 1.05 m telescope at Pic-du-Midi Observatory (France) and at the 1.23 m telescope at Calar Alto Observatory in Spain. Among the initially planned research targets are: (1) the vertical structure of the clouds and hazes in the planets and their scales of variability; (2) the meteorology, dynamics and global winds of the planets and their scales of variability. PlanetCam is also expected to perform studies of other Solar System and astrophysical objects. Acknowledgments: This work was supported by the Spanish MICIIN project AYA2009-10701 with FEDER funds, by Grupos Gobierno Vasco IT-464-07 and by Universidad País Vasco UPV/EHU through program UFI11/55.
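
The frame selection at the heart of lucky imaging can be sketched as: score every short exposure, keep the best few percent, and average them. The synthetic "star" frames and the peak-brightness quality metric are illustrative; registration and PlanetCam's photometric calibration are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frame(sigma):
    """Synthetic short exposure: a point source blurred by seeing of
    width `sigma`, plus faint detector noise."""
    y, x = np.mgrid[-16:16, -16:16].astype(float)
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum() + rng.normal(0.0, 1e-5, psf.shape)

def lucky_stack(frames, keep_fraction=0.1):
    """Rank frames by a quality metric (peak brightness of the
    normalised star image) and average only the sharpest few."""
    ranked = sorted(frames, key=lambda f: f.max(), reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return np.mean(ranked[:n_keep], axis=0)

frames = [make_frame(sigma=rng.uniform(1.0, 6.0)) for _ in range(100)]
best = lucky_stack(frames)          # stack of the 10 sharpest exposures
naive = np.mean(frames, axis=0)     # plain average of every exposure
print(best.max() > naive.max())     # -> True: selection preserves resolution
```

Discarding the poorly-seen frames keeps the stacked point source far more concentrated than a straight average of all exposures.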

  16. In vivo imaging of cerebral hemodynamics and tissue scattering in rat brain using a surgical microscope camera system

    Science.gov (United States)

    Nishidate, Izumi; Kanie, Takuya; Mustari, Afrina; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu; Kokubo, Yasuaki

    2018-02-01

    We investigated a rapid imaging method to monitor the spatial distributions of total hemoglobin concentration (CHbT), tissue oxygen saturation (StO2), and the scattering power b in the expression μs' = aλ^(-b) for the scattering parameters of the cerebral cortex, using a digital red-green-blue camera. In the method, a Monte Carlo simulation (MCS) of light transport in brain tissue is used to specify the relation among the RGB values and the concentration of oxygenated hemoglobin (CHbO), that of deoxygenated hemoglobin (CHbR), and the scattering power b. In the present study, we performed sequential recordings of RGB images of the in vivo exposed brain of rats while changing the fraction of inspired oxygen (FiO2), using a surgical microscope camera system. The time courses of CHbO, CHbR, CHbT, and StO2 showed the well-known physiological responses of the cerebral cortex. On the other hand, a fast decrease in the scattering power b was observed immediately after respiratory arrest, similar to the negative deflection of the extracellular DC potential known as anoxic depolarization. The DC shift is said to coincide with a rise in extracellular potassium and can evoke cell deformation generated by water movement between the intracellular and extracellular compartments, and hence a change in light scattering by tissue. The decrease in the scattering power b after respiratory arrest is therefore indicative of changes in light scattering by tissue. The results of this study indicate the potential of the method to evaluate pathophysiological conditions and loss of tissue viability in brain tissue.

  17. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    Science.gov (United States)

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  18. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early target detection and high-precision tracking are two important performance indicators that must be balanced in a practical target search and tracking system. This paper proposes a target tracking system with a preliminary and precise two-stage compound design. The system uses a large field of view to search for the target; after the target is found and confirmed, it switches to a small field of view for tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This combination of preliminary and precise stages extends the range over which targets can be acquired and improves tracking accuracy, and the method has practical value.
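
The "two groups of PID parameters" for the coarse and fine fields of view amount to a gain-scheduled controller; a minimal sketch follows, with all gains invented for illustration.

```python
class PID:
    """Textbook discrete PID acting on the pointing error (in pixels)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt=1.0 / 30.0):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# One parameter group per stage: soft gains for the wide search field,
# stiff gains for the narrow tracking field (values are made up).
controllers = {"wide": PID(kp=0.2, ki=0.01, kd=0.02),
               "narrow": PID(kp=0.8, ki=0.05, kd=0.10)}

def control(stage, err_px):
    """Select the PID group matching the current field of view."""
    return controllers[stage].step(err_px)

print(control("wide", 10.0), control("narrow", 10.0))
```

Switching fields of view then simply swaps which gain set drives the gimbal, mirroring the two-group arrangement described in the abstract.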

  19. Optimum power of radiation dose in X ray television systems of flaw inspection in industry

    International Nuclear Information System (INIS)

    Denbnovetskii, S.V.; Troitskii, V.A.; Belyi, N.G.; Grom, V.S.; Kuz'micheva, N.V.; Leshchishin, A.V.; Mikhailov, V.N.; Shutenko, O.V.

    1990-01-01

    The authors present the experimental dose characteristics of an X-ray television system based on X-ray vidicons with a working-field diameter of 900 mm, operating in continuous and pulsed conditions with extended accumulation times of the radiation image on the target of the X-ray vidicon. For each type of inspected material, material thickness, and accumulation time, the dose characteristics were used to determine the optimum exposure dose rate, ensuring the maximum signal-to-noise ratio and defect detectability at the output of the system. (author)

  20. Comparison of Near-Infrared Imaging Camera Systems for Intracranial Tumor Detection.

    Science.gov (United States)

    Cho, Steve S; Zeh, Ryan; Pierce, John T; Salinas, Ryan; Singhal, Sunil; Lee, John Y K

    2018-04-01

    Distinguishing neoplasm from normal brain parenchyma intraoperatively is critical for the neurosurgeon. 5-Aminolevulinic acid (5-ALA) has been shown to improve gross total resection and progression-free survival but has limited availability in the USA. Near-infrared (NIR) fluorescence has advantages over visible-light fluorescence, with greater tissue penetration and reduced background fluorescence. In order to prepare for the increasing number of NIR fluorophores that may be used in molecular imaging trials, we chose to compare a state-of-the-art neurosurgical microscope (System 1) to one of the commercially available NIR visualization platforms (System 2). Serial dilutions of indocyanine green (ICG) were imaged with both systems in the same environment. Each system's sensitivity and dynamic range for NIR fluorescence were documented and analyzed. In addition, brain tumors from six patients were imaged with both systems and analyzed. In vitro, System 2 demonstrated greater ICG sensitivity and detection range (System 1: 1.5-251 μg/l versus System 2: 0.99-503 μg/l). Similarly, in vivo, System 2 demonstrated a signal-to-background ratio (SBR) of 2.6 ± 0.63 before dura opening, 5.0 ± 1.7 after dura opening, and 6.1 ± 1.9 after tumor exposure. In contrast, System 1 could not easily detect ICG fluorescence prior to dura opening, with an SBR of 1.2 ± 0.15. After the dura was reflected, the SBR increased to 1.4 ± 0.19, and upon exposure of the tumor the SBR increased to 1.8 ± 0.26. Dedicated NIR imaging platforms can outperform conventional microscopes in intraoperative NIR detection. Future microscopes with improved NIR detection capabilities could enhance the use of NIR fluorescence to detect neoplasm and improve patient outcomes.

  1. SU-E-T-68: A Quality Assurance System with a Web Camera for High Dose Rate Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, Y; Hirose, A; Oohira, S; Isono, M; Tsujii, K; Miyazaki, M; Kawaguchi, Y; Konishi, K; Teshima, T [Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka-shi, Osaka (Japan)

    2015-06-15

    Purpose: The purpose of this work was to develop a quality assurance (QA) system for high dose rate (HDR) brachytherapy to verify the absolute position of an 192Ir source in real time and to measure dwell time and position of the source simultaneously with a movie recorded by a web camera. Methods: A web camera was fixed 15 cm above a source position check ruler to monitor and record 30 samples of the source position per second over a range of 8.0 cm, from 1425 mm to 1505 mm. Each frame had a matrix size of 480×640 in the movie. The source position was automatically quantified from the movie using in-house software (built with LabVIEW) that applied a template-matching technique. The source edge detected by the software on each frame was corrected to reduce position errors induced by incident light from an oblique direction. The dwell time was calculated by differential processing to displacement of the source. The performance of this QA system was illustrated by recording simple plans and comparing the measured dwell positions and time with the planned parameters. Results: This QA system allowed verification of the absolute position of the source in real time. The mean difference between automatic and manual detection of the source edge was 0.04 ± 0.04 mm. Absolute position error can be determined within an accuracy of 1.0 mm at dwell points of 1430, 1440, 1450, 1460, 1470, 1480, 1490, and 1500 mm, in three step sizes and dwell time errors, with an accuracy of 0.1% in more than 10.0 sec of planned time. The mean step size error was 0.1 ± 0.1 mm for a step size of 10.0 mm. Conclusion: This QA system provides quick verifications of the dwell position and time, with high accuracy, for HDR brachytherapy. This work was supported by the Japan Society for the Promotion of Science Core-to-Core program (No. 23003)
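
The "differential processing of displacement" used to extract dwell times and positions from the 30 Hz trace can be sketched as follows; the movement threshold and the synthetic trace are illustrative, not the authors' LabVIEW parameters.

```python
import numpy as np

FPS = 30  # the web camera samples the source position 30 times per second

def dwell_segments(positions_mm, tol=0.5):
    """Split a per-frame position trace into dwell points: runs of frames
    in which the source moves by less than `tol` mm between frames.
    Returns a list of (mean dwell position in mm, dwell time in s)."""
    segments = []
    start = 0
    for i in range(1, len(positions_mm)):
        if abs(positions_mm[i] - positions_mm[i - 1]) > tol:
            if i - start > 1:
                segments.append((float(np.mean(positions_mm[start:i])),
                                 (i - start) / FPS))
            start = i
    if len(positions_mm) - start > 1:
        segments.append((float(np.mean(positions_mm[start:])),
                         (len(positions_mm) - start) / FPS))
    return segments

# Synthetic trace: the source dwells at 1430 mm for 2 s, then steps to
# 1440 mm and dwells for 1 s.
trace = [1430.0] * 60 + [1440.0] * 30
print(dwell_segments(trace))  # -> [(1430.0, 2.0), (1440.0, 1.0)]
```

In the real system the per-frame positions would come from the template-matching step rather than a synthetic list.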

  2. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Toan Minh Hoang

    2017-10-01

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, the Caltech dataset, the Santiago Lanes dataset (SLD), and the Road Marking dataset, showed that our method outperformed conventional lane detection methods.

  3. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.

    Science.gov (United States)

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-10-28

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.
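
A fuzzy rule base for shadow-robust lane scoring can be illustrated in miniature. The membership breakpoints below are invented, and the paper's line segment detector stage is omitted; the point is only how fuzzy OR-aggregation keeps detection alive inside shadows.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def lane_score(edge, brightness):
    """Two rules, with min as AND and max as OR:
      R1: IF edge strong AND region bright THEN lane
      R2: IF edge strong AND region dark  THEN lane  (shadow rule)
    Aggregating with OR means a shadow does not suppress a strong edge."""
    strong = tri(edge, 0.3, 1.0, 1.7)         # 'strong edge'
    bright = tri(brightness, 0.4, 1.0, 1.6)   # 'bright surroundings'
    dark = tri(brightness, -0.6, 0.0, 0.6)    # 'shadowed surroundings'
    return max(min(strong, bright), min(strong, dark))

print(lane_score(0.9, 0.9))  # marking in sunlight: high score
print(lane_score(0.9, 0.2))  # the same marking inside a shadow: still detected
```

A crisp threshold on brightness would discard the shadowed marking entirely; the fuzzy formulation degrades gracefully instead.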

  4. Real-time camera-based face detection using a modified LAMSTAR neural network system

    Science.gov (United States)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640×480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted at auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.
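
Each sub-window layer described above is a Kohonen SOM; its core training step can be sketched as follows. The map size, stand-in patch data, and learning parameters are invented, and the LAMSTAR correlation links between layers are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim = 16, 64                # 1-D map of 16 nodes, 8x8-pixel inputs
weights = rng.random((n_nodes, dim))

def train_step(x, lr=0.2, radius=2):
    """Kohonen update: move the best-matching unit (BMU) and its map
    neighbours towards the input vector."""
    bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
    for i in range(max(0, bmu - radius), min(n_nodes, bmu + radius + 1)):
        weights[i] += lr * (x - weights[i])
    return bmu

def quantisation_error(data):
    """Mean squared distance from each input to its nearest map node."""
    return float(np.mean([((weights - x) ** 2).sum(axis=1).min() for x in data]))

# Stand-in for feature patches from two kinds of sub-window content:
patches = np.concatenate([0.2 + 0.05 * rng.standard_normal((100, dim)),
                          0.8 + 0.05 * rng.standard_normal((100, dim))])
err_before = quantisation_error(patches)
for x in patches:
    train_step(x)
err_after = quantisation_error(patches)
print(err_before > err_after)  # -> True: the map has learned the clusters
```

After training, the BMU index returned for a patch serves as the layer's discrete output that the correlation links would then combine.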

  5. How to Generate Security Cameras: Towards Defence Generation for Socio-Technical Systems

    NARCIS (Netherlands)

    Gadyatskaya, Olga

    2016-01-01

    Recently security researchers have started to look into automated generation of attack trees from socio-technical system models. The obvious next step in this trend of automated risk analysis is automating the selection of security controls to treat the detected threats. However, the existing

  6. Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy

    Science.gov (United States)

    Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, legal representative, Alicia (Inventor); Gursel, Yekta (Inventor)

    2012-01-01

    An optical system comprising two lens cells, each containing multiple lens elements, provides imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field of view of the instrument.
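
How changing the separation of two lens cells changes the image distance and magnification can be sketched with paraxial ray-transfer matrices. The focal lengths, separations and object distance below are invented; the patent's cells are multi-element, not single thin lenses.

```python
import numpy as np

def thin_lens(f):
    """Ray-transfer matrix of a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def gap(d):
    """Ray-transfer matrix for free propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def image_distance_and_mag(f1, f2, sep, obj_dist):
    """Paraxial image distance (measured after the second cell) and
    transverse magnification for two thin-lens cells separated by `sep`."""
    M = thin_lens(f2) @ gap(sep) @ thin_lens(f1)
    A, B = M[0]
    C, D = M[1]
    # Imaging condition: the object-to-image matrix must have a zero
    # B-element, which fixes the image distance v.
    v = -(A * obj_dist + B) / (C * obj_dist + D)
    mag = A + v * C
    return v, mag

# Moving the second cell changes both the image distance and magnification:
for sep in (40.0, 60.0):
    v, m = image_distance_and_mag(f1=50.0, f2=50.0, sep=sep, obj_dist=200.0)
    print(f"sep={sep:.0f} mm: image {v:.1f} mm after cell 2, magnification {m:.3f}")
```

This is the paraxial mechanism behind the patent's claim that varying the cell separation sweeps the magnification while keeping an image in focus.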

  7. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures
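    The graduated intensity falloff at the tapered collimator edges effectively feathers adjacent scan strips into one another. A generic linear-feathering blend of two overlapping strips (the strip geometry here is an illustrative assumption, not the patent's implementation) can be sketched as:

```python
import numpy as np

def blend_strips(strip_a, strip_b, overlap):
    """Blend two equally tall strips whose last/first `overlap` columns coincide.

    Weights ramp linearly from 1 to 0 across the overlap, so the two
    contributions always sum to a constant, mimicking the graduated
    intensity reduction of a tapered collimator edge."""
    ramp = np.linspace(1.0, 0.0, overlap)
    blended = strip_a[:, -overlap:] * ramp + strip_b[:, :overlap] * (1.0 - ramp)
    return np.hstack([strip_a[:, :-overlap], blended, strip_b[:, overlap:]])

# Two flat strips of equal intensity should blend seamlessly.
a = np.full((4, 10), 100.0)
b = np.full((4, 10), 100.0)
out = blend_strips(a, b, overlap=4)
print(out.shape)  # (4, 16)
```

    With equal-intensity inputs the blended region stays flat at 100, which is the point of the complementary ramps: no seam appears where the scan paths overlap.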

  8. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    Only fragments of the report documentation page survive in this record. Subject terms: polarimetric camera, remote sensing, space systems; 93 pages. The recoverable text indicates data collections at Hermann Hall, Monterey, CA (2016), and on 01 December 2016 at 1226 PST on the rooftop of the Marriot Hotel (Figure 37 of the report).

  9. Development of an Optical Fiber-Based MR Compatible Gamma Camera for SPECT/MRI Systems

    Science.gov (United States)

    Yamamoto, Seiichi; Watabe, Tadashi; Kanai, Yasukazu; Watabe, Hiroshi; Hatazawa, Jun

    2015-02-01

    Optical fiber is a promising material for integrated positron emission tomography (PET) and magnetic resonance imaging (MRI) (PET/MRI) systems. Because the material is plastic, it does not interfere with MRI. However, it was unclear whether this material can also be used for a single photon emission computed tomography (SPECT)/MRI system. For this purpose, we developed an optical fiber-based block detector for a SPECT/MRI system and tested its performance. We combined 1.2 × 1.2 × 6 mm Y2SiO5 (YSO) pixels into a 15 × 15 block and coupled it to an optical fiber image guide made of 0.5-mm-diameter, 80-cm-long double-clad fibers. The image guide had a 22 × 22 mm rectangular input and an equal-size output. The input of the optical fiber-based image guide was bent at 90 degrees, and the output was optically coupled to a 1-in square high quantum efficiency position-sensitive photomultiplier tube (HQE-PSPMT). A parallel-hole, 7-mm-thick collimator made of tungsten plastic was mounted on the YSO block. The collimator holes were 0.8 mm in diameter and positioned in one-to-one coupling with the YSO pixels. We evaluated the intrinsic and system performance. We resolved most of the YSO pixels in a two-dimensional histogram for Co-57 gamma photons (122 keV) with an average peak-to-valley ratio of 1.5. The energy resolution was 38% full-width at half-maximum (FWHM). The system resolution was 1.7-mm FWHM at 1.5 mm from the collimator surface, and the sensitivity was 0.06%. Images of a Co-57 point source were successfully obtained inside a 0.3 T MRI without serious interference. We conclude that the developed optical fiber-based YSO block detector is promising for SPECT/MRI systems.

  10. Feasibility of integrating a multi-camera optical tracking system in intra-operative electron radiation therapy scenarios

    International Nuclear Information System (INIS)

    García-Vázquez, V; Marinetto, E; Santos-Miranda, J A; Calvo, F A; Desco, M; Pascau, J

    2013-01-01

    Intra-operative electron radiation therapy (IOERT) combines surgery and ionizing radiation applied directly to an exposed unresected tumour mass or to a post-resection tumour bed. The radiation is collimated and conducted by a specific applicator docked to the linear accelerator. The dose distribution in tissues to be irradiated and in organs at risk can be planned through a pre-operative computed tomography (CT) study. However, surgical retraction of structures and resection of a tumour affecting normal tissues significantly modify the patient's geometry. Therefore, the treatment parameters (applicator dimension, pose (position and orientation), bevel angle, and beam energy) may require the original IOERT treatment plan to be modified depending on the actual surgical scenario. We propose the use of a multi-camera optical tracking system to reliably record the actual pose of the IOERT applicator in relation to the patient's anatomy in an environment prone to occlusion problems. This information can be integrated in the radio-surgical treatment planning system in order to generate a real-time accurate description of the IOERT scenario. We assessed the accuracy of the applicator pose by performing a phantom-based study that resembled three real clinical IOERT scenarios. The error obtained (2 mm) was below the acceptance threshold for external radiotherapy practice, thus encouraging future implementation of this approach in real clinical IOERT scenarios. (paper)
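    Recording the applicator pose relative to the patient's anatomy from optically tracked markers amounts to a rigid-body registration problem. A standard solution, shown here as a generic sketch rather than the authors' implementation, is the Kabsch algorithm, which recovers the least-squares rotation and translation between two matched point sets:

```python
import numpy as np

def rigid_pose(model_pts, observed_pts):
    """Least-squares R, t with observed ≈ R @ model + t (Kabsch algorithm)."""
    cm = model_pts.mean(axis=0)
    co = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - cm).T @ (observed_pts - co)
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = co - R @ cm
    return R, t

# Verify on a synthetic marker set with a known 30-degree rotation.
rng = np.random.default_rng(0)
model = rng.random((4, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -2.0, 5.0])
observed = model @ R_true.T + t_true
R, t = rigid_pose(model, observed)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

    A multi-camera system feeds one such registration per frame; redundant cameras keep the marker set visible when one line of sight is occluded.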

  11. Ladder beam and camera video recording system for evaluating forelimb and hindlimb deficits after sensorimotor cortex injury in rats.

    Science.gov (United States)

    Soblosky, J S; Colgin, L L; Chorney-Lane, D; Davidson, J F; Carey, M E

    1997-12-30

    Hindlimb and forelimb deficits in rats caused by sensorimotor cortex lesions are frequently tested using the narrow flat beam (hindlimb), the narrow pegged beam (hindlimb and forelimb), or the grid-walking (forelimb) tests. Although these are excellent tests, the narrow flat beam generates non-parametric data, which precludes the use of more powerful parametric statistical analyses. All these tests can be difficult to score if the rat is moving rapidly. Foot misplacements, especially on the grid-walking test, are indicative of an ongoing deficit, but have not previously been reliably and accurately described and quantified. In this paper we present an easy-to-construct and easy-to-use horizontal ladder beam with a rail-mounted camera system which can be used to evaluate both hindlimb and forelimb deficits in a single test. By slow-motion videotape playback we were able to quantify and demonstrate foot misplacements which persist beyond the recovery period usually seen with more conventional measures (i.e. footslips and footfaults). This convenient system provides a rapid and reliable method for recording and evaluating rat performance on any type of beam and may be useful for measuring sensorimotor recovery following brain injury.

  12. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building a fine 3D model from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. A 3D model should in fact contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which provides a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  13. Comparison of a three-dimensional and two-dimensional camera system for automated measurement of back posture in dairy cows

    NARCIS (Netherlands)

    Viazzi, S.; Bahr, C.; Hertem, van T.; Schlageter-Tello, A.; Romanini, C.E.B.; Halachmi, I.; Lokhorst, C.; Berckmans, D.

    2014-01-01

    In this study, two different computer vision techniques to automatically measure the back posture in dairy cows were tested and evaluated. A two-dimensional and a three-dimensional camera system were used to extract the back posture from walking cows, which is one measurement used by experts to

  14. Design of an Active Multispectral SWIR Camera System for Skin Detection and Face Verification

    Directory of Open Access Journals (Sweden)

    Holger Steiner

    2016-01-01

    Full Text Available Biometric face recognition is becoming more frequently used in different application scenarios. However, spoofing attacks with facial disguises are still a serious problem for state-of-the-art face recognition algorithms. This work proposes an approach to face verification based on spectral signatures of material surfaces in the shortwave infrared (SWIR) range. They allow distinguishing authentic human skin reliably from other materials, independent of the skin type. We present the design of an active SWIR imaging system that acquires four-band multispectral image stacks in real time. The system uses pulsed small-band illumination, which allows for fast image acquisition and high spectral resolution and renders it largely independent of ambient light. After extracting the spectral signatures from the acquired images, detected faces can be verified or rejected by classifying the material as “skin” or “no-skin.” The approach is extensively evaluated with respect to both acquisition and classification performance. In addition, we present a database containing RGB and multispectral SWIR face images, as well as spectrometer measurements of a variety of subjects, which is used to evaluate our approach and will be made available to the research community by the time this work is published.

  15. [Cinematography of ocular fundus with a jointed optical system and tv or cine-camera (author's transl)].

    Science.gov (United States)

    Kampik, A; Rapp, J

    1979-02-01

    A method of cinematography of the ocular fundus is introduced which, by connecting a camera with an indirect ophthalmoscope, allows recording of the monocular image of the fundus as produced by the ophthalmic lens.

  16. A Projector-Camera System for Ironing Support with Wrinkle Enhancement

    Directory of Open Access Journals (Sweden)

    Kimie Suzuki

    2017-08-01

    Full Text Available Ironing is one of the more troublesome household chores; the goal of the task is to remove wrinkles caused during washing. A projector has advantages for physical-world instruction over an instruction sheet, a head-mounted display, or a smartphone/tablet PC because it maps instructive information directly onto the target object. In this article, we propose a method to detect wrinkles using machine learning and a system that presents detected wrinkles by enhancing the area of wrinkles through a projector. In total, 47 infrared image features are defined, of which 15 are finally used, to classify 32-pixel squares (about 4.5 cm squares) of regions of interest into one of four classes: wrinkle, flat, sagging, and tuck. A RandomForest classifier successfully identified 93.0% of the wrinkle class. A comparison of wrinkle enhancement methods implies that presenting all ROIs on the ironing board at once is more effective in removing wrinkles than enhancing only the area around and ahead of the iron. We also found that making the user aware of the effect of wrinkle removal is important for reducing wrinkles efficiently, and we outline prospective solutions to this issue.
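    The classification stage described above (15 infrared features per region of interest, four classes, a RandomForest classifier) can be sketched with scikit-learn. The feature vectors below are synthetic stand-ins, since the paper's actual infrared features are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

CLASSES = ["wrinkle", "flat", "sagging", "tuck"]
rng = np.random.default_rng(0)

# Stand-in for the 15 selected infrared features per 32-pixel-square ROI;
# each class gets a different feature mean so the toy problem is separable.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(100, 15)) for i in range(4)])
y = np.repeat(np.arange(4), 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

    At projection time, each ROI predicted as CLASSES[0] ("wrinkle") would be highlighted on the ironing board.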

  17. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities, and operates on the four head-position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization and subsequent processing of the energy signal in a multichannel analyzer, transmission of data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits comprise an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  18. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-02-01

    Full Text Available Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples. Therefore, a presentation attack detection (PAD method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP, local ternary pattern (LTP, and histogram of oriented gradients (HOG. As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN method to extract deep image features and the multi-level local binary pattern (MLBP method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
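    The hybrid-feature idea above, concatenating a handcrafted local binary pattern (LBP) histogram with a deep embedding and classifying with an SVM, can be sketched as follows. The CNN extractor is replaced by a trivial stand-in and the images are synthetic; only the overall pipeline mirrors the abstract, not the authors' trained models:

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(gray):
    """Basic 8-neighbor LBP histogram (256 bins) for a 2-D uint8 image."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def fake_cnn_embed(img):
    """Stand-in for a trained CNN feature extractor (assumption: any
    fixed-length embedding slots in here)."""
    return np.array([img.mean() / 255.0, img.std() / 255.0])

def hybrid_features(img):
    """Concatenate handcrafted LBP features with the deep embedding."""
    return np.concatenate([lbp_histogram(img), fake_cnn_embed(img)])

rng = np.random.default_rng(0)
real = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
# "Attack" images get a flatter intensity range, as a crude spoof proxy.
attack = [rng.integers(100, 156, (32, 32), dtype=np.uint8) for _ in range(20)]
X = np.array([hybrid_features(im) for im in real + attack])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

    In the paper, the handcrafted branch is a multi-level LBP and the embedding comes from a trained CNN; the concatenation-then-SVM structure is what this sketch preserves.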

  19. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  20. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417

  1. Systems approach to the design of the CCD sensors and camera electronics for the AIA and HMI instruments on solar dynamics observatory

    Science.gov (United States)

    Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.

    2017-11-01

    Solar Dynamics Observatory (SDO) is imaging the Sun in many wavelengths near simultaneously and with a resolution ten times higher than the average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share use of a custom-designed 16 million pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than before, motivated by our wish to simplify the design of the camera readout electronics. Here, the challenge lies in the design of circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and to high precision. The challenge is greatly exacerbated when forced to work with only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.

  2. The clinical impact of a combined gamma camera/CT imaging system on somatostatin receptor imaging of neuroendocrine tumours

    International Nuclear Information System (INIS)

    Hillel, P.G.; Beek, E.J.R. van; Taylor, C.; Lorenz, E.; Bax, N.D.S.; Prakash, V.; Tindale, W.B.

    2006-01-01

    AIM: With a combined gamma camera/CT imaging system, CT images are obtained which are inherently registered to the emission images and can be used for the attenuation correction of SPECT and for mapping the functional information from these nuclear medicine tomograms onto anatomy. The aim of this study was to evaluate the clinical impact of SPECT/CT using such a system for somatostatin receptor imaging (SRI) of neuroendocrine tumours. MATERIALS AND METHODS: SPECT/CT imaging with 111In-pentetreotide was performed on 29 consecutive patients, the majority of whom had carcinoid disease. All SPECT images were first reported in isolation and then re-reported with the addition of the CT images for functional anatomical mapping (FAM). RESULTS: Fifteen of the 29 SPECT images were reported as abnormal, and in 11 of these abnormal images (73%) FAM was found to either establish a previously unknown location (7/11) or change the location (4/11) of at least one lesion. The revised location could be independently confirmed in 64% of these cases. Confirmation of location was not possible in the other patients due to either a lack of other relevant investigations, or the fact that lesions seen in the SPECT images were not apparent in the other investigations. FAM affected patient management in 64% of the cases where the additional anatomical information caused a change in the reported location of lesions. CONCLUSION: These results imply that FAM can improve the reporting accuracy for SPECT SRI with significant impact on patient management.

  3. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    Science.gov (United States)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their rapid data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but when choppy or windy weather makes an accurate nadir-waypoint flight impossible, two single aerial images do not always achieve the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM directly depends on the UAV flight altitude.
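    The closing remark, that DTM accuracy depends on the base-height ratio and flight altitude, follows from the standard depth-from-disparity relation Z = f·B/d, whose first-order error grows quadratically with range. A small sketch with assumed rig parameters (the baseline, focal length, and matching error below are illustrative, not values from the paper):

```python
def depth_error(altitude_m, baseline_m, focal_px, disparity_err_px=0.5):
    """First-order depth error dZ = Z^2 / (f * B) * d_err for a stereo rig.

    Derived from Z = f * B / d: a fixed disparity matching error d_err
    produces a depth error that grows with the square of the range Z."""
    return altitude_m ** 2 / (focal_px * baseline_m) * disparity_err_px

focal_px = 3000.0    # focal length expressed in pixels (assumption)
baseline_m = 0.2     # small photobase of a compact stereo camera (assumption)
for z in (10, 30, 50):
    err = depth_error(z, baseline_m, focal_px)
    print(f"altitude {z:3d} m -> depth error {err:.2f} m")
```

    Tripling the flight altitude from 10 m to 30 m multiplies the depth error by nine, which is why a short fixed baseline constrains the usable UAV altitude.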

  4. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    Science.gov (United States)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are push-broom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  5. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    International Nuclear Information System (INIS)

    Barrera, E.; Ruiz, M.; Sanz, D.; Vega, J.; Castro, R.; Juárez, E.; Salvador, R.

    2014-01-01

    Highlights: • The test bed allows for the validation of real-time image processing techniques. • Offers FPGA (FlexRIO) image processing that does not require CPU intervention. • Is fully compatible with the architecture of the ITER Fast Controllers. • Provides flexibility and easy integration in distributed experiments based on EPICS. - Abstract: Image diagnostics are becoming standard ones in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame-grabber, a PXIe chassis, and offers software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video-movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced with diagnostic cameras. The Camera Link frame-grabber (FlexRIO Solution from National Instruments) includes a field programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad-hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with system architecture similar to that of the ITER Fast Controllers, and the frame grabber provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers

  6. Evaluation of dynamic range for LLNL streak cameras using high contrast pulsed and pulse podiatry on the Nova laser system

    International Nuclear Information System (INIS)

    Richards, J.B.; Weiland, T.L.; Prior, J.A.

    1990-01-01

    This paper reports on a standard LLNL streak camera that has been used to analyze high contrast pulses on the Nova laser facility. These pulses have a plateau at their leading edge (foot) with an amplitude which is approximately 1% of the maximum pulse height. Relying on other features of the pulses and on signal multiplexing, we were able to determine how accurately the foot amplitude was being represented by the camera. Results indicate that the useful single channel dynamic range of the instrument approaches 100:1

  7. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  8. THE HUBBLE WIDE FIELD CAMERA 3 TEST OF SURFACES IN THE OUTER SOLAR SYSTEM: SPECTRAL VARIATION ON KUIPER BELT OBJECTS

    International Nuclear Information System (INIS)

    Fraser, Wesley C.; Brown, Michael E.; Glass, Florian

    2015-01-01

    Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in the optical and NIR wavebands designed to complement those used during the first visit. Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlated optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a sufficiently broad difference in color at the two epochs that it spans the full range of colors of the neutral class. This strongly argues that the neutral class is one single class with a broad range of colors, rather than the combination of multiple overlapping classes.

  9. Gigavision - A weatherproof, multibillion pixel resolution time-lapse camera system for recording and tracking phenology in every plant in a landscape

    Science.gov (United States)

    Brown, T.; Borevitz, J. O.; Zimmermann, C.

    2010-12-01

    We have developed a camera system that can record hourly, gigapixel (multi-billion pixel) scale images of an ecosystem in a 360×90 degree panorama. The “Gigavision” camera system is solar-powered and can wirelessly stream data to a server. Quantitative data collection from multiyear time-lapse gigapixel images is facilitated through an innovative web-based toolkit for recording time-series data on developmental stages (phenology) from any plant in the camera’s field of view. Gigapixel images enable time-series recording of entire landscapes with a resolution sufficient to record phenology from a majority of individuals in entire populations of plants. When coupled with next-generation sequencing, quantitative population genomics can be performed in a landscape context, linking ecology and evolution in situ and in real time. The Gigavision camera system achieves gigapixel image resolution by recording rows and columns of overlapping megapixel images. These images are stitched together into a single gigapixel-resolution image using commercially available panorama software. Hardware consists of a 5-18 megapixel resolution DSLR or network IP camera mounted on a pair of heavy-duty servo motors that provide pan-tilt capabilities. The servos and camera are controlled with a low-power Windows PC. Servo movement, power switching, and system status monitoring are enabled with Phidgets-brand sensor boards. System temperature, humidity, power usage, and battery voltage are all monitored at 5-minute intervals. All sensor data are uploaded via cellular or 802.11 wireless to an interactive online interface for easy remote monitoring of system status. Systems with direct internet connections upload the full-sized images directly to our automated stitching server, where they are stitched and available online for viewing within an hour of capture. Systems with cellular wireless upload an 80-megapixel “thumbnail” of each larger panorama and full-sized images are manually
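    Covering a 360×90 degree panorama with overlapping megapixel frames requires a pan-tilt grid whose angular step is smaller than one camera field of view. A generic sketch of the grid computation (the FOV and overlap values are assumptions, not the Gigavision system's settings):

```python
import math

def pan_tilt_grid(h_fov_deg, v_fov_deg, overlap=0.5,
                  pan_range=(0.0, 360.0), tilt_range=(0.0, 90.0)):
    """Pan/tilt frame centers for a grid of overlapping shots covering a panorama."""
    def positions(lo, hi, fov):
        step = fov * (1.0 - overlap)  # advance less than one FOV per frame
        n = max(1, math.ceil((hi - lo - fov) / step) + 1)
        return [lo + fov / 2 + i * step for i in range(n)]
    pans = positions(pan_range[0], pan_range[1], h_fov_deg)
    tilts = positions(tilt_range[0], tilt_range[1], v_fov_deg)
    # Row-major order: sweep pan within each tilt row, as a servo rig would.
    return [(p, t) for t in tilts for p in pans]

# Example: an assumed 10 x 7.5 degree field of view with 50% overlap.
grid = pan_tilt_grid(10.0, 7.5)
print(len(grid), "frames")
```

    The controller would visit each (pan, tilt) pair in turn; the stitching software then relies on the guaranteed overlap between neighboring frames.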

  10. Performance Evaluations and Quality Validation System for Optical Gas Imaging Cameras That Visualize Fugitive Hydrocarbon Gas Emissions

    Science.gov (United States)

    Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...

  11. An embedded real-time red peach detection system based on an OV7670 camera, ARM Cortex-M4 processor and 3D Look-Up Tables

    OpenAIRE

    Teixidó Cairol, Mercè; Font Calafell, Davinia; Pallejà Cabrè, Tomàs; Tresánchez Ribes, Marcel; Nogués Aymamí, Miquel; Palacín Roca, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future...

  12. Antares Reference Telescope System

    International Nuclear Information System (INIS)

    Viswanathan, V.K.; Kaprelian, E.; Swann, T.; Parker, J.; Wolfe, P.; Woodfin, G.; Knight, D.

    1983-01-01

    Antares is a 24-beam, 40-TW carbon-dioxide laser-fusion system currently nearing completion at the Los Alamos National Laboratory. The 24 beams will be focused onto a tiny target (typically 300 to 1000 μm in diameter) located approximately at the center of a 7.3-m-diameter by 9.3-m-long vacuum (10^-6 torr) chamber. The design goal is to position the targets to within 10 μm of a selected nominal position, which may be anywhere within a fixed spherical region 1 cm in diameter. The Antares Reference Telescope System is intended to help achieve this goal for alignment and viewing of the various targets used in the laser system. The Antares Reference Telescope System consists of two similar electro-optical systems positioned in a near orthogonal manner in the target chamber area of the laser. Each of these consists of four subsystems: (1) a fixed 9X optical imaging subsystem which produces an image of the target at the vidicon; (2) a reticle projection subsystem which superimposes an image of the reticle pattern at the vidicon; (3) an adjustable front-lighting subsystem which illuminates the target; and (4) an adjustable back-lighting subsystem which also can be used to illuminate the target. The various optical, mechanical, and vidicon design considerations and trade-offs are discussed. The final system chosen (which is being built) and its current status are described in detail.

  13. Waste reduction efforts through the evaluation and procurement of a digital camera system for the Alpha-Gamma Hot Cell Facility at Argonne National Laboratory-East

    International Nuclear Information System (INIS)

    Bray, T. S.; Cohen, A. B.; Tsai, H.; Kettman, W. C.; Trychta, K.

    1999-01-01

    The Alpha-Gamma Hot Cell Facility (AGHCF) at Argonne National Laboratory-East is a research facility where sample examinations involve traditional photography. The AGHCF documents samples with photographs (both Polaroid self-developing and negative film). Wastes generated include developing chemicals. The AGHCF evaluated, procured, and installed a digital camera system for the Leitz metallograph to significantly reduce labor, supplies, and wastes associated with traditional photography with a return on investment of less than two years

  14. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  15. A MODIFIED PROJECTIVE TRANSFORMATION SCHEME FOR MOSAICKING MULTI-CAMERA IMAGING SYSTEM EQUIPPED ON A LARGE PAYLOAD FIXED-WING UAS

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2015-03-01

    Full Text Available In recent years, Unmanned Aerial Systems (UAS) have been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. A UAS is a higher-mobility, lower-risk platform for human operation, but its low payload and short operation time reduce image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is equipped on a large-payload UAS, which is designed to collect large ground-coverage images in an effective way. The field of view (FOV) is increased to 127 degrees, which is thus suitable for collecting disaster images in mountainous areas. The five synthetically acquired images are registered and mosaicked into a larger-format virtual image to reduce the number of images and the post-processing time, and to ease stereo plotting. Instead of traditional image matching and bundle adjustment to estimate the transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated to derive the coefficients of a modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs owing to the different environmental conditions and the vibration of the UAS, which causes misregistration in the initial MPT results. The remaining residuals are analysed through tie-point matching in the overlapping areas of the initial MPT results, in which displacement and scale differences are introduced and corrected to modify the ROPs and IOPs for finer registration results. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. Comparisons between separate cameras and mosaic images through rigorous aerial triangulation are conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. It proves that the designed imaging system and the

  16. a Modified Projective Transformation Scheme for Mosaicking Multi-Camera Imaging System Equipped on a Large Payload Fixed-Wing Uas

    Science.gov (United States)

    Jhan, J. P.; Li, Y. T.; Rau, J. Y.

    2015-03-01

    In recent years, Unmanned Aerial Systems (UAS) have been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. A UAS is a higher-mobility, lower-risk platform for human operation, but its low payload and short operation time reduce image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is equipped on a large-payload UAS, which is designed to collect large ground-coverage images in an effective way. The field of view (FOV) is increased to 127 degrees, which is thus suitable for collecting disaster images in mountainous areas. The five synthetically acquired images are registered and mosaicked into a larger-format virtual image to reduce the number of images and the post-processing time, and to ease stereo plotting. Instead of traditional image matching and bundle adjustment to estimate the transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated to derive the coefficients of a modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs owing to the different environmental conditions and the vibration of the UAS, which causes misregistration in the initial MPT results. The remaining residuals are analysed through tie-point matching in the overlapping areas of the initial MPT results, in which displacement and scale differences are introduced and corrected to modify the ROPs and IOPs for finer registration results. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. Comparisons between separate cameras and mosaic images through rigorous aerial triangulation are conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. It proves that the designed imaging system and the proposed scheme

  17. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen- er...

  18. Time- and wavelength-resolved luminescence evaluation of several types of scintillators using streak camera system equipped with pulsed X-ray source

    Energy Technology Data Exchange (ETDEWEB)

    Furuya, Yuki, E-mail: f.yuki@mail.tagen.tohoku.ac.j [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Yanagida, Takayuki; Fujimoto, Yutaka; Yokota, Yuui; Kamada, Kei [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Kawaguchi, Noriaki [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Research and Development Division, Tokuyama., Co. Ltd., ICR-Building, Minamiyoshinari, Aoba-ku, Sendai (Japan); Ishizu, Sumito [Research and Development Division, Tokuyama., Co. Ltd., ICR-Building, Minamiyoshinari, Aoba-ku, Sendai (Japan); Uchiyama, Koro; Mori, Kuniyoshi [Hamamatsu Photonics K.K., 325-6, Sunayama-cho, Naka-ku, Hamamatsu, Shizuoka 430-8587 (Japan); Kitano, Ken [Vacuum and Optical Instruments, 2-18-18 Shimomaruko, Ota, Tokyo 146-0092 (Japan); Nikl, Martin [Institute of Physics ASCR, Cukrovarnicka 10, Prague 6, 162-53 (Czech Republic); Yoshikawa, Akira [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); NICHe, Tohoku University, 6-6-10 Aoba, Aramaki, Aoba-ku, Sendai 980-8579 (Japan)

    2011-04-01

    To design new scintillating materials, it is very important to understand in detail the events that occur during the excitation and emission processes under ionizing-radiation excitation. We developed a streak camera system equipped with a picosecond pulsed X-ray source to observe time- and wavelength-resolved scintillation events. In this report, we test the performance of this new system using several types of scintillators, including bulk oxide/halide crystals, transparent ceramics, plastics and powders. For all samples, the results were consistent with those reported previously. The results demonstrate that the developed system is suitable for evaluation of scintillation properties.

  19. Comparison of central corneal thickness and endothelial cell measurements by Scheimpflug camera system and two noncontact specular microscopes.

    Science.gov (United States)

    Karaca, Irmak; Yilmaz, Suzan Guven; Palamar, Melis; Ates, Halil

    2017-07-03

    To investigate the correlation of a Scheimpflug camera system and two noncontact specular microscopes in terms of central corneal thickness (CCT) and corneal endothelial cell morphology measurements. One hundred eyes of 50 healthy subjects were examined by the Pentacam Scheimpflug Analyzer, CEM-530 (Nidek Co, Ltd, Gamagori, Japan) and CellChek XL (Konan Medical, California, USA) via fully automated image analysis with no corrections made. Measurement differences and agreement between instruments were determined by intraclass correlation analysis. The mean age of the subjects was 36.74 ± 8.59 (range 22-57). CCTs were well correlated among all devices, with CEM-530 giving the thinnest and CellChek XL the thickest measurements (intraclass correlation coefficient (ICC) = 0.83; p < 0.001 and ICC = 0.78; p < 0.001, respectively). Mean endothelial cell density (ECD) given by CEM-530 was lower than that given by CellChek XL (2613.17 ± 228.62 and 2862.72 ± 170.42 cells/mm², respectively; ICC = 0.43; p < 0.001). The mean coefficient of variation (CV) was 28.57 ± 3.61 with CEM-530 and 30.30 ± 3.53 with CellChek XL. Cell hexagonality (HEX) with CEM-530 was higher than with CellChek XL (68.70 ± 4.16% and 45.19 ± 6.58%, respectively). ECDs with CellChek XL and CEM-530 have good correlation, but the values obtained by CellChek XL are higher than those from CEM-530. Measurements for HEX and CV differ significantly and show weak correlation. Thus, we do not recommend interchangeable use of CellChek XL and CEM-530. In terms of CCT, the Pentacam, CEM-530 and CellChek XL specular microscopy instruments are reliable devices.

  20. Development of a safe ultraviolet camera system to enhance awareness by showing effects of UV radiation and UV protection of the skin (Conference Presentation)

    Science.gov (United States)

    Verdaasdonk, Rudolf M.; Wedzinga, Rosaline; van Montfrans, Bibi; Stok, Mirte; Klaessens, John; van der Veen, Albert

    2016-03-01

    The significant increase of skin cancer in the western world is attributed to longer sun exposure during leisure time. For prevention, people should become aware of the risks of UV light exposure by being shown skin damage and the protective effect of sunscreen with a UV camera. A UV awareness imaging system optimized for 365 nm (UV-A) was developed using consumer components, designed to be interactive, safe and mobile. A Sony NEX5t camera was adapted to the full spectral range. In addition, UV-transparent lenses and filters were selected based on measured spectral characteristics (Schott S8612 and Hoya U-340 filters) to obtain the highest contrast for e.g. melanin spots and wrinkles on the skin. For uniform UV illumination, 2 facial tanner units were fitted with UV 365 nm black-light fluorescent tubes. Safety of the UV illumination was determined relative to the sun and with absolute irradiance measurements at the working distance. A maximum exposure time of over 15 minutes was calculated according to the international safety standards. The UV camera was successfully demonstrated during the Dutch National Skin Cancer day and was well received by dermatologists and the participating public. Especially the 'black paint' effect of putting sunscreen on the face was dramatic and contributed to awareness of regions of the face that are likely to be missed when applying sunscreen. The UV imaging system shows promise for diagnostics and clinical studies in dermatology and potentially in other areas (dentistry and ophthalmology)
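The safety calculation described above follows from the usual radiant-exposure relation: maximum permissible time is the applicable exposure limit divided by the measured irradiance. The numeric values below are illustrative assumptions, not the paper's measurements:

```python
def max_exposure_seconds(irradiance_w_m2, limit_j_m2):
    """Maximum permissible exposure time t = H_limit / E.

    irradiance_w_m2: measured effective irradiance at the working
    distance (W/m^2); limit_j_m2: applicable radiant-exposure limit
    (J/m^2). Both inputs here are assumptions for illustration.
    """
    return limit_j_m2 / irradiance_w_m2

# e.g. a 10 W/m^2 source against a 10 kJ/m^2 limit allows 1000 s (~16.7 min)
t_minutes = max_exposure_seconds(10.0, 1.0e4) / 60.0
```

An "over 15 minutes" limit, as quoted in the abstract, corresponds to an irradiance at the working distance comfortably below the chosen limit divided by 900 s.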

  1. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach based on a rough analysis of the imaging systems is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
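The abstract's camera-to-camera mapping is learned with neural networks; as a much simpler stand-in, the idea can be illustrated with a per-channel linear least-squares fit from paired color-tile readings (all data values below are made up):

```python
def fit_channel_map(src, ref):
    """Least-squares fit of ref ≈ a*src + b for one colour channel.

    src/ref: paired channel readings of the same colour tiles from the
    camera of interest and the reference camera. A linear map is an
    illustrative simplification of the paper's neural-network mapping.
    """
    n = len(src)
    mx = sum(src) / n
    my = sum(ref) / n
    sxx = sum((x - mx) ** 2 for x in src)
    sxy = sum((x - mx) * (y - my) for x, y in zip(src, ref))
    a = sxy / sxx              # slope: relative channel gain
    b = my - a * mx            # intercept: channel offset
    return a, b

# Map camera-A tile readings onto camera-B's response before processing.
a, b = fit_channel_map([10, 50, 120, 200], [14, 62, 146, 242])
corrected = [a * v + b for v in [10, 50, 120, 200]]
```

Swapping cameras then only requires refitting (a, b) per channel; downstream image-processing algorithms stay unchanged, as the abstract describes.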

  2. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  3. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
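The digitized sagittal-plane joint angles come down to the included angle at a joint marker between the two limb segments. A minimal sketch of that geometry is below; the marker coordinates and naming are illustrative, since the study performed digitization inside GAITRite software:

```python
import math

def sagittal_angle(proximal, joint, distal):
    """Included angle (degrees) at `joint` between the two limb segments,
    from 2D sagittal-plane marker coordinates (x, y)."""
    v1 = (proximal[0] - joint[0], proximal[1] - joint[1])
    v2 = (distal[0] - joint[0], distal[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    # atan2 form is numerically stable for near-straight segments
    return math.degrees(math.atan2(abs(cross), dot))

# Hypothetical hip, knee, and ankle marker positions for one video frame;
# flexion is measured as deviation from a straight (180 degree) limb.
knee_flexion = 180.0 - sagittal_angle((0.0, 1.0), (0.0, 0.5), (0.1, 0.0))
```

Frame rate and resolution limit how precisely the marker positions, and hence these angles, can be digitized, which is the limitation the abstract notes.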

  4. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator; it consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous and for triggered readout as well. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixels at 444 fps, which amounts to 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to achieve a small and compact size and robust operation in a harmful environment; an image processing and control unit (IPCU) module, which handles all user-predefined events and runs image processing algorithms to generate trigger signals; and finally a 10 Gigabit Ethernet compatible image readout card, which functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described.

  5. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  6. Laparoendoscopic single site (LESS) in vivo suturing using a magnetic anchoring and guidance system (MAGS) camera in a porcine model: impact on ergonomics and workload.

    Science.gov (United States)

    Yin, Gang; Han, Woong Kyu; Faddegon, Stephen; Tan, Yung Khan; Liu, Zhuo-Wei; Olweny, Ephrem O; Scott, Daniel J; Cadeddu, Jeffrey A

    2013-01-01

    To compare the ergonomics and workload of the surgeon during single-site suturing while using the magnetic anchoring and guidance system (MAGS) camera vs a conventional laparoscope. Seven urologic surgeons were enrolled and divided into an expert group (n=2) and a novice group (n=5) according to their laparoendoscopic single-site (LESS) experience. Each surgeon performed 2 conventional LESS and 2 MAGS camera-assisted LESS vesicostomy closures in a porcine model. A Likert scale (scoring 1-5) questionnaire assessing workload, ergonomics, technical difficulty, visualization, and needle handling, as well as a validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire, were used to evaluate the tasks and workloads. MAGS LESS suturing was universally favored by expert and novice surgeons compared with conventional LESS in workload (3.4 vs 4.2), ergonomics (3.4 vs 4.4), technical challenge (3.3 vs 4.3), visualization (2.4 vs 3.3), and needle handling (3.1 vs 3.9, respectively). NASA-TLX assessments found MAGS LESS suturing significantly decreased the workload in physical demand (P=.004), temporal demand (P=.017), and effort (P=.006). External instrument clashing was significantly reduced in MAGS LESS suturing (P<.001). The total operative time of MAGS LESS suturing was comparable to that of conventional LESS (P=.89). MAGS camera technology significantly decreased surgeon workload and improved ergonomics. Nevertheless, LESS suturing and knot tying remains a challenging task that requires training, regardless of which camera is used. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
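The cross-camera registration step ("determining the location of common features ... across simultaneous frames") can be sketched as a greedy nearest-neighbour association between per-frame detections. The detection format, feature vectors, and threshold below are illustrative assumptions, not the paper's method:

```python
def associate(dets_a, dets_b, max_dist=50.0):
    """Greedy nearest-neighbour matching of detections from two
    time-synchronized cameras, using feature-space distance.

    dets_a/dets_b: lists of (id, feature_vector) pairs for one frame;
    max_dist: maximum feature distance to accept a match (assumed).
    """
    def dist(f, g):
        return sum((x - y) ** 2 for x, y in zip(f, g)) ** 0.5

    pairs, used = [], set()
    for ida, fa in dets_a:
        best = None
        for idb, fb in dets_b:
            if idb in used:
                continue
            d = dist(fa, fb)
            if d <= max_dist and (best is None or d < best[1]):
                best = (idb, d)
        if best:
            used.add(best[0])           # each detection matched at most once
            pairs.append((ida, best[0]))
    return pairs

# The same object seen by both cameras is registered via its common features.
matches = associate([(1, (0.0, 0.0)), (2, (100.0, 0.0))],
                    [(7, (2.0, 1.0)), (9, (98.0, 3.0))])
```

In a real system the feature vectors would come after perspective adjustment, so that the distances compare like with like across viewing angles.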

  8. Simulation study of the second-generation MR-compatible SPECT system based on the inverted compound-eye gamma camera design

    Science.gov (United States)

    Lai, Xiaochun; Meng, Ling-Jian

    2018-02-01

    In this paper, we present simulation studies for the second-generation MRI-compatible SPECT system, MRC-SPECT-II, based on an inverted compound eye (ICE) gamma camera concept. The MRC-SPECT-II system consists of a total of 1536 independent micro-pinhole-camera-elements (MCEs) distributed in a ring with an inner diameter of 6 cm. This system provides a FOV of 1 cm diameter and a peak geometrical efficiency of approximately 1.3% (compared with the typical levels of 0.1%-0.01% found in modern pre-clinical SPECT instrumentation), while maintaining a sub-500 μm spatial resolution. Compared to the first-generation MRC-SPECT system (MRC-SPECT-I) (Cai 2014 Nucl. Instrum. Methods Phys. Res. A 734 147-51) developed in our lab, the MRC-SPECT-II system offers a similar resolution with dramatically improved sensitivity and a greatly reduced physical dimension. The latter should allow the system to be placed inside most clinical and pre-clinical MRI scanners for high-performance simultaneous MRI and SPECT imaging.
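The quoted peak geometrical efficiency can be sanity-checked with the on-axis solid-angle formula for N pinholes: each pinhole of area A at distance r subtends A/(4πr²) of the emission sphere. The pinhole diameter below is an assumed value chosen for illustration, not a parameter from the paper:

```python
import math

def pinhole_geometric_efficiency(n_pinholes, pinhole_diameter_mm, radius_mm):
    """On-axis geometric efficiency of n identical pinholes at distance
    `radius_mm` from a point source: n * A / (4*pi*r^2)."""
    area = math.pi * (pinhole_diameter_mm / 2.0) ** 2
    return n_pinholes * area / (4.0 * math.pi * radius_mm ** 2)

# 1536 micro-pinhole elements on a 6 cm inner-diameter ring (r = 30 mm);
# an assumed 0.35 mm pinhole diameter reproduces roughly the quoted ~1.3%.
eff = pinhole_geometric_efficiency(1536, 0.35, 30.0)
print(eff)  # ≈ 0.013
```

This shows how distributing very many small apertures recovers the sensitivity a single pinhole would sacrifice, which is the point of the inverted compound-eye design.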

  9. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  10. Interpretation of 131I hippuran renocystogram using vascular invasion segment systemic flow, and DYNA CAMERA II Picker

    International Nuclear Information System (INIS)

    Morcellet, J.L.; Baret, A.

    A quantitative approximation is proposed for the fluid flows from each kidney (renal clearances, urinary flows) and for the mean residence time of hippuran in each kidney. These times are decomposed into a mean cortical transit time and a mean residence time in the pyelocavities. The use of a dual-isotope Dyna Camera II Picker scintillation camera changes the collection of the data and permits the simultaneous measurement of cardiac output, which is required for their treatment. This treatment is carried out by means of a videotape recorder, which allows delayed-time processing, and by means of a hundred-channel computer, which displays the numerical data and their integration [fr]

  11. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and pico-second range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high speed electronic cinematography will be illustrated by a few typical applications [fr

  12. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  13. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows for the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow for the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.

  14. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  15. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
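    The inter-camera quaternion analyzed above is the relative rotation between the two rigidly mounted star camera heads; for two simultaneous attitude quaternions it can be formed as below (a minimal sketch, scalar-first convention assumed). Because the heads are rigidly mounted, this product should be nearly constant, so its fluctuations expose measurement noise:

```python
def q_conj(q):
    """Conjugate (inverse for unit quaternions), scalar-first (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    """Hamilton product of two scalar-first quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def inter_camera(q_a, q_b):
    """Relative rotation q_a^-1 * q_b between two star camera heads
    given their simultaneous attitude quaternions (unit norm assumed)."""
    return q_mul(q_conj(q_a), q_b)
```

    Time series of `inter_camera` over a mission, minus its mean, is the kind of signal whose auto-covariance reveals the twice-per-rev error discussed in the paper.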

  16. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir camera and four inclined cameras, are becoming more and more popular, with the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras themselves. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values of the Student (T-) test for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras exceeding 5 μm, even though these were described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity the systematic image errors for corresponding
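    The radial symmetric distortion discussed in sub-camera calibration is commonly modelled with even-order polynomial terms in the radial distance (the Brown model). A minimal sketch; the coefficients used here are illustrative, not IGI calibration values:

```python
def radial_distortion(x, y, k1, k2=0.0):
    """Apply the Brown radial distortion model to normalized image
    coordinates (x, y): r' = r * (1 + k1*r^2 + k2*r^4).
    k1, k2 are hypothetical coefficients for illustration only."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * factor, y * factor)

# The displacement grows with radius, so image corners show the largest
# effect, which is where residual distortion is hardest to separate
# from tangential and affine terms.
```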

  17. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  18. The in vitro and in vivo validation of a mobile non-contact camera-based digital imaging system for tooth colour measurement.

    Science.gov (United States)

    Smith, Richard N; Collins, Luisa Z; Naeeni, Mojgan; Joiner, Andrew; Philpotts, Carole J; Hopkinson, Ian; Jones, Clare; Lath, Darren L; Coxon, Thomas; Hibbard, James; Brook, Alan H

    2008-01-01

    To assess the reproducibility of a mobile non-contact camera-based digital imaging system (DIS) for measuring tooth colour under in vitro and in vivo conditions. One in vitro and two in vivo studies were performed using a mobile non-contact camera-based digital imaging system. In vitro study: two operators used the DIS to image 10 dry tooth specimens in a randomised order on three occasions. In vivo study 1: 25 subjects with two natural, normally aligned, upper central incisors had their teeth imaged using the DIS on four consecutive days by one operator to measure day-to-day variability. On one of the four test days, duplicate images were collected by three different operators to measure inter- and intra-operator variability. In vivo study 2: 11 subjects with two natural, normally aligned, upper central incisors had their teeth imaged using the DIS twice daily over three days within the same week to assess day-to-day variability. Three operators collected images from subjects in a randomised order to measure inter- and intra-operator variability. Subject-to-subject variability was the largest source of variation within the data. Pairwise correlations and concordance coefficients were > 0.7 for each operator, demonstrating good precision and excellent operator agreement in each of the studies. Intraclass correlation coefficients (ICCs) for each operator indicate that day-to-day reliability was good to excellent, with all ICCs > 0.75 for each operator. The mobile non-contact camera-based digital imaging system was shown to be a reproducible means of measuring tooth colour in both in vitro and in vivo experiments.
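    The concordance coefficients used to quantify operator agreement can be computed as Lin's concordance correlation coefficient; a minimal sketch for two paired sets of readings (assuming Lin's statistic is the concordance measure the authors used):

```python
def concordance(x, y):
    """Lin's concordance correlation coefficient between two paired
    measurement series (e.g. the same teeth imaged by two operators).
    1.0 means perfect agreement; penalizes both scatter and bias."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sxx + syy + (mx - my) ** 2)
```

    Unlike a plain Pearson correlation, the `(mx - my)**2` term penalizes a constant offset between operators, which is why concordance is preferred for agreement studies.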

  19. An automated SO2 camera system for continuous, real-time monitoring of gas emissions from Kīlauea Volcano's summit Overlook Crater

    Science.gov (United States)

    Kern, Christoph; Sutton, Jeff; Elias, Tamar; Lee, Robert Lopaka; Kamibayashi, Kevan P.; Antolik, Loren; Werner, Cynthia A.

    2015-01-01

    SO2 camera systems allow rapid two-dimensional imaging of sulfur dioxide (SO2) emitted from volcanic vents. Here, we describe the development of an SO2 camera system specifically designed for semi-permanent field installation and continuous use. The integration of innovative but largely “off-the-shelf” components allowed us to assemble a robust and highly customizable instrument capable of continuous, long-term deployment at Kīlauea Volcano's summit Overlook Crater. Recorded imagery is telemetered to the USGS Hawaiian Volcano Observatory (HVO) where a novel automatic retrieval algorithm derives SO2 column densities and emission rates in real-time. Imagery and corresponding emission rates displayed in the HVO operations center and on the internal observatory website provide HVO staff with useful information for assessing the volcano's current activity. The ever-growing archive of continuous imagery and high-resolution emission rates in combination with continuous data from other monitoring techniques provides insight into shallow volcanic processes occurring at the Overlook Crater. An exemplary dataset from September 2013 is discussed in which a variation in the efficiency of shallow circulation and convection, the processes that transport volatile-rich magma to the surface of the summit lava lake, appears to have caused two distinctly different phases of lake activity and degassing. This first successful deployment of an SO2 camera for continuous, real-time volcano monitoring shows how this versatile technique might soon be adapted and applied to monitor SO2 degassing at other volcanoes around the world.
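    At the core of an SO2 camera retrieval is a Beer-Lambert conversion of on-band image intensities to column densities. A heavily simplified single-wavelength sketch; the cross-section value is an assumed placeholder, and the real HVO algorithm also uses off-band imagery and calibration cells:

```python
import math

def so2_column_density(i_plume, i_background, sigma=1.3e-18):
    """Apparent SO2 column density (molecules/cm^2) from on-band plume
    and clear-sky background intensities via Beer-Lambert:
    tau = -ln(I_plume / I_bg), SCD = tau / sigma.
    sigma is an assumed effective absorption cross-section
    (cm^2/molecule) near 310 nm, for illustration only."""
    tau = -math.log(i_plume / i_background)
    return tau / sigma
```

    Emission rates then follow by integrating column densities across a transect of the plume and multiplying by the plume speed.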

  20. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    Science.gov (United States)

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method: We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results: Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion: Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
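    The accuracy (RMSE) and precision (SD) metrics reported above can be computed as follows for a single coordinate; extension to 2D/3D simply sums squared errors over components:

```python
import math

def rmse(measured, truth):
    """Accuracy: root-mean-square error of tracked positions against
    reference positions (same length sequences)."""
    n = len(truth)
    return math.sqrt(sum((m - t) ** 2 for m, t in zip(measured, truth)) / n)

def precision_sd(errors):
    """Precision: standard deviation (population) of the per-frame errors,
    i.e. the repeatability of the measurement around its own mean."""
    n = len(errors)
    mean = sum(errors) / n
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
```

    A system can be precise but inaccurate (small SD, large RMSE) when a calibration bias shifts every frame by the same amount, which is why both numbers are reported.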

  1. Direct measurement of erythrocyte deformability in diabetes mellitus with a transparent microchannel capillary model and high-speed video camera system.

    Science.gov (United States)

    Tsukada, K; Sekizuka, E; Oshio, C; Minamitani, H

    2001-05-01

    To measure erythrocyte deformability in vitro, we made transparent microchannels on a crystal substrate as a capillary model. We observed axisymmetrically deformed erythrocytes and defined a deformation index directly from individual flowing erythrocytes. By appropriate choice of channel width and erythrocyte velocity, we could observe erythrocytes deforming to a parachute-like shape similar to that occurring in capillaries. The flowing erythrocytes magnified 200-fold through microscopy were recorded with an image-intensified high-speed video camera system. The sensitivity of deformability measurement was confirmed by comparing the deformation index in healthy controls with erythrocytes whose membranes were hardened by glutaraldehyde. We confirmed that the crystal microchannel system is a valuable tool for erythrocyte deformability measurement. Microangiopathy is a characteristic complication of diabetes mellitus. A decrease in erythrocyte deformability may be part of the cause of this complication. In order to identify the difference in erythrocyte deformability between control and diabetic erythrocytes, we measured erythrocyte deformability using transparent crystal microchannels and a high-speed video camera system. The deformability of diabetic erythrocytes was indeed measurably lower than that of erythrocytes in healthy controls. This result suggests that impaired deformability in diabetic erythrocytes can cause altered viscosity and increase the shear stress on the microvessel wall. Copyright 2001 Academic Press.
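    A deformation index for a flowing cell is commonly defined from its axial length and transverse width; the paper defines its index directly from individual flowing erythrocytes, and its exact formula may differ, so treat this as an illustrative sketch:

```python
def deformation_index(length, width):
    """Deformation index from a cell's axial length and transverse width
    (a common definition, not necessarily the paper's exact one):
    0 for an undeformed circular profile, approaching 1 as the cell
    elongates into a parachute-like shape."""
    return (length - width) / (length + width)
```

    Hardening membranes with glutaraldehyde, as in the control experiment, should drive this index toward 0 at a given flow velocity, while healthy cells deform and score higher.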

  2. Single-photon sensitive fast ebCMOS camera system for multiple-target tracking of single fluorophores: application to nano-biophotonics

    Science.gov (United States)

    Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi

    2011-03-01

    Nano-biophotonics applications will benefit from new fluorescent microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) on large biological structures (membranes) with fast frame rates (1000 Hz). This trend pushes photon detectors to the single-photon counting regime and camera acquisition systems to real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper takes a different approach from Electron-Multiplied CCD (EMCCD) technology and tries to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to respond to this challenge, thanks to the linear gain of the accelerating high voltage of the photo-cathode, to the possible ultra-fast frame rate of CMOS sensors, and to the single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.

  3. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.
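    EDICAM's event-driven readout is triggered by intensity changes inside Regions of Interest. A minimal sketch of such a trigger; the ROI layout, threshold, and frame representation are hypothetical, and the real firmware logic is more involved:

```python
def roi_event(prev_frame, frame, roi, threshold=0.2):
    """Flag an event when the mean intensity inside a region of interest
    changes by more than `threshold` (relative to the previous mean).
    roi = (row0, row1, col0, col1), half-open ranges; frames are 2D
    lists of pixel values. Illustrative only, not EDICAM firmware."""
    r0, r1, c0, c1 = roi

    def roi_mean(f):
        vals = [f[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        return sum(vals) / len(vals)

    m_prev = roi_mean(prev_frame)
    m_cur = roi_mean(frame)
    return abs(m_cur - m_prev) > threshold * m_prev
```

    In the camera the analogous test runs per ROI during exposure of the full frame, so only triggered regions are read out at the kHz rates quoted above.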

  4. Performance of the gamma-ray camera based on GSO(Ce) scintillator array and PSPMT with the ASIC readout system

    International Nuclear Information System (INIS)

    Ueno, Kazuki; Hattori, Kaori; Ida, Chihiro; Iwaki, Satoru; Kabuki, Shigeto; Kubo, Hidetoshi; Kurosawa, Shunsuke; Miuchi, Kentaro; Nagayoshi, Tsutomu; Nishimura, Hironobu; Orito, Reiko; Takada, Atsushi; Tanimori, Toru

    2008-01-01

    We have studied the performance of a readout system with ASIC chips for a gamma-ray camera based on a 64-channel multi-anode PSPMT (Hamamatsu flat-panel H8500) coupled to a GSO(Ce) scintillator array. The GSO array consists of 8x8 pixels of 6x6x13 mm 3 with the same pixel pitch as the anode of the H8500. This camera is intended to serve as an absorber of an electron tracking Compton gamma-ray camera that measures gamma rays up to ∼1 MeV. Because we need a readout system with low power consumption for a balloon-borne experiment, we adopted a 32-channel ASIC chip, IDEAS VA32 H DR11, which has one of the widest dynamic range among commercial chips. However, in the case of using a GSO(Ce) crystal and the H8500, the dynamic range of VA32 H DR11 is narrow, and therefore the H8500 has to be operated with a low gain of about 10 5 . If the H8500 is operated with a low gain, the camera has a narrow incident-energy dynamic range from 100 to 700 keV, and a bad energy resolution of 13.0% (FWHM) at 662 keV. We have therefore developed an attenuator board in order to operate the H8500 with the typical gain of 10 6 , which can measure up to ∼1 MeV gamma ray. The board makes the variation of the anode gain uniform and widens the dynamic range of the H8500. The system using the new attenuator board has a good uniformity of min:max∼1:1.6, an incident-energy dynamic range from 30 to 900 keV, a position resolution of less than 6 mm, and a typical energy resolution of 10.6% (FWHM) at 662 keV with a low power consumption of about 1.7 W/64ch

  5. Monitoring of oil leakage from a ship propulsion system using IR camera and wavelet analysis for prevention of health and ecology risks and engine faults

    Energy Technology Data Exchange (ETDEWEB)

    Soda, J.; Beros, S. [University of Split (Croatia). Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture; Antonic, R.; Vujovic, I. [University of Split (Croatia) Maritime Faculty; Kuzmanic, I.

    2009-03-15

    Oil leakage from ship diesel engines is harmful both to the environment and to the engine itself, and therefore has to be monitored and alarmed. The present paper proposes a computer vision system to address this problem. The pattern recognition algorithm is based on the use of wavelet structures. An additional problem for the system is the compensation of camera movements due to engine vibration; the compensation part of the computer vision solution is used to improve position determination. Position determination is improved by more than 300% when Farras wavelets are used. (Abstract Copyright [2009], Wiley Periodicals, Inc.)
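    Wavelet-based detection of a leak signature amounts to flagging large detail coefficients, which respond to sharp local intensity changes. A minimal sketch using the Haar wavelet as a stand-in for the Farras filters reported in the paper; the threshold is illustrative:

```python
import math

def haar_level(signal):
    """One level of the Haar wavelet transform of a 1-D intensity
    profile, returning (approximation, detail) coefficient lists.
    A simple stand-in for the wavelet structures used in the paper."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def leak_suspected(signal, threshold=5.0):
    """Flag a profile whose largest Haar detail magnitude exceeds a
    (hypothetical) threshold, i.e. a sharp local intensity change of
    the kind a hot oil streak produces in an IR image row."""
    _, detail = haar_level(signal)
    return max(abs(d) for d in detail) > threshold
```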

  6. In-flight measurements of propeller blade deformation on a VUT100 cobra aeroplane using a co-rotating camera system

    Science.gov (United States)

    Boden, F.; Stasicki, B.; Szypuła, M.; Ružička, P.; Tvrdik, Z.; Ludwikowski, K.

    2016-07-01

    Knowledge of propeller or rotor blade behaviour under real operating conditions is crucial for optimizing the performance of a propeller or rotor system. A team of researchers, technicians and engineers from Avia Propeller, DLR, EVEKTOR and HARDsoft developed a rotating stereo camera system dedicated to in-flight blade deformation measurements. The whole system, co-rotating with the propeller at its full speed and hence exposed to high centrifugal forces and strong vibration, had been successfully tested on an EVEKTOR VUT 100 COBRA aeroplane in Kunovice (CZ) within the project AIM2—advanced in-flight measurement techniques funded by the European Commission (contract no. 266107). This paper will describe the work, starting from drawing the first sketch of the system up to performing the successful flight test. Apart from a description of the measurement hardware and the applied IPCT method, the paper will give some impressions of the flight test activities and discuss the results obtained from the measurements.

  7. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  8. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  9. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    Camera phones are among the fastest-growing consumer markets today. Over the past few years total volume has grown quickly, and millions of mobile phones with cameras are now sold. At the same time, the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera point of view, the mobile world is an extremely challenging field. Cameras should deliver good image quality in a small size. They also need to be reliable, and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  10. A generic model for camera based intelligent road crowd control ...

    African Journals Online (AJOL)

    This research proposes a model for intelligent traffic flow control implementing camera-based surveillance and a feedback system. A series of cameras is set a minimum of three signals ahead of the target junction. The complete software system is developed to help integrate the multiple cameras on the road as feedback to ...

  11. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
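    The linear-plus-bang-bang control with a deadband described above can be sketched as follows; the gain and rate limit are illustrative, and the deadband suppresses the continuous small camera motions the authors associate with operator seasickness:

```python
def pan_command(error_deg, deadband=2.0, gain=0.5, max_rate=10.0):
    """Camera pan rate command (deg/s) from the tracking angle error:
    zero inside the +/-2 deg deadband, proportional outside it,
    saturating at max_rate. Gain and max_rate are illustrative values,
    not those of the ORNL implementation."""
    if abs(error_deg) <= deadband:
        return 0.0  # deadband: hold the camera still
    rate = gain * error_deg          # linear (proportional) region
    return max(-max_rate, min(max_rate, rate))  # bang-bang saturation
```

    The same law applies independently to the TILT axis; the kinematic solution supplies `error_deg` from the 4 x 4 transformation chain.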

  12. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  13. Can we develop an effective early warning system for volcanic eruptions using `off the shelf' webcams and low-light cameras?

    Science.gov (United States)

    Harrild, M.; Webley, P. W.; Dehn, J.

    2016-12-01

    An effective early warning system to detect volcanic activity is an invaluable tool, but often very expensive. Detecting and monitoring precursory events, thermal signatures, and ongoing eruptions in near real-time is essential, but conventional methods are often logistically challenging, expensive, and difficult to maintain. Our investigation explores the use of `off the shelf' webcams and low-light cameras, operating in the visible to near-infrared portions of the electromagnetic spectrum, to detect and monitor volcanic incandescent activity. Large databases of webcam imagery already exist at institutions around the world, but are often extremely underutilised and we aim to change this. We focus on the early detection of thermal signatures at volcanoes, using automated scripts to analyse individual images for changes in pixel brightness, allowing us to detect relative changes in thermally incandescent activity. Primarily, our work focuses on freely available streams of webcam images from around the world, which we can download and analyse in near real-time. When changes in activity are detected, an alert is sent to the users informing them of the changes in activity and a need for further investigation. Although relatively rudimentary, this technique provides constant monitoring for volcanoes in remote locations and developing nations, where it is not financially viable to deploy expensive equipment. We also purchased several of our own cameras, which were extensively tested in controlled laboratory settings with a black body source to determine their individual spectral response. Our aim is to deploy these cameras at active volcanoes knowing exactly how they will respond to varying levels of incandescence. They are ideal for field deployments as they are cheap (0-1,000), consume little power, are easily replaced, and can provide telemetered near real-time data. Data from Shiveluch volcano, Russia and our spectral response lab experiments are presented here.
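    The automated scripts described above reduce each webcam image to a brightness statistic and alert on relative changes. A minimal sketch of such a check; the threshold and frame representation are illustrative:

```python
def brightness_alert(prev_frame, frame, threshold=0.15):
    """Compare mean pixel brightness between consecutive webcam images
    and return True when the relative increase exceeds `threshold`,
    a simple proxy for new incandescent activity. Threshold is a
    hypothetical value; frames are 2D lists of grey levels."""
    def mean_brightness(f):
        return sum(sum(row) for row in f) / (len(f) * len(f[0]))

    m_prev = mean_brightness(prev_frame)
    m_cur = mean_brightness(frame)
    return (m_cur - m_prev) / m_prev > threshold
```

    In practice the check would run on each newly downloaded image, with night/day handling and persistence rules before an alert email is sent.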

  14. Development of an image converter of radical design. [employing solid state electronics towards the production of an advanced engineering model camera system

    Science.gov (United States)

    Irwin, E. L.; Farnsworth, D. L.

    1972-01-01

    A long-term investigation of thin-film sensors, monolithic photo-field-effect transistors, and epitaxially diffused phototransistors and photodiodes, aimed at producing an acceptable all-solid-state, electronically scanned imaging system, led to the production of an advanced engineering model camera that employs a 200,000-element phototransistor array (organized in a matrix of 400 rows by 500 columns) to secure resolution comparable to commercial television. The full investigation is described for the period July 1962 through July 1972 and covers the following broad topics in detail: (1) sensor monoliths; (2) fabrication technology; (3) functional theory; (4) system methodology; and (5) deployment profile. A summary of the work and conclusions are given, along with extensive schematic diagrams of the final solid-state imaging system.

  15. Development of a tomographic system adapted to 3D measurement of contaminated wounds based on the Cacao concept (Computer aided collimation Gamma Camera)

    International Nuclear Information System (INIS)

    Douiri, A.

    2002-03-01

    The computer-aided collimation gamma camera (CACAO in French) is a gamma camera using a collimator with large holes, a supplementary linear scanning motion during the acquisition, and a dedicated reconstruction program taking full account of the source depth. The CACAO system was introduced to improve both the sensitivity and the resolution in nuclear medicine. This thesis focuses on the design of a fast and robust reconstruction algorithm for the CACAO project. We start with an overview of tomographic imaging techniques in nuclear medicine. After modelling the physical CACAO system, we present the complete reconstruction program, which involves three steps: 1) shift and sum; 2) deconvolution and filtering; 3) rotation and sum. Deconvolution is the critical step that decreases the signal-to-noise ratio of the reconstructed images. We propose a regularized multi-channel algorithm to solve the deconvolution problem. We also present a fast algorithm based on spline functions that preserves the high quality of the reconstructed images in the shift and rotation steps. Comparisons of simulated reconstructed images in 2D and 3D for the conventional system (CPHC) and CACAO demonstrate the ability of the CACAO system to increase the quality of SPECT images. Finally, the study concludes with an experimental approach using a pixellated detector conceived for 3D measurement of contaminated wounds. This experiment demonstrates the possible advantages of coupling the CACAO project with pixellated detectors. A variety of applications could fully benefit from the CACAO system, such as low-activity imaging, the use of high-energy gamma isotopes and the visualization of deep organs. The combination of the CACAO system with a pixellated detector may open up further possibilities for the future of nuclear medicine. (author)
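    The first reconstruction step, shift and sum, aligns the profiles acquired during the linear scan under an assumed source depth and adds them, so that sources at that depth reinforce while others blur. A minimal integer-shift sketch, not the CACAO implementation itself:

```python
def shift_and_sum(projections, shift_per_step):
    """Step 1 of a CACAO-style reconstruction for one assumed depth:
    shift each successive scan profile by an integer number of bins
    proportional to the scan step, then sum. `projections` is a list
    of equal-length lists; shift_per_step is depth-dependent in the
    real system (hypothetical integer value here)."""
    n = len(projections[0])
    out = [0.0] * n
    for k, proj in enumerate(projections):
        s = k * shift_per_step
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                out[i] += proj[j]
    return out
```

    For a point source whose response drifts one bin per scan step, choosing `shift_per_step=1` stacks all its counts into a single bin, which is the depth-selection effect the subsequent deconvolution step then sharpens.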