WorldWideScience

Sample records for vidicon camera system

  1. Formation of the color image based on the vidicon TV camera

    Science.gov (United States)

    Iureva, Radda A.; Maltseva, Nadezhda K.; Dunaev, Vadim I.

    2016-09-01

    The main goal of nuclear safety is to protect against the radiation arising during normal operation of nuclear power plant (NPP) installations or as a result of accidents at them. The most important task in any activity aimed at maintaining an NPP is the constant preservation of the required level of safety and reliability. Periodic non-destructive testing during operation provides the most relevant criteria for the integrity of the pressurized components of the primary circuit. The objective of this study is to develop a system for forming a color image from a vidicon-based television camera used to conduct non-destructive testing under the elevated radiation conditions at NPPs.

  2. Landsat 3 return beam vidicon response artifacts

    Science.gov (United States)

    Clark, B.

    1981-01-01

    The return beam vidicon (RBV) sensing systems employed aboard Landsats 1, 2, and 3 have all been similar in that they have utilized vidicon tube cameras. These are not mirror-sweep scanning devices such as the multispectral scanner (MSS) sensors that have also been carried aboard the Landsat satellites. The vidicons operate more like common television cameras, using an electron gun to read images from a photoconductive faceplate. In the case of Landsats 1 and 2, the RBV system consisted of three such vidicons which collected remote sensing data in three distinct spectral bands. Landsat 3, however, utilizes just two vidicon cameras, both of which sense data in a single broad band. The Landsat 3 RBV system additionally has a unique configuration. As arranged, the two cameras can be shuttered alternately, twice each, in the same time it takes for one MSS scene to be acquired. This shuttering sequence results in four RBV "subscenes" for every MSS scene acquired, similar to the four quadrants of a square. See Figure 1. Each subscene represents a ground area of approximately 98 by 98 km. The subscenes are designated A, B, C, and D, for the northwest, northeast, southwest, and southeast quarters of the full scene, respectively. RBV data products are normally ordered, reproduced, and sold on a subscene basis and are in general referred to in this way. Each exposure from the RBV camera system presents an image which is 98 km on a side. When these analog video data are subsequently converted to digital form, the picture element, or pixel, that results is 19 m on a side with an effective resolution element of 30 m. This pixel size is substantially smaller than that obtainable in MSS images (the MSS has an effective resolution element of 73.4 m), and, when RBV images are compared to equivalent MSS images, better resolution in the RBV data is clearly evident. It is for this reason that the RBV system can be a valuable tool for remote sensing of earth resources. Until recently
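
    The figures quoted in this record invite a quick back-of-envelope check. The following minimal sketch (illustrative only, not part of the USGS record) relates the 19 m pixel pitch and the roughly 98 km subscene side to image dimensions and ground distances:

```python
# Illustrative sketch only: relating the quoted Landsat 3 RBV figures
# (a ~98 km x 98 km subscene digitized at a 19 m pixel pitch) to image
# dimensions and ground distances.

PIXEL_SIZE_M = 19.0          # digitized pixel size, from the abstract
SUBSCENE_SIDE_M = 98_000.0   # one RBV subscene is ~98 km on a side

def subscene_pixels_per_side() -> int:
    """Approximate pixel count along one side of a digitized subscene."""
    return round(SUBSCENE_SIDE_M / PIXEL_SIZE_M)

def ground_offset_m(dx_pixels: float, dy_pixels: float) -> tuple[float, float]:
    """Convert a pixel offset within a subscene to metres on the ground."""
    return dx_pixels * PIXEL_SIZE_M, dy_pixels * PIXEL_SIZE_M

print(subscene_pixels_per_side())  # ~5158 pixels per side
print(ground_offset_m(100, 250))   # (1900.0, 4750.0)
```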

  3. Pulse Characteristic Curves of Vidicons,

    Science.gov (United States)

    microamps, and in vidicons with heterotransition screens, up to 10 microamps. The use of static modulation characteristic curves of vidicons for the ... determination of the pulse beam current can lead to an error > 100%. With the help of pulse-modulation characteristic curves, it is possible to obtain the

  4. Landsat 1-2 Return Beam Vidicon Film Only: 1972-1983

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The three-camera Return Beam Vidicon (RBV) that operated on Landsat satellites 1 and 2 acquired approximately 1600 sub-scenes at 80 meter resolution. The initial RBV...

  5. Traffic camera system development

    Science.gov (United States)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light levels. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on camera performance. In order to operate under demanding conditions, communication and functional optimization are implemented to control the cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.
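
    A minimal sketch of the look-up-table control described in this record follows. It is an illustration only: the structure (light-sensor reading in, camera parameters out) comes from the abstract, but every threshold, gain, and gamma value here is invented.

```python
# Hypothetical sketch of look-up-table camera control: gain, pedestal,
# shutter, and gamma selected from the measured light level. Thresholds
# and parameter values are invented for illustration.

from dataclasses import dataclass

@dataclass
class CameraParams:
    gain_db: float
    pedestal: int
    shutter_s: float   # kept faster than 1/2000 s, per the abstract
    gamma: float

# (max lux, params) entries ordered from darkest to brightest condition
LIGHT_LUT = [
    (50,     CameraParams(gain_db=18.0, pedestal=32, shutter_s=1/2000,  gamma=0.45)),  # night
    (1_000,  CameraParams(gain_db=9.0,  pedestal=24, shutter_s=1/4000,  gamma=0.55)),  # twilight/storm
    (20_000, CameraParams(gain_db=3.0,  pedestal=16, shutter_s=1/8000,  gamma=0.70)),  # overcast day
    (float("inf"), CameraParams(gain_db=0.0, pedestal=8, shutter_s=1/16000, gamma=0.90)),  # sunny
]

def params_for_light(lux: float) -> CameraParams:
    """Pick camera settings for the current light-sensor reading."""
    for max_lux, params in LIGHT_LUT:
        if lux <= max_lux:
            return params
    raise ValueError("unreachable")

print(params_for_light(700))  # twilight settings
```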

  6. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed around the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  7. Closed circuit TV system monitors welding operations

    Science.gov (United States)

    Gilman, M.

    1967-01-01

    A TV camera system with a special vidicon tube incorporating a gradient density filter is used for remote monitoring of TIG welding of stainless steel. The welding operations involve complex assembly welding tools and skates in areas of limited accessibility.

  8. Goniometer to calibrate system cameras or amateur cameras

    Science.gov (United States)

    Hakkarainen, J.

    An accurate and rapid horizontal goniometer was developed to determine the optical properties of film cameras. Radial and decentering distortion, color defects, optical resolution, and small-object transmission factors are measured as functions of light wavelength and symmetry. The goniometer can be used to calibrate cameras for photogrammetry, to determine the effects of remoteness on image geometry and distortion symmetry, to evaluate the efficiency of lens-lighting-film systems, to develop quality criteria for lenses, and to test camera lenses and camera defects after an incident.

  9. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be implemented for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on Visual Studio 2010. Experimental results show that the system realizes acquisition and display for both cameras.
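
    The coordinate transformation with sub-pixel interpolation that this record mentions can be sketched generically as follows. This is an assumption-laden illustration (a log-polar pixel layout with bilinear sampling), not the authors' VC++ implementation; all function names and parameters are hypothetical.

```python
import numpy as np

def bilinear_sample(img: np.ndarray, x: float, y: float) -> float:
    """Sample a grayscale image at fractional (x, y) by bilinear interpolation."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def sample_retina_pixel(img, cx, cy, ring, sector, r0, growth, n_sectors):
    """Map one retina-like pixel (ring, sector) to Cartesian coordinates,
    with ring radii growing geometrically from r0, and sample the image."""
    r = r0 * growth ** ring
    theta = 2.0 * np.pi * sector / n_sectors
    return bilinear_sample(img, cx + r * np.cos(theta), cy + r * np.sin(theta))
```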

  10. ACCURACY EVALUATION OF STEREO CAMERA SYSTEMS WITH GENERIC CAMERA MODELS

    Directory of Open Access Journals (Sweden)

    D. Rueß

    2012-07-01

    In the last decades the consumer and industrial market for non-projective cameras has been growing notably. This has led to the development of camera description models other than the pinhole model and their employment in mostly homogeneous camera systems. Heterogeneous camera systems (for instance, combining fisheye and catadioptric cameras) can also easily be envisioned for real applications. However, it has not been quite clear how accurate stereo vision with these cameras and models can be. In this paper, different accuracy aspects are addressed by analytical inspection, numerical simulation as well as real image data evaluation. This analysis is generic, for any camera projection model, although only polynomial and rational projection models are used for distortion-free, catadioptric and fisheye lenses. Note that this is different from the polynomial and rational radial distortion models which have been addressed extensively in the literature. For single-camera analysis it turns out that point features towards the image sensor borders are significantly more accurate than in center regions of the sensor. For heterogeneous two-camera systems it turns out that reconstruction accuracy decreases significantly towards the image borders as different projective distortions occur.
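
    For readers unfamiliar with generic projection models of the kind this paper evaluates, a minimal sketch follows: the image radius modeled as an odd-power polynomial of the incidence angle, which covers pinhole-like, fisheye, and catadioptric behaviour depending on the coefficients. The coefficient values are invented for illustration, not taken from the paper.

```python
# Illustrative generic projection model: image radius r as an odd-power
# polynomial of the incidence angle theta. Coefficients are invented.

import numpy as np

def project_radius(theta: float, coeffs: list[float]) -> float:
    """r(theta) = k1*theta + k2*theta**3 + k3*theta**5 + ..."""
    return sum(k * theta ** (2 * i + 1) for i, k in enumerate(coeffs))

# The equidistant fisheye is the special case r = f * theta:
print(project_radius(np.deg2rad(60.0), coeffs=[400.0]))         # f = 400 px
print(project_radius(np.deg2rad(60.0), coeffs=[400.0, -35.0]))  # mild distortion
```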

  11. Characterization of the Series 1000 Camera System

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network addressable scientific grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and a power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electrons of read noise at a 1 MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and performance characterization is reported.
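
    As a rough consistency check (not stated in the report), the quoted 70 dB dynamic range and sub-14-electron read noise together imply an approximate full-well capacity, using the standard definition of dynamic range as the ratio of full-well signal to read-noise floor:

```latex
% Rough consistency check (not stated in the report): with dynamic range
% defined as the full-well signal over the read-noise floor,
\mathrm{DR_{dB}} = 20\log_{10}\frac{N_{\mathrm{full\;well}}}{N_{\mathrm{read}}}
\quad\Longrightarrow\quad
N_{\mathrm{full\;well}} \approx 14\ \mathrm{e^-} \times 10^{70/20}
\approx 14 \times 3162 \approx 4.4\times10^{4}\ \mathrm{e^-}.
```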

  12. Development of biostereometric experiments. [stereometric camera system

    Science.gov (United States)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  13. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  14. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to support any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  15. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Microcomputer systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software package has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam in a semiautomatic procedure and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher-quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain nuclear medicine images of optimal clinical quality, to increase acquisition and processing efficiency, and to reduce the steps involved in each exam.

  16. NATIONAL GUIDELINES FOR DIGITAL CAMERA SYSTEMS CERTIFICATION

    Directory of Open Access Journals (Sweden)

    Y. Yaron

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs in different ground resolutions, spectral bands, swath sizes, radiometric characteristics, accuracies and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product, including maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of detail (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes should be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves for the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). The study examines all aspects of the final product including its accuracy, the

  17. National Guidelines for Digital Camera Systems Certification

    Science.gov (United States)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs in different ground resolutions, spectral bands, swath sizes, radiometric characteristics, accuracies and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product, including maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of detail (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes should be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves for the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). The study examines all aspects of the final product including its accuracy, the product pixel size

  18. Integrating TV/digital data spectrograph system

    Science.gov (United States)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  19. Situational Awareness from a Low-Cost Camera System

    Science.gov (United States)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, as can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
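
    The low-bandwidth event reporting described in this record can be sketched as follows. This is a hypothetical illustration: the host address, port, and message format are invented, and only the idea (sending a tiny event record over TCP/IP instead of streaming bitmaps) comes from the record.

```python
# Hypothetical sketch: a camera node sends a small JSON record per
# detected event over TCP/IP instead of streaming bitmap data.
# Host address, port, and message format are invented for illustration.

import json
import socket
import time

HOST, PORT = "192.168.1.10", 5000  # assumed host computer address

def report_event(camera_id: int, x_m: float, y_m: float) -> None:
    """Send one detected event to the host as a single JSON line."""
    msg = {"camera": camera_id, "t": time.time(), "x_m": x_m, "y_m": y_m}
    with socket.create_connection((HOST, PORT), timeout=2.0) as sock:
        sock.sendall((json.dumps(msg) + "\n").encode())

# report_event(17, x_m=12.3, y_m=4.5)  # a few dozen bytes per event
```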

  20. Operational experience with a CID camera system

    CERN Document Server

    Welsch, Carsten P; Burel, Bruno; Lefèvre, Thibaut

    2006-01-01

    In future high intensity, high energy accelerators particle losses must be minimized as activation of the vacuum chambers or other components makes maintenance and upgrade work time consuming and costly. It is imperative to have a clear understanding of the mechanisms that can lead to halo formation, and to have the possibility to test available theoretical models with an adequate experimental setup. Measurements based on optical transition radiation (OTR) provide an interesting opportunity for analyzing the transverse beam profile due to the fast time response and very good linearity of the signal with respect to the beam intensity. On the other hand, the dynamic range of typical acquisition systems as they are used in the CLIC test facility (CTF3) is typically limited and must be improved before these systems can be applied to halo measurements. One possibility for high dynamic range measurements is an innovative camera system based on charge injection device (CID) technology. With possible future measureme...

  1. Quality control of gamma camera systems

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, Alison (ed.)

    2003-07-01

    important that simple tests can be established which can be carried out in-house at the appropriate frequency. Ideally, these simple tests should be implemented as soon as possible after the acceptance testing has been completed. The results obtained will then form a baseline against which subsequent performance may be compared. Previous reports discussed the QC of a wide range of nuclear medicine instrumentation. By far the most common radionuclide imaging device is the Anger-type gamma camera, which is used to image the distribution of a gamma-ray-emitting radiopharmaceutical within a patient. Many parameters are required to specify adequately the performance of a gamma camera, and the relative importance of these depends on the application. For static imaging, a compromise between system resolution and sensitivity must be chosen for each type of investigation. For some dynamic studies, the count rate performance is particularly important, while uniformity is crucial if the gamma camera is used to perform single photon emission computed tomography (SPET). It is therefore not possible to simply specify an acceptable limit for each performance parameter in isolation from other considerations. A gamma camera is a particularly complex device and its imaging characteristics may deteriorate gradually or fail acutely. Many failures will become apparent during normal use. Such a failure may be inconvenient and require re-scheduling of patient appointments, but it is unlikely to result in misinterpretation of a clinical study. However, gradual deterioration of performance is unlikely to be evident from normal clinical use, and such deterioration may eventually result in errors in the interpretation of clinical studies. A gamma camera is normally interfaced to a dedicated nuclear medicine computer. The performance of the computer is less likely to change with time, although it may also suffer acute failures and become increasingly unreliable. The performance of the computer

  2. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations for HMA systems specifically for biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for an HMA system for biomedical applications.

  3. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Science.gov (United States)

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  4. Exploring Computation-Communication Tradeoffs in Camera Systems

    OpenAIRE

    Mazumdar, Amrita; Moreau, Thierry; Kim, Sung; Cowan, Meghan; Alaghi, Armin; Ceze, Luis; Oskin, Mark; Sathe, Visvesh

    2017-01-01

    Cameras are the de facto sensor. The growing demand for real-time and low-power computer vision, coupled with trends towards high-efficiency heterogeneous systems, has given rise to a wide range of image processing acceleration techniques at the camera node and in the cloud. In this paper, we characterize two novel camera systems that use acceleration techniques to push the extremes of energy and performance scaling, and explore the computation-communication tradeoffs in their design. The firs...

  5. Coaxial visible and FIR camera system with accurate geometric calibration

    Science.gov (United States)

    Ogino, Yuka; Shibata, Takashi; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-05-01

    A far-infrared (FIR) image contains important invisible information for various applications such as night vision and fire detection, while a visible image includes colors and textures in a scene. We present a coaxial visible and FIR camera system designed to obtain the complementary information of both images simultaneously. The proposed camera system is composed of three parts: a visible camera, a FIR camera, and a beam-splitter made from silicon. The FIR radiation from the scene is reflected at the beam-splitter, while the visible radiation is transmitted through it. Even with this coaxial visible and FIR camera system, the alignment between the visible and FIR images is not perfect. Therefore, we also present a joint calibration method which can simultaneously estimate accurate geometric parameters of both cameras, i.e. the intrinsic parameters of both cameras and the extrinsic parameters between them. In the proposed calibration method, we use a novel calibration target which has a two-layer structure where the thermal emission property of each layer is different. By using the proposed calibration target, we can stably and precisely obtain the corresponding points of the checker pattern in the calibration target from both the visible and the FIR images. Widely used calibration tools can then accurately estimate both cameras' parameters. We can obtain aligned visible and FIR images with the coaxial camera system after precise calibration using the two-layer calibration target. Experimental results demonstrate that the proposed camera system is useful for various applications such as image fusion, image denoising, and image up-sampling.
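
    Outside of this specific paper, the joint estimation of intrinsics and extrinsics that the abstract describes is commonly done with OpenCV's stereo calibration; a generic sketch under that assumption (the record does not say which tool the authors used) might look like:

```python
# Generic joint-calibration sketch using OpenCV (an assumption, not the
# authors' code). Given matching checker-corner detections from the
# visible and FIR images of the two-layer target, estimate both sets of
# intrinsics and the relative pose between the cameras.

import cv2

def joint_calibrate(obj_pts, vis_pts, fir_pts, image_size):
    """obj_pts: per-view (N,3) float32 board coordinates;
    vis_pts / fir_pts: matching (N,1,2) float32 corner detections."""
    # Single-camera calibrations provide starting intrinsics.
    _, K_vis, d_vis, _, _ = cv2.calibrateCamera(obj_pts, vis_pts, image_size, None, None)
    _, K_fir, d_fir, _, _ = cv2.calibrateCamera(obj_pts, fir_pts, image_size, None, None)
    # Joint refinement also recovers rotation R and translation T
    # between the two cameras (the extrinsic parameters).
    rms, K_vis, d_vis, K_fir, d_fir, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, vis_pts, fir_pts, K_vis, d_vis, K_fir, d_fir, image_size,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    return rms, (K_vis, d_vis), (K_fir, d_fir), (R, T)
```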

  6. Vision System of Mobile Robot Combining Binocular and Depth Cameras

    Directory of Open Access Journals (Sweden)

    Yuxiang Yang

    2017-01-01

    In order to optimize three-dimensional (3D) reconstruction and obtain more precise actual distances to objects, a 3D reconstruction system combining binocular and depth cameras is proposed in this paper. The whole system consists of two identical color cameras, a TOF depth camera, an image processing host, a mobile robot control host, and a mobile robot. Because of structural constraints, the resolution of the TOF depth camera is very low, which makes it difficult to meet the requirements of trajectory planning. The resolution of binocular stereo cameras can be very high, but stereo matching performs poorly in low-texture scenes, so binocular stereo cameras alone also have difficulty meeting the requirements of high accuracy. In this paper, the proposed system integrates the depth camera with stereo matching to improve the precision of the 3D reconstruction. Moreover, a double-threaded processing method is applied to improve the efficiency of the system. The experimental results show that the system can effectively improve the accuracy of 3D reconstruction, identify the distance from the camera accurately, and achieve the strategy of trajectory planning.

  7. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Designer to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the camera placement options may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  8. Benchmarking the Optical Resolving Power of Uav Based Camera Systems

    Science.gov (United States)

    Meißner, H.; Cramer, M.; Piltz, B.

    2017-08-01

    UAV based imaging and 3D object point generation is an established technology. Some UAV users try to address (very) high-accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such level of detail and accuracy, high resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to standard (geometric) calibration, which normally is covered primarily. Within this paper the resolving power of ten different camera/lens installations has been investigated. Selected systems represent different camera classes, like DSLRs, system cameras, larger format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.

  9. Vision System of Mobile Robot Combining Binocular and Depth Cameras

    National Research Council Canada - National Science Library

    Yuxiang Yang; Xiang Meng; Mingyu Gao

    2017-01-01

    In order to optimize the three-dimensional (3D) reconstruction and obtain more precise actual distances of the object, a 3D reconstruction system combining binocular and depth cameras is proposed in this paper...

  10. Charge-coupled device camera for the Galileo Jupiter Orbiter spacecraft

    Science.gov (United States)

    Klaasen, K. P.; Clary, M. C.; Janesick, J. R.

    1984-01-01

    A slow-scan television camera called the solid-state imaging subsystem (SSI), built for the Galileo Jupiter Orbiter, is described. The SSI consists of a 1500-mm focal-length telescope coupled to a camera head housing an 800 x 800-element charge-coupled device (CCD) detector based on 'virtual-phase' charge transfer technology. The CCD detector provides broadband sensitivity over 100 times that of a comparable vidicon-tube camera, while also yielding improved resolution, linearity, geometric fidelity, and spectral range. The system noise floor is 30 electrons, which results in a dynamic range of about 3500. Saturation of the detector with 9000-Å light, followed by a high-speed erasure cycle prior to exposing each image, stabilizes the detector quantum efficiency at its maximum level for wavelengths beyond 7000 Å. An optical schematic diagram of the SSI is included.

  11. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  12. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
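
    The predictive component described in this record is commonly realized with a constant-velocity Kalman filter; the sketch below is a generic illustration under that assumption, not the author's implementation, and all noise parameters are placeholders.

```python
# Generic constant-velocity Kalman filter sketch, illustrating the
# predictive component described above (not the author's implementation;
# all noise values are placeholders).

import numpy as np

dt = 0.1                                  # control period (assumed)
F = np.array([[1, 0, dt, 0],              # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],               # cameras measure position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                      # process noise
R = 0.5 * np.eye(2)                       # measurement noise

def predict(x, P):
    """Propagate state and covariance one control period ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Fuse one position measurement z = [x, y]."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# A growing trace(P) flags targets that have gone unobserved, which the
# receding-horizon controller can use to schedule re-observation.
```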

  13. Camera space control system for a mobile robot forklift

    Science.gov (United States)

    Miller, Richard K.; Stewart, D. G.; Brockman, W. H.; Skaar, Steven B.

    1993-05-01

    In this paper we present the method of camera space manipulation for control of a mobile cart with an on-board robot. The objective is to perform three-dimensional object placement. The robot-cart system is operated as a forklift. The cart has a rear wheel for steering and driving, two front wheels, and a tether allowing control from a remote computer. Two remotely placed CCTV cameras provide images for use by the control system. The method is illustrated experimentally by a box-stacking task. None of the components (cameras, robot-cart, or target box) are prepositioned. 'Ring cues' are placed on both boxes in order to simplify the image processing. A sequential estimation scheme solves the placement problem. This scheme produces the control necessary to place the image of the grasped box at the relevant target image position in each of the two-dimensional camera planes. This results in a precise and robust manipulation strategy.

  14. A Lane Following Mobile Robot Navigation System Using Mono Camera

    Science.gov (United States)

    Cho, Yeongcheol; Kim, Seungwoo; Park, Seongkeun

    2017-02-01

    In this paper, we develop a lane-following mobile robot using a mono camera. Using the camera, the robot can recognize the lanes to its left and right and maintain the center line of the robot track. We use the Hough Transform for detecting lanes and a PID controller for controlling the direction of the mobile robot. The validity of our robot system is verified in a real-world robot track environment built in our laboratory.
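
    A hedged sketch of the pipeline this record names (Hough-transform lane detection plus PID steering) is given below, using OpenCV. All thresholds and controller gains are placeholders, not values from the paper.

```python
# Sketch of Hough-transform lane detection plus PID steering control.
# Thresholds and gains are placeholders, not values from the paper.

import cv2
import numpy as np

def lane_center_offset(frame: np.ndarray) -> float:
    """Offset (pixels) of the detected lane center from the image center."""
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return 0.0
    xs = [(x1 + x2) / 2.0 for x1, _, x2, _ in lines[:, 0]]
    return float(np.mean(xs)) - frame.shape[1] / 2.0

class PID:
    def __init__(self, kp=0.01, ki=0.0, kd=0.005):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float, dt: float) -> float:
        """Return a steering command for the current lane-center error."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# steering = PID().step(lane_center_offset(frame), dt=1 / 30)  # per frame
```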

  15. Stereo Calibration and Rectification for Omnidirectional Multi-Camera Systems

    Directory of Open Access Journals (Sweden)

    Yanchang Wang

    2012-10-01

    Stereo vision has been studied for decades as a fundamental problem in the field of computer vision. In recent years, computer vision and image processing with a large field of view, especially using omnidirectional vision and panoramic images, has been receiving increasing attention. An important problem for stereo vision is calibration. Although various kinds of calibration methods for omnidirectional cameras have been proposed, most of them are limited to calibrating catadioptric cameras or fish-eye cameras and cannot be applied directly to multi-camera systems. In this work, we propose an easy calibration method with closed-form initialization and iterative optimization for omnidirectional multi-camera systems. The method only requires image pairs of the 2D target plane in a few different views. A method based on the spherical camera model is also proposed for rectifying omnidirectional stereo pairs. Using real data captured by a Ladybug3, we carry out experiments including stereo calibration, rectification and 3D reconstruction. Statistical analyses and comparisons of the experimental results are also presented. As the experimental results show, the calibration results are precise and the effect of rectification is promising.

  16. ACCURACY POTENTIAL AND APPLICATIONS OF MIDAS AERIAL OBLIQUE CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    M. Madani

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional imagery for visualizations, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for accessing and reviewing changes to the local government tax base and property valuation assessment, and supports the buying and selling of residential/commercial property with better, more timely decisions. Oblique imagery is also used for infrastructure monitoring, ensuring safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS system from TrackAir in 2011. This system consists of four tilted (45 degree) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique 28 mm/50 mm and 50 mm/50 mm) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing workflow, system calibration and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for vertical can be achieved. Remaining

  17. Orbital docking system centerline color television camera system test

    Science.gov (United States)

    Mongan, Philip T.

    1993-01-01

    A series of tests was run to verify that the design of the centerline color television camera (CTVC) system is adequate optically for the STS-71 Space Shuttle Orbiter docking mission with the Mir space station. In each test, a mockup of the Mir consisting of hatch, docking mechanism, and docking target was positioned above the Johnson Space Center's full fuselage trainer, which simulated the Orbiter with a mockup of the external airlock and docking adapter. Test subjects viewed the docking target through the CTVC under 30 different lighting conditions and evaluated target resolution, field of view, light levels, light placement, and methods of target alignment. Test results indicate that the proposed design will provide adequate visibility through the centerline camera for a successful docking, even with a reasonable number of light failures. It is recommended that the flight deck crew have individual switching capability for docking lights to provide maximum shadow management and that centerline lights be retained to deal with light failures and user preferences. Procedures for light management should be developed and target alignment aids should be selected during simulated docking runs.

  18. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  19. Target-Tracking Camera for a Metrology System

    Science.gov (United States)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
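
    The textbook relation for a one-dimensional PSD (a standard result, not taken from this article) shows why no pixel readout is needed: with photocurrents I_A and I_B collected at the two end contacts of a PSD of active length L, the light-spot position x measured from the detector center is

```latex
% Textbook one-dimensional PSD relation (standard result, not from the
% article): light-spot position from the center of a PSD of length L,
% given end-contact photocurrents I_A and I_B.
x \;=\; \frac{L}{2}\,\frac{I_B - I_A}{I_A + I_B}.
```

    Because the ratio is formed directly from the analog photocurrents, no pixel readout or digital centroiding is required, which is what permits update rates of hundreds of hertz.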

  20. Design of microcontroller based system for automation of streak camera.

    Science.gov (United States)

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  1. Metrology camera system of prime focus spectrograph for Subaru telescope

    Science.gov (United States)

    Wang, Shiang-Yu; Chou, Chueh-Yi; Chang, Yin-Chang; Huang, Pin-Jie; Hu, Yen-Sang; Chen, Hsin-Yo; Tamura, Naoyuki; Takato, Naruhisa; Ling, Hung-Hsu; Gunn, James E.; Karr, Jennifer; Yan, Chi-Hung; Mao, Peter; Ohyama, Youichi; Karoji, Hiroshi; Sugai, Hajime; Shimono, Atsushi

    2014-07-01

    The Prime Focus Spectrograph (PFS) is a new optical/near-infrared multi-fiber spectrograph designed for the prime focus of the 8.2 m Subaru telescope. The metrology camera system of PFS serves as the optical encoder of the COBRA fiber motors for the configuring of fibers. The 380 mm diameter aperture metrology camera will be located at the Cassegrain focus of the Subaru telescope to cover the whole focal plane with one 50-megapixel Canon CMOS sensor. The metrology camera is designed to provide fiber position information with less than 5 μm error over the 45 cm focal plane. The positions of all fibers can be obtained within 1 s after the exposure is finished. This enables the overall fiber configuration to be completed in less than 2 minutes.
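
    One generic way to measure fiber spot positions from such a metrology image is an intensity-weighted centroid over a small window around each spot; the sketch below illustrates that idea only and is not the PFS pipeline.

```python
# Illustrative intensity-weighted centroid for a single fiber spot.
# A generic sketch of measuring spot positions from a metrology image,
# not the PFS pipeline; the background handling is deliberately crude.

import numpy as np

def spot_centroid(window: np.ndarray) -> tuple[float, float]:
    """Centroid (x, y) in pixels of a small window around one fiber spot."""
    w = window.astype(float) - np.median(window)  # remove local background
    w[w < 0] = 0.0
    ys, xs = np.indices(w.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total
```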

  2. Design, development and verification of the HIFI Alignment Camera System

    NARCIS (Netherlands)

    Boslooper, E.C.; Zwan, B.A. van der; Kruizinga, B.; Lansbergen, R.

    2005-01-01

    This paper presents the TNO share of the development of the HIFI Alignment Camera System (HACS), covering the opto-mechanical and thermal design. The HACS is an Optical Ground Support Equipment (OGSE) that is specifically developed to verify proper alignment of different modules of the HIFI

  3. A smart camera system for fixed facility security surveillance

    Science.gov (United States)

    Love, John; Van Dover, Doug; Law, Scott

    2007-04-01

    In response to a serious homeland security threat exemplified by chemical plants with on-site stores of dangerous substances, rendered vulnerable by their locations on public waterways, we have developed and described a viable approach to persistent optical surveillance for detecting and assessing attacking adversaries sufficiently early to permit probable interdiction by a responding guard force. Last year we outlined the technical challenges and described some of the attributes of a "smart camera system" as a key part of the overall security solution. We described the relative strengths and weaknesses of various sensors as well as the benefits of software systems that add a degree of intelligence to the sensor systems. In this paper we describe and elaborate the actual hardware and software implementation and operating protocols of this smart camera system. The result is a modular, configurable, upgradeable, open-architecture, night-and-day video system that is highly capable today and able to grow to expanded capability in the future.

  4. Dual camera system for acquisition of high resolution images

    Science.gov (United States)

    Papon, Jeremie A.; Broussard, Randy P.; Ives, Robert W.

    2007-02-01

    Video surveillance is ubiquitous in modern society, but surveillance cameras are severely limited in utility by their low resolution. With this in mind, we have developed a system that can autonomously take high resolution still frame images of moving objects. In order to do this, we combine a low resolution video camera and a high resolution still frame camera mounted on a pan/tilt mount. In order to determine what should be photographed (objects of interest), we employ a hierarchical method which first separates foreground from background using a temporal-based median filtering technique. We then use a feed-forward neural network classifier on the foreground regions to determine whether the regions contain the objects of interest. This is done over several frames, and a motion vector is deduced for the object. The pan/tilt mount then focuses the high resolution camera on the next predicted location of the object, and an image is acquired. All components are controlled through a single MATLAB graphical user interface (GUI). The final system we present will be able to detect multiple moving objects simultaneously, track them, and acquire high resolution images of them. Results will demonstrate performance tracking and imaging varying numbers of objects moving at different speeds.
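
    The first stage described in this record, temporal median filtering for foreground separation, can be sketched generically as follows (an illustration, not the authors' MATLAB code; the history length and threshold are placeholders):

```python
# Generic temporal-median background subtraction, the first stage of the
# hierarchical method described above. History length and threshold are
# placeholders.

import numpy as np

class MedianBackground:
    def __init__(self, history: int = 25, threshold: int = 30):
        self.frames = []
        self.history = history
        self.threshold = threshold

    def apply(self, gray: np.ndarray) -> np.ndarray:
        """Return a binary foreground mask for one grayscale frame."""
        self.frames.append(gray.astype(np.int16))
        if len(self.frames) > self.history:
            self.frames.pop(0)
        background = np.median(np.stack(self.frames), axis=0)
        return (np.abs(gray - background) > self.threshold).astype(np.uint8)

# Foreground regions (mask == 1) are then passed to a classifier, as the
# record describes, to decide whether they contain objects of interest.
```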

  5. Mission Report on the Orbiter Camera Payload System (OCPS) Large Format Camera (LFC) and Attitude Reference System (ARS)

    Science.gov (United States)

    Mollberg, Bernard H.; Schardt, Bruton B.

    1988-01-01

    The Orbiter Camera Payload System (OCPS) is an integrated photographic system which is carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereo photography of great geometric fidelity in multiple base-to-height (B/H) ratios. A secondary, supporting system to the LFC is the Attitude Reference System (ARS), which is a dual lens Stellar Camera Array (SCA) and camera support structure. The SCA is a 70-mm film system which is rigidly mounted to the LFC lens support structure and which, through the simultaneous acquisition of two star fields with each earth-viewing LFC frame, makes it possible to determine precisely the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with Shuttle launch conditions and the on-orbit environment. The full-up OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on October 5, 1984, as a major payload aboard mission STS 41-G. This report documents the system design, the ground testing, the flight configuration, and an analysis of the results obtained during the Challenger mission STS 41-G.

  6. System Architecture of the Dark Energy Survey Camera Readout Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Theresa; /FERMILAB; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; /Barcelona, IFAE; Chappa, Steve; /Fermilab; de Vicente, Juan; /Madrid, CIEMAT; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; /Fermilab; Martinez, Gustavo; /Madrid, CIEMAT; Moore, Todd; /Madrid, CIEMAT /Fermilab /Illinois U., Urbana /Fermilab

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4-m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2Kx4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2Kx2K CCDs for guiding, alignment and focus. This paper describes design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  7. System Construction of the Stilbene Compact Neutron Scatter Camera

    Energy Technology Data Exchange (ETDEWEB)

    Goldsmith, John E. M. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Gerling, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brennan, James S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Throckmorton, Daniel J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Helm, Jonathan Ivers [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2016-10-01

    This report documents the construction of a stilbene-crystal-based compact neutron scatter camera. This system is essentially identical to the MINER (Mobile Imager of Neutrons for Emergency Responders) system previously built and deployed under DNN R&D funding, but with the liquid scintillator in the detection cells replaced by stilbene crystals. The availability of these two systems for side-by-side performance comparisons will enable us to unambiguously identify the performance enhancements provided by the stilbene crystals, which have only recently become commercially available in the large size required (3” diameter, 3” deep).

  8. A unified framework for capturing facial images in video surveillance systems using cooperative camera system

    Science.gov (United States)

    Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen

    2008-04-01

    Low-resolution, unsharp facial images are commonly captured from surveillance videos because of long human-camera distances and human movement. Previous works addressed this problem by using an active camera to capture close-up facial images, but without considering human movement and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static camera and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is then employed to approximate the location of a human face and its velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amount of pan, tilt, and zoom, a clear close-up facial image of a moving person can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, the system captures a clear facial image of a walking person on the first attempt in 90% of the test cases.
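
    A minimal sketch of the delay-compensation idea, under our own naming and a constant-velocity assumption (the paper's Human-Camera Synchronization Model is not reproduced here): predict where the face will be once the pan/tilt actuation completes, then convert that point to pan and tilt angles.

    import numpy as np

    def predict_target(position, velocity, mech_delay):
        # Constant-velocity prediction of the face position (meters) in the
        # active camera's frame after the mechanical delay (seconds).
        return position + velocity * mech_delay

    def pan_tilt_angles(target):
        # Convert a 3-D point to pan/tilt angles (radians) for the camera.
        x, y, z = target
        pan = np.arctan2(x, z)
        tilt = np.arctan2(y, np.hypot(x, z))
        return pan, tilt

    face_future = predict_target(np.array([0.4, 0.1, 5.0]),
                                 np.array([1.2, 0.0, 0.0]),
                                 mech_delay=0.35)
    pan, tilt = pan_tilt_angles(face_future)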

  9. SFR test fixture for hemispherical and hyperhemispherical camera systems

    Science.gov (United States)

    Tamkin, John M.

    2017-08-01

    Optical testing of camera systems in volume production environments can require expensive tooling and test fixturing. Wide-field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of their inherent distortion and the difficulty of controlling reflections from front-lit high-resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment, such as 3D printing and an Arduino processor, to control back-lit multi-color (VIS/NIR) targets and sources. Special care with the LED drive electronics is required to accommodate both global and rolling shutter sensors.

  10. Pothole Detection System Using a Black-box Camera.

    Science.gov (United States)

    Jo, Youngtae; Ryu, Seungki

    2015-11-19

    Aging roads and poor road-maintenance systems result in a large number of potholes, whose numbers increase over time. Potholes jeopardize road safety and transportation efficiency, and they are often a contributing factor in car accidents. To address the problems associated with potholes, their locations and sizes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection. Recent automatic detection systems, such as those based on vibration or laser scanning, are insufficient to detect potholes correctly and inexpensively, owing to the unstable detection of vibration-based methods and the high costs of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work with the embedded computing environments of black-box cameras. Experimental results with the proposed system show that potholes can be detected accurately in real time.

  11. Pothole Detection System Using a Black-box Camera

    Directory of Open Access Journals (Sweden)

    Youngtae Jo

    2015-11-01

    Full Text Available Aging roads and poor road-maintenance systems result in a large number of potholes, whose numbers increase over time. Potholes jeopardize road safety and transportation efficiency, and they are often a contributing factor in car accidents. To address the problems associated with potholes, their locations and sizes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection. Recent automatic detection systems, such as those based on vibration or laser scanning, are insufficient to detect potholes correctly and inexpensively, owing to the unstable detection of vibration-based methods and the high costs of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work with the embedded computing environments of black-box cameras. Experimental results with the proposed system show that potholes can be detected accurately in real time.

  12. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the system to construct a robust representation of the environment and to interpret the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems that monitor and make decisions in real time, it must satisfy a set of requirements: time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  13. Development of a stereo camera system for road surface assessment

    Science.gov (United States)

    Su, D.; Nagayama, T.; Irie, M.; Fujino, Y.

    2013-04-01

    In Japan, a large number of road structures built during the period of high economic growth have deteriorated owing to heavy traffic and severe conditions, especially in the metropolitan area. In particular, the poor condition of bridge expansion joints, caused by frequent impacts from passing vehicles, significantly affects vehicle safety. In recent years, stereo vision has been a widely researched and implemented monitoring approach in the object recognition field. This paper introduces the development of a stereo camera system for road surface assessment. In this study, static photos taken by a calibrated stereo camera system are first used to reconstruct the three-dimensional coordinates of targets in the pavement. Subsequently, to align the coordinates obtained from different view meshes, a modified Iterative Closest Point (ICP) method is proposed, which supplies appropriate initial conditions via an image correlation method. Several field tests have been carried out to evaluate the capabilities of this system. After successfully aligning all the measured coordinates, the system can provide not only accurate information on local deficiencies such as patching, cracks or potholes, but also the global fluctuation of the road surface over a long distance.
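
    For orientation only, the following sketch runs standard point-to-point ICP with the Open3D library; it is not the authors' modified method, and the file names, initial transform and correspondence distance are assumptions.

    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("road_view_1.ply")
    target = o3d.io.read_point_cloud("road_view_2.ply")

    # The paper stresses good initial conditions; identity is used here
    # purely as a placeholder.
    init = np.eye(4)
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.01, init,  # 1 cm correspondence distance, assumed
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print("fitness:", result.fitness)
    source.transform(result.transformation)  # align view 1 onto view 2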

  14. Usability of a Wearable Camera System for Dementia Family Caregivers

    Directory of Open Access Journals (Sweden)

    Judith T. Matthews

    2015-01-01

    Full Text Available Health care providers typically rely on family caregivers (CG of persons with dementia (PWD to describe difficult behaviors manifested by their underlying disease. Although invaluable, such reports may be selective or biased during brief medical encounters. Our team explored the usability of a wearable camera system with 9 caregiving dyads (CGs: 3 males, 6 females, 67.00 ± 14.95 years; PWDs: 2 males, 7 females, 80.00 ± 3.81 years, MMSE 17.33 ± 8.86 who recorded 79 salient events over a combined total of 140 hours of data capture, from 3 to 7 days of wear per CG. Prior to using the system, CGs assessed its benefits to be worth the invasion of privacy; post-wear privacy concerns did not differ significantly. CGs rated the system easy to learn to use, although cumbersome and obtrusive. Few negative reactions by PWDs were reported or evident in resulting video. Our findings suggest that CGs can and will wear a camera system to reveal their daily caregiving challenges to health care providers.

  15. Practical assessment of veiling glare in camera lens system

    OpenAIRE

    Ivana Tomić; Igor Karlović; Ivana Jurič

    2014-01-01

    Veiling glare can be defined as unwanted or stray light in an optical system caused by internal reflections between elements of the camera lens. It leads to image fogging and degradation of both image density and contrast, diminishing overall image quality. Each lens is susceptible to veiling glare to some extent - sometimes it is negligible, but in most cases it leads to visible defects in an image. Unlike other flaws and errors, lens flare is not easy to correct. Hence, it is highly...

  16. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Full Text Available Human detection and tracking has been a prominent research area for many scientists around the globe. State-of-the-art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi-camera system can vastly improve the accuracy and confidence of the tracking process. In this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system, examining how camera calibration and pose affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single-camera pose estimators were evaluated to check how well the poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi-camera configuration. Results have shown that single-camera estimators provide high-accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories from each sensor.
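
    As an illustration of the single-camera pose estimation step, the sketch below finds a planar chessboard in an image and recovers the camera pose with OpenCV's solvePnP. The board geometry, square size and intrinsics are invented for the example, not taken from the paper.

    import cv2
    import numpy as np

    pattern = (9, 6)   # inner corners per row/column, assumed
    square = 0.025     # 25 mm squares, assumed

    # 3-D corner coordinates in the board plane (Z = 0).
    obj_pts = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj_pts[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    # Assumed Kinect-like intrinsics; a real system uses calibrated values.
    K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])
    dist = np.zeros(5)

    gray = cv2.imread("kinect_rgb_frame.png", cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)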

  17. Galvanometer control system design of aerial camera motion compensation

    Science.gov (United States)

    Qiao, Mingrui; Cao, Jianzhong; Wang, Huawei; Guo, Yunzeng; Hu, Changchang; Tang, Hong; Niu, Yuefeng

    2015-10-01

    Aerial cameras suffer from image motion during flight. Image motion seriously degrades image quality, blurring edges and causing loss of gray-scale detail. In applications where high quality and high precision are required, image motion compensation (IMC) should therefore be adopted. This paper presents the design of a galvanometer control system for IMC. A voice coil motor is used as the actuator; it has a simple structure, fast dynamic response and high positioning accuracy. Double-loop feedback is used: a PI algorithm with Hall sensors for the current feedback, and a fuzzy-PID algorithm with an optical encoder for the speed feedback. Compared to a conventional PID controller, the simulation results show that the control system has fast response and high control accuracy.
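
    For illustration, a minimal discrete PI current loop of the kind used in the inner feedback path is sketched below; the gains, sample time and voltage limit are invented for the example, and the paper's fuzzy-PID outer speed loop is not reproduced.

    class PIController:
        def __init__(self, kp, ki, dt, limit):
            self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
            self.integral = 0.0

        def step(self, setpoint, measured):
            # Standard PI law with output clamping as crude anti-windup.
            error = setpoint - measured
            self.integral += error * self.dt
            out = self.kp * error + self.ki * self.integral
            return max(-self.limit, min(self.limit, out))

    # 10 kHz current loop: command coil voltage from a Hall-sensed current.
    current_loop = PIController(kp=2.0, ki=400.0, dt=1e-4, limit=12.0)
    voltage = current_loop.step(setpoint=0.50, measured=0.46)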

  18. Design of optical system for binocular fundus camera.

    Science.gov (United States)

    Wu, Jun; Lou, Shiliang; Xiao, Zhitao; Geng, Lei; Zhang, Fang; Wang, Wen; Liu, Mengjia

    2017-12-01

    A non-mydriatic optical system for a binocular fundus camera is designed in this paper. It can capture two images of the same fundus retinal region from different angles at the same time, and can be used to achieve three-dimensional reconstruction of the fundus. It is composed of an imaging system and an illumination system. In the imaging system, the Gullstrand-Le Grand eye model is used to simulate the normal human eye, and a schematic eye model is used to test the influence of ametropia on imaging quality. An annular aperture and a black dot board are added to the illumination system so that it can eliminate stray light produced by corneal reflections and the ophthalmoscopic lens. Simulation results show that the MTF of each field of view at the cut-off frequency of 90 lp/mm is greater than 0.2, the system distortion value is -2.7%, field curvature is less than 0.1 mm, and the radius of the Airy disc is 3.25 µm. This system has a strong ability to correct chromatic aberration and to focus, and can image the human fundus clearly over a diopter range from -10 D to +6 D (1 D = 1 m⁻¹).

  19. Metrology camera system of prime focus spectrograph for Subaru telescope

    Science.gov (United States)

    Wang, Shiang-Yu; Chou, Richard C. Y.; Huang, Pin-Jie; Ling, Hung-Hsu; Karr, Jennifer; Chang, Yin-Chang; Hu, Yen-Sang; Hsu, Shu-Fu; Chen, Hsin-Yo; Gunn, James E.; Reiley, Dan J.; Tamura, Naoyuki; Takato, Naruhisa; Shimono, Atsushi

    2016-08-01

    The Prime Focus Spectrograph (PFS) is a new optical/near-infrared multi-fiber spectrograph designed for the prime focus of the 8.2m Subaru telescope. PFS will cover a 1.3 degree diameter field with 2394 fibers to complement the imaging capabilities of Hyper Suprime-Cam. To retain high throughput, the final positioning accuracy between the fibers and the observing targets of PFS is required to be less than 10 microns. The metrology camera system (MCS) serves as the optical encoder of the fiber motors for the configuring of fibers. MCS provides the fiber positions with less than 5 microns error over the 45 cm focal plane, and this information is fed into the fiber positioner control system for closed-loop control. MCS will be located at the Cassegrain focus of the Subaru telescope in order to cover the whole focal plane with one 50-megapixel Canon CMOS camera. It is a 380 mm Schmidt-type telescope which generates a uniform spot size with a 10 micron FWHM across the field for reasonable sampling of the point spread function. Carbon fiber tubes are used to provide a stable structure over the operating conditions without focus adjustments. The CMOS sensor can be read in 0.8 s to reduce the overhead for the fiber configuration, and the positions of all fibers can be obtained within 0.5 s after the readout of the frame. This enables the overall fiber configuration to be completed in less than 2 minutes. MCS will be installed inside a standard Subaru Cassegrain box. All components that generate heat are located inside a glycol-cooled cabinet to reduce possible image motion due to heat. The optics and camera for MCS have been delivered and tested, and the mechanical parts and supporting structure are ready as of spring 2016. The integration of MCS will start in the summer of 2016. In this report, the performance of the MCS components, the alignment and testing procedure, and the status of the PFS MCS are presented.

  20. Practical assessment of veiling glare in camera lens system

    Directory of Open Access Journals (Sweden)

    Ivana Tomić

    2014-12-01

    Full Text Available Veiling glare can be defined as unwanted or stray light in an optical system caused by internal reflections between elements of the camera lens. It leads to image fogging and degradation of both image density and contrast, diminishing overall image quality. Each lens is susceptible to veiling glare to some extent - sometimes it is negligible, but in most cases it leads to visible defects in an image. Unlike other flaws and errors, lens flare is not easy to correct. Hence, it is highly recommended to prevent it during the capturing phase, if possible. For some applications, it can also be useful to estimate the susceptibility to lens glare, i.e. the degree of glare in the lens system. A few methods are usually used for this type of testing; some are hard to implement and often do not give consistent results. In this paper, we assessed one relatively easy method for the practical evaluation of veiling glare. The method contains three steps: creating an appropriate scene, capturing the target image and analyzing it. To evaluate its applicability, we tested four lenses for the Nikon 700 digital camera. The lenses used had fixed focal lengths of 35 and 85 mm and differed in the coatings of their elements. Furthermore, we evaluated the influence of aperture on the veiling glare value. It was shown that the presented method is not applicable for testing lenses with short focal lengths, and that the new generation of lenses equipped with Nano Crystal coatings is less susceptible to veiling glare. Aperture did not affect the veiling glare value significantly.

  1. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems and improves on real-time motion planning of the camera. Moreover, the recasting of camera constraints into potential fields is visually more accessible to game designers and has the potential to be implemented as a plug-in to 3D level design and editing tools currently available with games.

  2. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration; compared with MATLAB, it provides higher precision and needs no manual intervention, and it can be widely used in various computer vision systems.
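
    For reference, a minimal sketch of what such an OpenCV calibration loop does is shown below in Python (the original system is implemented in C++ under VS2008); the chessboard pattern size and image file names are assumptions.

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner chessboard corners, assumed
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("calib_*.png"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(obj)
            img_points.append(corners)

    # Recover intrinsics K and distortion coefficients from all views.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error (pixels):", rms)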

  3. Improving photometric calibration of meteor video camera systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
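
    In outline, the two corrections enter the photometry as sketched below (our own naming, not the MEO pipeline): raw counts are first linearized with the laboratory-derived response curve, then a zero-point is fitted against synthetic EX-bandpass magnitudes of the reference stars via m = zp - 2.5 log10(flux).

    import numpy as np

    def fit_zero_point(instr_flux, ref_mags_ex):
        # Zero-point per reference star from m = zp - 2.5*log10(flux).
        zp_per_star = ref_mags_ex + 2.5 * np.log10(instr_flux)
        return zp_per_star.mean(), zp_per_star.std()

    # Placeholder linearity correction: identity, i.e. already linear;
    # the real curve comes from the laboratory measurements.
    linearize = lambda counts: counts

    flux = linearize(np.array([1.2e4, 3.4e4, 8.0e3]))
    zp, zp_scatter = fit_zero_point(flux, np.array([5.1, 4.0, 5.6]))
    meteor_mag = zp - 2.5 * np.log10(linearize(2.0e4))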

  4. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.

  5. Tolerance optimization of a mobile phone camera lens system.

    Science.gov (United States)

    Jung, Sangjin; Choi, Dong-Hoon; Choi, Byung-Lyul; Kim, Ju Ho

    2011-08-10

    In the manufacturing process for the lens system of a mobile phone camera, various types of assembly and manufacturing tolerances, such as tilt and decenter, should be appropriately allocated. Because these tolerances affect manufacturing cost and the expected optical performance, it is necessary to choose a systematic design methodology for determining optimal tolerances. In order to determine the tolerances that minimize production cost while satisfying the reliability constraints on important optical performance indices, we propose a tolerance design procedure for a lens system. A tolerance analysis is carried out using Latin hypercube sampling for evaluating the expected optical performance. The tolerance optimization is carried out using a function-based sequential approximate optimization technique that can reduce the computational burden and smooth numerical noise occurring in the optimization process. Using the proposed design approach, the optimal production cost was decreased by 28.3% compared to the initial cost while satisfying all the constraints on the expected optical performance. We believe that the tolerance analysis and design procedure presented in this study can be applied to the tolerance optimization of other systems.
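
    The tolerance analysis step can be pictured with a small Latin hypercube sample over the tolerance space; the sketch below uses SciPy's qmc module, and the merit function, tolerance count and bounds are placeholders rather than the paper's lens model.

    import numpy as np
    from scipy.stats import qmc

    n_tol = 4                           # e.g. two tilt and two decenter tolerances
    bounds = np.full(n_tol, 50e-6)      # symmetric +/-50 um (or urad) limits, assumed

    sampler = qmc.LatinHypercube(d=n_tol, seed=0)
    u = sampler.random(n=1000)          # stratified samples in [0, 1)^d
    perturbations = (2.0 * u - 1.0) * bounds

    def performance_loss(p):
        # Placeholder merit function standing in for ray-traced optical
        # performance (e.g. MTF drop) under a tolerance perturbation p.
        return np.sum((p / bounds) ** 2)

    loss = np.apply_along_axis(performance_loss, 1, perturbations)
    reliability = np.mean(loss < 1.0)   # fraction of builds meeting the spec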

  6. DESIGN AND IMPLEMENTATION OF A NOVEL PORTABLE 360° STEREO CAMERA SYSTEM WITH LOW-COST ACTION CAMERAS

    Directory of Open Access Journals (Sweden)

    D. Holdener

    2017-11-01

    Full Text Available The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed to georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximum deviations of 3 cm for typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.

  7. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    Science.gov (United States)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

    The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed to georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximum deviations of 3 cm for typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.

  8. RAMI analysis for ITER radial X-ray camera system

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Shijun, E-mail: sjqin@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Hu, Liqun; Chen, Kaiyun [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Barnsley, Robin; Sirinelli, Antoine [ITER Organization, Route Vinon sur Verdon, CS 90046, 13067, St. Paul lez Durance, Cedex (France); Song, Yuntao; Lu, Kun; Yao, Damao; Chen, Yebin; Li, Shi; Cao, Hongrui; Yu, Hong; Sheng, Xiuli [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China)

    2016-11-15

    Highlights: • The functional analysis of the ITER RXC system was performed. • A failure modes, effects and criticality analysis of the ITER RXC system was performed. • The reliability and availability of the ITER RXC system and its main functions were calculated. • The ITER RAMI approach was applied to the ITER RXC system for technical risk control in the preliminary design phase. - Abstract: ITER is the first international experimental nuclear fusion device. In the project, the RAMI approach (reliability, availability, maintainability and inspectability) has been adopted for technical risk control, to mitigate possible failures of components in preparation for operation and maintenance. A RAMI analysis of the ITER Radial X-ray Camera (RXC) diagnostic system during the preliminary design phase was required to ensure that the system can measure the X-ray emission and study plasma MHD with high accuracy on the ITER machine. A functional breakdown was prepared in a bottom-up approach, resulting in the system being divided into 3 main functions, 6 intermediate functions and 28 basic functions, which are described using the IDEFØ method. Reliability block diagrams (RBDs) were prepared to calculate the reliability and availability of each function under assumptions about operating conditions and failure data. Initial and expected scenarios were analyzed to define risk-mitigation actions. The initial availability of the RXC system was 92.93%, while after optimization the expected availability was 95.23% over 11,520 h (approx. 16 months), which corresponds to the ITER typical operation cycle. A Failure Modes, Effects and Criticality Analysis (FMECA) was performed on the system's initial risks. Criticality charts highlight the risks of the different failure modes with regard to the probability of their occurrence and their impact on operations. There are 28 risks in the initial state, including 8 major risks; no major risk remains after the risk-mitigation actions are taken into account.

  9. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
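
    As a flavor of the Python control scripts the system relies on (the USGS scripts themselves are not reproduced here), a minimal time-lapse loop for a Raspberry Pi camera might look as follows; the interval, resolution and output path are assumptions, and the GPS time synchronization is omitted.

    import time
    from datetime import datetime, timezone
    from picamera import PiCamera

    camera = PiCamera(resolution=(2592, 1944))  # 5-megapixel sensor, full frame
    INTERVAL_S = 300                            # one image every 5 minutes, assumed

    while True:
        # Timestamp each file in UTC so images sort chronologically.
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
        camera.capture(f"/home/pi/images/{stamp}.jpg")
        time.sleep(INTERVAL_S)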

  10. Single camera photogrammetry system for EEG electrode identification and localization.

    Science.gov (United States)

    Baysal, Uğur; Sengül, Gökhan

    2010-04-01

    In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are implemented simultaneously. A rotating 2 MP digital camera about 20 cm above the subject's head is used, and images are acquired at predefined stop points separated azimuthally by equal angular displacements. In order to realize full automation, the electrodes are labeled with colored circular markers, and an electrode recognition algorithm has been developed. The proposed method has been tested using a plastic head phantom carrying 25 electrode markers. Electrode locations were determined with three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine with an accuracy of about 6.5 µm. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.

  11. Metrology Camera System Using Two-Color Interferometry

    Science.gov (United States)

    Dubovitsky, Serge; Liebe, Carl Christian; Peters, Robert; Lay, Oliver

    2007-01-01

    A metrology system that contains no moving parts simultaneously measures the bearings and ranges of multiple reflective targets in its vicinity, enabling determination of the three-dimensional (3D) positions of the targets with submillimeter accuracy. The system combines a direction-measuring metrology camera and an interferometric range-finding subsystem. Because the system is based partly on a prior instrument denoted the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor, and because of its 3D capability, the system is denoted the MSTAR3D. Developed for use in measuring the shape (for the purpose of compensating for distortion) of large structures like radar antennas, it can also be used to measure positions of multiple targets in the course of conventional terrestrial surveying. One of the targets is a reference target at a known, constant distance from the system. The system comprises a laser for generating local and target beams at a carrier frequency; a frequency-shifting unit to introduce a frequency offset between the target and local beams; a pair of high-speed modulators, with modulation frequencies FL and FM, that modulate the carrier frequency in the local and target beams to produce a series of modulation sidebands; a target beam launcher that illuminates the targets with the target beam; optics and a multi-pixel photodetector; a local beam launcher that launches the local beam toward the multi-pixel photodetector; a mirror that projects to the optics a portion of the target beam reflected from the targets, the optics being configured to focus that portion of the target beam onto the multi-pixel photodetector; and a signal-processing unit connected to the photodetector. The portion of the target beam reflected from the targets produces spots on the multi-pixel photodetector corresponding to the individual targets, and the signal-processing unit extracts the bearing and range of each target from the corresponding detector signals.

  12. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.

  13. TOWARDS THE INFLUENCE OF A CAR WINDSHIELD ON DEPTH CALCULATION WITH A STEREO CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    A. Hanel

    2016-06-01

    Full Text Available Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important for improving road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target, and the relative orientation of the cameras is estimated in a standard bundle adjustment procedure. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths are derived from the relative orientation in both cases and compared, and distance values are calculated and analyzed. It can be shown that the difference between the base length values in the two cases is highly significant, with resulting effects on the distance calculation of up to half a meter.
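
    A back-of-the-envelope check illustrates why a small baseline bias matters: for a pinhole stereo rig, depth follows Z = f·B/d, so a relative baseline error maps directly into a relative depth error. The numbers below are assumed for illustration, not taken from the paper.

    # Depth from disparity: Z = f * B / d (pinhole stereo model).
    f_px = 1200.0        # focal length in pixels, assumed
    B_true = 0.30        # true baseline without windshield (m), assumed
    B_biased = 0.2985    # baseline estimated behind the windshield (0.5% off)

    disparity = f_px * B_true / 50.0          # target truly 50 m away
    Z_est = f_px * B_biased / disparity       # depth computed with biased B
    print(f"estimated depth: {Z_est:.2f} m")  # ~49.75 m, i.e. ~0.25 m error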

  14. Towards the Influence of a CAR Windshield on Depth Calculation with a Stereo Camera System

    Science.gov (United States)

    Hanel, A.; Hoegner, L.; Stilla, U.

    2016-06-01

    Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important for improving road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target, and the relative orientation of the cameras is estimated in a standard bundle adjustment procedure. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths are derived from the relative orientation in both cases and compared, and distance values are calculated and analyzed. It can be shown that the difference between the base length values in the two cases is highly significant, with resulting effects on the distance calculation of up to half a meter.

  15. A low-cost dual-camera imaging system for aerial applicators

    Science.gov (United States)

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) imagery...

  16. DC drive system for cine/pulse cameras

    Science.gov (United States)

    Gerlach, R. H.; Sharpsteen, J. T.; Solheim, C. D.; Stoap, L. J.

    1977-01-01

    Camera-drive functions are separated mechanically into two groups, driven by two separate dc brushless motors. The first motor, a 90° stepper, drives the rotating shutter; the second, an electronically commutated motor, drives the claw and film transport. The shutter is made of one piece but has two openings, for slow and fast exposures.

  17. A luminescence imaging system based on a CCD camera

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.; Markey, B.G.

    1997-01-01

    The system described here has a maximum spatial resolution of 17 µm, though this may be varied under software control to alter the signal-to-noise ratio. The camera has been mounted on a Risø automated TL/OSL reader, and both the reader and the CCD are under computer control. In the near-u.v. and blue part...

  18. INCREMENTAL REAL-TIME BUNDLE ADJUSTMENT FOR MULTI-CAMERA SYSTEMS WITH POINTS AT INFINITY

    Directory of Open Access Journals (Sweden)

    J. Schneider

    2013-08-01

    Full Text Available This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras, by taking the rigid transformation between the cameras into account, (2) omnidirectional cameras, as it can handle arbitrary bundles of rays, and (3) scene points at infinity, which improve the estimation of the camera orientation, as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment w.r.t. time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.

  19. Decision Support System to Choose Digital Single Lens Camera with Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Tri Pina Putri

    2016-11-01

    Full Text Available One of the technologies evolving today is the Digital Single Lens Reflex (DSLR) camera. The number of products makes it difficult for users to choose an appropriate camera based on their criteria. Users may turn to several sources to help them choose, such as magazines, the internet, and other media. This paper discusses a web-based decision support system for choosing cameras using the SAW (Simple Additive Weighting) method, in order to make the decision process more effective and efficient. The system is expected to give recommendations for the camera that best fits the user's needs and criteria, based on cost, resolution, features, ISO, and sensor. The system was implemented using PHP and MySQL. Based on a questionnaire distributed to 20 respondents, 60% of respondents agree that this decision support system can help users choose an appropriate DSLR camera in accordance with their needs, 60% of respondents agree that this decision support system makes choosing a DSLR camera more effective, and 75% of respondents agree that this system is more efficient. In addition, 60.55% of respondents agree that this system has met the 5 Es Usability Framework.
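
    The SAW step itself is small enough to sketch in full; in the version below the candidate cameras, criteria values and weights are invented, and cost is treated as the only criterion where smaller is better.

    import numpy as np

    # Rows: candidate cameras; columns: cost, resolution, features, ISO, sensor.
    X = np.array([
        [550.0, 24.2, 7.0, 25600.0, 8.0],
        [700.0, 26.1, 9.0, 51200.0, 9.0],
        [430.0, 18.0, 5.0, 12800.0, 7.0],
    ])
    weights = np.array([0.30, 0.25, 0.15, 0.15, 0.15])   # must sum to 1
    cost_like = np.array([True, False, False, False, False])

    # SAW normalization: benefit criteria as x/max, cost criteria as min/x.
    norm = np.where(cost_like, X.min(axis=0) / X, X / X.max(axis=0))
    scores = norm @ weights
    print("ranking (best first):", np.argsort(scores)[::-1])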

  20. A Video Camera Road Sign System of the Early Warning from Collision with the Wild Animals

    Directory of Open Access Journals (Sweden)

    Matuska Slavomir

    2016-05-01

    Full Text Available This paper proposes a camera-based early-warning road sign system that can help avoid vehicle collisions with wild animals. The system consists of camera modules placed along a chosen route and intelligent road signs. Each camera module consists of a camera device and a computing unit. The computing unit captures the video stream from the camera and runs object detection algorithms; machine learning algorithms are then used to classify the moving objects. If a moving object is classified as an animal that could endanger vehicle safety, a warning is displayed on the intelligent road signs.

  1. Evaluation of thermal cameras in quality systems according to ISO 9000 or EN 45000 standards

    Science.gov (United States)

    Chrzanowski, Krzysztof

    2001-03-01

    According to the international standards ISO 9001-9004 and EN 45001-45003, industrial plants and accreditation laboratories that have implemented quality systems under these standards are required to evaluate measurement uncertainty. Manufacturers of thermal cameras do not offer any data that would enable estimation of the measurement uncertainty of these imagers. The difficulty of determining measurement uncertainty is therefore an important limitation on the use of thermal cameras in industrial plants and cooperating accreditation laboratories that have implemented these quality systems. This paper presents a set of parameters for characterizing commercial thermal cameras, a measuring setup, some results of testing these cameras, a mathematical model of uncertainty, and software that enables quick calculation of the uncertainty of temperature measurements made with thermal cameras.

  2. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

    The multi-spectral CCD camera system, one of the earth-observing instruments on the HY-1 satellite to be launched in 2001, was developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). From a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coastal zone dynamic mapping and ocean water color monitoring, covering pollution of offshore and coastal zones, plant cover, water color, ice, underwater terrain, suspended sediment, mudflats, soil and water vapor. The multi-spectral camera system is composed of four monocolor CCD cameras, which are line-array-based, 'push-broom' scanning cameras, each responsible for one of four spectral bands. The camera system adopts view-field registration; that is, each camera scans the same region at the same moment. Each camera contains optics, a focal plane assembly, electrical circuits, a mounting structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) offset of the central wavelength is better than 5 nm; (2) degree of polarization is less than 0.5%; (3) signal-to-noise ratio is about 1000; (4) dynamic range is better than 2000:1; (5) registration precision is better than 0.3 pixel; (6) quantization depth is 12 bits.

  3. Performance analysis of 3-D shape measurement algorithm with a short baseline projector-camera system

    OpenAIRE

    Liu, Jianyang; Li, Youfu

    2014-01-01

    A number of approaches to 3-D shape measurement based on structured light have been well studied in recent decades. A common way to model such a system is to use a binocular-stereovision-like model, in which the projector is treated as a camera, thus unifying the projector-camera system with the well-established traditional binocular stereovision framework. After calibrating the projector and camera, 3-D shape information is obtained by conventional triangulation. However, in such a...

  4. Smart Camera System-on-Chip Architecture for Real-Time Brush Based Interactive Painting Systems

    OpenAIRE

    Claesen, Luc; Vandoren, Peter; VAN LAERHOVEN, Tom; Motten, Andy; Di Fiore, Fabian; Van Reeth, Frank; Liao, Jing; Yu, Jinhui

    2012-01-01

    Interactive virtual paint systems are very useful for editing all kinds of graphics artwork. Because strokes are tracked digitally, interactive editing operations such as save, redo, and resize are possible. The structure of the generated artwork can be used for animation in cartoons. A novel System-on-Chip smart camera architecture is presented that can be used for tracking infrared-fiber-based brushes as well as real brushes in real time. A dedicated SoC hardware implementation ...

  5. Miniature magnetically anchored and controlled camera system for trocar-less laparoscopy.

    Science.gov (United States)

    Dong, Ding-Hui; Zhu, Hao-Yang; Luo, Yu; Zhang, Hong-Ke; Xiang, Jun-Xi; Xue, Fei; Wu, Rong-Qian; Lv, Yi

    2017-03-28

    To design a miniature magnetically anchored and controlled camera system that reduces the number of trocars required for laparoscopy. The system consists of a miniature magnetically anchored camera with a 30° downward angle, an external magnetic anchoring unit, and a vision output device. The camera weighs 12 g, measures Φ10.5 mm × 55 mm, and contains two magnets, a vision module, a light source, and a metal hexagonal nut. To test the prototype, the camera was inserted through a 12-mm conventional trocar in an ex vivo real-liver laparoscopic training system, and a trocar-less laparoscopic cholecystectomy was performed 6 times using one 12-mm and one 5-mm conventional trocar. In addition, the same procedure was performed in four canine models. Both procedures were successfully performed using only two conventional laparoscopic trocars. The cholecystectomy was completed without any major complication in 42 min (38-45 min) in vitro and in 50 min (45-53 min) in the animal model. The camera was anchored and controlled by an external unit magnetically coupled across the abdominal wall, and it generated excellent images with no instrument collisions. The camera system we designed provides excellent optics and can be easily maneuvered, and the number of conventional trocars is reduced without adding technical difficulty.

  6. Auto-Guiding System for CQUEAN (Camera for QUasars in EArly uNiverse)

    OpenAIRE

    Kim, Eunbin; Park, Won-Kee; Jeong, Hyeonju; Kim, Jinyoung; Kuehne, John; Kim, Dong Han; Kim, Han Geun; Odoms, Peter S.; Chang, Seunghyuk; Im, Myungshin; Pak, Soojong

    2011-01-01

    To perform imaging observations of optically red objects such as high-redshift quasars and brown dwarfs, the Center for the Exploration of the Origin of the Universe (CEOU) recently developed an optical CCD camera, the Camera for QUasars in EArly uNiverse (CQUEAN), which is sensitive at 0.7-1.1 µm. To enable observations with long exposures, we developed an auto-guiding system for CQUEAN. This system consists of an off-axis mirror, a baffle, a CCD camera, a motor and a differential decelerator. To ...

  7. Flexible decoupled camera and projector fringe projection system using inertial sensors

    Science.gov (United States)

    Stavroulakis, Petros; Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard

    2017-10-01

    Measurement of objects with complex geometry and many self-occlusions is increasingly important in many fields, including additive manufacturing. In a fringe projection system, the camera and the projector cannot move independently with respect to each other, which limits the ability of the system to overcome object self-occlusions. We demonstrate a fringe projection setup where the camera can move independently with respect to the projector, thus minimizing the effects of self-occlusion. The angular motion of the camera is tracked and recalibrated using an on-board inertial angular sensor, which can additionally perform automated point cloud registration.

  8. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    Science.gov (United States)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  9. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect on the uniform light from an integrating sphere. The linear range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and used to blend the images, so that the panoramas reflect the scene luminance more faithfully; this compensates for the limitations of stitching approaches that rely on smoothing alone. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.

  10. Region-wide search and pursuit system using networked intelligent cameras

    Science.gov (United States)

    Komiya, Kazumi; Irisawa, Kouji

    2001-11-01

    This paper reports a study on a new, region-wide search and pursuit system for missing objects such as stolen cars and wandering people. Using image matching based on object properties such as color and shape, each intelligent camera can search for the object; the camera then transmits the properties to the next camera so that the object is pursued successively. Experimental results show that the system can identify 2 target cars among 40 cars under changing environmental conditions, demonstrating that the proposed system accomplishes its fundamental function. Finally, topics for further development are identified, such as more accurate shape-extraction processing, camera architectures for high-speed processing, and the use of multimedia attributes such as sound.

  11. Study on the diagnostic system of scoliosis by using infrared camera.

    Science.gov (United States)

    Jeong, Jin-hyoung; Park, Eun-jeong; Cho, Chang-ok; Kim, Yoon-jeong; Lee, Sang-sik

    2015-01-01

    In this study, to avoid the radiation exposure involved in conventional scoliosis diagnosis, we developed a system that can diagnose scoliosis using an infrared camera and optical markers. The system recognizes optical markers attached along the spinal curvature with the infrared camera and measures the angle between two optical markers. For the angle measurement, we used the Cobb's Angle method, which is standard in the diagnosis of spinal scoliosis. We also developed software to display the diagnostic output on screen from the infrared camera. The software, implemented in LabVIEW, consists of a camera output unit, an angle measurement unit, and a Cobb's Angle measurement unit. In the future, the diagnostic system is expected to be applied to other orthopedic disorders, such as kyphosis and hallux valgus.

  12. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bit per channel, with an exposure time variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors; alternatively, it can be used as a digital back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software which is needed to bring the high quality of the camera to the user.

  13. Advanced camera image data acquisition system for Pi-of-the-Sky

    Science.gov (United States)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested and commissioned. The CCD chip is positioned differently than in the first generation, resulting in better performance of the astronomical video data acquisition system. The camera is built around a low-noise, 4-Mpixel CCD circuit by STA. Thanks to open software solutions and an FPGA circuit (Altera Cyclone EP1C6), the electronic circuit of the camera is highly parameterized, reconfigurable and modular in comparison with the first-generation solution, and new algorithms were implemented in the FPGA chip. The camera system uses the following electronic circuits: a CY7C68013a microcontroller (8051 core) by Cypress, an AD9826 image processor by Analog Devices, an RTL8169s Gigabit Ethernet interface by Realtek, AT45DB642 memory by Atmel, and an AT91SAM9260 CPU (ARM926EJ-S core) by Atmel. Software for the camera, its remote control, and image data acquisition is based entirely on open source platforms, using the ISI image interface, the V4L2 API, the AMBA AHB data bus, and the INDI protocol. The camera will be replicated in 20 pieces and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  14. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high-resolution optical satellites (HROS). Using double cameras attached to a high-rigidity support, together with push broom imaging, is one method of enlarging the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; each single image strip obtained by each TDI-CCD detector can then be re-projected to the virtual detector of the big virtual camera coordinate system, using forward-projection and backward-projection, to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinates on the big virtual detector image plane. The paper thus uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for subsequent post-processing. Experiments verified that the proposed method achieves seamless mosaicking while maintaining geometric accuracy.
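
    As a toy illustration of the re-projection step (our sketch, not the GF2 processing chain): each strip is mapped onto a common virtual detector plane and the warped strips are composited. A 3x3 homography is used here as a stand-in for the rigorous forward/backward projection through the imaging model.

```python
import numpy as np
import cv2

def stitch_on_virtual_detector(strips, homographies, out_size):
    """Place each TDI-CCD strip onto a common virtual detector plane.
    Each 3x3 homography stands in for the rigorous forward/backward
    projection chain of the BVC method (a simplification)."""
    w, h = out_size
    canvas = np.zeros((h, w), dtype=np.uint8)
    for img, H in zip(strips, homographies):
        warped = cv2.warpPerspective(img, H, out_size)
        np.maximum(canvas, warped, out=canvas)  # naive overlap handling
    return canvas
```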

  15. Report on the Radiation Effects Testing of the Infrared and Optical Transition Radiation Camera Systems

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-04-20

    Presented in this report are the results of tests performed at Argonne National Laboratory, in collaboration with Los Alamos National Laboratory, to assess the reliability of the beam monitoring diagnostics critical to the 99Mo production facility. The main components of the beam monitoring systems are two cameras that will be exposed to radiation during accelerator operation. The purpose of the tests is to assess the reliability of the cameras and related optical components when exposed to operational radiation levels. Both X-ray and neutron radiation could potentially damage camera electronics as well as optical components such as lenses and windows. This report covers the testing of component reliability under X-ray radiation. With the information from this study we provide recommendations for implementing protective measures for the camera systems in order to minimize the occurrence of radiation-induced failure within a ten-month production run cycle.

  16. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    Full Text Available This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR) images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files on CompactFlash cards. A second-order transformation was used to align the color and NIR images and achieve subpixel alignment in the four-band images. The imaging system was tested under various flight and land-cover conditions, and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft), and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
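
    A plausible form of the second-order transformation is a quadratic polynomial in image coordinates, fitted to matched control points by least squares; the following minimal numpy sketch (our reconstruction, with hypothetical variable names) shows the idea:

```python
import numpy as np

def fit_second_order_transform(src_pts, dst_pts):
    """Fit a 2nd-order polynomial mapping from NIR to color image
    coordinates using N >= 6 matched control points (N x 2 arrays)."""
    x, y = src_pts[:, 0], src_pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    # Least-squares solve of A @ coeffs ~= dst_pts, one column per axis.
    coeffs, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return coeffs  # shape (6, 2)

def apply_transform(coeffs, pts):
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coeffs
```

    The NIR band would then be resampled through this mapping before being stacked with the three color bands.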

  17. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  18. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  19. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  20. Compton camera study for high efficiency SPECT and benchmark with Anger system

    Science.gov (United States)

    Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.

    2017-12-01

    Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. Clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system's geometrical features. In order to overcome these limitations, the application of Compton cameras to SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE, the GEANT4 Application for Tomographic Emission, version 7.1, and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at the higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources of increasing primary gamma energy, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for the SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detectors' performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system and with an iterative List-Mode Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency larger by more than an order of magnitude with respect to the Anger camera, together with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application for SPECT.

  1. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    Science.gov (United States)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system with high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout (NDRO) of the photon-generated charge. Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR(TM) algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC connected to the camera via Gigabit Ethernet.

  2. Single chip system LSI for digital still camera signal processing; single-chip signal processing for video-capable digital still cameras

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, T.; Okada, S.; Kobayashi, A.; Komura, Y.; Kiyozaki, K. [Sanyo Electric Co. Ltd., Osaka (Japan)

    1998-11-01

    This paper summarizes the development of a single-chip system LSI for real-time digital still camera (DSC) signal processing that can also handle video. In developing the LSI, the DSC was treated as a system device, and the target was a system LSI capable of processing all of the signals from the DSC. For real-time signal processing, handling of video and of still images with short shutter lag was realized by mounting a dedicated M-JPEG core and by performing JPEG compression and decompression in hardware at high speed. Higher-speed writing and reading of the image buffer memories, to reduce shutter lag, and higher-speed transfer of image data were realized by adopting a dual-path architecture inside the LSI. Other functions, performed in software on the built-in RISC core, include recording and replaying of voice, preparation of AVI files to replay images on home TV sets, and a window function for synthesizing still images in the DSC. 7 refs., 8 figs., 2 tabs.

  3. The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, Gordon B.; Langton, Brian J.; /SLAC; Little, William A.; /MMR-Technologies, Mountain View, CA; Powers, Jacob R; Schindler, Rafe H.; /SLAC; Spektor, Sam; /MMR-Technologies, Mountain View, CA

    2014-05-28

    The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 gigapixels) and its closely coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies were considered for this telescope/camera environment, and MMR-Technology's mixed refrigerant technology was chosen; a collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants, is described. Both the advantages and the problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported, and the current and future stages of the development program are described. (auth)

  4. Bring your own camera to the trap: An inexpensive, versatile, and portable triggering system tested on wild hummingbirds.

    Science.gov (United States)

    Rico-Guevara, Alejandro; Mickley, James

    2017-07-01

    The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high-speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low-cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high-speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost-prohibitive, allowing camera trap use in more research avenues and by more researchers.

  5. An evaluation metric for multiple camera tracking systems: the i-LIDS 5th scenario

    Science.gov (United States)

    Nilski, Adam

    2008-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's standard for Video Based Detection Systems (VBDS). The Home Office Scientific Development Branch (HOSDB), in partnership with the Centre for the Protection of National Infrastructure (CPNI), has now developed a fifth i-LIDS scenario: Multiple Camera Tracking (MCT). The imagery contains various staged events of people walking through the camera views, and a bounding-box ground truth is provided with the imagery. HOSDB has developed a metric for the evaluation of systems, based on the precision and recall of system output compared to the ground truth.
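
    For concreteness, a simplified reading of such a precision/recall evaluation (our sketch; the official i-LIDS metric definition may differ in its matching rules):

```python
def precision_recall(matched_detections, n_system, n_ground_truth):
    """matched_detections: number of system detections that overlap a
    ground-truth bounding box (e.g., IoU >= 0.5 -- our assumption)."""
    precision = matched_detections / n_system if n_system else 0.0
    recall = matched_detections / n_ground_truth if n_ground_truth else 0.0
    return precision, recall

# Example: 80 matches out of 100 detections against 120 annotated targets.
print(precision_recall(80, 100, 120))  # (0.8, 0.666...)
```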

  6. Quality Analysis of Massive High-Definition Video Streaming in Two-Tiered Embedded Camera-Sensing Systems

    OpenAIRE

    Joongheon Kim; Eun-Seok Ryu

    2014-01-01

    This paper presents the quality analysis results of high-definition video streaming in two-tiered camera sensor network applications. In the camera-sensing system, multiple cameras sense visual scenes in their target fields and transmit the video streams via IEEE 802.15.3c multigigabit wireless links. However, the wireless transmission introduces interferences to the other links. This paper analyzes the capacity degradation due to the interference impacts from the camera-sensing nodes to the ...

  7. NIR spectrophotometric system based on a conventional CCD camera

    Science.gov (United States)

    Vilaseca, Meritxell; Pujol, Jaume; Arjona, Montserrat

    2003-05-01

    The near infrared (NIR) spectral region is useful in many applications, including agriculture, the food and chemical industries, and textile and medical applications. In this region, spectral reflectance measurements are currently made with conventional spectrophotometers, which are expensive since they use a diffraction grating to obtain monochromatic light. In this work, we present a multispectral-imaging-based technique for obtaining the reflectance spectra of samples in the NIR region (800–1000 nm), using a small number of measurements taken through different channels of a conventional CCD camera. We used methods based on Wiener estimation, non-linear methods and principal component analysis (PCA) to reconstruct the spectral reflectance. We also analyzed, by numerical simulation, the number and shape of the filters that need to be used in order to obtain good spectral reconstructions. We obtained the reflectance spectra of a set of 30 spectral curves using a minimum of 2 and a maximum of 6 filters under the influence of two different halogen lamps with color temperatures Tc1 = 2852 K and Tc2 = 3371 K. The results show that, using between three and five filters with a large spectral bandwidth (FWHM = 60 nm), the reconstructed spectral reflectance of the samples was very similar to the original spectrum. The small spectral reconstruction errors show the potential of this method for reconstructing spectral reflectances in the NIR range.
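
    The Wiener estimation mentioned above amounts to a linear reconstruction operator learned from a training set of known spectra and the corresponding camera responses; a minimal numpy sketch in the correlation-matrix form (noise term omitted, names our own):

```python
import numpy as np

def wiener_reconstruct(train_reflectances, train_responses, responses):
    """Reconstruct spectral reflectance from a few camera channels.
    train_reflectances: (N, W) spectra; train_responses: (N, C) channel
    responses; responses: (M, C) new measurements to reconstruct."""
    R, C = train_reflectances, train_responses
    K_rc = R.T @ C / len(R)          # cross-correlation, (W, C)
    K_cc = C.T @ C / len(R)          # response autocorrelation, (C, C)
    W_matrix = K_rc @ np.linalg.pinv(K_cc)
    return responses @ W_matrix.T    # (M, W) reconstructed spectra
```

    With the filter counts studied in the paper, train_responses would have between 2 and 6 columns.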

  8. Whole-field thickness strain measurement using multiple camera digital image correlation system

    Science.gov (United States)

    Li, Junrui; Xie, Xin; Yang, Guobiao; Zhang, Boyang; Siebert, Thorsten; Yang, Lianxiang.

    2017-03-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used in industry, especially for strain measurement. A traditional 3D-DIC system can accurately obtain the whole-field 3D deformation, but only on a single surface, and thus lacks information in the depth direction; the strain in the thickness direction cannot be measured. In recent years, multi-camera DIC systems have become a new research topic, offering far more measurement possibilities than the conventional 3D-DIC system. In this paper, a multi-camera DIC system for measuring whole-field thickness strain is introduced in detail. Four cameras are used: two are placed at the front side of the object, and the other two at the back side. Each pair of cameras constitutes a sub stereo-vision system and measures the whole-field 3D deformation on one side of the object. A special calibration plate is used to calibrate the system, and the information from the two subsystems is linked by the calibration result. Whole-field thickness strain can then be measured using the information obtained from both sides of the object. Additionally, the major and minor strains on the object surface are obtained simultaneously, and a whole-field quasi-3D strain history is acquired. The theoretical derivation for the system, the experimental process, and an application determining the thinning strain limit from the obtained whole-field thickness strain history are presented in detail.

  9. Design of comprehensive general maintenance service system of aerial reconnaissance camera

    Directory of Open Access Journals (Sweden)

    Li Xu

    2016-01-01

    Full Text Available Aiming at the lack of dedicated support equipment for airborne reconnaissance cameras and the poor commonality between depot- and field-level maintenance and between camera models, a design scheme for a comprehensive general maintenance service system based on the PC-104 bus architecture and an ARM wireless test module is proposed, following automatic test equipment (ATE) design practice. The scheme uses embedded technology to design the system, which meets the system requirements. By switching test channels by category, the hardware resources are reasonably extended and general support of the various types of aerial reconnaissance cameras is realized. Using the concept of wireless testing, the test interface is extended to provide comprehensive support of aerial reconnaissance cameras in the field. Application has proved that the service system works stably, has good generality and practicability, and has broad application prospects.

  10. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    Energy Technology Data Exchange (ETDEWEB)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  11. Influence of Stereoscopic Camera System Alignment Error on the Accuracy of 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    L. Bolecek

    2015-06-01

    Full Text Available The article deals with the influence of inaccurate camera rotation in camera system alignment on 3D reconstruction accuracy. The accuracy of all three spatial coordinates is analyzed for two alignments (setups) of 3D cameras. In the first setup, a 3D system with parallel optical axes of the cameras is analyzed; in this stereoscopic setup, the deterministic relations are derived using trigonometry and basic stereoscopic formulas. The second alignment is a generalized setup with cameras in arbitrary positions. The analysis of the general setup is closely related to the influence of errors in the point correspondences, so the relation between correspondence errors and the reconstructed spatial position of a point was investigated. Because this issue is very complex, a worst-case analysis was executed using the Monte Carlo method, with the aim of estimating the critical situations and the possible extent of these errors. The analysis of the generalized system and the relations derived for the normal system represent a significant improvement in the accuracy analysis of the spatial coordinates. A practical experiment was executed which confirmed the proposed relations.
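
    For the parallel-axis setup, depth follows Z = f·B/d (focal length f in pixels, baseline B, disparity d), so a correspondence error dd propagates to first order as |dZ| ≈ Z²/(f·B)·|dd|. Below is a tiny Monte Carlo sketch in the spirit of the paper's worst-case analysis, with illustrative numbers of our own choosing:

```python
import numpy as np

# Monte Carlo sketch of depth error for the parallel (canonical) stereo
# setup, where Z = f * B / d; the values below are illustrative assumptions.
rng = np.random.default_rng(0)
f, B, Z = 1200.0, 0.20, 5.0        # focal length [px], baseline [m], depth [m]
d = f * B / Z                      # ideal disparity: 48 px
d_noisy = d + rng.uniform(-0.5, 0.5, 100_000)   # matching error up to 0.5 px
Z_noisy = f * B / d_noisy
print("worst-case depth error [m]:", np.abs(Z_noisy - Z).max())
# First-order prediction: |dZ| ~= Z**2 / (f * B) * |dd| = 25/240 * 0.5 ~= 0.052 m
```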

  12. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  13. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.

  14. Product Plan of New Generation System Camera "OLYMPUS PEN E-P1"

    Science.gov (United States)

    Ogawa, Haruo

    "OLYMPUS PEN E-P1", which is new generation system camera, is the first product of Olympus which is new standard "Micro Four-thirds System" for high-resolution mirror-less cameras. It continues good sales by the concept of "small and stylish design, easy operation and SLR image quality" since release on July 3, 2009. On the other hand, the half-size film camera "OLYMPUS PEN" was popular by the concept "small and stylish design and original mechanism" since the first product in 1959 and recorded sale number more than 17 million with 17 models. By the 50th anniversary topic and emotional value of the Olympus pen, Olympus pen E-P1 became big sales. I would like to explain the way of thinking of the product plan that included not only the simple functional value but also emotional value on planning the first product of "Micro Four-thirds System".

  15. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2011-01-01

    Full Text Available A new system to very precisely couple radiation of a moving source into a Fourier Transform Infrared (FTIR) spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera and a homemade program to process the camera data and to control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10 arc s for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system makes it possible to document residual pointing errors and to point onto the solar disk center even in the case of variable intensity distributions across the source due to cirrus or haze.

  16. An inexpensive compact automatic camera system for wildlife research

    Science.gov (United States)

    William R. Danielson; Richard M. DeGraaf; Todd K. Fuller

    1996-01-01

    This paper describes the design, conversion, and deployment of a reliable, compact, automatic multiple-exposure photographic system that was used to photograph nest predation events. This system may be the most versatile yet described in the literature because of its simplicity, portability, and dependability. The system was very reliable because it was designed around...

  17. Novel intraoperative near-infrared fluorescence camera system for optical image-guided cancer surgery.

    Science.gov (United States)

    Mieog, J Sven D; Vahrmeijer, Alexander L; Hutteman, Merlijn; van der Vorst, Joost R; Drijfhout van Hooff, Maurits; Dijkstra, Jouke; Kuppen, Peter J K; Keijzer, Rob; Kaijzel, Eric L; Que, Ivo; van de Velde, Cornelis J H; Löwik, Clemens W G M

    2010-08-01

    Current methods of intraoperative tumor margin detection using palpation and visual inspection frequently result in incomplete resections, which is an important problem in surgical oncology. Therefore, real-time visualization of cancer cells is needed to increase the number of patients with a complete tumor resection. For this purpose, near-infrared fluorescence (NIRF) imaging is a promising technique. Here we describe a novel, handheld, intraoperative NIRF camera system equipped with a 690 nm laser; we validated its utility in detecting and guiding resection of cancer tissues in two syngeneic rat models. The camera system was calibrated using an activated cathepsin-sensing probe (ProSense, VisEn Medical, Woburn, MA). Fluorescence intensity was strongly correlated with increased activated-probe concentration (R² = 0.997). During the intraoperative experiments, a camera exposure time of 10 ms was used, which provided the optimal tumor to background ratio. Primary mammary tumors (n = 20 tumors) were successfully resected under direct fluorescence guidance. The tumor to background ratio was 2.34 using ProSense680 at 10 ms camera exposure time. The background fluorescence of abdominal organs, in particular liver and kidney, was high, thereby limiting the ability to detect peritoneal metastases with cathepsin-sensing probes in these regions. In conclusion, we demonstrated the technical performance of this new camera system and its intraoperative utility in guiding resection of tumors.

  18. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of photogrammetry systems for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can acquire multi-angle head images simultaneously from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
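
    A short geometric aside (our inference, not stated in the abstract): two planar mirrors meeting at an angle θ act as a kaleidoscope producing roughly 360°/θ views of an object placed between them, so

        360° / 51.4° ≈ 7.0

    which is consistent with the seven simultaneous views of the head reported above.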

  19. A Rotating Phantom: Evaluation Of Hard And Software For Gated Gamma Camera Systems In Nuclear Medicine.

    Science.gov (United States)

    Vanregemorter, J.; Deconinck, F.; Bossuyt, A.

    1986-06-01

    In this paper we describe a rotating dynamic phantom which allows quality control of hardware and software for gated gamma camera systems in nuclear medicine. The phantom not only allows simulation of a gated heart study but also testing of the response of the whole system to time frequencies.

  20. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish monitoring system consists of two parts: a waterproof box housing the computer with its charger, and the camera itself. We used a highly sensitive Sony analogue camera, whose advantage is very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting; for night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We chose a tablet PC because it is small, cheap, relatively fast and has low power consumption. On the computer we run software with advanced motion detection capabilities, so even small fish can be detected. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, to Google Drive. The fish migration monitoring system has turned out to work very well: from the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of these photographs has already been prepared, estimating the fish species and how frequently they pass through the fish pass.

  1. The ELETTRA Streak Camera System Set-Up and First Results

    CERN Document Server

    Ferianis, M

    2000-01-01

    At ELETTRA, a Streak Camera system has been installed and tested. The bunch length is a significant machine parameter to measure, as it allows a direct derivation of fundamental machine characteristics, such as the broadband impedance. At ELETTRA, light from a storage ring dipole is delivered through an optical system to an optical laboratory where it can be observed and analysed. The Streak Camera is equipped with different timebases, allowing both single-sweep and dual-sweep operation modes, including the Synchroscan mode. The Synchroscan frequency of 250 MHz, half of the ELETTRA RF frequency, allows the acquisition of consecutive bunches 2 ns apart. To fully exploit the performance of the Streak Camera, an optical path including a fast opto-electronic shutter has been arranged; in this way, the optical power deposited on the photo-cathode is reduced for the different ELETTRA filling patterns.
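
    A quick check of the timing figures (our arithmetic, not from the abstract): with the ELETTRA RF frequency at 500 MHz, the bunch spacing is

        1 / (500 MHz) = 2 ns

    so a Synchroscan timebase locked at 250 MHz (half the RF) performs one full back-and-forth sweep every 4 ns, and consecutive bunches 2 ns apart land on successive sweep passes.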

  2. A Distributed Wireless Camera System for the Management of Parking Spaces

    Directory of Open Access Journals (Sweden)

    Stanislav Vítek

    2017-12-01

    Full Text Available The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine the occupancy of a parking space based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. The reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.

  3. A Distributed Wireless Camera System for the Management of Parking Spaces.

    Science.gov (United States)

    Vítek, Stanislav; Melničuk, Petr

    2017-12-28

    The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine the occupancy of a parking space based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. The reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.
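
    A minimal sketch of the HOG-plus-SVM classification described above (our illustration, not the authors' implementation), with the vehicle-orientation supporting feature appended to the descriptor; the patch size, HOG cell sizes and variable names are assumptions:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def space_descriptor(gray_patch, orientation_deg):
    """HOG descriptor of one parking-space patch, with the expected
    vehicle orientation appended as the supporting feature."""
    patch = resize(gray_patch, (64, 64))
    desc = hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
    return np.append(desc, orientation_deg / 180.0)

# Hypothetical training data: patches, per-space orientations, 0/1 labels.
# X = np.stack([space_descriptor(p, a) for p, a in zip(patches, angles)])
# clf = LinearSVC().fit(X, labels)
# occupied = clf.predict([space_descriptor(new_patch, new_angle)])[0]
```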

  4. Dental plaque assessment lifelogging system using commercial camera for oral healthcare.

    Science.gov (United States)

    Kasai, Mai; Iijima, Yuka; Takemura, Hiroshi; Mizoguchi, Hiroshi; Ohshima, Tomoko; Satomi, Naho

    2016-08-01

    We present a system for estimating the dental plaque adhesion area from commercial camera images, for oral healthcare through management of the intraoral environment. In recent years, several studies have reported on the relationship between general diseases and periodontal disease; such studies note that normalization of the intraoral environment by tooth brushing is the most important treatment in preventive dentistry. However, evaluating an individual's tooth-brushing skill is difficult. Some devices for automatically measuring the quantity of dental plaque have already been proposed as teaching tools for tooth brushing, but they have certain limitations, such as large size, the need to fix the head position, and limited applicability in daily life. In this study, we propose a method for calculating the dental plaque adhesion area using a commercial camera and an intraoral camera. We also propose an evaluation method for the quantity of adhered dental plaque to replace the Plaque Control Record (PCR). The relationship between the PCR of the front teeth and that of all teeth was investigated using the proposed method. The experimental results show that the proposed method can estimate the PCR of all teeth from information about the front teeth. The method does not depend on a particular camera system and is applicable with many types of cameras, including smartphones; it should therefore be a useful tool for routine, sustainable management of the intraoral environment in daily life.
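
    The Plaque Control Record referenced above is essentially the percentage of examined tooth surfaces on which plaque is detected (O'Leary's index). A trivial sketch, where the per-surface detections are assumed to come from the image-based plaque-area estimates:

```python
def plaque_control_record(plaque_surfaces: int, examined_surfaces: int) -> float:
    """PCR as the percentage of examined tooth surfaces with detected
    plaque. Per-surface detection is assumed to come from the image-based
    plaque-area estimate (our simplification of the paper's method)."""
    return 100.0 * plaque_surfaces / examined_surfaces

# Example: plaque found on 26 of 112 examined surfaces -> PCR ~= 23.2%
print(plaque_control_record(26, 112))
```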

  5. Performance of Dual Depth Camera Motion Capture System for Athletes’ Biomechanics Analysis

    Directory of Open Access Journals (Sweden)

    An Wee Chang

    2017-01-01

    Full Text Available Motion capture systems have recently been brought to light and have drawn much attention in many fields of research, especially in biomechanics. Marker-based motion capture systems have been the main tool for capturing motion for years, but they are very pricey, lab-based and beyond the reach of many researchers, so they cannot be applied ubiquitously. The game, however, has changed with the introduction of depth camera technology, a markerless yet affordable motion capture approach. By means of this technology, motion capture has become more portable and does not require substantial time to set up. The limited coverage of a single depth camera is widely acknowledged, but the performance of a dual depth camera system is still in doubt: it is expected to improve coverage, but it raises bigger issues of data merging and accuracy. This work appraises the accuracy of a dual depth camera motion capture system, specifically for athletes' running biomechanics analysis. Kinect sensors were selected to capture the motion of an athlete simultaneously in three dimensions, and the recorded data were fused into a single analysable data set. Running was chosen as the biomechanical motion and interpreted in the form of angle-time, angle-angle and continuous relative phase plots. The linear and angular kinematics were analysed and represented graphically. Quantitative interpretation of the results allowed deep insight into the movement and joint coordination of the athlete. The root-mean-square errors of the Kinect sensor measurement against exact measurement data and of the rigid transformation were 0.0045 and 0.0077291, respectively. The velocity and acceleration of the subject were determined to be 3.3479 ms-1 and -4.1444 ms-2. The results showed that the dual Kinect camera motion capture system is feasible for athletes' biomechanics analysis.

  6. Pothole Detection System Using a Black-box Camera

    OpenAIRE

    Youngtae Jo; Seungki Ryu

    2015-01-01

    Aging roads and poor road-maintenance systems result in a large number of potholes, whose numbers increase over time. Potholes jeopardize road safety and transportation efficiency, and they are often a contributing factor in car accidents. To address the problems associated with potholes, their locations and sizes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a specific pothole-detection system that ...

  7. Fall Prevention Shoes Using Camera-Based Line-Laser Obstacle Detection System

    OpenAIRE

    Tzung-Han Lin; Chi-Yun Yang; Wen-Pin Shih

    2017-01-01

    Fall prevention is an important issue particularly for the elderly. This paper proposes a camera-based line-laser obstacle detection system to prevent falls in the indoor environment. When obstacles are detected, the system will emit alarm messages to catch the attention of the user. Because the elderly spend a lot of their time at home, the proposed line-laser obstacle detection system is designed mainly for indoor applications. Our obstacle detection system casts a laser line, which passes ...

  8. Development of a stereo-optical camera system for monitoring tidal turbines

    Science.gov (United States)

    Joslin, James; Polagye, Brian; Parker-Stetter, Sandra

    2014-01-01

    The development, implementation, and testing of a stereo-optical imaging system suitable for environmental monitoring of a tidal turbine is described. This monitoring system is intended to provide real-time stereographic imagery in the near-field region to observe interactions between marine animals and the turbine. A method for optimizing the stereo camera arrangement is given, along with a quantitative assessment of the system's ability to measure and track targets in three-dimensional space. Optical camera effectiveness is qualitatively evaluated under realistic field conditions to determine the range within which detection, discrimination, and classification of targets is possible. These field evaluations inform optimal system placement relative to the turbine rotor. Tests suggest that the stereographic cameras will likely be able to discriminate and classify targets at ranges up to 3.5 m and detect targets at ranges up to, and potentially beyond, 4.5 m. Future system testing will include the use of an imaging sonar ("acoustical camera") to evaluate behavioral disturbances associated with artificial lighting.

  9. The Color Splitting System for TV Cameras - XYZ Prism

    Directory of Open Access Journals (Sweden)

    E. Kostal

    2001-09-01

    Full Text Available One of the dominant factors determining the quality of color image reproduction is the first operation in the TV chain - scanning. To this day, color splitting systems working in the RGB colorimetric system are used almost exclusively. The existence of negative parts of the color matching functions r(λ), g(λ), b(λ) complicates the optical separation of the partial R, G, B pictures in the classic scanning system, leading to distortion in the reproduction of color images. However, specific technical and scientific applications in which color carries a substantial part of the information (space research, medicine) demand high fidelity of color reproduction. This article presents the results of the design of a color splitting system working in the XYZ colorimetric system (hereafter, the XYZ prism). The way to obtain the theoretical spectral reflectances of the partial filters of the XYZ prism is briefly described. These filters are then approximated by real optical interference filters and the geometry of the XYZ prism is established. Finally, the results of a colorimetric distortion test of the proposed scanning system are stated.

  10. A Novel Camera Calibration Algorithm as Part of an HCI System: Experimental Procedure and Results

    Directory of Open Access Journals (Sweden)

    Sauer Kristal

    2006-02-01

    Full Text Available Camera calibration is an initial step employed in many computer vision applications for the estimation of camera parameters. Along with images of an arbitrary scene, these parameters allow for inference of the scene's metric information, which is a primary reason for camera calibration's significance to computer vision. In this paper, we present a novel approach to solving the camera calibration problem. The method was developed as part of a Human Computer Interaction (HCI) system for the NASA Virtual GloveBox (VGX) project. Our algorithm is based on the geometric properties of perspective projections and provides a closed-form solution for the camera parameters. Its accuracy is evaluated in the context of the NASA VGX, and the results indicate that our algorithm achieves accuracy similar to other calibration methods that are characterized by greater complexity and computational cost. Because of its reliability and wide variety of potential applications, we are confident that our calibration algorithm will be of interest to many.
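
    The paper's contribution is its own closed-form solution; as a concrete point of reference, the sketch below shows a widely used alternative, checkerboard calibration via OpenCV (Zhang's method). File names and board dimensions are hypothetical:

```python
import numpy as np
import cv2

# Standard checkerboard calibration via OpenCV (Zhang's method) -- shown
# as a common alternative, not the closed-form algorithm of the paper.
pattern = (9, 6)                                   # inner corners (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["view0.png", "view1.png", "view2.png"]:  # hypothetical images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("intrinsic matrix:\n", K)
```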

  11. Ultrahigh-definition color video camera system with 4K-scanning lines

    Science.gov (United States)

    Mitani, Kohji; Sugawara, Masayuki; Shimamoto, Hiroshi; Yamashita, Takayuki; Okano, Fumio

    2003-05-01

    An experimental ultrahigh-definition color video camera system with 7680(H) × 4320(V) pixels has been developed using four 8-million-pixel CCDs. The 8-million-pixel CCD, with a progressive scanning rate of 60 frames per second, has 4046(H) × 2048(V) effective imaging pixels, each 8.4 μm square. We applied the four-imager pickup method to increase the camera's resolution: four CCDs are attached to a special color-separation prism, two for the green image and the other two for red and blue. The spatial sampling pattern of these CCDs relative to the optical image is equivalent to that of a 32-million-pixel sensor with a Bayer-pattern color filter. The prototype camera attains a limiting resolution of more than 2700 TV lines both horizontally and vertically, which is higher than that of a single 8-million-pixel CCD. The sensitivity of the camera is 2000 lux at F2.8, with a dark-noise level of approximately 50 dB in the HDTV format. Its other specifications are a dynamic range of 200%, a power consumption of about 600 W and a weight, with lens, of 76 kg.

  12. Single camera system for multi-wavelength fluorescent imaging in the heart.

    Science.gov (United States)

    Yamanaka, Takeshi; Arafune, Tatsuhiko; Shibata, Nitaro; Honjo, Haruo; Kamiya, Kaichiro; Kodama, Itsuo; Sakuma, Ichiro

    2012-01-01

    Optical mapping has been a powerful method to measure cardiac electrophysiological phenomena such as membrane potential (V(m)), intracellular calcium (Ca(2+)), and other electrophysiological parameters. To measure two parameters simultaneously, a dual mapping system using two cameras is often used; however, no method exists to measure three or more parameters. To exploit the full potential of fluorescence imaging, an innovative method to measure multiple (more than three) parameters is needed. In this study, we present a new optical mapping system which records multiple parameters using a single camera. Our system consists of one camera, custom-made optical lens units, and a custom-made filter wheel. The optical lens units are designed to focus the fluorescence light at the filter position and form an image on the camera's sensor. To obtain optical signals of high quality, the efficiency of light collection was carefully considered in designing the optical system. The developed optical system has an object-space numerical aperture (NA) of 0.1 and an image-space NA of 0.23. The filter wheel was rotated by a motor, which allows filter switching corresponding to the needed fluorescence wavelength. The camera exposure and filter switching were synchronized by a phase-locked loop, which allows this system to record multiple fluorescent signals frame by frame alternately. To validate the performance of this system, we performed experiments to observe V(m) and Ca(2+) dynamics simultaneously (frame rate: 125 fps) with a Langendorff-perfused rabbit heart. First, we applied basic stimuli to the heart base (cycle length: 500 ms) and observed a planar wave. The waveforms of V(m) and Ca(2+) show the same upstroke synchronized with the pacing cycle length. In addition, we recorded V(m) and Ca(2+) signals during ventricular fibrillation induced by burst pacing. These experiments demonstrate the efficacy and applicability of our method for cardiac electrophysiological research.
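
    Because the filter wheel and the exposure are phase-locked, the recorded movie interleaves wavelengths frame by frame, and demultiplexing reduces to strided indexing. A minimal sketch for the two-channel case described above (frame ordering is an assumption):

        # Demultiplex a single-camera movie whose filter wheel alternates
        # two wavelengths frame by frame (125 fps raw -> 62.5 fps per channel).
        import numpy as np

        frames = np.random.rand(250, 128, 128)   # stand-in for the raw movie

        vm_movie = frames[0::2]                  # even frames -> V(m) dye
        ca_movie = frames[1::2]                  # odd frames  -> Ca(2+) dye

        print(vm_movie.shape, ca_movie.shape)    # (125, 128, 128) twice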

  13. Volumetric Diffuse Optical Tomography for Small Animals Using a CCD-Camera-Based Imaging System

    Directory of Open Access Journals (Sweden)

    Zi-Jing Lin

    2012-01-01

    Full Text Available We report the feasibility of three-dimensional (3D) volumetric diffuse optical tomography for small animal imaging by using a CCD-camera-based imaging system with a newly developed depth compensation algorithm (DCA). Our computer simulations and laboratory phantom studies have demonstrated that the combination of a CCD camera and DCA can significantly improve the accuracy of depth localization and lead to reconstruction of 3D volumetric images. This approach may be of great interest for noninvasive 3D localization of an anomaly hidden in tissue, such as a tumor or a stroke lesion, in preclinical small animal models.

  14. Digital Charge Coupled Device (CCD) Camera System Architecture

    Science.gov (United States)

    Babey, S. K.; Anger, C. D.; Green, B. D.

    1987-03-01

    We propose a modeling system for generic objects in order to recognize different objects from the same category with only one generic model. The representation consists of a prototype, represented by parts and their configuration. Parts are modeled by superquadric volumetric primitives which are combined via Boolean operations to form objects. Variations between objects within a category are described by allowable changes in structure and shape deformations of prototypical parts. Each prototypical part and relation has a set of associated features that can be recognized in the images. These features are used for selecting models from the model data base. The selected hypothetical models are then verified on the geometric level by deforming the prototype in allowable ways to match the data. We base our design of the modeling system upon the current psychological theories of categorization and of human visual perception.

  15. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    OpenAIRE

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing exp...

  16. Radiometric calibration of wide-field camera system with an application in astronomy

    Science.gov (United States)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    Camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of a CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). We therefore propose an optimization of selected methods for use in astronomical imaging applications. The results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
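
    One of the classic CRF estimation algorithms such surveys cover (Debevec and Malik) is exposed directly by OpenCV, so a baseline radiance-map reconstruction can be sketched in a few lines. File names and exposure times below are placeholders, and this is the generic multimedia-oriented pipeline, not the astronomical optimization proposed in the paper.

        # Debevec CRF estimation + HDR radiance-map recovery with OpenCV.
        import cv2
        import numpy as np

        files = ["exp_0.png", "exp_1.png", "exp_2.png"]       # hypothetical frames
        times = np.array([1/30, 1/4, 2.0], dtype=np.float32)  # exposure times, s

        imgs = [cv2.imread(f) for f in files]

        calibrate = cv2.createCalibrateDebevec()
        response = calibrate.process(imgs, times)             # estimated CRF

        merge = cv2.createMergeDebevec()
        hdr = merge.process(imgs, times, response)            # float32 radiance map
        cv2.imwrite("radiance.hdr", hdr)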

  17. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect object occlusions, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  18. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Full Text Available Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  19. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
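
    The static calibration chamber step mentioned in both records usually amounts to fitting the Stern-Volmer relation I_ref/I = A + B·(P/P_ref) at known pressures and then inverting it pixel-wise. A minimal sketch with synthetic data (the coefficients and set points are illustrative, not the paper's values):

        # Stern-Volmer PSP calibration: fit A, B from chamber data, then
        # invert to map an intensity-ratio image to pressure.
        import numpy as np

        p_ratio = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.4])   # P / P_ref set points
        i_ratio = 0.15 + 0.85 * p_ratio                      # measured I_ref / I (mock)

        B, A = np.polyfit(p_ratio, i_ratio, 1)               # slope B, intercept A
        print(f"A={A:.3f}, B={B:.3f}")

        def pressure(i_ratio_img, p_ref=101325.0):
            # Invert the linear fit for every pixel of an intensity-ratio image.
            return p_ref * (i_ratio_img - A) / B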

  20. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    Science.gov (United States)

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence. PMID:22319297
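
    The three-step greedy iteration described above is, structurally, a block coordinate descent over the segmentation boundaries, the motion parameters and the depth. The toy sketch below shows only that structure on a separable quadratic objective; the paper's real objective couples the three blocks through the multi-camera image data, so the per-block updates here are placeholders.

        # Block coordinate descent skeleton: initialize, then repeat the
        # three per-block updates until the objective stops decreasing.
        import numpy as np

        rng = np.random.default_rng(0)
        boundary = rng.normal(size=4)   # stand-ins for the three variable groups
        motion = rng.normal(size=3)
        depth = rng.normal(size=5)

        def objective(b, m, d):         # toy stand-in for the paper's functional
            return np.sum(b ** 2) + np.sum((m - 1) ** 2) + np.sum((d + 2) ** 2)

        for it in range(50):
            boundary *= 0.5                      # step 1: update boundaries
            motion += 0.5 * (1 - motion)         # step 2: update motion parameters
            depth += 0.5 * (-2 - depth)          # step 3: update depth
            if objective(boundary, motion, depth) < 1e-10:
                break
        print("converged after", it + 1, "iterations")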

  1. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    Science.gov (United States)

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As the crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions in the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronism and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor is established and a sensor prototype is designed. The influences of the structural parameters on the field of view and the measurement accuracy are also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in a measurement application. The results prove the feasibility of the sensor and exhibit considerable accuracy in 3D coordinate reconstruction.

  2. A fast 3D reconstruction system with a low-cost camera accessory.

    Science.gov (United States)

    Zhang, Yiwei; Gibson, Graham M; Hay, Rebecca; Bowman, Richard W; Padgett, Miles J; Edgar, Matthew P

    2015-06-09

    Photometric stereo is a three-dimensional (3D) imaging technique that uses multiple 2D images obtained from a fixed camera perspective with different illumination directions. Compared to other 3D imaging methods, such as geometry modeling and 3D scanning, it comes with a number of advantages, such as a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB-programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity, and results are presented showing a typical height error of <3 mm for a 50 mm sized object.
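
    The reconstruction routine behind a four-LED accessory like this is the classic Lambertian photometric-stereo step: with intensity I = ρ(L·n) and four known light directions, albedo-scaled normals are over-determined and solved per pixel by least squares. A minimal sketch, with assumed light directions and random stand-in images:

        # Core photometric-stereo solve: normals and albedo from 4 images.
        import numpy as np

        L = np.array([[ 0.5,  0.0, 0.87],    # assumed unit light directions,
                      [-0.5,  0.0, 0.87],    # one per LED around the lens
                      [ 0.0,  0.5, 0.87],
                      [ 0.0, -0.5, 0.87]])

        h, w = 64, 64
        I = np.random.rand(4, h, w)          # stand-in for the four captures

        G, *_ = np.linalg.lstsq(L, I.reshape(4, -1), rcond=None)  # 3 x (h*w)
        albedo = np.linalg.norm(G, axis=0)
        normals = (G / np.maximum(albedo, 1e-9)).reshape(3, h, w)
        # Integrating the normal field (e.g. Frankot-Chellappa) yields height.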

  3. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    NARCIS (Netherlands)

    Simi, M.; Tolou, N.; Valdastri, P.; Herder, J.L.; Menciassi, A.; Dario, P.

    2012-01-01

    A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The

  4. Adaptive Neural-Sliding Mode Control of Active Suspension System for Camera Stabilization

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-01-01

    Full Text Available The camera always suffers from image instability on a moving vehicle due to unintentional vibrations caused by road roughness. This paper presents a novel adaptive neural network based sliding mode control strategy to stabilize the image-captured area of the camera. The purpose is to suppress the vertical displacement of the sprung mass with the application of an active suspension system. Since the active suspension system has nonlinear and time-varying characteristics, an adaptive neural network (ANN) is proposed to make the controller robust against systematic uncertainties, which releases the model-based requirement of sliding mode control; the weighting matrix is adjusted online according to a Lyapunov function. The control system consists of two loops: the outer loop is a position controller designed with a sliding mode strategy, while the PID controller in the inner loop tracks the desired force. Closed-loop stability and asymptotic convergence performance can be guaranteed on the basis of Lyapunov stability theory. Finally, the simulation results show that the employed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.

  5. Results of On-Orbit Testing of an Extra-Vehicular Infrared Camera Inspection System

    Science.gov (United States)

    Howell, Patricia A.; Cramer, K. Elliott

    2007-01-01

    This paper will discuss an infrared camera inspection system that has been developed to allow astronauts to demonstrate the ability to inspect reinforced carbon-carbon (RCC) components on the space shuttle as part of extra-vehicular activities (EVA) while in orbit. Presented will be the performance of the EVA camera system coupled with solar heating for inspection of damaged RCC specimens and NDE standards. The data presented were acquired during space shuttle flights STS-121 and STS-115 as well as during a staged EVA from the ISS. The EVA camera system was able to detect flat-bottom holes as small as 2.54 cm in diameter with 25% material loss. Results obtained are shown to be comparable to ground-based thermal inspections performed in the laboratory using the same camera and simulated solar heating. Data on both the time history of the specimen temperature and the ability of the inspection system to image defects due to impact will likewise be presented.

  6. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera always suffers from image instability on a moving vehicle due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach combined with linear quadratic regulator (LQR) control for a quarter-car active suspension system to stabilize the image-captured area of the camera. An active suspension system provides extra force through the actuator, which allows it to suppress vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function neural network is proposed to construct the map between the state error and the compensation component, which can correct the optimal state-feedback control law. The weight matrix of the radial basis function neural network is adaptively tuned online. Then, closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
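
    The RBF compensation idea in these two records can be pictured on a toy 1-DOF "sprung mass": a nominal state-feedback law is corrected by an RBF network whose weights adapt online from a Lyapunov-motivated law. Everything below (plant, gains, basis centres, adaptation gain) is illustrative, not the papers' suspension model or tuning.

        # Toy adaptive RBF compensation on a 1-DOF plant with an
        # unmodelled force; explicit Euler integration.
        import numpy as np

        centres = np.linspace(-0.2, 0.2, 9)      # Gaussian basis centres
        W = np.zeros(9)                          # adaptive weights
        K = np.array([40.0, 12.0])               # nominal state-feedback gains
        gamma, dt = 20.0, 1e-3

        def phi(q):                              # radial basis vector
            return np.exp(-((q - centres) ** 2) / 0.01)

        x = np.array([0.1, 0.0])                 # [displacement, velocity]
        for _ in range(10000):
            f_unknown = 10.0 * np.sin(3.0 * x[0])    # unmodelled suspension force
            s = 5.0 * x[0] + x[1]                    # filtered tracking error
            u = -K @ x - W @ phi(x[0])               # nominal law + NN compensation
            acc = -15.0 * x[0] - 2.0 * x[1] + f_unknown + u
            x = x + dt * np.array([x[1], acc])
            W = W + dt * gamma * phi(x[0]) * s       # online weight adaptation

        print("residual displacement:", abs(x[0]))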

  7. The use of 16 mm movie cameras for evaluation of the Space Shuttle remote manipulator system

    Science.gov (United States)

    van Wijk, M. C.; Kratky, V.

    Six 16 mm movie cameras, installed in the payload bay of the Space Shuttle 'Columbia', are used to monitor the performance of the remote manipulator system during several flight missions. Calibration procedures carried out in the laboratory and on board the Space Shuttle are described. The accuracy of the photogrammetrically compiled information and initial results are discussed.

  8. Video camera system for locating bullet holes in targets at a ballistics tunnel

    Science.gov (United States)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  9. Design of comprehensive general maintenance service system of aerial reconnaissance camera

    OpenAIRE

    Li Xu; Yu Xiang Bin; Zhan Ye; Zang Yan

    2016-01-01

    Aiming at the lack of support equipment for airborne reconnaissance cameras and the differences between internal and external field use and between models, a design scheme for a comprehensive general-purpose maintenance system based on a PC-104 bus architecture and an ARM wireless test module is proposed, following the ATE design approach. The scheme uses embedded technology to design the system so that it meets the system requirements. By using the technique of classified switching, the hardware resources...

  10. Hybrid Compton camera/coded aperture imaging system

    Science.gov (United States)

    Mihailescu, Lucian [Livermore, CA; Vetter, Kai M [Alameda, CA

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  11. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed in the building for remote operators. ► Fiducial markers glued or painted on the cask and plug remote handling system. ► Augmented reality content on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide feedback for the control system and for human supervision. This paper proposes a localization system that uses the video streams captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streams and the libraries of the localization system. The proposed localization system was tested in a mock-up scenario with a 1:25 scale model of the divertor level of the Tokamak building.
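
    The single-camera building block of such a marker-based localizer is a standard perspective-n-point solve: known marker positions on the vehicle plus their detected pixel coordinates yield the pose in the camera frame. A minimal sketch with placeholder marker geometry, intrinsics and detections (a real system would fuse several cameras and track over time):

        # Pose of a cask-like body from fiducial markers seen by one
        # calibrated camera (OpenCV solvePnP). All numbers are placeholders.
        import cv2
        import numpy as np

        marker_xyz = np.array([[0, 0, 0], [1.2, 0, 0],
                               [1.2, 0, 2.5], [0, 0, 2.5]],
                              dtype=np.float32)     # markers on the body, metres
        marker_uv = np.array([[410, 300], [620, 305],
                              [615, 120], [405, 118]],
                             dtype=np.float32)      # detected pixel positions

        K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])  # intrinsics
        dist = np.zeros(5)

        ok, rvec, tvec = cv2.solvePnP(marker_xyz, marker_uv, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        print("position in camera frame:", tvec.ravel())
        print("orientation (rotation matrix):\n", R)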

  12. Accuracy and repeatability of joint angles measured using a single camera markerless motion capture system.

    Science.gov (United States)

    Schmitz, Anne; Ye, Mao; Shapiro, Robert; Yang, Ruigang; Noehren, Brian

    2014-01-22

    Markerless motion capture systems have been developed in an effort to evaluate human movement in a natural setting. However, the accuracy and reliability of these systems remain understudied. Therefore, the goals of this study were to quantify the accuracy and repeatability of joint angles measured using a single-camera markerless motion capture system and to compare the markerless system's performance with that of a marker-based system. A jig was placed in multiple static postures with marker trajectories collected using a ten-camera motion analysis system. Depth and color image data were simultaneously collected from a single Microsoft Kinect camera, which was subsequently used to calculate virtual marker trajectories. A digital inclinometer provided a measure of ground truth for sagittal and frontal plane joint angles. Joint angles were calculated from the marker data of both motion capture systems using successive body-fixed rotations. The sagittal and frontal plane joint angles calculated from the marker-based and markerless systems agreed with the inclinometer measurements, demonstrating the potential of a single-camera markerless motion capture system to accurately measure lower extremity kinematics and providing a first step in using this technology to discern clinically relevant differences in the joint kinematics of patient populations. © 2013 Published by Elsevier Ltd.

  13. Optical system design of multi-spectral and large format color CCD aerial photogrammetric camera

    Science.gov (United States)

    Qian, Yixian; Sun, Tianxiang; Gao, Xiaodong; Liang, Wei

    2007-12-01

    Multi-spectral capability and high spatial resolution have always been crucial problems in the optical design of aerial photogrammetric cameras. It is difficult to obtain an outstanding optical system with a high modulation transfer function (MTF) over a wide band. At the same time, to acquire high-quality images, the chromatic distortion of the optical system must be controlled below 0.5 pixels, which is troublesome given the wide field and multiple spectral bands. In this paper, the MTF and the band coverage of the system are analyzed. A Russar-type photogrammetric objective is chosen as the basic optical structure, and a novel optical system is presented to solve the problem. The new photogrammetric optical system, which consists of a panchromatic optical system and a chromatic optical system, is designed. The panchromatic optical system, which produces the panchromatic image, is made up of a 9k × 9k large-format CCD and a high-accuracy photographic objective lens; its focal length is 69.83 mm, the field angle is 60° × 60°, the CCD pixel size is 8.75 μm × 8.75 μm, the spectral range is from 0.43 μm to 0.74 μm, the modulation transfer function is above 0.4 over the whole field at a spatial frequency of 60 lp/mm, and the distortion is less than 0.007%. In the chromatic optical system, three 2k × 2k CCD arrays are each paired with one of three identical photographic objectives; the high-resolution chromatic image is acquired by synthesizing the red, green and blue image data delivered by the three CCD sensors. The objectives of the chromatic system have a focal length of 24.83 mm and share the same spectral range of 0.39 μm to 0.74 μm; the difference is that they are coated with different films on their protective glass. The pixel count is 2048 × 2048, and the MTF exceeds 0.4 over the full field at a spatial frequency of 30 lp/mm. The advantages of the digital aerial photogrammetric camera in comparison with the traditional film camera are described. It is considered that the two development trends of digital aerial photogrammetric cameras are high-spectral resolution and

  14. Design of video surveillance and tracking system based on attitude and heading reference system and PTZ camera

    Science.gov (United States)

    Yang, Jian; Xie, Xiaofang; Wang, Yan

    2017-04-01

    Based on the AHRS (Attitude and Heading Reference System) and a PTZ (Pan/Tilt/Zoom) camera, we designed a video monitoring and tracking system. The overall structure of the system and the software design are given. Key technologies such as serial port communication and head attitude tracking are introduced, and the code for the key parts is given.

  15. UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER

    Directory of Open Access Journals (Sweden)

    M. Hillemann

    2017-08-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveying. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which is an extension of the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step that ensures an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8° and 6.4 mm.

  16. Infrared Camera

    Science.gov (United States)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays in infrared photodetectors known as quantum well infrared photo detectors (QWIPS). QWIPS were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  17. An ebCMOS camera system for marine bioluminescence observation: The LuSEApher prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dominjon, A., E-mail: a.dominjon@ipnl.in2p3.fr [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Ageron, M. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Billault, M.; Brunner, J. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Calabria, P. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Chabanat, E. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Chaize, D.; Doan, Q.T.; Guerin, C.; Houles, J.; Vagneron, L. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France)

    2012-12-11

    The ebCMOS camera, called LuSEApher, is a marine bioluminescence recorder adapted to extremely low light levels. This prototype is based on the skeleton of the LUSIPHER camera system originally developed for fluorescence imaging. It has been installed at 2500 m depth off the Mediterranean shore on the site of the ANTARES neutrino telescope. The LuSEApher camera is mounted on the Instrumented Interface Module connected to the ANTARES network for environmental science purposes (European Seas Observatory Network). The LuSEApher is a self-triggered photodetection system with photon counting ability. A description of the device is given and its performance, such as single photon reconstruction, noise characteristics and trigger strategy, is presented. The first recorded movies of bioluminescence are analyzed. To our knowledge, such events have never been recorded with this sensitivity and at such a frame rate. We believe that this camera concept could open a new window on bioluminescence studies in the deep sea.

  18. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    Science.gov (United States)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-Francois; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-01-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.

  19. A Positron CT Camera System Using Multiwire Proportional Chambers as Detectors

    Science.gov (United States)

    1989-05-18

    Beijing) Abstract: This article reports on a positron computerized tomography camera system using multiwire proportional chambers (MWPC) as detectors... This system is composed of two high-density MWPC gamma-ray detectors, an electronic readout system and a computer for data processing. Three... proportional chamber (MWPC) PECT is directly used in the field of solid body investigation in physics to measure Fermi surfaces as well as to determine the...

  20. OBLIQUE MULTI-CAMERA SYSTEMS – ORIENTATION AND DENSE MATCHING ISSUES

    Directory of Open Access Journals (Sweden)

    E. Rupnik

    2014-03-01

    Full Text Available The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing the use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of the actual oblique commercial systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  1. A modified digital slit lamp camera system for transillumination photography of intraocular tumours.

    Science.gov (United States)

    Krohn, Jørgen; Kjersem, Bård

    2012-04-01

    To describe a new technique for transillumination photography of uveal melanoma and other intraocular tumours based on a simple modification of a standard digital slit lamp camera system. Transillumination imaging was performed with a digital slit lamp camera (Photo-Slit Lamp BX 900; Haag-Streit, Koeniz, Switzerland) modified by releasing the distal end of the background illumination fibre cable from its holder. The patient's eye was held open, and the head was positioned on the head and chin rest of the slit lamp. Transillumination was achieved by gently pressing the tip of the light fibre cable against the globe. The camera was then fired and the flash delivered through the cable while synchronising with the camera shutter. This technique was applied in five patients with ciliary body or anterior choroidal tumours. Photographs were of good diagnostic quality, making it possible to outline the tumour borders and evaluate any ciliary body involvement. No patient experienced discomfort or negative side effects. We recommend this technique in all cases where transillumination and photographic documentation of intraocular tumours are considered important.

  2. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Tian, Jinshou [Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-11-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control the time delay, electric focusing, image gain adjustment, and switching of the sweep voltage, and can acquire environmental parameters. The system consists of 16 A/D converters, 16 D/A converters, a 32-channel general-purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multiple outputs and a single-mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using a graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desired data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and a dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of multi-channel lasers on the Inertial Confinement Fusion Facility.

  3. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and feature only the crux, or most important part, of the surgery, leaving out other crucial parts including the opening, approach, and closing of the surgical site. In addition, many other procedures, including complex spine, trauma, and intensive care unit procedures, are rarely recorded at all. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system used to obtain stereoscopic 3D recordings of these seldom-recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera in the operating room are reviewed in detail. Over the past several years, we have recorded over 50 cranial and spinal surgeries in stereoscopic 3D and created a library for educational purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset that supplements 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  4. The Calibration of High-Speed Camera Imaging System for ELMs Observation on EAST Tokamak

    Science.gov (United States)

    Fu, Chao; Zhong, Fangchuan; Hu, Liqun; Yang, Jianhua; Yang, Zhendong; Gan, Kaifu; Zhang, Bin; East Team

    2016-09-01

    A tangential fast visible camera has been set up in the EAST tokamak for the study of edge MHD instabilities such as ELMs. To determine 3D information from the CCD images, Tsai's two-stage technique was utilized to calibrate the high-speed camera imaging system for ELM studies. By using tiles of the passive stabilizers in the tokamak device as the calibration pattern, the transformation parameters from a 3D world coordinate system to a 2D image coordinate system were obtained, including the rotation matrix, the translation vector, the focal length and the lens distortion. The calibration errors were estimated, and the results indicate the reliability of the method used for the camera imaging system. Through the calibration, information about ELM filaments, such as positions and velocities, was obtained from images of H-mode CCD videos. Supported by the National Natural Science Foundation of China (No. 11275047) and the National Magnetic Confinement Fusion Science Program of China (No. 2013GB102000).
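
    The calibration error estimate in a Tsai-style setup comes from forward-projecting known world points (here, tile corners) through the recovered parameters and comparing with their image detections. A minimal sketch of that projection, with placeholder parameters (not the EAST values), and with the radial term applied in the forward direction for simplicity:

        # Tsai-style forward projection: world point -> pixel, for checking
        # a calibration against detected tile corners.
        import numpy as np

        R = np.eye(3)                          # rotation matrix from calibration
        t = np.array([0.1, -0.2, 3.0])         # translation, metres
        f, kappa = 0.017, -0.05                # focal length (m), radial term
        sx, cx, cy, pix = 1.0, 640.0, 512.0, 1e-5   # scale, centre, pixel size (m)

        def project(Xw):
            Xc = R @ Xw + t                                    # world -> camera
            x, y = f * Xc[0] / Xc[2], f * Xc[1] / Xc[2]        # pinhole projection
            r2 = x * x + y * y
            x, y = x * (1 + kappa * r2), y * (1 + kappa * r2)  # radial distortion
            return np.array([sx * x / pix + cx, y / pix + cy]) # to pixels

        tile_corner = np.array([0.5, 0.3, 0.0])
        print("predicted pixel:", project(tile_corner))
        # calibration error = distance between predicted and detected pixels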

  5. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; this creates a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with the two high-speed cameras.
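
    The two ingredients of such a system are stereo triangulation of the shuttlecock from the two cameras and extrapolation of the landing point. The sketch below uses assumed projection matrices and measurements, and a gravity-only extrapolation; a real shuttlecock decelerates strongly under drag, which the paper's predictor would have to model.

        # Stereo triangulation + naive ballistic landing prediction.
        import cv2
        import numpy as np

        K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # assumed intrinsics
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])             # camera 1
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])]) # camera 2, 0.5 m baseline

        uv1 = np.array([[340.0], [200.0]])       # shuttlecock pixel, camera 1
        uv2 = np.array([[290.0], [200.0]])       # same frame, camera 2

        Xh = cv2.triangulatePoints(P1, P2, uv1, uv2)   # homogeneous 4x1
        X = (Xh[:3] / Xh[3]).ravel()
        print("3D position (camera frame):", X)

        # Landing prediction in a world frame (placeholder state estimates).
        h = 3.0                                  # current height, m
        p_xy = np.array([1.0, 2.5])              # horizontal position, m
        v = np.array([2.0, 0.5, -1.0])           # velocity from frame differences
        g = 9.81
        t_land = (v[2] + np.sqrt(v[2] ** 2 + 2 * g * h)) / g   # h + vz*t - g*t^2/2 = 0
        print("predicted landing point:", p_xy + v[:2] * t_land)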

  6. A digital underwater video camera system for aquatic research in regulated rivers

    Science.gov (United States)

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m³/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers who work in regulated rivers to quantify the behavior of aquatic fauna in response to a discharge disturbance.

  7. Real-time computational camera system for high-sensitivity imaging by using combined long/short exposure

    Science.gov (United States)

    Sato, Satoshi; Okada, Yusuke; Azuma, Takeo

    2012-03-01

    In this study, we realize a high-resolution (4K-format), small-size (1.43 × 1.43 μm pixel pitch with a single imager) and high-sensitivity (four times higher sensitivity than conventional imagers) video camera system. Our proposed system is a real-time computational camera system that combines long-exposure green pixels with short-exposure red/blue pixels. We demonstrate that our proposed camera system is effective even under low illumination.
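
    The basic idea of combining the two exposures can be sketched as follows: the short-exposure red/blue planes are rescaled by the exposure ratio so all three channels share one radiometric scale, while the long-exposure green retains its higher SNR. The ratio and the planes below are placeholders, and the real camera's motion handling and resolution processing are omitted.

        # Minimal long/short exposure combination (radiometric alignment only).
        import numpy as np

        ratio = 4.0                                  # long/short exposure ratio (assumed)
        g_long = np.random.rand(1080, 1920)          # stand-ins for the raw planes
        r_short = np.random.rand(1080, 1920) / ratio
        b_short = np.random.rand(1080, 1920) / ratio

        rgb = np.dstack([r_short * ratio,            # rescale short exposures
                         g_long,                     # high-SNR green as recorded
                         b_short * ratio])
        print(rgb.shape)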

  8. Epipolar constraint of single-camera mirror binocular stereo vision systems

    Science.gov (United States)

    Chai, Xinghua; Zhou, Fuqiang; Chen, Xin

    2017-08-01

    Virtual binocular sensors, composed of a camera and catoptric mirrors, have become popular among machine vision researchers owing to their high flexibility and compactness. Usually, the tested target is projected onto the camera through different reflections, and feature matching is performed within a single image. To establish the geometric principles of the feature-matching process in a mirror binocular stereo vision system, we propose a single-camera model with an epipolar constraint for matching the mirrored features. The constraint between the image coordinates of the real target and its mirror reflection is determined, which can be used to eliminate non-matching points in the feature-matching process of a mirror binocular system. To validate the epipolar constraint model and to evaluate its performance in practical applications, we performed realistic matching experiments and analysis using a mirror binocular stereo vision system. Our results demonstrate the feasibility of the proposed model and suggest a way to considerably improve the efficiency of the mirrored-feature matching process.
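
    The constraint itself is the standard epipolar test: for a candidate pair (x1 in the real view, x2 in a mirrored view) related by a fundamental matrix F, x2ᵀFx1 should vanish, and the point-to-epipolar-line distance gives a pixel-scale rejection criterion. A minimal sketch with a placeholder F (a skew-symmetric matrix, i.e. the rank-2 pure-translation case), not the paper's derived constraint matrix:

        # Epipolar rejection test for a candidate feature match.
        import numpy as np

        F = np.array([[ 0.0, -0.2,  0.1],    # rank-2 fundamental matrix (placeholder)
                      [ 0.2,  0.0, -0.4],
                      [-0.1,  0.4,  0.0]])

        x1 = np.array([320.0, 240.0, 1.0])   # homogeneous pixel coordinates
        x2 = np.array([315.0, 238.0, 1.0])

        l = F @ x1                                    # epipolar line in the other view
        dist = abs(x2 @ l) / np.hypot(l[0], l[1])     # point-to-line distance, px
        print(f"epipolar distance: {dist:.2f} px ->",
              "match" if dist < 2 else "reject")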

  9. Design of refocusing system for a high-resolution space TDICCD camera with wide-field of view

    Science.gov (United States)

    Lv, Shiliang; Liu, Jinguo

    2015-10-01

    This paper describes the design and realization of a refocusing system for a space TDICCD camera of 2.2 m focal length, which features a three-mirror anastigmatic (TMA) optical system with 8 TDICCDs assembled at the focal plane, giving high resolution and a wide field of view. Assembling multiple TDICCDs is a major method of acquiring a wide field of view for a space camera; in this way, the swath width reaches 60 km. First, the design of the TMA optical system and its advantages for this space TDICCD camera are introduced. Then, the refocusing system and the technique of mechanically interleaved assembly for the TDICCD focal plane of this space camera are discussed in detail. Finally, the refocusing system is measured. Experimental results indicate that the precision of the refocusing system is ±3.12 μm (3σ), which satisfies the refocusing control system requirements for high precision and stability.

  10. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    Science.gov (United States)

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirements for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC; camera model: Nikon D90), was developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed system demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also argue that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of higher-level photogrammetric products. PMID:25835187

  11. Implementation of an image acquisition and processing system based on FlexRIO, CameraLink and areaDetector

    Energy Technology Data Exchange (ETDEWEB)

    Esquembri, S.; Ruiz, M. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Barrera, E., E-mail: eduardo.barrera@upm.es [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Sanz, D.; Bustos, A. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Castro, R.; Vega, J. [National Fusion Laboratory, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • The system presented acquires and processes images from any CameraLink-compliant camera. • The frame grabber implemented with FlexRIO technology has image time-stamping and preprocessing capabilities. • The system is integrated into EPICS using areaDetector for flexible configuration of the image acquisition and processing chain. • It is fully compatible with the architecture of the ITER Fast Controllers. - Abstract: Image processing systems are commonly used in current physics experiments, such as nuclear fusion experiments. These experiments usually require multiple cameras with different resolutions, framerates and, frequently, different software drivers. The integration of heterogeneous types of cameras without a unified hardware and software interface increases the complexity of the acquisition system. This paper presents the implementation of a distributed image acquisition and processing system for CameraLink cameras. This system implements a camera frame grabber using Field Programmable Gate Arrays (FPGAs), a reconfigurable hardware platform that allows for image acquisition and real-time preprocessing. The frame grabber is integrated into the Experimental Physics and Industrial Control System (EPICS) using the areaDetector EPICS software module, which offers a common interface shared among tens of cameras to configure the image acquisition and process these images in a distributed control system. The use of areaDetector also allows the image processing to be parallelized and concatenated using multiple computers, areaDetector plugins, and the areaDetector standard type for data, NDArrays. The architecture developed is fully compatible with ITER Fast Controllers, and the entire system has been validated using a camera hardware simulator that streams videos from fusion experiment databases.

  12. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    Science.gov (United States)

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic-parameter calibration method based on active vision with perpendicularity compensation is developed. Compared to previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method using only five images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%.

  13. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Full Text Available Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost, and the ability to view an environment the way it is in reality. This paper investigates the applicability of stereo CCD cameras for 3D reconstruction and presentation of the environment and for geometric measurement within it. For this purpose, a rotating stereo panorama was established using two CCDs with a base-length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although these cameras produce noisy images and do not have appropriate geometric stability, they can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 m distance from the camera) can be achieved.
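
    The measurement principle behind the accuracy figure is depth from disparity, Z = f·B/d, with the 350 mm base-length from the paper. The sketch below uses an assumed focal length in pixels; with these values a one-pixel matching error already maps to roughly 0.27 m at 12 m, which suggests the reported 40 mm accuracy relies on sub-pixel matching or a longer effective focal length.

        # Depth from disparity for a 350 mm base-length stereo pair.
        import numpy as np

        B = 0.35                     # base-length, m (from the paper)
        f_pix = 1500.0               # assumed focal length in pixels

        d = np.array([43.75, 44.75])           # disparity, and disparity + 1 px
        Z = f_pix * B / d                      # Z = f*B/d
        print("depth:", Z, "-> change for 1 px error:", Z[0] - Z[1], "m")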

  14. Experimental Characterization of Close-Emitter Interference in an Optical Camera Communication System

    Science.gov (United States)

    Chavez-Burbano, Patricia; Rabadan, Jose; Perez-Jimenez, Rafael

    2017-01-01

    Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart city applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources on Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the different transmitted wavelengths. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR) for easily determining the interference in other implementations, independently of the selected system devices, has also been proposed. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general-purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for the 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters in terms of distance and wavelength can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results to real scenarios. PMID:28677613

  15. Experimental Characterization of Close-Emitter Interference in an Optical Camera Communication System.

    Science.gov (United States)

    Chavez-Burbano, Patricia; Guerra, Victor; Rabadan, Jose; Rodríguez-Esparragón, Dionisio; Perez-Jimenez, Rafael

    2017-07-04

    Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart cities applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources on Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the different transmitted wavelengths. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR), for easily determining the interference in other implementations independently of the selected system devices, has also been proposed. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters in terms of the distance and the used wavelength can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results into real scenarios.

  16. INTER- AND INTRA-RATER RELIABILITY OF PERFORMANCE MEASURES COLLECTED WITH A SINGLE-CAMERA MOTION ANALYSIS SYSTEM.

    Science.gov (United States)

    Bates, Nathanial A; McPherson, April L; Berry, John D; Hewett, Timothy E

    2017-08-01

    Previous reliability investigations of single-camera three-dimensional (3D) motion analysis systems have reported mixed results. The purpose of the current study was to determine the intra- and inter-rater reliability of a single-camera 3D motion analysis system for subject standing height, vertical jump height, and broad jump length. Experimental in vivo reliability study. Twelve subjects (age 20.6 ± 4.9 years) from a cohort that included high school to adult athletes who participated in sports at a recreational or competitive level entered and completed the study. Performance measurements were collected by a single-camera 3D motion analysis system and by two human testers using standard clinical techniques. Inter- and intra-class correlation coefficients (ICC (2,k), ICC (2,1)) were determined. Intra-tester and inter-tester reliability were excellent (ICC ≥ 0.935) for single-camera system measured variables. Single-camera system measurements were slightly more reliable than clinical measurements for intra-tester ratings (ICC difference 0.020) for the standing broad jump. Single-camera system measurements were slightly less reliable than clinical measures for both intra- and inter-rater standing height (mean ICC difference 0.003 and 0.043, respectively) and vertical jump height (mean ICC difference 0.017 and 0.036, respectively). The excellent reliability and previously demonstrated validity of the single-camera system along the anterior-posterior axis indicate that single-camera motion analysis may be a valid surrogate for clinically accepted manual measurements of performance in the horizontal plane. However, this single-camera 3D motion analysis system is likewise reliable, but inaccurate, for vertically oriented performance measurements. Level of evidence: 2b.

  17. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    Science.gov (United States)

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.

  18. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    Directory of Open Access Journals (Sweden)

    Tao Yang

    2016-08-01

    Full Text Available This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.

  19. Security Camera System can be access into mobile with internet from remote place

    Directory of Open Access Journals (Sweden)

    Dr. Khanna SamratVivekanand Omprakash

    2012-01-01

    Full Text Available This paper describes how cameras can capture images and video into a database and how that material can then be transferred to a mobile phone over the Internet, and discusses developing mobile applications so the data can be viewed on a mobile device from a remote place. A real IP address is assigned to the storage device by the ISP and the device is connected to the Internet. Mobile applications are developed for Windows Mobile and run only on Windows Mobile phones. Wireless cameras in groups of 4, 8, 12, or 16 are connected to the system, and a Windows-based desktop application displays 4, 8, 12, or 16 channels at a time. The PC is connected to the Internet and runs a client-server application that communicates with a Windows web hosting server over the Internet. With the help of the ISP, an IP address and a domain name are assigned to the web server, so the domain can be accessed from anywhere in the world. A web application makes the system accessible on mobile devices, and a separate Windows .exe client is provided for Windows Mobile phones to fetch the information from the server; the client can be installed on the phone and fetches data from the server, which has a real IP address with a domain name and is connected to the Internet. The digital wireless cameras store their data in a digital video recorder with a 1-terabyte hard disk and 4, 8, 12, or 16 channels. Video output can be viewed on a mobile phone by installing the client setup or directly from a web browser that supports the mobile application. The strength of this software is that the security camera system can be accessed on a mobile phone over the Internet from a remote place.

  20. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  1. Infrared Camera System for Visualization of IR-Absorbing Gas Leaks

    Science.gov (United States)

    Youngquist, Robert; Immer, Christopher; Cox, Robert

    2010-01-01

    embodiment would use a ratioed output signal to better represent the gas column concentration. An alternative approach uses a simpler multiplication of the filtered signal to make the filtered signal equal to the unfiltered signal at most locations, followed by a subtraction to remove all but the wavelength-specific absorption in the unfiltered sample. This signal processing can also reveal the net difference signal representing the leaking gas absorption, and allow rapid leak location, but signal intensity would not relate solely to gas absorption, as raw signal intensity would also affect the displayed signal. A second design choice is whether to use one camera with two images closely spaced in time, or two cameras with essentially the same view and time. The figure shows the two-camera version. This choice involves many tradeoffs that are not apparent until some detailed testing is done. In short, the tradeoffs involve the temporal changes in the field picture versus the pixel sensitivity curves and frame alignment differences with two cameras, and which system would lead to the smaller variations from the uncontrolled variables.
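
    The simpler processing option described above (scale the filtered frame so it matches the unfiltered frame at most locations, then subtract) can be sketched in a few lines. The NumPy sketch below is a minimal illustration under the assumption of co-registered float frames and a single global gain; it is not the authors' implementation.

        import numpy as np

        def leak_map(unfiltered, filtered, eps=1e-6):
            # Scale the filtered frame to match the unfiltered frame at most
            # locations (median gain), then subtract: what remains is the
            # wavelength-specific absorption of the leaking gas.
            gain = np.median(unfiltered / (filtered + eps))
            return unfiltered - gain * filtered

        # Toy frames: a flat scene plus a localized absorption patch
        scene = np.full((240, 320), 100.0)
        gas = np.zeros_like(scene)
        gas[100:140, 150:200] = 20.0
        print(np.abs(leak_map(scene - gas, scene / 1.25)).max())  # ~20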

  2. A NEW AUTOMATIC SYSTEM CALIBRATION OF MULTI-CAMERAS AND LIDAR SENSORS

    Directory of Open Access Journals (Sweden)

    M. Hassanein

    2016-06-01

    Full Text Available In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially with the different types of measurements provided by the LIDAR and the cameras. System calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially for disaster monitoring applications. Also, many present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse images-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the images-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates; this target geometry was chosen to ensure enough conditions for the convergence of registration between the point clouds constructed from the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated
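
    The registration step between the images-driven and LIDAR point clouds can be sketched with an off-the-shelf ICP routine. The sketch below uses the open-source Open3D library, which the record does not name; the file names and correspondence distance are hypothetical, and in practice a coarse alignment from the three-plate target would seed the initial transformation.

        import numpy as np
        import open3d as o3d

        source = o3d.io.read_point_cloud("sfm_cloud.pcd")    # images-driven cloud
        target = o3d.io.read_point_cloud("lidar_cloud.pcd")  # LIDAR cloud

        # Point-to-point ICP; the identity matrix is only a placeholder
        # initial guess for this sketch.
        result = o3d.pipelines.registration.registration_icp(
            source, target, max_correspondence_distance=0.05,
            init=np.eye(4),
            estimation_method=o3d.pipelines.registration
                .TransformationEstimationPointToPoint())
        print(result.transformation)  # cameras-to-LIDAR geometric relationship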

  3. AR Supporting System for Pool Games Using a Camera-Mounted Handheld Display

    Directory of Open Access Journals (Sweden)

    Hideaki Uchiyama

    2008-01-01

    Full Text Available This paper presents a pool supporting system with a camera-mounted handheld display based on augmented reality technology. Using our system, users can get supporting information once they capture the pool table, and they can watch visual aids on the display while they are capturing the table. First, our system estimates ball positions on the table from one image taken from an arbitrary viewpoint. Next, our system suggests several shooting options, taking the following shot into account. Finally, our system presents visual aids such as the shooting direction and ball behavior. The main purpose of our system is to estimate and analyze the distribution of balls and to present visual aids. Our system is implemented without special equipment such as magnetic sensors or artificial markers. To evaluate our system, the accuracy of the estimated ball positions and the effectiveness of our supporting information are presented.

  4. 3D digital image correlation using single color camera pseudo-stereo system

    Science.gov (United States)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three dimensional digital image correlation (3D-DIC) has been widely used by industry to measure the 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, similarly to the conventional 3D-DIC system, the center of the two views stands in the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated by the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
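
    The color-domain separation described above amounts to reading the two views out of different color channels of a single frame. A minimal OpenCV sketch, assuming one view is encoded in the red channel and the other in the blue channel (the actual channel assignment is not stated in the abstract):

        import cv2

        frame = cv2.imread("pseudo_stereo_frame.png")  # hypothetical capture
        view_left = frame[:, :, 2]   # red channel  -> one mirror path
        view_right = frame[:, :, 0]  # blue channel -> other mirror path

        # The two single-channel views can now be fed to a standard 3D-DIC
        # correlation pipeline as if they came from two separate cameras.
        cv2.imwrite("view_left.png", view_left)
        cv2.imwrite("view_right.png", view_right)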

  5. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    Science.gov (United States)

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of cameras, a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.

  6. Vehicle Tracking and Counting System in Dusty Weather with Vibrating Camera Conditions

    Directory of Open Access Journals (Sweden)

    Nastaran Yaghoobi Ershadi

    2017-01-01

    Full Text Available Traffic surveillance systems are of interest to many researchers for improving traffic control and reducing the risk caused by accidents. In this area, many published works are concerned only with vehicle detection in normal conditions. The camera may vibrate due to wind or bridge movement, and detection and tracking of vehicles become very difficult in bad winter weather (snow, rain, wind, etc.), in the dusty weather of arid and semi-arid regions, or at night, among other conditions. In this paper, we propose a method to track and count vehicles in dusty weather with a vibrating camera. For this purpose, we used a background-subtraction-based strategy combined with extra processing to segment vehicles; here the extra processing comprised analysis of headlight size, location, and area. Tracking was done between consecutive frames via a particle filter to detect each vehicle and pair its headlights using connected component analysis, and vehicle counting was performed based on the pairing result. Our proposed method was tested on several video surveillance records in different conditions such as dusty or foggy weather, with a vibrating camera, and on roads with medium-level traffic volumes. The results showed that the proposed method performed better than other previously published methods, including the Kalman filter or Gaussian model, in different traffic conditions.
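
    The segmentation front end described above (background subtraction followed by blob analysis of candidate headlights) can be sketched with OpenCV. The video file name and the blob-area limits below are illustrative assumptions; the pairing and particle-filter tracking stages are only indicated by a comment.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("traffic.avi")  # hypothetical input video
        backsub = cv2.createBackgroundSubtractorMOG2(history=200,
                                                     varThreshold=25)
        kernel = np.ones((3, 3), np.uint8)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = backsub.apply(frame)                        # moving pixels
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # denoise
            n, labels, stats, cents = cv2.connectedComponentsWithStats(mask)
            # Keep blobs whose area is plausible for a headlight (assumed range)
            headlights = [tuple(cents[i]) for i in range(1, n)
                          if 30 < stats[i, cv2.CC_STAT_AREA] < 500]
            # ...pair nearby blobs and feed the pairs to the particle filter...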

  7. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    Science.gov (United States)

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of cameras, a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783

  8. Study and Monitoring of Itinerant Tourism along the Francigena Route, by Camera Trapping System

    Directory of Open Access Journals (Sweden)

    Gianluca Bambi

    2017-01-01

    Full Text Available Tourism along the Via Francigena is a growing phenomenon. It is important to develop a direct survey of the path's users (pilgrims, tourists, day-trippers, etc.) able to define user profiles, the extent of the phenomenon, and its evolution over time, in order to develop possible actions to promote the socio-economic impact on the rural areas concerned. With this research, we propose the creation of a monitoring network based on a camera trapping system to estimate the number of tourists in a simple and expeditious way. Recently, camera trapping has found wide use in population surveys beyond the faunal field, and an innovative application field is the tourist sector, where it can become the basis of statistical and planning analyses. To carry out a survey of the pilgrims/tourists, we applied this type of sampling method. It is an interesting method since it allows data to be obtained on the type and number of users. The application of camera trapping along the Francigena provides information about user profiles, such as sex, age, average length of pilgrimage, and type of journey (on foot, on horseback, or by bike), over a continuous period covering the tourist months of 2014.

  9. Autonomous Gait Event Detection with Portable Single-Camera Gait Kinematics Analysis System

    Directory of Open Access Journals (Sweden)

    Cheng Yang

    2016-01-01

    Full Text Available Laboratory-based nonwearable motion analysis systems have significantly advanced with robust objective measurement of the limb motion, resulting in quantified, standardized, and reliable outcome measures compared with traditional, semisubjective, observational gait analysis. However, the requirement for large laboratory space and operational expertise makes these systems impractical for gait analysis at local clinics and homes. In this paper, we focus on autonomous gait event detection with our bespoke, relatively inexpensive, and portable, single-camera gait kinematics analysis system. Our proposed system includes video acquisition with camera calibration, Kalman filter + Structural-Similarity-based marker tracking, autonomous knee angle calculation, video-frame-identification-based autonomous gait event detection, and result visualization. The only operational effort required is the marker-template selection for tracking initialization, aided by an easy-to-use graphic user interface. The knee angle validation on 10 stroke patients and 5 healthy volunteers against a gold standard optical motion analysis system indicates very good agreement. The autonomous gait event detection shows high detection rates for all gait events. Experimental results demonstrate that the proposed system can automatically measure the knee angle and detect gait events with good accuracy and thus offer an alternative, cost-effective, and convenient solution for clinical gait kinematics analysis.
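
    The autonomous knee-angle step reduces to a vector angle between the thigh and shank segments defined by three tracked markers. A minimal NumPy sketch, with the marker names and the angle convention as assumptions:

        import numpy as np

        def knee_angle(hip, knee, ankle):
            # Included thigh-shank angle (degrees) from three 2D marker
            # positions; 180 degrees corresponds to a fully extended knee.
            thigh = np.asarray(hip) - np.asarray(knee)
            shank = np.asarray(ankle) - np.asarray(knee)
            cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh)
                                             * np.linalg.norm(shank))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        # Included angle ~151 degrees, i.e. ~29 degrees of flexion
        print(knee_angle((0, 0), (0, 1), (0.5, 1.9)))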

  10. Fall Prevention Shoes Using Camera-Based Line-Laser Obstacle Detection System

    Directory of Open Access Journals (Sweden)

    Tzung-Han Lin

    2017-01-01

    Full Text Available Fall prevention is an important issue particularly for the elderly. This paper proposes a camera-based line-laser obstacle detection system to prevent falls in the indoor environment. When obstacles are detected, the system will emit alarm messages to catch the attention of the user. Because the elderly spend a lot of their time at home, the proposed line-laser obstacle detection system is designed mainly for indoor applications. Our obstacle detection system casts a laser line, which passes through a horizontal plane and has a specific height to the ground. A camera, whose optical axis has a specific inclined angle to the plane, will observe the laser pattern to obtain the potential obstacles. Based on this configuration, the distance between the obstacles and the system can be further determined by a perspective transformation called homography. After conducting the experiments, critical parameters of the algorithms can be determined, and the detected obstacles can be classified into different levels of danger, causing the system to send different alarm messages.

  11. Fall Prevention Shoes Using Camera-Based Line-Laser Obstacle Detection System.

    Science.gov (United States)

    Lin, Tzung-Han; Yang, Chi-Yun; Shih, Wen-Pin

    2017-01-01

    Fall prevention is an important issue particularly for the elderly. This paper proposes a camera-based line-laser obstacle detection system to prevent falls in the indoor environment. When obstacles are detected, the system will emit alarm messages to catch the attention of the user. Because the elderly spend a lot of their time at home, the proposed line-laser obstacle detection system is designed mainly for indoor applications. Our obstacle detection system casts a laser line, which passes through a horizontal plane and has a specific height to the ground. A camera, whose optical axis has a specific inclined angle to the plane, will observe the laser pattern to obtain the potential obstacles. Based on this configuration, the distance between the obstacles and the system can be further determined by a perspective transformation called homography. After conducting the experiments, critical parameters of the algorithms can be determined, and the detected obstacles can be classified into different levels of danger, causing the system to send different alarm messages.
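
    The homography step described above maps pixels on the laser-line plane to metric ground coordinates. A minimal OpenCV sketch, with four hypothetical calibration correspondences standing in for a real calibration:

        import cv2
        import numpy as np

        # Four image points on the laser plane and their measured ground
        # positions in metres (hypothetical calibration data).
        img_pts = np.float32([[100, 400], [540, 400], [200, 250], [440, 250]])
        gnd_pts = np.float32([[-0.5, 1.0], [0.5, 1.0], [-0.5, 2.0], [0.5, 2.0]])

        H = cv2.getPerspectiveTransform(img_pts, gnd_pts)

        # Any detected laser-pattern pixel can now be converted to a distance,
        # which in turn selects the danger level and alarm message.
        pix = np.float32([[[320, 300]]])
        x, y = cv2.perspectiveTransform(pix, H)[0, 0]
        print(f"obstacle at {y:.2f} m ahead, {x:.2f} m lateral")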

  12. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    Energy Technology Data Exchange (ETDEWEB)

    Pardini, A.F.

    1998-01-27

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment; none of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  13. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    Directory of Open Access Journals (Sweden)

    Idowu Ayoola

    2015-09-01

    Full Text Available A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need for a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images via an Arduino microcontroller, and the images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fit (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean, and the correlation of the estimated results to ground truth showed a variation of 3% from the mean.
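
    The level-extraction step (Sobel gradients thresholded into a binary edge map, from which the water line is read) can be sketched as below. The file name and threshold are assumptions, and taking the strongest horizontal edge row stands in for the paper's full gradient analysis and ellipsoidal fit.

        import cv2
        import numpy as np

        img = cv2.imread("glass.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
        grad_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)    # vertical gradient
        edges = (np.abs(grad_y) > 60).astype(np.uint8)        # binary edge map
        row_strength = edges.sum(axis=1)                      # edge pixels per row
        water_row = int(np.argmax(row_strength))              # water-line candidate
        print(f"water line at image row {water_row}")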

  14. A three-dimensional vision by off-shelf system with multi-cameras.

    Science.gov (United States)

    Luh, J Y; Klaasen, J A

    1985-01-01

    A three-dimensional vision system for on-line operation that aids a collision avoidance system for an industrial robot is developed. Because of the real-time requirement, the process that locates and describes the obstacles must be fast. To satisfy the safety requirement, the obstacle model should always contain the physical obstacle entirely. This condition leads to the bounding box description of the obstacle, which is simple for the computer to process. The image processing is performed by a Machine Intelligence Corporation VS-100 machine vision system. The control and object perception are performed by the developed software on a host Digital Equipment Corporation VAX 11/780 computer. The resultant system outputs a file of the locations and bounding descriptions for each object found. When the system is properly calibrated, the bounding descriptions always completely envelop the obstacle. The response time is data-dependent. When two cameras are used and processing runs in UNIX time-sharing mode, the average response time is less than 2 s if eight or fewer objects are present. When all three cameras are used, the average response time is less than 4 s if eight or fewer objects are present.

  15. A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests

    Directory of Open Access Journals (Sweden)

    Yousok Kim

    2013-09-01

    Full Text Available Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.

  16. A new position measurement system using a motion-capture camera for wind tunnel tests.

    Science.gov (United States)

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-09-13

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.

  17. BEAGLEBOARD EMBEDDED SYSTEM FOR ADAPTIVE TRAFFIC LIGHT CONTROL SYSTEM WITH CAMERA SENSOR

    Directory of Open Access Journals (Sweden)

    Muhammad Febrian Rachmadi

    2012-07-01

    Full Text Available Traffic is one of the most important aspects of human daily life because traffic affects the smoothness of capital flows, logistics, and other community activities. Without an appropriate traffic light control system, the possibility of traffic congestion is very high, hindering people's lives in urban areas. An adaptive traffic light control system can be used to relieve traffic congestion at an intersection because it can adaptively change the duration of the green light for each lane of the intersection depending on traffic density. The proposed adaptive traffic light control system prototype uses a BeagleBoard-xM, a CCTV camera, and AVR microcontrollers. We use computer vision techniques to obtain information on traffic density, combining the Viola-Jones method with the Kalman filter method. To calculate the green-light duration for each traffic light in the intersection, we use the Distributed Constraint Satisfaction Problem (DCSP) approach. From the implementation and experimental results, we conclude that the BeagleBoard-xM can be used as the main engine of an adaptive traffic light control system, with a 91.735% average counting rate.
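
    The record assigns green-light durations by solving a DCSP, which is too involved for a short sketch; as a simpler illustration of the underlying idea, the sketch below splits a fixed signal cycle among lanes in proportion to the vehicle counts delivered by the camera pipeline. All numbers are assumptions.

        def green_times(counts, cycle=120.0, g_min=10.0):
            # Split a signal cycle (seconds) among lanes by vehicle count,
            # guaranteeing each lane a minimum green time.
            total = sum(counts.values()) or 1
            spare = cycle - g_min * len(counts)
            return {lane: g_min + spare * n / total
                    for lane, n in counts.items()}

        print(green_times({"north": 12, "east": 4, "south": 9, "west": 2}))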

  18. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mejia, J.; Galvis-Alonso, O.Y., E-mail: mejia_famerp@yahoo.com.b [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Biologia Molecular; Castro, A.A. de; Simoes, M.V. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Clinica Medica; Leite, J.P. [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). Fac. de Medicina. Dept. de Neurociencias e Ciencias do Comportamento; Braga, J. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Div. de Astrofisica

    2010-11-15

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology. (author)
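
    The reconstruction step is based on the maximum-likelihood (MLEM) algorithm, whose textbook update is x ← x · Aᵀ(y / Ax) / Aᵀ1. The NumPy sketch below shows that update on a toy system matrix; it is a generic illustration, not the authors' implementation.

        import numpy as np

        def mlem(A, y, n_iter=20):
            # A: (n_measurements, n_voxels) system matrix; y: projections.
            x = np.ones(A.shape[1])        # uniform initial estimate
            sens = A.sum(axis=0)           # sensitivity image, A^T 1
            for _ in range(n_iter):
                proj = A @ x               # forward-project current estimate
                ratio = y / np.maximum(proj, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        # Toy example: 2 detector bins viewing 2 voxels
        A = np.array([[1.0, 0.5], [0.5, 1.0]])
        print(mlem(A, A @ np.array([2.0, 1.0])))  # recovers ~[2, 1]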

  19. The cooling control system for focal plane assembly of astronomical satellite camera based on TEC

    Science.gov (United States)

    He, Yuqing; Du, Yunfei; Gao, Wei; Li, Baopeng; Fan, Xuewu; Yang, Wengang

    2017-02-01

    The dark current noise existing in the CCD of an astronomical observation camera seriously degrades its performance; reducing the working temperature of the CCD can effectively suppress the dark current. By analyzing the relationship between the CCD chip temperature and the dark current noise, the optimum working temperature of the red-band CCD focal plane was identified as -75 °C. According to this refrigeration temperature, a cooling control system for the focal plane based on a thermoelectric cooler (TEC) was designed, with the requirement of high-precision temperature control of the target. In the cooling control system, an 80C32 microcontroller is used as the core processor. An advanced PID control algorithm is adopted to control the temperature of the top stage of the TEC, while the bottom stage of the TEC is set to a constant value according to the target temperature to assist the upper stage in controlling the temperature. The experimental results show that the cooling system satisfies the requirements of the focal plane of the astronomical observation camera: it can reach the working temperature of -75 °C with an accuracy of ±2 °C.
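
    The temperature loop can be illustrated with a discrete PID step driving a crude thermal model. In the sketch below the gains, actuator limits, and first-order plant are assumptions made for illustration; only the -75 °C target comes from the record.

        def pid_step(setpoint, measured, state, kp=2.0, ki=0.05, kd=0.0,
                     dt=0.1, lo=-5.0, hi=5.0):
            err = setpoint - measured
            d = (err - state["e"]) / dt
            state["e"] = err
            u = kp * err + ki * state["i"] + kd * d
            if lo < u < hi:              # anti-windup: freeze integral when saturated
                state["i"] += err * dt
            return max(min(u, hi), lo)   # clamp to a safe TEC drive range

        state = {"i": 0.0, "e": 0.0}
        temp = 20.0                      # start at room temperature
        for _ in range(4000):
            drive = pid_step(-75.0, temp, state)  # -75 degC focal-plane target
            # Assumed first-order thermal model: TEC cooling vs. ambient leak
            temp += 0.1 * (0.5 * drive - 0.01 * (temp - 20.0))
        print(f"settled near {temp:.1f} degC")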

  20. A Programmable Aerial Multispectral Camera System for In-Season Crop Biomass and Nitrogen Content Estimation

    Directory of Open Access Journals (Sweden)

    Jakob Geipel

    2016-01-01

    Full Text Available The study introduces a prototype multispectral camera system for aerial estimation of above-ground biomass and nitrogen (N) content in winter wheat (Triticum aestivum L.). The system is fully programmable and designed as a lightweight payload for unmanned aircraft systems (UAS). It is based on an industrial multi-sensor camera and a customizable image processing routine. The system was tested in a split-fertilized N field trial at different growth stages between the end of stem elongation and the end of anthesis. The acquired multispectral images were processed to normalized difference vegetation index (NDVI) and red-edge inflection point (REIP) orthoimages for an analysis with simple linear regression models. The best results for the estimation of above-ground biomass were achieved with the NDVI (R² = 0.72–0.85, RMSE = 12.3%–17.6%), whereas N content was estimated best with the REIP (R² = 0.58–0.89, RMSE = 7.6%–11.7%). Moreover, NDVI and REIP predicted grain yield at a high level of accuracy (R² = 0.89–0.94, RMSE = 9.0%–12.1%). Grain protein content could be predicted best with the REIP (R² = 0.76–0.86, RMSE = 3.6%–4.7%), with the limitation of prediction inaccuracies for N-deficient canopies.
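
    Both indices are simple per-pixel band arithmetic. The sketch below uses the common four-band red-edge formulation of the REIP (linear interpolation after Guyot and Baret); the prototype's actual band set is not stated in the record, so the band choice is an assumption.

        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red + 1e-12)

        def reip(r670, r700, r740, r780):
            # Red-edge inflection point (nm) by linear interpolation
            r_edge = (r670 + r780) / 2.0   # reflectance at the inflection
            return 700.0 + 40.0 * (r_edge - r700) / (r740 - r700 + 1e-12)

        # Per-pixel example with illustrative reflectances
        print(ndvi(0.45, 0.05))             # dense canopy -> NDVI ~0.8
        print(reip(0.05, 0.12, 0.35, 0.45)) # ~722 nm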

  1. Proposed patient motion monitoring system using feature point tracking with a web camera.

    Science.gov (United States)

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all the frames. The software generates a text file that contains the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. We have proposed a patient motion monitoring system using a web camera, which is simple and convenient to set up, to increase the safety of treatment delivery.
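
    The tracking core described above is available directly in OpenCV as the pyramidal Lucas-Kanade routine. The sketch below is a minimal loop under assumptions: automatic corner selection stands in for the manual marker-template selection, and the 3-pixel alarm threshold is illustrative, not the paper's value.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)                 # web camera
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=20,
                                       qualityLevel=0.3, minDistance=7)
        init = pts0.copy()                        # reference positions

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Pyramidal Lucas-Kanade between two consecutive frames
            pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                       pts0, None)
            motion = np.linalg.norm(pts1 - init, axis=2).max()
            if motion > 3.0:                      # large movement -> warn
                print(f"WARNING: patient moved {motion:.1f} px")
            prev_gray, pts0 = gray, pts1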

  2. On-Line Detection of Defects on Fruit by Machine Vision Systems Based on Three-Color-Camera Systems

    Science.gov (United States)

    Xul, Qiaobao; Zou, Xiaobo; Zhao, Jiewen

    How to distinguish apple stem-ends and calyxes from defects is still a challenging problem due to the complexity of the process. It is known that a stem-end and a calyx cannot appear in the same image. Therefore, a method for identifying contaminated apples is developed in this article: if there are two or more doubtful blobs in an apple's image, the apple is a contaminated one. There is no complex imaging process or pattern recognition in this method, because it only needs to count the blobs (including stem-ends and calyxes) in an apple's image. Machine vision systems based on three color cameras are presented in this article for the on-line detection of external defects. In this system, the fruits placed on rollers rotate while moving, and each camera placed along the line grabs three images of each apple. After the apple is segmented from the black background by a multi-threshold method, defect segmentation and counting are performed on the apple's images. Good separation between normal and contaminated apples was obtained for the three-camera system (94.5%), compared to the one-camera system (63.3%) and the two-camera system (83.7%). The disadvantage of this method is that it cannot distinguish between different defect types. Defects of apples, such as bruising, scab, fungal growth, and disease, are all treated the same.

  3. 3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System

    Science.gov (United States)

    Kang, J.; Lee, I.

    2016-06-01

    Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred to indoor spaces. Constant development of technology has a significant impact on people's knowledge of services such as location-awareness services in indoor spaces. Thus, it is necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we introduce a rotating stereo frame camera system that has two cameras, and we generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate 3D models using images acquired by the system. Through these experiments, we verified that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  4. 3D MODELLING OF AN INDOOR SPACE USING A ROTATING STEREO FRAME CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    Full Text Available Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred to indoor spaces. Constant development of technology has a significant impact on people's knowledge of services such as location-awareness services in indoor spaces. Thus, it is necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we introduce a rotating stereo frame camera system that has two cameras, and we generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate 3D models using images acquired by the system. Through these experiments, we verified that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  5. Deployable Camera (DCAM3) System for Observation of Hayabusa2 Impact Experiment

    Science.gov (United States)

    Sawada, Hirotaka; Ogawa, Kazunori; Shirai, Kei; Kimura, Shinichi; Hiromori, Yuichi; Mimasu, Yuya

    2017-07-01

    The asteroid exploration probe "Hayabusa2", developed by the Japan Aerospace Exploration Agency (JAXA), was launched on December 3rd, 2014 to carry out complicated and accurate operations during the mission phase around the C-type asteroid 162173 Ryugu (1999 JU3) (Tsuda et al. in Acta Astron. 91:356-362, 2013). An impact experiment on the surface of the asteroid will be conducted using the Small Carry-on Impactor (SCI) system, which will be the world's first artificial crater creation experiment on an asteroid (Saiki et al. in Proc. International Astronautical Congress, IAC-12.A3.4.8, 2012, Acta Astron. 84:227-236, 2013a; Proc. International Symposium on Space Technology and Science, 2013b). We developed a new micro Deployable CAMera (DCAM3) system for remote observation of the impact phenomenon, applying our conventional DCAM technology, one of the smallest probes flown in space missions, which achieved great success in the past Japanese mission IKAROS (Interplanetary Kite-craft Accelerated by Radiation Of the Sun). DCAM3 is a miniaturized separable unit that contains two cameras and radio communication devices for transmitting image data to the mothership "Hayabusa2"; it observes the impact experiment from an unsafe region where "Hayabusa2" cannot stay because of the risk of being hit by exploding and impacting debris. In this paper, we report details of the DCAM3 system and development results, as well as our mission plan for the DCAM3 observation during the SCI experiment.

  6. Extraction of character areas from digital camera based color document images and OCR system

    Science.gov (United States)

    Chung, Y. K.; Chi, S. Y.; Bae, K. S.; Kim, K. K.; Jang, D.; Kim, K. C.; Choi, Y. W.

    2005-09-01

    When document images are obtained from digital cameras, many imaging problems have to be solved for better extraction of characters from the images. Variation of illumination intensity sensitively affects color values. A simple colored document image can be converted to a monochrome image by a traditional method and then binarized, but this approach does not work stably under varying illumination because of the sensitivity of colors to illumination changes, and for narrowly distributed colors the conversion does not work well. Secondly, when the number of colors is more than two, it is not easy to determine which color belongs to the characters and which to the background. This paper discusses an extraction method for colored document images using a color processing algorithm based on the characteristics of color features. Variation of intensities and color distribution are used to classify character areas and background areas. A document image is segmented into several color groups, and similar color groups are merged; in the final step, only two color groups are left, one for the characters and one for the background. The extracted character areas from the document images are entered into an optical character recognition system. This method solves a color problem that affects traditional scanner-based OCR systems. This paper also describes the OCR system for character conversion of colored document images. Our method works on colored document images from cellular phones and digital cameras in the real world.

  7. The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics

    Science.gov (United States)

    Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.

    2003-04-01

    The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (each with a 1024 × 1024 pixel frame-transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area within reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site, and solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos and Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens), and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm³, and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.

  8. Single camera imaging system for color and near-infrared fluorescence image guided surgery.

    Science.gov (United States)

    Chen, Zhenyue; Zhu, Nan; Pacheco, Shaun; Wang, Xia; Liang, Rongguang

    2014-08-01

    Near-infrared (NIR) fluorescence imaging systems have been developed for image guided surgery in recent years. However, current systems are typically bulky and work only when the surgical light in the operating room (OR) is off. We propose a single camera imaging system that is capable of capturing NIR fluorescence and color images under normal surgical lighting illumination. Using a new RGB-NIR sensor and synchronized NIR excitation illumination, we have demonstrated that the system can acquire both color information and fluorescence signal with high sensitivity under normal surgical lighting illumination. The experimental results show that an ICG sample with a concentration of 0.13 μM can be detected when the excitation irradiance is 3.92 mW/cm² at an exposure time of 10 ms.

  9. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a multiple high-speed camera network (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance between cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time-stamped, allowing comparison of events between the cameras and the LLS. A RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1, and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations; differences between the solutions were not greater than 1.8 km.
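
    The visual triangulation reduces, in plan view, to intersecting bearing rays from the camera sites. A minimal least-squares sketch with illustrative site coordinates and azimuths (not the network's actual geometry):

        import numpy as np

        def intersect_rays(sites, azimuths_deg):
            # Each camera contributes a ray (site position + azimuth to the
            # flash); the stroke position is the least-squares intersection.
            A, b = [], []
            for (x0, y0), az in zip(sites, azimuths_deg):
                dx, dy = np.sin(np.radians(az)), np.cos(np.radians(az))
                # A point (x, y) on the ray satisfies dy*x - dx*y = dy*x0 - dx*y0
                A.append([dy, -dx])
                b.append(dy * x0 - dx * y0)
            sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
            return sol

        # Two sites 13 km apart (as in the network), bearings 45 and 315 deg
        print(intersect_rays([(0.0, 0.0), (13.0, 0.0)], [45.0, 315.0]))  # ~(6.5, 6.5)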

  10. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2017-11-03

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. Current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis, which may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by the use of the system when compared to the literature average. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated, with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics and additional image-based features.

  11. Dual-camera system for high-speed imaging in particle image velocimetry

    CERN Document Server

    Hashimoto, K; Hara, T; Onogi, S; Mouri, H

    2012-01-01

    Particle image velocimetry is an important technique in experimental fluid mechanics, for which it has been essential to use a specialized high-speed camera. However, the high speed comes at the expense of other camera characteristics, i.e., sensitivity and image resolution. Here, we demonstrate that high-speed imaging is also possible with a pair of still cameras.
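
    The standard way to turn such an image pair into velocities is to cross-correlate interrogation windows from the two exposures; the displacement of the correlation peak, divided by the inter-frame delay, gives the local velocity. Below is a minimal FFT-based sketch in Python, in which the window contents and sizes are assumed.

    ```python
    import numpy as np

    def displacement(win_a, win_b):
        """Estimate the mean particle displacement between two interrogation
        windows (2D arrays) from the peak of their FFT-based cross-correlation,
        as in standard PIV processing."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b),
                             s=a.shape)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices above Nyquist to negative shifts.
        return [p - n if p > n // 2 else p for p, n in zip(peak, a.shape)]
    ```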

  12. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other using images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between
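
    The per-frame pose estimation from ground control points described above is, at its core, a perspective-n-point (PnP) problem. A minimal sketch using OpenCV's solvePnP follows; the control points, pixel observations, and intrinsic matrix K are illustrative assumptions.

    ```python
    import numpy as np
    import cv2

    # Ground control points from the environment point cloud (metres, vehicle
    # frame) and their pixel observations in one video frame; all values and
    # the intrinsic matrix K are illustrative assumptions.
    object_pts = np.array([[2.0, 1.1, 0.4], [3.5, -0.8, 0.5], [5.2, 0.3, 1.9],
                           [4.1, 2.2, 0.1], [6.0, -1.5, 0.8], [2.8, 0.0, 2.3]])
    image_pts = np.array([[412.0, 305.0], [880.0, 290.0], [650.0, 120.0],
                          [300.0, 505.0], [985.0, 410.0], [560.0, 75.0]])
    K = np.array([[1200.0, 0.0, 640.0],
                  [0.0, 1200.0, 360.0],
                  [0.0, 0.0, 1.0]])

    # Estimate the camera pose relative to the point cloud (no lens distortion).
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the estimated pose
    print(ok, tvec.ravel())
    ```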

  13. STRUCTURE-FROM-MOTION FOR CALIBRATION OF A VEHICLE CAMERA SYSTEM WITH NON-OVERLAPPING FIELDS-OF-VIEW IN AN URBAN ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    A. Hanel

    2017-05-01

    Full Text Available Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other using images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle

  14. Airborne Camera System for Real-Time Applications - Support of a National Civil Protection Exercise

    Science.gov (United States)

    Gstaiger, V.; Romer, H.; Rosenbaum, D.; Henkel, F.

    2015-04-01

    In the VABENE++ project of the German Aerospace Center (DLR), powerful tools are being developed to aid public authorities and organizations with security responsibilities, as well as traffic authorities, when dealing with disasters and large public events. One focus lies on the acquisition of high-resolution aerial imagery, its fully automatic processing and analysis, and its near real-time provision to decision makers in emergency situations. For this purpose, a camera system was developed to be operated from a helicopter, with light-weight processing units and a microwave link for fast data transfer. In order to meet end-users' requirements, DLR works closely with the German Federal Office of Civil Protection and Disaster Assistance (BBK) within this project. One task of BBK is to establish, maintain and train the German Medical Task Force (MTF), which is deployed nationwide in case of large-scale disasters. In October 2014, several units of the MTF were deployed for the first time in the framework of a national civil protection exercise in Brandenburg. The VABENE++ team joined the exercise and provided near real-time aerial imagery, videos and derived traffic information to support the direction of the MTF and to identify needs for further improvements and developments. In this contribution, the authors introduce the new airborne camera system together with its near real-time processing components and share experiences gained during the national civil protection exercise.

  15. Parkinson's disease assessment based on gait analysis using an innovative RGB-D camera system.

    Science.gov (United States)

    Rocha, Ana Patrícia; Choupina, Hugo; Fernandes, José Maria; Rosas, Maria José; Vaz, Rui; Silva Cunha, João Paulo

    2014-01-01

    Movement-related diseases, such as Parkinson's disease (PD), progressively affect motor function, often leading to severe motor impairment and a dramatic loss of the patients' quality of life. Human motion analysis techniques can be very useful to support the clinical assessment of this type of disease. In this contribution, we present an RGB-D camera (Microsoft Kinect) system and its evaluation for PD assessment. Based on skeleton data extracted from the gait of three PD patients treated with deep brain stimulation and three control subjects, several gait parameters were computed and analyzed, with the aim of discriminating between non-PD and PD subjects, as well as between two PD states (stimulator ON and OFF). We verified that, among the several quantitative gait parameters, the variance of the center-shoulder velocity presented the highest discriminative power to distinguish between non-PD, PD ON and PD OFF states (p = 0.004). Furthermore, we have shown that our low-cost portable system can be easily mounted in any hospital environment for evaluating patients' gait. These results demonstrate the potential of using an RGB-D camera as a PD assessment tool.
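
    The discriminative feature named above, the variance of the center-shoulder velocity, can be computed from the Kinect skeleton stream in a few lines. A sketch follows, assuming an (N, 3) array of joint positions sampled at a fixed frame rate; joint name and rate follow the Kinect v1 convention and are illustrative.

    ```python
    import numpy as np

    def center_shoulder_velocity_variance(positions, fps=30.0):
        """Variance of the center-shoulder speed from Kinect skeleton data.

        positions: (N, 3) array of center-shoulder joint coordinates (metres)
        sampled at `fps` frames per second.
        """
        velocity = np.diff(positions, axis=0) * fps   # m/s between frames
        speed = np.linalg.norm(velocity, axis=1)      # scalar speed series
        return np.var(speed)
    ```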

  16. A Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Software

    Directory of Open Access Journals (Sweden)

    S. H. Oh

    2007-12-01

    Full Text Available We present software which we developed for a multi-purpose CCD camera. This software can be used with all three types of CCD made by Kodak: the KAF-0401E (768×512), KAF-1602E (1536×1024) and KAF-3200E (2184×1472). For efficient CCD camera control, the software is operated as two independent processes: the CCD control program and the temperature/shutter operation program. The software is designed for fully automatic as well as manual operation under Linux, and is controlled by the Linux user signal procedure. We plan to use this software for an all-sky survey system and also for night-sky monitoring and sky observation. The measured read-out times of the CCDs are about 15 s, 64 s and 134 s for the KAF-0401E, KAF-1602E and KAF-3200E, respectively, because these times are limited by the data transmission speed of the parallel port. Larger-format CCDs require higher-speed data transmission, so we are considering porting the control software to the USB port for high-speed data transmission.
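
    The quoted read-out times are consistent with a parallel-port bottleneck. Assuming 16-bit pixels and an effective transfer rate of about 50 kB/s (both assumptions, not values from the paper), a back-of-the-envelope check reproduces them:

    ```python
    # Assumed: 16-bit pixels and an effective ~50 kB/s parallel-port rate.
    for name, (w, h) in {"KAF-0401E": (768, 512),
                         "KAF-1602E": (1536, 1024),
                         "KAF-3200E": (2184, 1472)}.items():
        megabytes = w * h * 2 / 1e6
        print(f"{name}: {megabytes:.1f} MB -> ~{megabytes / 0.05:.0f} s")
    # -> ~16 s, ~63 s, ~129 s, close to the measured 15 s, 64 s and 134 s
    ```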

  17. Observation of Lunar Impact Flashes with the SPOSH Camera: System Parameters and Expected Performance

    Science.gov (United States)

    Luther, R.; Margonis, A.; Oberst, J.; Sohl, F.; Flohrer, J.

    2013-09-01

    Observations of meteors in the Earth's atmosphere have a long historical tradition and have built up knowledge of the meteoroid population and streams in near-Earth space, among other things. Only recently have observations of meteoroid impacts on the dark side of the Moon become technically possible. Since the first confirmed Earth-based observation of a lunar impact flash in 1999 [e.g. 2], more than 50 impact flashes have been registered [1]. Meteoroids bombarding the Moon are not slowed down by an atmosphere and impact with high velocities of up to 70 km/s, causing light flashes of about 10 to 100 ms duration. Continuous observation of the dark hemisphere of the Moon makes it possible to improve data on the meteoroid population as well as to determine impact times and locations, which can be used for seismic analysis and the determination of the interior structure. It is therefore important to study the various system parameters that determine the probability of a successful lunar impact flash detection, which we have done by numerical simulations. In particular, we want to evaluate the performance of the camera head of the SPOSH camera system [3] attached to a telescope.

  18. Camera based low-cost system to monitor hydrological parameters in small catchments

    Science.gov (United States)

    Eltner, Anette; Sardemann, Hannes; Kröhnert, Melanie; Schwalbe, Ellen

    2017-04-01

    Gauging stations that measure hydrological parameters in small catchments are usually installed at only a few selected locations. Thus, extreme events that can evolve rapidly, particularly in small catchments (especially in mountainous areas), and potentially cause severe damage are insufficiently documented, eventually leading to difficulties in the modeling and forecasting of these events. A conceptual approach using a low-cost camera-based alternative is introduced to measure water level, flow velocity and changing river cross-sections. Synchronized cameras are used for 3D reconstruction of the water surface, enabling the location of flow velocity vectors measured in video sequences. Furthermore, water levels are measured automatically using an image-based approach originally developed for smartphone applications. Additional integration of a thermal sensor can increase the speed and reliability of the water level extraction. Finally, the reconstruction of the water surface as well as the surrounding topography allows for the detection of changing morphology. The introduced approach can help to increase the density of monitoring networks for hydrological parameters in (remote) small catchments and subsequently might be used as a warning system for extreme events.

  19. The Camera-Based Assessment Survey System (C-BASS): A towed camera platform for reef fish abundance surveys and benthic habitat characterization in the Gulf of Mexico

    Science.gov (United States)

    Lembke, Chad; Grasty, Sarah; Silverman, Alex; Broadbent, Heather; Butcher, Steven; Murawski, Steven

    2017-12-01

    An ongoing challenge for fisheries management is to provide cost-effective and timely estimates of habitat-stratified fish densities. Traditional approaches use modified commercial fishing gear (such as trawls and baited hooks) that have biases in species selectivity and may also be inappropriate for deployment in some habitat types. Underwater visual and optical approaches offer the promise of more precise and less biased assessments of relative fish abundance, as well as direct estimates of absolute fish abundance. A number of video-based approaches have been developed, and the technology for data acquisition, calibration, and synthesis has been developing rapidly. Beginning in 2012, our group of engineers and researchers at the University of South Florida has been working towards the goal of completing large-scale, video-based surveys in the eastern Gulf of Mexico. This paper discusses design considerations and the development of a towed camera system for the collection of video-based data on commercially and recreationally important reef fishes and benthic habitat on the West Florida Shelf. Factors considered during development included the potential habitat types to be assessed, sea-floor bathymetry, vessel support requirements, personnel requirements, and the cost-effectiveness of system components. This region-specific effort has resulted in a towed platform called the Camera-Based Assessment Survey System, or C-BASS, which has proven capable of surveying tens of kilometers of video transects per day and can provide cost-effective population estimates of reef fishes together with coincident benthic habitat classification.

  20. Handbook of camera monitor systems the automotive mirror-replacement technology based on ISO 16505

    CERN Document Server

    2016-01-01

    This handbook offers a comprehensive overview of Camera Monitor Systems (CMS), ranging from the ISO 16505-based development aspects to practical realization concepts. It offers readers a wide-ranging discussion of the science and technology of CMS as well as the human-interface factors of such systems. In addition, it serves as a single reference source with contributions from leading international CMS professionals and academic researchers. In combination with the latest version of UN Regulation No. 46, the normative framework of ISO 16505 permits CMS to replace mandatory rearview mirrors in series production vehicles. The handbook includes scientific and technical background information to further readers’ understanding of both of these regulatory and normative texts. It is a key reference in the field of automotive CMS for system designers, members of standardization and regulation committees, engineers, students and researchers.

  1. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and they are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential factor in airplane safety when visual flight rules (VFR) are enforced, such as at most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Although there are instruments available on the market to measure those parameters, their relatively high cost makes them unavailable at many local aerodromes. In this work we present a new prototype which has recently been developed and deployed at a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new development consists of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
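
    Both measurement principles mentioned above reduce to short formulas: the cloud base height follows from the stereo parallax of a cloud feature, and the visibility follows from the Lambert-Beer contrast decay of a dark object (using Koschmieder's 2% contrast threshold). A worked Python sketch with illustrative numbers, not values from the prototype:

    ```python
    import numpy as np

    # Cloud-base height from the parallax between two vertically pointing
    # cameras separated by a horizontal baseline (all numbers illustrative).
    baseline = 100.0        # m between the two cameras
    focal_px = 2800.0       # focal length expressed in pixels
    disparity_px = 140.0    # shift of a cloud feature between images (pixels)
    height = baseline * focal_px / disparity_px
    print(f"cloud base ~ {height:.0f} m")          # -> ~2000 m

    # Visibility from the Lambert-Beer law: the apparent contrast of a dark
    # object at distance x decays as C = C0 * exp(-sigma * x).
    x = 5000.0              # m, distance to the dark object
    contrast_ratio = 0.3    # measured C / C0
    sigma = -np.log(contrast_ratio) / x
    visibility = 3.912 / sigma                     # Koschmieder 2% threshold
    print(f"visibility ~ {visibility / 1000:.1f} km")   # -> ~16.2 km
    ```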

  2. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    Energy Technology Data Exchange (ETDEWEB)

    Saotome, Naoya, E-mail: naosao@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji [Department of Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan)

    2016-04-15

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors’ facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed in a black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference of Gaussian (DOG) method from the 80% distal dose point of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of repeated measurements at the same energy is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors’ range check system is capable of quick and easy range verification with sufficient accuracy.
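
    For illustration, the difference-of-Gaussians edge detection that the authors favour can be sketched in a few lines of Python; the filter widths, pixel scale, and the synthetic light-output profile below are assumptions, not the paper's parameters.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def range_from_dog(depth_profile, mm_per_pixel=0.2, s1=2.0, s2=6.0):
        """Locate the distal edge of a depth-light profile with a difference
        of Gaussians (DOG): the steepest fall-off appears as the minimum of
        the narrow-minus-wide filtered signal."""
        dog = (gaussian_filter(depth_profile, s1)
               - gaussian_filter(depth_profile, s2))
        edge_index = np.argmin(dog)        # steepest distal fall-off
        return edge_index * mm_per_pixel

    # Synthetic profile: plateau followed by a Bragg-peak-like fall-off.
    z = np.arange(1000)
    profile = 1.0 / (1.0 + np.exp((z - 700) / 8.0))
    print(range_from_dog(profile))         # ~140 mm at 0.2 mm/pixel
    ```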

  3. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system.

    Science.gov (United States)

    Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji

    2016-04-01

    Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. A cylindrical plastic scintillator block and a CCD camera were installed in a black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference of Gaussian (DOG) method from the 80% distal dose point of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method. The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of repeated measurements at the same energy is within 0.1 mm without setup error. The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.

  4. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets have become available through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed, based on a probabilistic map and using 3D-to-2D matching correspondences between the map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as of the uncertainties of the camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is then optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284
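
    The refinement step weights each reprojection residual by the feature's positional uncertainty from the probabilistic map, i.e., it minimizes squared Mahalanobis distances. A minimal sketch of the per-feature error term; the covariance values are illustrative assumptions.

    ```python
    import numpy as np

    def mahalanobis_reprojection_error(observed_px, predicted_px, cov):
        """Squared Mahalanobis distance between an observed feature and its
        reprojection, weighting the residual by the feature's positional
        uncertainty (2x2 pixel covariance) from the probabilistic map."""
        r = np.asarray(observed_px) - np.asarray(predicted_px)
        return float(r @ np.linalg.inv(cov) @ r)

    cov = np.array([[4.0, 0.5], [0.5, 2.0]])   # pixel^2, assumed
    print(mahalanobis_reprojection_error([312.0, 198.0], [310.2, 200.1], cov))
    ```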

  5. A specialized motion capture system for real-time analysis of mandibular movements using infrared cameras.

    Science.gov (United States)

    Furtado, Daniel Antônio; Pereira, Adriano Alves; Andrade, Adriano de Oliveira; Bellomo, Douglas Peres; da Silva, Marlete Ribeiro

    2013-02-22

    In recent years, several methods and devices have been proposed to record human mandibular movements, since they provide quantitative parameters that support the diagnosis and treatment of temporomandibular disorders. The techniques currently employed suffer from a number of drawbacks, including high price, unnatural handling, lack of support for real-time analysis, and the recording of mandibular movements as a pure rotation. In this paper, we propose a specialized optical motion capture system which causes minimal obstruction and can support 3D mandibular movement analysis in real time. We used three infrared cameras together with nine reflective markers placed at key points of the face. Classical techniques are suggested for the camera calibration and three-dimensional reconstruction, and we propose specialized algorithms to automatically recognize our set of markers and track them along a motion capture session. To test the system, we developed prototype software and performed a clinical experiment on a group of 22 subjects. They were instructed to execute several movements for the functional evaluation of the mandible while the system was employed to record them. The acquired parameters and the reconstructed trajectories were used to confirm the typical function of the temporomandibular joint in some subjects and to highlight its abnormal behavior in others. The proposed system is an alternative to the existing optical, mechanical, electromagnetic and ultrasonic-based methods, and intends to address some drawbacks of currently available solutions. Its main goal is to assist specialists in the diagnosis and treatment of temporomandibular disorders, since simple visual inspection may not be sufficient for a precise assessment of the temporomandibular joint and associated muscles.
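
    Once the cameras are calibrated, each marker's 3D position follows from triangulating its 2D observations. A minimal two-view sketch with OpenCV follows; the projection matrices and the normalized image coordinates (pixels pre-multiplied by the inverse intrinsic matrix) are illustrative assumptions.

    ```python
    import numpy as np
    import cv2

    # Two calibrated views: P1 = [I | 0], P2 = [I | t] with a 0.2 m baseline.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

    pts1 = np.array([[0.10], [0.02]])   # marker in view 1 (normalized coords)
    pts2 = np.array([[0.05], [0.02]])   # same marker in view 2

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1
    X = (X_h[:3] / X_h[3]).ravel()                    # 3D marker position (m)
    print(X)   # -> approximately (0.4, 0.08, 4.0)
    ```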

  6. Dynamics of the shallow plumbing system investigated from borehole strainmeters and cameras, Stromboli volcano case study

    Science.gov (United States)

    Bonaccorso, Alessandro; Calvari, Sonia; Linde, Alan; Sacks, Selwyn; Boschi, Enzo

    2013-04-01

    The 15 March 2007 Vulcanian paroxysm at Stromboli volcano was recorded by several instruments, allowing the eruptive sequence to be described and the processes in the upper feeding system to be unravelled. Among the devices installed on the island, two borehole strainmeters recorded unique signals not fully explored before. Here we present an analysis of these signals together with time-lapse images from a monitoring system comprising both infrared and visual cameras. The two strainmeter signals display an initial phase of pressure growth in the feeding system lasting ~2 min. This is followed by 25 s of low-amplitude oscillations of the two signals, which we interpret as a strong step-like overpressure building up in the uppermost conduit as gas-rich magma accumulated below a thick pile of rock produced by crater rim collapses. This overpressure caused shaking of the ground and triggered a number of small landslides of the inner crater rim, recorded by the monitoring cameras. When the plug obstructing the crater was removed by the initial Vulcanian blast, the two strainmeter signals showed opposite signs, compatible with a depressurizing source at ~1.5 km depth, at the junction between the intermediate and shallow feeding systems inferred by previous studies. The sudden depressurization accompanying the Vulcanian blast caused an oscillation of the source composed of three cycles of about 20 s each with decreasing amplitude, well recorded by the strainmeters. The visible effect of this behaviour was the initial Vulcanian blast and a 2-3 km high eruptive column, followed by two lava fountaining episodes of decreasing intensity and height. To our knowledge, this is the first time that such behaviour has been observed at an open-conduit volcano.

  7. Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.

    Science.gov (United States)

    Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena

    2014-11-01

    A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system features synchronous hardware and software image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, the Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object at different positions, orientations, and linear and angular speeds. The system is able to detect the position and orientation of an immobile object with a maximum error of 0.5 mm and 1.6° over the whole depth of field, and to track an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance (0.03-0.05 Hz), breathing (0.2 Hz), and blood pressure (1 Hz). The stereo vision system presented is a precise and robust system for measuring brain shift and pulsatility, with accuracy superior to that of other reported systems.
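
    The frequency-domain representation used to separate the sympathovagal, breathing, and cardiac peaks amounts to a one-sided FFT of the displacement trace. A sketch with a synthetic trace containing the 0.2 Hz and 1 Hz components; the sampling rate and amplitudes are assumptions.

    ```python
    import numpy as np

    def motion_spectrum(displacement_mm, fs=30.0):
        """One-sided amplitude spectrum of a surface displacement trace,
        for picking out breathing (~0.2 Hz) and cardiac (~1 Hz) peaks."""
        d = displacement_mm - np.mean(displacement_mm)
        amp = np.abs(np.fft.rfft(d)) * 2.0 / len(d)
        freq = np.fft.rfftfreq(len(d), d=1.0 / fs)
        return freq, amp

    t = np.arange(0, 60.0, 1.0 / 30.0)       # 60 s at 30 frames/s
    trace = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.2 * np.sin(2 * np.pi * 1.0 * t)
    freq, amp = motion_spectrum(trace)
    print(freq[np.argsort(amp)[-2:]])         # -> the two dominant frequencies
    ```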

  8. Design of motion adjusting system for space camera based on ultrasonic motor

    Science.gov (United States)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse intersection angle of the image-motion vector of a space camera; adjusting this angle reduces its influence on image quality. An ultrasonic motor (USM) is a new type of actuator driven by ultrasonic vibrations excited in piezoelectric ceramics, with many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of the drift adjusting mechanism are described. The drift adjusting system was designed around the T-60 ultrasonic motor and is composed of the drift adjusting mechanical frame, the ultrasonic motor, the motor driver, the photoelectric encoder and the drift adjusting controller. A TMS320F28335 DSP was adopted as the calculation and control processor, the photoelectric encoder was used as the sensor of the position closed loop, and a voltage driving circuit was designed as the generator of the ultrasonic wave. A mathematical model of the drive circuit of the T-60 ultrasonic motor was built using Matlab modules. To verify the validity of the drift adjusting system, a disturbance source was introduced and a simulation analysis was performed. The motor-drive control system for the drift adjusting mechanism was designed with improved PID control. The drift angle adjusting system has advantages such as small size, simple configuration, high position-control precision, fine repeatability, a self-locking property and low power consumption. The results show that the system can accomplish the drift angle adjusting mission excellently.
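
    A position loop of the kind described, encoder feedback closed through a PID law driving the motor, can be sketched as a discrete-time simulation. The gains, sample time, and toy first-order motor response below are illustrative, not the paper's values.

    ```python
    # Minimal discrete PID position loop; all constants are assumptions.
    def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.001):
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        return u, (integral, error)

    setpoint, angle, state = 1.0, 0.0, (0.0, 0.0)   # degrees
    for _ in range(5000):
        u, state = pid_step(setpoint - angle, state)
        angle += 0.5 * u * 0.001        # toy first-order motor response
    print(round(angle, 3))              # converges towards the 1.0 deg setpoint
    ```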

  9. Infrared On-Orbit RCC Inspection With the EVA IR Camera: Development of Flight Hardware From a COTS System

    Science.gov (United States)

    Gazanik, Michael; Johnson, Dave; Kist, Ed; Novak, Frank; Antill, Charles; Haakenson, David; Howell, Patricia; Jenkins, Rusty; Yates, Rusty; Stephan, Ryan

    2005-01-01

    In November 2004, NASA's Space Shuttle Program approved the development of the Extravehicular (EVA) Infrared (IR) Camera to test the application of infrared thermography to on-orbit reinforced carbon-carbon (RCC) damage detection. A multi-center team composed of members from NASA's Johnson Space Center (JSC), Langley Research Center (LaRC), and Goddard Space Flight Center (GSFC) was formed to develop the camera system and plan a flight test. The initial development schedule called for delivery of the system in time to support STS-115 in late 2005. At the request of Shuttle Program managers and the flight crews, the team accelerated its schedule and delivered a certified EVA IR Camera system in time to support STS-114 in July 2005 as a contingency. The development of the camera system, led by LaRC, was based on the commercial-off-the-shelf (COTS) FLIR S65 handheld infrared camera. An assessment of the S65 system with regard to space-flight operation was critical to the project. This paper discusses the space-flight assessment and describes the significant modifications required for EVA use by the astronaut crew. The on-orbit inspection technique will be demonstrated during the third EVA of STS-121 in September 2005 by imaging damaged RCC samples mounted in a box in the Shuttle's cargo bay.

  10. Airborne Network Camera Standard

    Science.gov (United States)

    2015-06-01

    Optical Systems Group Document 466-15, Airborne Network Camera Standard. Distribution A: approved for public release. Airborne network camera systems have so far lacked a focus of standardization for interoperable command and control, storage, and data streaming.

  11. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    Science.gov (United States)

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling memories related to important locations, called spots, that they have visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale-invariant feature transform (SIFT). The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems other than smartphones, and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the Global Positioning System. The proposed system has been evaluated in two experiments: image matching tests and a user study. The experimental results suggest the effectiveness of the system in helping visually impaired individuals, including blind individuals, recall information about regularly visited spots. PMID:28165403
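
    The spot-to-spot matching step can be sketched with OpenCV's SIFT implementation and Lowe's ratio test; the match-count threshold below is an assumed tuning parameter, not a value from the paper.

    ```python
    import cv2

    def is_same_spot(img_a, img_b, min_good=25):
        """Decide whether two grayscale images show the same spot by counting
        SIFT matches that pass Lowe's ratio test."""
        sift = cv2.SIFT_create()
        _, des_a = sift.detectAndCompute(img_a, None)
        _, des_b = sift.detectAndCompute(img_b, None)
        matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        return len(good) >= min_good
    ```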

  12. Digital image processing for the rectification of television camera distortions.

    Science.gov (United States)

    Rindfleisch, T. C.

    1971-01-01

    All television systems introduce distortions into the imagery they record which influence the results of quantitative photometric and geometric measurements. Digital computer techniques provide a powerful approach to the calibration and rectification of these systematic effects. Nonlinear as well as linear problems can be attacked with flexibility and precision. Methods which have been developed and applied for the removal of structured system noises and the correction of photometric, geometric, and resolution distortions in vidicon systems are briefly described. Examples are given of results derived primarily from the Mariner Mars 1969 television experiment.

  13. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics, such as the flippers and crawlers. A CCD camera with wide field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indicated values of the radiation dosimeter and the instrumentation. The Quince 2 robot measured radiation on the refueling floor of the unit 2 reactor building of the Fukushima nuclear power plant. The CCD camera with the wide field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the refueling floor of the unit 2 reactor building. The camera image carrying the gamma-ray dose-rate information is transmitted to the remote control site via the VDSL communication line, where the radiation situation on the refueling floor can be perceived by monitoring the camera image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to numerical values. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.
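
    The conversion from camera image to numerical dose-rate values is an OCR task. A minimal sketch using OpenCV and pytesseract follows; the display crop region, the preprocessing, and the character whitelist are illustrative assumptions rather than the paper's pipeline.

    ```python
    import cv2
    import pytesseract

    def read_dose_rate(frame_bgr):
        """Extract a numeric reading from a camera frame showing the
        dosimeter display. A real system would calibrate the crop region
        to the fisheye image geometry."""
        roi = frame_bgr[200:260, 320:520]              # assumed display area
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(
            binary,
            config="--psm 7 -c tessedit_char_whitelist=0123456789.").strip()
        return float(text) if text else None
    ```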

  14. System Configuration and Operation Plan of Hayabusa2 DCAM3-D Camera System for Scientific Observation During SCI Impact Experiment

    Science.gov (United States)

    Ogawa, Kazunori; Shirai, Kei; Sawada, Hirotaka; Arakawa, Masahiko; Honda, Rie; Wada, Koji; Ishibashi, Ko; Iijima, Yu-ichi; Sakatani, Naoya; Nakazawa, Satoru; Hayakawa, Hajime

    2017-07-01

    An artificial impact experiment is scheduled for 2018-2019, in which an impactor will collide with asteroid 162173 Ryugu (1999 JU3) during the asteroid rendezvous phase of the Hayabusa2 spacecraft. The small carry-on impactor (SCI) will shoot a 2-kg projectile at 2 km/s to create a crater 1-10 m in diameter, with an expected subsequent ejecta curtain on a 100-m scale on an ideal sandy surface. A miniaturized deployable camera (DCAM3) unit will separate from the spacecraft at about 1 km from the impact and simultaneously conduct optical observations of the experiment. We designed and developed a camera system (DCAM3-D) within DCAM3, specialized for scientific observations of the impact phenomenon, in order to clarify the subsurface structure, constrain theories of impact applicable in a microgravity environment, and identify the impact point on the asteroid. The DCAM3-D system consists of a miniaturized camera with wide-angle optics and high focusing performance, high-speed radio communication devices, and control units with large data storage on both the DCAM3 unit and the spacecraft. These components were successfully developed under severe constraints of size, mass and power, and the whole DCAM3-D system passed all tests verifying function, performance, and environmental tolerance. The results indicated sufficient potential to conduct the scientific observations during the SCI impact experiment. An operation plan was carefully considered along with the configuration and time schedule of the impact experiment, and pre-programmed into the control unit before launch. In this paper, we describe details of the system design concept, specifications, and the operation plan of the DCAM3-D system, focusing on the feasibility of the scientific observations.

  15. Electronic Navigational Chart as an Equivalent to Image Produced by Hypercatadioptric Camera System

    Directory of Open Access Journals (Sweden)

    Naus Krzysztof

    2015-01-01

    Full Text Available This paper presents a dynamic hyperboloidal mapping model aimed at building an image of an electronic navigational chart (ENC) equivalent to that obtained from a hypercatadioptric camera system. In the first part, the space and three reference frames located in it are defined: the observer frame and the horizontal topocentric frame, considered secondary (both attached to the water-surface platform), and the geocentric frame, the primary one. The second part describes the way the observer frame and the horizontal topocentric frame are interconnected, and how their locations relative to the geocentric reference frame are determined, depending on the course and position of the water-surface platform. In the final part, a model of panoramic image mapping in the observer reference frame and the principles of generating the ENC image using dynamic hyperboloidal mapping are presented. Finally, conclusions indicating possible applications of the developed model are given.

  16. Selecting among competing models of electro-optic, infrared camera system range performance

    Science.gov (United States)

    Nichols, Jonathan M.; Hines, James E.; Nichols, James D.

    2013-01-01

    Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike's Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set at which experimental trials were conducted.
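
    AIC itself is a one-line formula, AIC = 2k - 2 ln L, where k is the number of fitted parameters and ln L is the maximized log-likelihood; the model with the lowest AIC is preferred. A small sketch with made-up log-likelihoods, purely to show the comparison mechanics:

    ```python
    def aic(log_likelihood, k):
        """Akaike's Information Criterion: AIC = 2k - 2 ln L."""
        return 2 * k - 2 * log_likelihood

    # Two hypothetical range-performance models fitted to the same observer
    # responses (log-likelihoods and parameter counts are made-up numbers).
    models = {"model_A": (-412.7, 3), "model_B": (-409.9, 5)}
    scores = {name: aic(ll, k) for name, (ll, k) in models.items()}
    print(min(scores, key=scores.get), scores)   # lower AIC is preferred
    ```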

  17. Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Directory of Open Access Journals (Sweden)

    Alexander Wendel

    2017-10-01

    Full Text Available Line scanning cameras, which capture only a single line of pixels, have been increasingly used on ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera’s 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera’s pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way.
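
    The uncertainty estimation step can be illustrated with a random-walk Metropolis sampler, the simplest MCMC variant. The likelihood below is a 1D stand-in for the paper's reprojection likelihood, and the step size is an assumption.

    ```python
    import numpy as np

    def metropolis(log_likelihood, x0, n=5000, step=0.01, seed=0):
        """Random-walk Metropolis sampler for characterising the posterior
        of a calibration offset under a given log-likelihood."""
        rng = np.random.default_rng(seed)
        x, lp = np.array(x0, float), log_likelihood(x0)
        samples = []
        for _ in range(n):
            cand = x + rng.normal(0.0, step, size=x.shape)
            lp_cand = log_likelihood(cand)
            if np.log(rng.random()) < lp_cand - lp:   # accept/reject
                x, lp = cand, lp_cand
            samples.append(x.copy())
        return np.array(samples)

    # Toy 1D example: posterior spread of a translation offset (metres).
    post = metropolis(lambda x: -0.5 * ((x[0] - 0.12) / 0.03) ** 2, [0.0])
    print(post[1000:, 0].std())   # ~0.03 after burn-in
    ```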

  18. Samba: a real-time motion capture system using wireless camera sensor networks.

    Science.gov (United States)

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-03-20

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.

  19. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hyeongseok Oh

    2014-03-01

    Full Text Available There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject’s body. The performance of the motion capture system is evaluated extensively in experiments.

  20. Camera system considerations for geomorphic applications of SfM photogrammetry

    Science.gov (United States)

    Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John

    2017-01-01

    The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. The use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community, despite the details of these instruments being largely overlooked in the current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating the accuracies of four SfM datasets acquired over multiple years of a gravel-bed river floodplain using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6-37 cm) was sufficient to detect moderate geomorphic change (of order 100,000 m³) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5-25% and decreased processing time by 10-30%.
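
    Morphological sediment budgets of the kind compared above are computed from a DEM of difference (DoD), thresholded at a level of detection before summing erosion and deposition volumes. A minimal sketch; the cell size and threshold are assumptions.

    ```python
    import numpy as np

    def volumetric_change(dtm_t1, dtm_t0, cell_size=0.5, lod=0.15):
        """Morphological sediment budget from two co-registered DTMs: sum the
        elevation differences that exceed a level-of-detection threshold.
        Cell size (m) and LoD (m) are illustrative assumptions."""
        dod = dtm_t1 - dtm_t0                          # DEM of difference
        significant = np.where(np.abs(dod) > lod, dod, 0.0)
        cell_area = cell_size ** 2
        erosion = significant[significant < 0].sum() * cell_area
        deposition = significant[significant > 0].sum() * cell_area
        return erosion, deposition                     # m^3
    ```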

  1. Monitoring system for isolated limb perfusion based on a portable gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Orero, A.; Muxi, A.; Rubi, S.; Duch, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Vidal-Sicart, S.; Pons, F. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d' Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); Red Tematica de Investigacion Cooperativa en Cancer (RTICC), Barcelona (Spain); Roe, N. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); Rull, R. [Servei de Cirurgia, Hospital Clinic, Barcelona (Spain); Pavon, N. [Inst. de Fisica Corpuscular, CSIC - UV, Valencia (Spain); Pavia, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d' Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain)

    2009-07-01

    Background: The treatment of malignant melanoma or sarcomas on a limb using extremity perfusion with tumour necrosis factor (TNF-α) and melphalan can result in a high degree of systemic toxicity if there is any leakage from the isolated blood territory of the limb into the systemic vascular territory. Leakage is currently controlled by using radiotracers and heavy external probes in a procedure that requires continuous manual calculations. The aim of this work was to develop a light, easily transportable system to monitor limb perfusion leakage by controlling systemic blood pool radioactivity with a portable gamma camera, adapted for intraoperative use as an external probe, and to initiate its application in the treatment of malignant melanoma patients. Methods: A special collimator was built for maximal sensitivity, and software for acquisition and data processing in real time was developed. After testing the adequacy of the system, it was used to monitor limb perfusion leakage in 16 patients with malignant melanoma to be treated with perfusion of TNF-α and melphalan. Results: The field of view of the detector system was 13.8 cm, which is appropriate for the monitoring, since the area to be controlled is the precordial zone. The sensitivity of the system was 257 cps/MBq. When the percentage of leakage reaches 10%, the associated absolute error is ±1%. After a mean follow-up period of 12 months, no patients have shown any significant or lasting side effects. Partial or complete remission of lesions was seen in 9 of 16 patients (56%) after hyperthermic isolated limb perfusion (HILP) with TNF-α and melphalan. Conclusion: The detector system, together with the specially developed software, provides a suitable automatic continuous monitoring system for any leakage that may occur during limb perfusion. The technique has been successfully implemented in patients for whom perfusion with TNF-α and melphalan was indicated. (orig.)
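
    The continuous leakage calculation that the software automates can be sketched as follows. The formula and the numbers are illustrative assumptions about how such monitoring is typically done, not the paper's algorithm; in a real system the 257 cps/MBq sensitivity quoted above would convert probe count rates to absolute activities first.

    ```python
    def leakage_percent(systemic_counts, baseline_counts, injected_counts):
        """Leakage estimate: the rise of the systemic blood-pool count rate
        over its pre-injection baseline, expressed as a fraction of the
        activity injected into the isolated limb circuit (all assumptions)."""
        return 100.0 * (systemic_counts - baseline_counts) / injected_counts

    print(leakage_percent(systemic_counts=45.0, baseline_counts=20.0,
                          injected_counts=500.0))   # -> 5.0 (%)
    ```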

  2. The VISTA infrared camera

    Science.gov (United States)

    Dalton, G. B.; Caldwell, M.; Ward, A. K.; Whalley, M. S.; Woodhouse, G.; Edeson, R. L.; Clark, P.; Beard, S. M.; Gallie, A. M.; Todd, S. P.; Strachan, J. M. D.; Bezawada, N. N.; Sutherland, W. J.; Emerson, J. P.

    2006-06-01

    We describe the integration and test phase of the construction of the VISTA infrared camera, a 64-megapixel, 1.65-degree field-of-view, 0.9-2.4 micron camera which will soon be operating at the Cassegrain focus of the 4 m VISTA telescope. The camera incorporates sixteen IR detectors and six CCD detectors which are used to provide autoguiding and wavefront-sensing information to the VISTA telescope control system.

  3. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Inho [Institute for Human and Machine Cognition (IHMC), Florida (United States); Oh, Jaesung; Oh, Jun-Ho [Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Inhyeok [NAVER Green Factory, Seongnam (Korea, Republic of)

    2017-06-15

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify objects in the environment such as those posed by the challenge of the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrain, and stairs, among others. In order for a humanoid to undertake these tasks, we construct a camera-laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D point-cloud data. We project the 3D point-cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions like those of the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized point-cloud data according to geometric characteristics, namely proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.
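
    The projection of laser points onto the camera image described above is a pinhole-model operation. A minimal numpy sketch follows, with an assumed intrinsic matrix and lens distortion ignored for brevity.

    ```python
    import numpy as np

    def project_points(points_xyz, K):
        """Project laser points (N x 3, camera frame, metres) onto the image
        plane with a pinhole model; K is a 3x3 intrinsic matrix."""
        pts = points_xyz[points_xyz[:, 2] > 0]     # keep points in front
        uvw = (K @ pts.T).T
        return uvw[:, :2] / uvw[:, 2:3]            # pixel coordinates

    K = np.array([[525.0, 0.0, 319.5],             # assumed intrinsics
                  [0.0, 525.0, 239.5],
                  [0.0, 0.0, 1.0]])
    print(project_points(np.array([[0.5, 0.1, 2.0]]), K))   # -> (450.75, 265.75)
    ```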

  4. Laser-induced damage threshold of camera sensors and micro-optoelectromechanical systems

    Science.gov (United States)

    Schwarz, Bastian; Ritt, Gunnar; Koerber, Michael; Eberle, Bernd

    2017-03-01

    The continuous development of laser systems toward more compact and efficient devices constitutes an increasing threat to electro-optical imaging sensors, such as complementary metal-oxide-semiconductor (CMOS) sensors and charge-coupled devices. These types of electronic sensors are used in day-to-day life but also in military and civil security applications. In camera systems dedicated to specific tasks, micro-optoelectromechanical systems, such as a digital micromirror device (DMD), are part of the optical setup. In such systems, the DMD can be located at an intermediate focal plane of the optics, where it is also susceptible to laser damage. The goal of our work is to enhance the knowledge of damage effects on such devices exposed to laser light. The experimental setup for the investigation of laser-induced damage is described in detail. As laser sources, both pulsed and continuous-wave (CW) lasers are used. The laser-induced damage threshold is determined by the single-shot method, by increasing the pulse energy from pulse to pulse or, in the case of CW lasers, by increasing the laser power. Furthermore, we investigate the morphology of laser-induced damage patterns and the dependence of the number of destroyed device elements on the laser pulse energy or laser power. In addition to the destruction of single pixels, we observe aftereffects, such as persistent dead columns or rows of pixels in the sensor image.

  5. Laser-induced damage threshold of camera sensors and micro-opto-electro-mechanical systems

    Science.gov (United States)

    Schwarz, Bastian; Ritt, Gunnar; Körber, Michael; Eberle, Bernd

    2016-10-01

    The continuous development of laser systems towards more compact and efficient devices constitutes an increasing threat to electro-optical imaging sensors such as complementary metal-oxide-semiconductors (CMOS) and charge-coupled devices (CCD). These types of electronic sensors are used in day-to-day life but also in military or civil security applications. In camera systems dedicated to specific tasks, micro-opto-electro-mechanical systems (MOEMS) such as a digital micromirror device (DMD) are also part of the optical setup. In such systems, the DMD can be located at an intermediate focal plane of the optics and it is also susceptible to laser damage. The goal of our work is to enhance the knowledge of damaging effects on such devices exposed to laser light. The experimental setup for the investigation of laser-induced damage is described in detail. As laser sources, both pulsed lasers and continuous-wave (CW) lasers are used. The laser-induced damage threshold (LIDT) is determined by the single-shot method by increasing the pulse energy from pulse to pulse or, in the case of CW-lasers, by increasing the laser power. Furthermore, we investigate the morphology of laser-induced damage patterns and the dependence of the number of destroyed device elements on the laser pulse energy or laser power. In addition to the destruction of single pixels, we observe aftereffects such as persistent dead columns or rows of pixels in the sensor image.

  6. Applying AR technology with a projector-camera system in a history museum

    Science.gov (United States)

    Miyata, Kimiyoshi; Shiroishi, Rina; Inoue, Yuka

    2011-01-01

    In this research, an AR (augmented reality) technology with a projector-camera system is proposed for a history museum to provide a user-friendly interface and a pseudo hands-on exhibition. The proposed system is a desktop application designed for old Japanese coins to enhance visitors' interest and motivation to investigate them. The old coins are too small for their features to be recognized easily, and their surfaces carry fine structures on both sides, so showing the reverse side and an enlarged image of the coins helps to enhance the visitors' interest and motivation. The image of the reverse side of a coin is displayed, based on the AR technology, when the user flips the AR marker. The information augmenting the coins is projected by a data projector and placed next to the coins. The proposed system contributes to the development of an exhibition method that combines real artifacts with AR technology, and it demonstrated the flexibility and capability to offer background information relating to the old Japanese coins. However, improvements in the accuracy of marker detection and tracking, together with a visitor evaluation survey, are required to improve the effectiveness of the system.

  7. Initial operation of the tangential x-ray pinhole camera system for KSTAR plasma

    Science.gov (United States)

    Jang, Siwon; Lee, S. G.; Moon, M. K.; Lim, C. H.; Lee, S. H.; Choe, Wonho; KSTAR Team; Korea Advanced Institute of Science and Technology Team; National Fusion Research Institute Collaboration; Korea Atomic Energy Research Institute Collaboration

    2011-10-01

    The tangential soft x-ray pinhole camera (TXPC), a fast, two-dimensional (2-D) soft x-ray imaging system with a toroidal view, has been developed for studying MHD activities and transport in KSTAR plasmas. It consists of a 50 × 50-channel multi-wire proportional counter (MWPC) filled with a gas mixture of 78% Kr, 20% C2H6, and 2% CF4 at atmospheric pressure. It can measure the 2-D x-ray emissivity with a high and controllable intrinsic gain (> 10⁴) and high spatial and temporal (100 kHz) resolution with a 100 MHz DAQ system. These measurements can assist the analysis of plasma profiles, MHD modes, the localization and effects of auxiliary heating, and transport phenomena from core to edge. The TXPC also employs a duplex multi-wire proportional x-ray (DMPX) detector that combines two MWPCs in series. It will provide simultaneous measurements of plasma x-ray emission in two spectral ranges, using the first MWPC as an absorber filter for the second one. The signals of the first and second MWPCs allow fast 2-D measurement of the plasma electron temperature. The TXPC system was installed on KSTAR in 2011, and initial plasma data and an assessment of the system performance are presented.

  8. Installing Snowplow Cameras and Integrating Images into MnDOT's Traveler Information System

    Science.gov (United States)

    2017-10-01

    In 2015 and 2016, the Minnesota Department of Transportation (MnDOT) installed network video dash- and ceiling-mounted cameras on 226 snowplows, approximately one-quarter of MnDOT's total snowplow fleet. The cameras were integrated with the onboard m...

  9. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    Science.gov (United States)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods developed initially for photogrammetry to calibrate the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time consuming and involve the measurement of calibrated patterns on planes before the actual object can continue to be measured after a camera or projector has been moved in the setup; hence they do not facilitate fast 3D measurement of objects when frequent experimental setup changes are necessary. By employing and combining a priori information via inverse rendering, on-board sensors and deep learning, and by leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
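
    The fine pose estimation step described above (optimising a rendered view of the scene model to match the camera image) can be sketched as a generic render-and-compare optimisation; render_scene() is an assumed stand-in for the GPU renderer, and the optimiser choice is illustrative:

        import numpy as np
        from scipy.optimize import minimize

        def refine_pose(pose0, observed_img, render_scene):
            """Refine a 6-DoF pose (rx, ry, rz, tx, ty, tz) by minimizing the
            pixel-wise difference between a rendered view and the camera image.
            render_scene(pose) -> grayscale image; assumed to exist (e.g. a
            GPU rasterizer of the calibrated scene and object model)."""
            def cost(pose):
                rendered = render_scene(pose)
                return np.mean((rendered - observed_img) ** 2)
            res = minimize(cost, pose0, method="Nelder-Mead",
                           options={"xatol": 1e-4, "maxiter": 2000})
            return res.x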

  10. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C.; Hanson, Bradley D.; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982
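
    Step (2) of the pipeline, counting plants from projection histograms, can be illustrated with a minimal sketch; the binarisation is assumed to have happened upstream, and the peak-separation and mass thresholds are illustrative assumptions rather than the paper's tuned values:

        import numpy as np
        from scipy.signal import find_peaks

        def count_plants(binary_mask, min_separation_px=30, min_mass=50):
            """Count plants along a nursery row from a binary vegetation mask
            (rows x cols, plants roughly evenly spaced along the columns).
            Peaks in the column-sum histogram are treated as individual
            plants; the thresholds here are illustrative assumptions."""
            histogram = binary_mask.sum(axis=0)          # vertical projection
            peaks, _ = find_peaks(histogram,
                                  distance=min_separation_px,
                                  height=min_mass)
            return len(peaks), peaks

        # Toy usage: three synthetic "plants"
        mask = np.zeros((100, 300), dtype=np.uint8)
        for c in (50, 150, 250):
            mask[20:80, c - 5:c + 5] = 1
        print(count_plants(mask))   # -> (3, array of peak columns)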

  11. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    Directory of Open Access Journals (Sweden)

    M. Simi

    2012-01-01

    A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The head module incorporates two motorized donut-shaped magnets and a miniaturized vision system at the tip. The compliant MLS can exploit the static external magnetic field to induce a smooth bending of the robotic head (0–80°), guaranteeing a wide-span tilt motion of the point of view. A nonlinear mathematical model of the compliant beam was developed and solved analytically in order to describe and predict the trajectory behaviour of the system for different structural parameters. The entire device is 95 mm long and 12.7 mm in diameter. Use of such a robot in single-port or standard multiport laparoscopy could enable a reduction of the number or size of ancillary trocars, or increase the number of working devices that can be deployed, thus paving the way for multiple-viewpoint laparoscopy.

  12. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.

  13. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-01-01

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, as a game interface, and can play a pivotal role in the human computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user’s gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of experiments performed in addition to tests of the usability and on-screen keyboard use of the proposed method show that it is better than previous methods. PMID:28420114
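
    The paper's fuzzy rule base is not reproduced in the abstract; the sketch below illustrates the general idea of fuzzy system-based target selection with two assumed inputs (gaze dwell time and gaze jitter), triangular memberships and a Mamdani-style min/max rule combination, all of which are illustrative assumptions:

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with support [a, c], peak b."""
            return np.maximum(np.minimum((x - a) / (b - a),
                                         (c - x) / (c - b)), 0.0)

        def selection_score(dwell_ms, jitter_deg):
            """Toy Mamdani-style inference: output in [0, 1] is the degree of
            'user intends to select'. Inputs and memberships are illustrative
            assumptions, not the paper's actual rule base."""
            dwell_long = tri(dwell_ms, 300, 800, 1300)
            dwell_short = tri(dwell_ms, -1, 0, 400)
            gaze_stable = tri(jitter_deg, -0.1, 0.0, 1.0)
            gaze_noisy = tri(jitter_deg, 0.5, 2.0, 3.5)
            # Rules: long dwell AND stable gaze -> select;
            #        short dwell OR noisy gaze  -> reject
            select = min(dwell_long, gaze_stable)
            reject = max(dwell_short, gaze_noisy)
            return select / (select + reject + 1e-9)   # defuzzified score

        print(selection_score(900, 0.2) > 0.5)   # likely a selection -> True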

  14. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker.

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-04-14

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, as a game interface, and can play a pivotal role in the human computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of experiments performed in addition to tests of the usability and on-screen keyboard use of the proposed method show that it is better than previous methods.

  15. Development of a compact and high spatial resolution gamma camera system using LaBr3(Ce)

    Science.gov (United States)

    Yamamoto, Seiichi; Imaizumi, Masao; Shimosegawa, Eku; Hatazawa, Jun

    2010-10-01

    In small animal imaging using a single photon emitting radionuclide, a high spatial resolution gamma camera is required; however, its spatial resolution is limited by the light output of conventional scintillators such as NaI(Tl). We developed and tested a small field-of-view (FOV) gamma camera using a new scintillator, LaBr3(Ce). The LaBr3(Ce) gamma camera consists of a 2 mm thick LaBr3(Ce) scintillator, a 2 in. 8 × 8 multi-anode position-sensitive photomultiplier tube (PSPMT, Hamamatsu H8500), and a personal-computer-based data acquisition system. The LaBr3(Ce) scintillator was directly coupled to the PSPMT and contained in a hermetically shielded, light-tight aluminum case. The signals from the PSPMT were gain-corrected, weighted-summed, and digitized by 100 MHz free-running A-D converters in the data acquisition system. The detector part of the gamma camera was encased in a tungsten gamma shield, and a tungsten pinhole collimator was mounted in front of the detector surface. The intrinsic spatial resolution, measured using a tungsten slit mask, was 0.75 mm FWHM, and the energy resolution was 8.9% FWHM for 122 keV gamma photons. We obtained transmission and emission images that demonstrated the high spatial resolution of the gamma camera system. Approximately two years after the fabrication of the detector, the flood image showed significant distortion caused by the hygroscopic nature of LaBr3(Ce). These results confirm that the developed LaBr3(Ce) gamma camera is promising for small animal imaging using low-energy single photon emitting radionuclides, provided the hygroscopic problem of LaBr3(Ce) is solved.
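
    The gain-corrected, weighted-sum readout described above is, in the usual Anger-logic sense, a charge-weighted centroid over the multi-anode signals; a minimal sketch under that assumption:

        import numpy as np

        def anger_position(anode_signals, gains):
            """Estimate the scintillation position from an 8x8 multi-anode
            PSPMT readout by a gain-corrected, charge-weighted centroid
            (Anger logic). anode_signals, gains: 8x8 arrays; returns (x, y)
            in units of the anode pitch."""
            q = anode_signals / gains              # per-anode gain correction
            xs = np.arange(q.shape[1])
            ys = np.arange(q.shape[0])
            total = q.sum()
            x = (q.sum(axis=0) * xs).sum() / total   # weighted column sum
            y = (q.sum(axis=1) * ys).sum() / total   # weighted row sum
            return x, y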

  16. Small Field of View Scintimammography Gamma Camera Integrated to a Stereotactic Core Biopsy Digital X-ray System

    Energy Technology Data Exchange (ETDEWEB)

    Andrew Weisenberger; Fernando Barbosa; T. D. Green; R. Hoefer; Cynthia Keppel; Brian Kross; Stanislaw Majewski; Vladimir Popov; Randolph Wojcik

    2002-10-01

    A small field of view gamma camera has been developed for integration with a commercial stereotactic core biopsy system. The goal is to develop and implement a dual-modality imaging system utilizing scintimammography and digital radiography to evaluate the reliability of scintimammography in predicting the malignancy of suspected breast lesions from conventional X-ray mammography. The scintimammography gamma camera is a custom-built mini gamma camera with an active area of 5.3 cm × 5.3 cm and is based on a 2 × 2 array of Hamamatsu R7600-C8 position-sensitive photomultiplier tubes. The spatial resolution of the gamma camera at the collimator surface is < 4 mm full-width at half-maximum, with a sensitivity of ~4000 Hz/mCi. The system is also capable of acquiring dynamic scintimammographic data to allow for dynamic uptake studies. Sample images of preliminary clinical results are presented to demonstrate the performance of the system.

  17. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method thus enforces the rightful ownership of the watermarked image, since no version of the image other than the watermarked one exists. We also take the Human Visual System (HVS) into consideration so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, which support only binary watermark patterns, the proposed algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.
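
    A minimal sketch of embedding a visible watermark directly in an RGGB Bayer mosaic: the watermark is sampled through the same CFA pattern as the sensor and alpha-blended into the raw data. The fixed blending strength stands in for the paper's HVS-driven adaptivity, which is not detailed in the abstract:

        import numpy as np

        def embed_visible_watermark_bayer(raw, wm_rgb, alpha=0.3):
            """Alpha-blend a colour watermark into an RGGB Bayer mosaic `raw`
            (HxW floats in [0,1], even dimensions assumed); wm_rgb is HxWx3
            in [0,1]. The constant `alpha` stands in for an HVS-driven local
            strength."""
            # Sample the watermark through the same RGGB pattern as the sensor
            wm_cfa = np.empty_like(raw)
            wm_cfa[0::2, 0::2] = wm_rgb[0::2, 0::2, 0]   # R sites
            wm_cfa[0::2, 1::2] = wm_rgb[0::2, 1::2, 1]   # G sites
            wm_cfa[1::2, 0::2] = wm_rgb[1::2, 0::2, 1]   # G sites
            wm_cfa[1::2, 1::2] = wm_rgb[1::2, 1::2, 2]   # B sites
            out = (1 - alpha) * raw + alpha * wm_cfa     # visible blend
            return np.clip(out, 0.0, 1.0)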

  18. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.

    Science.gov (United States)

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-08-31

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.
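
    The abstract does not spell out the colour-correspondence criterion; a common choice would be to convert the camera-derived tristimulus values to CIELAB and threshold a colour difference, as in this sketch (the CIE76 formula and the tolerance value are illustrative assumptions):

        import numpy as np

        def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):   # D65 white
            f = lambda t: np.where(t > (6/29)**3,
                                   np.cbrt(t),
                                   t / (3 * (6/29)**2) + 4/29)
            fx, fy, fz = (f(xyz[..., i] / white[i]) for i in range(3))
            return np.stack([116*fy - 16, 500*(fx - fy), 200*(fy - fz)],
                            axis=-1)

        def tow_matches_reference(xyz_tow, xyz_ref, tolerance=1.0):
            """Flag colour correspondence by the CIE76 difference Delta-E*ab;
            the tolerance of 1 Delta-E unit is an illustrative assumption."""
            de = np.linalg.norm(xyz_to_lab(xyz_tow) - xyz_to_lab(xyz_ref),
                                axis=-1)
            return de <= tolerance, de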

  19. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-08-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.

  20. New evaluation concept of the Athena WFI camera system by emulation of X-ray DEPFET detectors

    Science.gov (United States)

    Ott, S.; Bähr, A.; Brand, T.; Dauser, T.; Meidinger, N.; Plattner, M.; Stechele, W.

    2016-01-01

    The Wide Field Imager (WFI) is an X-ray camera for the future observatory Athena, the next ESA L-class mission. The signal processing chain of the WFI reaches from the sensing of incoming photons to the telemetry transmission to the spacecraft. Up to now, the signal processing chain has been verified with measurements of real X-ray sources, so only limited test scenarios are possible. This paper presents a new concept for evaluating the X-ray camera system: an end-to-end evaluation that makes use of a programmable real-time emulator of the WFI DEPFET detector system, including the front-end electronics. With a complete variation of all available input parameters, significant characteristics of the camera system can be studied and evaluated. This end-to-end evaluation method is a powerful tool to support the development of the WFI camera setup, not only in the early stage but also to improve characteristics and complex processing algorithms of the WFI once it is in orbit.

  1. Integrating different tracking systems in football: multiple camera semi-automatic system, local position measurement and GPS technologies.

    Science.gov (United States)

    Buchheit, Martin; Allen, Adam; Poon, Tsz Kit; Modonutti, Mattia; Gregson, Warren; Di Salvo, Valter

    2014-12-01

    During the past decade, substantial development of computer-aided tracking technology has occurred. We therefore aimed to provide calibration equations to allow the interchangeability of the different tracking technologies used in soccer. Eighty-two highly trained soccer players (U14-U17) were monitored during training and one match. Player activity was collected simultaneously with a semi-automatic multiple-camera system (Prozone), local position measurement (LPM) technology (Inmotio) and two global positioning systems (GPSports and VX). Data were analysed with respect to three different field dimensions (small, medium and large). Distance covered at high speed (>14.4 km·h⁻¹) was slightly-to-moderately greater when tracked with Prozone, and accelerations small-to-very-largely greater with LPM. For most of the equations, the typical error of the estimate was of a moderate magnitude. Interchangeability of the different tracking systems is possible with the provided equations, but care is required given their moderate typical error of the estimate.
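
    A calibration equation in the sense used above is, at its simplest, an ordinary least-squares line mapping one system's output onto another's, reported together with the typical error of the estimate; a sketch with synthetic stand-in data (the paired values below are for illustration only):

        import numpy as np

        def calibration_equation(x_system_a, y_system_b):
            """Fit y = intercept + slope * x between paired measurements of
            the same drills from two tracking systems, and report the typical
            error of the estimate (standard error of the residuals)."""
            slope, intercept = np.polyfit(x_system_a, y_system_b, 1)
            residuals = y_system_b - (intercept + slope * x_system_a)
            tee = residuals.std(ddof=2)     # typical error of the estimate
            return slope, intercept, tee

        # Usage: convert a GPS-derived distance to its Prozone equivalent
        gps = np.array([4800, 5100, 5350, 5600, 6020.0])
        pz  = np.array([4950, 5230, 5490, 5760, 6190.0])
        s, i, tee = calibration_equation(gps, pz)
        print(round(i + s * 5500), "±", round(tee))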

  2. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    Science.gov (United States)

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometric recognition method, is attracting the attention of researchers because of its advantages, such as high recognition performance and a lesser likelihood of theft and of inaccuracies caused by skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors) based on the researchers' observations of the difference between real (live) and presentation attack finger-vein images; therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and has delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition systems using a convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods through a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method, using principal component analysis (PCA) for dimensionality reduction of the feature space and a support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN
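
    The post-processing chain described above (CNN features, then PCA, then an SVM) can be sketched with scikit-learn; the CNN itself is not reproduced here, so cnn_features is assumed to hold features from a trained network's penultimate layer, and the component count and kernel are illustrative choices:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline

        def train_pad_classifier(cnn_features, labels, n_components=100):
            """Post-process CNN image features: PCA for dimensionality
            reduction, then an SVM for the live-vs-attack decision.
            cnn_features (N x D) is assumed to come from a trained CNN."""
            clf = make_pipeline(PCA(n_components=n_components),
                                SVC(kernel="rbf", C=1.0))
            clf.fit(cnn_features, labels)   # labels: 1 = live, 0 = attack
            return clf

        # Usage with synthetic stand-in features
        rng = np.random.default_rng(0)
        feats = rng.normal(size=(200, 512))
        labs = rng.integers(0, 2, size=200)
        model = train_pad_classifier(feats, labs)
        print(model.predict(feats[:5]))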

  3. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-10-01

    Finger-vein recognition, a new and advanced biometric recognition method, is attracting the attention of researchers because of its advantages, such as high recognition performance and a lesser likelihood of theft and of inaccuracies caused by skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors) based on the researchers' observations of the difference between real (live) and presentation attack finger-vein images; therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and has delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition systems using a convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods through a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method, using principal component analysis (PCA) for dimensionality reduction of the feature space and a support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared

  4. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    Directory of Open Access Journals (Sweden)

    Nicholas Schwabe

    2017-07-01

    The underground mining industry, and some above-ground operations, rely on the use of heavy equipment that articulates to navigate corners in the tight confines of the tunnels. Poor line of sight (LOS) has been identified as a problem for the safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360-degree view around the machine, have been implemented to improve the available LOS for the operator. A four-camera system was modeled in a computer environment to assess LOS on a 3D CAD model of a typical articulated machine. When positioned without any articulation, the system is excellent at removing blind spots for a machine driving straight forward or backward in a straight tunnel. Further analysis reveals that when the machine articulates in a simulated corner section, some camera locations are no longer useful for improving LOS into the corner. In some cases, the operator has a superior view into the corner when compared to the best available view from the camera. The work points to the need to integrate proximity detection systems at the design, build, and manufacture stage, and to consider proper policies and procedures that would address the gains and limits of the systems prior to implementation.

  5. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  6. Dual-head gamma camera system for intraoperative localization of radioactive seeds

    Science.gov (United States)

    Arsenali, B.; de Jong, H. W. A. M.; Viergever, M. A.; Dickerscheid, D. B. M.; Beijst, C.; Gilhuijs, K. G. A.

    2015-10-01

    Breast-conserving surgery is a standard option for the treatment of patients with early-stage breast cancer. This form of surgery may result in incomplete excision of the tumor. Iodine-125 labeled titanium seeds are currently used in clinical practice to reduce the number of incomplete excisions. It seems likely that the number of incomplete excisions can be reduced even further if intraoperative information about the location of the radioactive seed is combined with preoperative information about the extent of the tumor. Such a combination becomes possible if the location of the radioactive seed is established in a world coordinate system that can be linked to the (preoperative) image coordinate system. With this in mind, we propose a radioactive seed localization system composed of two static ceiling-suspended gamma camera heads and two parallel-hole collimators. Physical experiments and computer simulations mimicking realistic clinical situations were performed to estimate the localization accuracy (defined as trueness and precision) of the proposed system with respect to collimator-source distance (ranging between 50 cm and 100 cm) and imaging time (ranging between 1 s and 10 s). The goal of the study was to determine whether or not a trueness of 5 mm can be achieved if a collimator-source distance of 50 cm and an imaging time of 5 s are used (these specifications were defined by a group of dedicated breast cancer surgeons). The results from the experiments indicate that the location of the radioactive seed can be established with an accuracy of 1.6 mm ± 0.6 mm if a collimator-source distance of 50 cm and an imaging time of 5 s are used (these experiments were performed with a 4.5 cm thick block phantom). Furthermore, the results from the simulations indicate that a trueness of 3.2 mm or less can be achieved if a collimator-source distance of 50 cm and an imaging time of 5 s are used (this trueness was achieved for all 14 breast phantoms which

  7. Calibration of gamma camera systems for a multicentre European {sup 123}I-FP-CIT SPECT normal database

    Energy Technology Data Exchange (ETDEWEB)

    Tossici-Bolt, Livia [Southampton Univ. Hospitals NHS Trust, Dept. of Medical Physics and Bioengineering, Southampton (United Kingdom); Dickson, John C. [UCLH NHS Foundation Trust and Univ. College London, Institute of Nuclear Medicine, London (United Kingdom); Sera, Terez [Univ. of Szeged, Dept. of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Nijs, Robin de [Rigshospitalet and Univ. of Copenhagen, Neurobiology Research Unit, Copenhagen (Denmark); Bagnara, Maria Claudia [Az. Ospedaliera Universitaria S. Martino, Medical Physics Unit, Genoa (Italy); Jonsson, Cathrine [Karolinska Univ. Hospital, Dept. of Nuclear Medicine, Medical Physics, Stockholm (Sweden); Scheepers, Egon [Univ. of Amsterdam, Dept. of Nuclear Medicine, Academic Medical Centre, Amsterdam (Netherlands); Zito, Felicia [Fondazione IRCCS Granda, Ospedale Maggiore Policlinico, Dept. of Nuclear Medicine, Milan (Italy); Seese, Anita [Univ. of Leipzig, Dept. of Nuclear Medicine, Leipzig (Germany); Koulibaly, Pierre Malick [Univ. of Nice-Sophia Antipolis, Nuclear Medicine Dept., Centre Antoine Lacassagne, Nice (France); Kapucu, Ozlem L. [Gazi Univ., Faculty of Medicine, Dept. of Nuclear Medicine, Ankara (Turkey); Koole, Michel [Univ. Hospital and K.U. Leuven, Nuclear Medicine, Leuven (Belgium); Raith, Maria [Medical Univ. of Vienna, Dept. of Nuclear Medicine, Vienna (Austria); George, Jean [Univ. Catholique Louvain, Nuclear Medicine Division, Mont-Godinne Medical Center, Mont-Godinne (Belgium); Lonsdale, Markus Nowak [Bispebjerg Univ. Hospital, Dept. of Clinical Physiology and Nuclear Medicine, Copenhagen (Denmark); Muenzing, Wolfgang [Univ. of Munich, Dept. of Nuclear Medicine, Munich (Germany); Tatsch, Klaus [Univ. of Munich, Dept. of Nuclear Medicine, Munich (Germany); Municipal Hospital of Karlsruhe Inc., Dept. of Nuclear Medicine, Karlsruhe (Germany); Varrone, Andrea [Center for Psychiatric Research, Karolinska Inst., Dept. of Clinical Neuroscience, Stockholm (Sweden)

    2011-08-15

    A joint initiative of the European Association of Nuclear Medicine (EANM) Neuroimaging Committee and EANM Research Ltd. aimed to generate a European database of [{sup 123}I]FP-CIT single photon emission computed tomography (SPECT) scans of healthy controls. This study describes the characterization and harmonization of the imaging equipment of the institutions involved. {sup 123}I SPECT images of a striatal phantom filled with striatal-to-background ratios between 10:1 and 1:1 were acquired on all the gamma cameras, with absolute ratios measured from aliquots. The images were reconstructed by a core lab using ordered subset expectation maximization (OSEM) without corrections (NC), with attenuation correction only (AC) and with additional scatter and septal penetration correction (ACSC) using the triple energy window method. A quantitative parameter, the simulated specific binding ratio (sSBR), was measured using the "Southampton" methodology, which accounts for the partial volume effect, and compared against the actual values obtained from the aliquots. Camera-specific recovery coefficients were derived from linear regression, and the error of the measurements was evaluated using the coefficient of variation (COV). The relationship between measured and actual sSBRs was linear across all systems. Variability was observed between different manufacturers and, to a lesser extent, between cameras of the same type. The NC and AC measurements were found to systematically underestimate the actual sSBRs, while the ACSC measurements resulted in recovery coefficients close to 100% for all cameras (AC range 69-89%, ACSC range 87-116%). The COV improved from 46% (NC) to 32% (AC) and to 14% (ACSC) (p < 0.001). A satisfactory linear response was observed across all cameras. Quantitative measurements depend upon the characteristics of the SPECT systems, and their calibration is a necessary prerequisite for data pooling. Together with accounting for partial volume, the
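
    Deriving a camera-specific recovery coefficient from the linear regression of measured against actual sSBR values might look like the following sketch; taking the fitted slope as a percentage is the usual convention, though the study's exact definition is not given in the abstract:

        import numpy as np

        def recovery_coefficient(actual_sbr, measured_sbr):
            """Fit measured = slope * actual + offset across the phantom
            fillings and report the slope as a percentage recovery
            coefficient (the study's exact definition may differ)."""
            slope, offset = np.polyfit(actual_sbr, measured_sbr, 1)
            return 100.0 * slope, offset

        # Example with ACSC-corrected data recovering ~100%
        actual = np.array([1.0, 2.5, 5.0, 7.5, 10.0])
        measured = 0.98 * actual + 0.05
        print(recovery_coefficient(actual, measured)[0])   # ~98%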

  8. VALIDITY OF ATHLETIC TASK PERFORMANCE MEASURES COLLECTED WITH A SINGLE-CAMERA MOTION ANALYSIS SYSTEM AS COMPARED TO STANDARD CLINICAL MEASUREMENTS.

    Science.gov (United States)

    McPherson, April L; Berry, John D; Bates, Nathanial A; Hewett, Timothy E

    2017-08-01

    Previous investigations of the validity of single-camera 3D motion analysis systems have yielded mixed results for clinical applications. The purpose of the current study was to determine the validity of a single-camera 3D motion analysis system for subject standing height, vertical jump height, and broad jump length. It was hypothesized that single-camera system values would demonstrate high correlation with the values obtained from accepted standard clinical measurements. Experimental in vivo validation study. Twelve subjects (age 20.6 ± 4.9 years) from a cohort that included high school to adult athletes who participate in sports at a recreational or competitive level entered and completed the study. Performance measurements for standing height, vertical jump height, and broad jump length were measured with standard clinical measurements and a single-camera 3D motion system. Single-camera system measurements differed significantly from the clinical measures for standing height and vertical jump height, but not for broad jump length (p > 0.07). The relative performance of subjects was highly correlated between single-camera and clinical measurements (r² > 0.80). Single-camera measurements lacked precision along the vertical axis of motion, but correlated well with clinically accepted measurements for standing height, broad jump length, and vertical jump height. The single-camera system may be capable of making accurate performance assessments in the horizontal plane, but should be limited to relative assessments along the vertical axis of motion. Additional refinement to increase the data reporting accuracy of the motion system along the vertical axis should be considered before relying on this single-camera 3D motion analysis system over clinical techniques to measure vertical jump and standing broad jump performances. Level of evidence: 2b.

  9. Poster - Thur Eve - 18: Characterization of a camera and LED lightbox imaging system for radiochromic film dosimetry.

    Science.gov (United States)

    Alexander, K; Percy, E; Olding, T; Schreiner, L J; Salomons, G

    2012-07-01

    Radiation therapy treatment modalities continue to develop and have become increasingly complex. Consequently, dose verification and quality assurance (QA) are of great importance to ensure that a prescribed dose is accurately and precisely delivered to a patient. Radiochromic film dosimetry has been adopted as a convenient option for QA because it is relatively energy independent, is near tissue equivalent, and has high spatial resolution. Unfortunately, it is not always easy to use. In this study, preliminary work towards developing a novel method of imaging radiochromic film is presented. The setup consists of a camera mounted vertically above a lightbox containing red LEDs, interfaced with computer image acquisition software. Imaging results from this system are compared with imaging performed using an Epson Expression 10000XL scanner (a device in common clinical use). The lightbox imaging technique with camera readout is much faster than a flatbed scanner. The film measurements made using the camera are independent of film orientation and show reduced artifacts, so fewer corrections are required compared to the use of flatbed scanners. Optical scatter also appears to be less of an issue with this design than with the flatbed scanner. While further work needs to be done to optimize the lightbox imaging system, it shows great promise for a rapid, simple, and orientation-independent setup, improving on existing film scanning systems. © 2012 American Association of Physicists in Medicine.

  10. HiRes camera and LIDAR ranging system for the Clementine mission

    Energy Technology Data Exchange (ETDEWEB)

    Ledebuhr, A.G.; Kordas, J.F.; Lewis, I.T. [and others

    1995-04-01

    Lawrence Livermore National Laboratory developed a space-qualified High Resolution (HiRes) imaging LIDAR (Light Detection And Ranging) system for use on the DoD Clementine mission. The Clementine mission provided more than 1.7 million images of the moon, earth, and stars, including the first ever complete systematic surface mapping of the moon from the ultraviolet to near-infrared spectral regions. This article describes the Clementine HiRes/LIDAR system, discusses design goals and preliminary estimates of on-orbit performance, and summarizes lessons learned in building and using the sensor. The LIDAR receiver system consists of a High Resolution (HiRes) imaging channel, which incorporates an intensified multi-spectral visible camera, combined with a laser ranging channel, which uses an avalanche photodiode for laser pulse detection and timing. The receiver was boresighted to a lightweight McDonnell-Douglas diode-pumped Nd:YAG laser transmitter that emitted 1.06 µm wavelength pulses of 200 mJ/pulse and 10 ns pulse width. The LIDAR receiver uses a common F/9.5 Cassegrain telescope assembly. The optical path of the telescope is split using a color-separating beamsplitter. The imaging channel incorporates a filter wheel assembly which spectrally selects the light that is imaged onto a custom 12 mm gated image intensifier, fiber-optically coupled into a 384 × 276 pixel frame-transfer CCD FPA. The image intensifier was spectrally sensitive over the 0.4 to 0.8 µm wavelength region. The six-position filter wheel contained four narrow spectral filters, one broadband filter, and one blocking filter. At periselene (400 km), the HiRes/LIDAR imaged a 2.8 km swath width at 20-meter resolution. The LIDAR function detected the differential signal return with a 40-meter range accuracy, with a maximum range capability of 640 km, limited by the bit counter in the range return counting clock.

  11. Visual odometry from omnidirectional camera

    OpenAIRE

    Jiří DIVIŠ

    2012-01-01

    We present a system that estimates the motion of a robot relying solely on images from onboard omnidirectional camera (visual odometry). Compared to other visual odometry hardware, ours is unusual in utilizing high resolution, low frame-rate (1 to 3 Hz) omnidirectional camera mounted on a robot that is propelled using continuous tracks. We focus on high precision estimates in scenes, where objects are far away from the camera. This is achieved by utilizing omnidirectional camera that is able ...

  12. Applications of a streak-camera-based imager with simultaneous high space and time resolution

    Science.gov (United States)

    Klick, David I.; Knight, Frederick K.

    1993-01-01

    A high-speed imaging device has been built that is capable of recording several hundred images over a time span of 25 to 400 ns. The imager is based on a streak camera, which provides both spatial and temporal resolution. The system's current angular resolution is 16 × 16 pixels, with a time resolution of 250 ps. It was initially employed to provide 3-D images of objects, in conjunction with a short-pulse (approximately 100 ps) laser. For the 3-D (angle-angle-range) laser radar, the 250 ps time resolution corresponds to a range resolution of 4 cm. In the 3-D system, light from a short-pulse laser (a frequency-doubled, Q-switched, mode-locked Nd:YAG laser operating at a wavelength of 532 nm) flood-illuminates a target of linear dimension approximately 1 m. The returning light from the target is imaged, and the image is dissected by a 16 × 16 array of optical fibers. At the other end of the fiber optic image converter, the 256 fibers form a vertical line array, which is input to the slit of a streak camera. The streak camera sweeps the input line across the output phosphor screen so that horizontal position is directly proportional to time. The resulting 2-D image (fiber location vs. time) at the phosphor is read by an intensified (SIT) vidicon TV tube, and the image is digitized and stored. A computer subsequently decodes the image, unscrambling the linear pixels into an angle-angle image at each time or range bin. We are left with a series of snapshots, each one depicting the portion of the target surface in a given range bin. The pictures can be combined to form a 3-D realization of the target. Continuous recording of many images over a short time span is of use in imaging other transient phenomena. These applications share a need for multiple images from a nonrepeatable transient event of time duration on the order of nanoseconds. Applications discussed for the imager include (1) pulsed laser beam diagnostics -- measuring laser beam spatial and temporal structure, (2
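
    The quoted equivalence between the 250 ps time resolution and the 4 cm range resolution follows directly from the two-way time of flight (range resolution = c·Δt/2); a one-line check:

        C = 299_792_458.0     # speed of light, m/s
        dt = 250e-12          # streak-camera time resolution, s
        print(C * dt / 2)     # -> 0.0375 m, i.e. ~4 cm per range bin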

  13. Tests of a new CCD-camera based neutron radiography detector system at the reactor stations in Munich and Vienna

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, E.; Pleinert, H. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Schillinger, B. [Technische Univ. Muenchen (Germany); Koerner, S. [Atominstitut der Oesterreichischen Universitaeten, Vienna (Austria)

    1997-09-01

    The performance of the new neutron radiography detector designed at PSI, with a cooled, highly sensitive CCD camera, was investigated under real neutronic conditions at three beam ports of two reactor stations. Different converter screens were applied, for which the sensitivity and the modulation transfer function (MTF) could be obtained. The results are very encouraging concerning the utilization of this detector system as a standard tool at the radiography stations of the spallation source SINQ. (author) 3 figs., 5 refs.

  14. OCAM2S: an integral shutter ultrafast and low noise wavefront sensor camera for laser guide stars adaptive optics systems

    Science.gov (United States)

    Gach, Jean-Luc; Feautrier, Philippe; Balard, Philippe; Guillaume, Christian; Stadler, Eric

    2014-07-01

    To date, the OCAM2 system has been demonstrated to be the fastest and lowest-noise production-ready wavefront sensor, achieving 2067 full frames per second with subelectron readout noise. This makes OCAM2 the ideal system for natural as well as continuous wave laser guide star wavefront sensing. In this paper we present the new gated version of OCAM2, named OCAM2-S, using E2V's CCD219 sensor with an integral shutter. This new camera offers the same superb characteristics as OCAM2, both in terms of speed and readout noise, but also offers a shutter function that makes the sensor sensitive to light only for very short periods, at will. We report on the gating time and extinction ratio performance of this new camera. This device opens new possibilities for adaptive optics systems based on pulsed Rayleigh lasers. With a shutter time constant well below 1 microsecond, this camera enables new solutions for pulsed sodium lasers, with backscatter suppression or even spot elongation minimization for ELT LGS.

  15. Accuracy map of an optical motion capture system with 42 or 21 cameras in a large measurement volume.

    Science.gov (United States)

    Aurand, Alexander M; Dufour, Jonathan S; Marras, William S

    2017-06-14

    Optical motion capture is commonly used in biomechanics to measure human kinematics. However, no studies have yet examined the accuracy of optical motion capture in a large capture volume (>100 m³), or how accuracy varies from the center to the extreme edges of the capture volume. This study measured the dynamic 3D errors of an optical motion capture system composed of 42 OptiTrack Prime 41 cameras (capture volume of 135 m³) by comparing the motion of a single marker to the motion reported by a ThorLabs linear motion stage. After spline interpolating the data, it was found that 97% of the capture area had an error below 200 µm. When the same analysis was performed using only half (21) of the cameras, 91% of the capture area was below 200 µm of error. The only locations that exceeded this threshold were at the extreme edges of the capture area, and no location had a mean error exceeding 1 mm. When measuring human kinematics with skin-mounted markers, uncertainty of marker placement relative to underlying skeletal features and soft tissue artifact produce errors that are orders of magnitude larger than the errors attributed to the camera system itself. Therefore, the accuracy of this OptiTrack optical motion capture system was found to be more than sufficient for measuring full-body human kinematics with skin-mounted markers in a large capture volume (>100 m³). Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Mars Observer Camera

    OpenAIRE

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; J. Veverka (Massachusetts Institute of Technology, Cambridge, U.S.A.); Ravine, M. A.; Soulanille, T.A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the “push broom” technique; that is, they do not take “frames” but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope f...

  17. THE DEVELOPMENT OF A FAMILY OF LIGHTWEIGHT AND WIDE SWATH UAV CAMERA SYSTEMS AROUND AN INNOVATIVE DUAL-SENSOR ON-SINGLE-CHIP DETECTOR

    Directory of Open Access Journals (Sweden)

    B. Delauré

    2013-08-01

    Together with a Belgian industrial consortium, VITO has developed the lightweight camera system MEDUSA. It combines high spatial resolution with a wide swath to support missions for large-scale mapping and disaster monitoring applications. MEDUSA has been designed to be operated on a solar-powered unmanned aerial vehicle flying in the stratosphere. The camera system contains a custom-designed CMOS imager with two sensors (each having 10000 × 1200 pixels) on one chip. One sensor is panchromatic; one is equipped with colour filters. The MEDUSA flight model camera has passed an extensive test campaign and is ready to conduct its maiden flight. First airborne test flights with an engineering model version of the camera have been executed to validate the functionality and the performance of the camera. An image stitching workflow has been developed in order to generate an image composite of the acquired images in near real time. The unique properties of the dual-sensor-on-single-chip detector triggered the development of two new camera designs which are currently in preparation. MEDUSA-low is a modified camera system optimised for compatibility with more conventional UAV systems with a payload capacity of 5–10 kg flying at an altitude around 1 km. Its camera acquires both panchromatic and colour images. The MEDUSA geospectral camera is an innovative hyperspectral imager equipped with a spatially varying spectral filter installed in front of one of the two sensors. It acquires both hyperspectral and broad-band high-spatial-resolution image data from one and the same camera.

  18. Camera characterization using back-propagation artificial neural network based on Munsell system

    Science.gov (United States)

    Liu, Ye; Yu, Hongfei; Shi, Junsheng

    2008-02-01

    The camera output RGB signals do not correspond directly to the tristimulus values based on the CIE standard colorimetric observer, i.e., camera RGB is a device-dependent color space. To achieve accurate color information, we need to perform color characterization, which derives a transformation between camera RGB values and CIE XYZ values. In this paper we set up a Back-Propagation (BP) artificial neural network to realize the mapping from camera RGB to CIE XYZ. We used the Munsell Book of Color, with 1267 color samples in total. Each patch of the Munsell Book of Color was recorded by the camera to obtain its RGB values. The Munsell Book of Color was photographed in a light booth with the surround kept dark. The viewing/illuminating geometry was 0/45 using a D65 illuminant. The lighting illuminating the reference target needs to be as uniform as possible. The BP network had five layers (3-10-10-10-3), a structure selected through our experiments. 1000 training samples were selected randomly from the 1267 samples, and the remaining 267 samples were used as testing samples. Experimental results show that the mean color difference between the reproduced colors and target colors is 0.5 CIELAB color-difference units, which is smaller than the largest acceptable color difference of 2 CIELAB color-difference units. The results satisfy applications requiring more accurate color measurements, such as medical diagnostics, cosmetics production, and color reproduction across different media.
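
    A sketch of the characterization described above, with scikit-learn's MLPRegressor standing in for the BP network; the hidden layout matches the 3-10-10-10-3 structure and the 1000/267 split, while the patch data here are synthetic stand-ins for the measured Munsell RGB/XYZ pairs:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # `rgb` and `xyz` would hold the 1267 Munsell patches (camera RGB,
        # measured XYZ); synthetic stand-in data are used below.
        rng = np.random.default_rng(1)
        rgb = rng.random((1267, 3))
        xyz = rgb @ np.array([[0.41, 0.36, 0.18],   # stand-in linear mapping
                              [0.21, 0.72, 0.07],
                              [0.02, 0.12, 0.95]]).T

        idx = rng.permutation(1267)
        train, test = idx[:1000], idx[1000:]        # 1000 train / 267 test

        net = MLPRegressor(hidden_layer_sizes=(10, 10, 10),  # 3-10-10-10-3
                           activation="logistic", max_iter=5000,
                           random_state=0)
        net.fit(rgb[train], xyz[train])
        pred = net.predict(rgb[test])
        print(np.abs(pred - xyz[test]).mean())      # mean absolute XYZ error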

  19. OCAM2S: an integral shutter ultrafast and low noise wavefront sensor camera for laser guide stars adaptive optics systems

    OpenAIRE

    Gach, Jean-Luc; Feautrier, Philippe; Buey, Tristan; Rousset, Gerard; Gendron, Eric; Morris, Tim (School of Physics and Astronomy, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.); Basden, Alastair; Myers, Richard; Vidal, Fabrice; Chemla, Fanny

    2015-01-01

      To date, the OCAM2 system has been demonstrated to be the fastest and lowest-noise production-ready wavefront sensor, achieving 2067 full frames per second with subelectron readout noise. This makes OCAM2 the ideal system for natural as well as continuous wave laser guide star wavefront sensing. In this paper we present the new gated version of OCAM2 named OCAM2-S, using E2V's CCD219 sensor with integral shutter. This new camera offers the same superb characteristics as OCAM2 both i...

  20. Fast Data Acquisition in Heavy Ion CT Using Intensifying Screen—EMCCD Camera System With Beam Intensity Monitor

    Science.gov (United States)

    Muraishi, Hiroshi; Abe, Shinji; Satoh, Hitoshi; Hara, Hidetake; Mogaki, Tatsuya; Hara, Satoshi; Miyake, Shoko; Watanabe, Yusuke; Koba, Yusuke

    2012-10-01

    We investigated the feasibility of fast data acquisition in the heavy ion CT (IonCT) technique with an X-ray intensifying screen–charge-coupled device (CCD) camera system. This technique is based on measuring the residual range distribution of heavy ions after they pass through an object. We took a large number of images with a CCD camera for one projection while changing the range shifter (RS) thickness, to obtain a characteristic curve similar to a Bragg curve and then to estimate the relative residual range. We used a high-quality Electron Multiplying CCD (EMCCD) camera, which drastically reduced the data acquisition time. We also used a parallel-plate ionization chamber upstream of the object to monitor the time variation of the heavy ion beam intensity from the synchrotron accelerator and to perform beam intensity correction for all EMCCD images. Experiments were conducted using a broad beam of ¹²C, generated by spreading out with a scatterer the pencil beam accelerated up to 400 MeV/u by the Heavy Ion Medical Accelerator in Chiba (HIMAC) at the National Institute of Radiological Sciences. We demonstrated that fast CT data acquisition, 14 min for 256 projections, is possible for an electron density phantom consisting of six rods, with a relative electron density resolution of 0.017, using the proposed technique at HIMAC.
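
    The beam-intensity correction described above can be sketched as a per-frame normalisation by the simultaneous ionization-chamber reading; a minimal version, assuming one EMCCD frame per range-shifter step:

        import numpy as np

        def correct_for_beam_intensity(frames, ic_readings):
            """Normalize each EMCCD frame (one per range-shifter step) by the
            simultaneous ionization-chamber reading so that synchrotron spill
            fluctuations cancel out of the per-pixel Bragg-like curves.
            frames: (n_steps, H, W); ic_readings: (n_steps,)."""
            ic = np.asarray(ic_readings, dtype=float)
            weights = np.mean(ic) / ic
            return frames * weights[:, None, None]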

  1. Stereoscopic camera and viewing systems with undistorted depth presentation and reduced or eliminated erroneous acceleration and deceleration perceptions, or with perceptions produced or enhanced for special effects

    Science.gov (United States)

    Diner, Daniel B. (Inventor)

    1991-01-01

    Methods for providing stereoscopic image presentation and stereoscopic configurations using stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as an image magnification factor, q, and intercamera distance, 2w. For converged cameras, q is selected so that Ve - qwl = 0, i.e., q = Ve/(wl), where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point; for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of the left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations that produce a nonlinear relation between perceived and real depth, and erroneously produce or enhance perceived accelerations and decelerations, in order to provide special effects for entertainment, training, or educational purposes.
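
    As reconstructed from the abstract (the converged-camera condition and its solved form are a reconstruction of the garbled original), the two set-up conditions can be written compactly:

        \[
          \text{converged: } Ve - qwl = 0 \;\Longleftrightarrow\; q = \frac{Ve}{wl},
          \qquad
          \text{parallel: } q = \frac{e}{w}
        \]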

  2. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object that emits varying intensities of radiation, and that may move at varying speeds, is shown, wherein there is substantially no overlapping of successive images and the exposure times and scan times may be varied independently of each other.

  3. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    Science.gov (United States)

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have tried to solve this issue by feeding both the visible light and the FIR camera images into the CNN as input; this, however, takes longer to process and makes the system structure more complex, as the CNN needs to process both camera images. In this research, the more appropriate candidate of the two pedestrian images from the visible light and FIR cameras is selected adaptively based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than previously reported methods.
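
    The abstract does not spell out the fuzzy rules, so the following is only a toy Mamdani-style sketch of the selection stage in Python: two illustrative cues (scene brightness and thermal contrast, both normalized to [0, 1]) vote for the visible or FIR candidate, and the winner would then be passed to the CNN verifier. All names and membership functions here are invented for illustration.

      def tri(x, a, b, c):
          # Triangular membership function peaking at b.
          return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

      def select_candidate(brightness, thermal_contrast):
          dark = tri(brightness, -0.1, 0.0, 0.5)           # nighttime scene
          bright = tri(brightness, 0.4, 1.0, 1.1)          # daytime scene
          flat_dt = tri(thermal_contrast, -0.1, 0.0, 0.5)  # body ~ background
          strong_dt = tri(thermal_contrast, 0.4, 1.0, 1.1)
          # Illustrative rules: daytime or flat thermal contrast -> visible;
          # dark scene with usable thermal contrast -> FIR.
          vis_score = max(bright, flat_dt)
          fir_score = min(dark, strong_dt)
          return 'visible' if vis_score >= fir_score else 'fir'

      print(select_candidate(0.9, 0.2))  # daytime, flat thermal -> visible
      print(select_candidate(0.1, 0.8))  # night, strong thermal -> fir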

  4. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

    Full Text Available Summary. The most important operation in granular mixed fodder production is the molding process, during which the properties of the granular fodder are defined; these determine the production process and final product quality. The article analyzes the possibility of using a digital video camera as a smart sensor for the production process control system. A parametric model of the process of molding bundles from the granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed. A mathematical model of the automatic control system (ACS), using an etalon video frame as the set point, was built in the MATLAB software environment. As a parameter of the bundle molding process, the value of the specific area, determined by mathematical treatment of the video frame, is proposed. Algorithms were developed to determine changes in the structural and mechanical properties of the fodder mass from video frame images. Digital video of various operating modes of the molding machine was recorded, and after mathematical processing of the video, the transfer functions were determined for use as changes of the adjustable parameter, the specific area. Structural and functional diagrams of the system regulating the fodder bundle molding process with digital video cameras were built and analyzed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained; in addition to viscosity, the creep property characteristic of the fodder mass was considered. The mathematical model of the ACS for the bundle molding process, which allows investigation of the transient processes occurring in a control system that uses a digital video camera as the smart sensor, was developed in Simulink

  5. Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras

    Directory of Open Access Journals (Sweden)

    Xiaoqin Wang

    2014-12-01

    Full Text Available We present a new vision-based cooperative pose estimation scheme for systems of mobile robots equipped with RGB-D cameras. We first model a multi-robot system as an edge-weighted graph. Then, based on this model, and by using the real-time color and depth data, the robots with shared fields of view estimate their relative poses pairwise. The system does not need a single common view shared by all robots, and it works in 3D scenes without any specific calibration pattern or landmark. The proposed scheme distributes working loads evenly in the system, hence it is scalable and the computing power of the participating robots is efficiently used. The performance and robustness were analyzed on both synthetic and experimental data in different environments over a range of system configurations with varying numbers of robots and poses.

  6. Strategy for the Development of a Smart NDVI Camera System for Outdoor Plant Detection and Agricultural Embedded Systems

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2013-01-01

    Full Text Available The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed.

  7. Strategy for the development of a smart NDVI camera system for outdoor plant detection and agricultural embedded systems.

    Science.gov (United States)

    Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe

    2013-01-24

    The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed.
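
    The NDVI computation itself is compact; assuming co-registered red and NIR frames (what the one-chip design is meant to deliver after its pixel modifications), a per-pixel Python sketch is:

      import numpy as np

      def ndvi(nir, red, eps=1e-6):
          # NDVI = (NIR - red) / (NIR + red), computed per pixel; eps guards
          # against division by zero on dark pixels.
          nir = nir.astype(np.float64)
          red = red.astype(np.float64)
          return (nir - red) / (nir + red + eps)

      def plant_mask(nir, red, threshold=0.3):
          # Plants reflect strongly in NIR, so high NDVI marks vegetation;
          # 0.3 is an illustrative cutoff, not a value from the paper.
          return ndvi(nir, red) > threshold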

  8. The effectiveness of a rearview camera and parking sensor system alone and combined for preventing a collision with an unexpected stationary or moving object.

    Science.gov (United States)

    Kidd, David G; Hagoski, Bradly K; Tucker, Tia G; Chiang, Dean P

    2015-06-01

    This study measured the effectiveness of a parking sensor system, a rearview camera, and a sensor system combined with a camera for preventing a collision with a stationary or moving child-size object in the path of a backing vehicle. An estimated 15,000 people are injured and 210 are killed every year in backover crashes involving light vehicles. Cameras and sensor systems may help prevent these crashes. The sample included 111 drivers (55 men, 56 women), including 16 in the no-technology condition, 32 in the sensor condition, 32 in the camera condition, and 31 in the camera-plus-sensor condition. A stationary or moving child-size object was surreptitiously deployed in the path of participants backing out of a parking stall. A significantly smaller proportion of participants in the camera condition hit the stationary object compared with participants in the no-technology condition; however, this benefit was greatly reduced when the stationary object was partially or completely in the shade. Significantly fewer participants hit the moving object than the stationary object. The percentage of participants in the sensor, camera, and camera-plus-sensor conditions who hit the moving object was not different from the no-technology condition. The camera was the only technology that was effective for preventing collisions with the stationary object. The variation in collision outcomes between the stationary- and moving-object conditions illustrates how the effectiveness of these technologies is dependent on the backing situation. This research can help the selection and development of countermeasures to prevent backovers. © 2014, Insurance Institute for Highway Safety.

  9. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
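
    As a rough sketch of the kind of on-board ROI evaluation described (minimum/maximum and mean-versus-level checks that can steer the readout), the Python below computes the statistics for one ROI and raises a trigger flag. In EDICAM this logic runs in the camera itself; all names and the host-side framing here are illustrative.

      import numpy as np

      def evaluate_roi(frame, roi, low_level, high_level):
          # roi = (row0, row1, col0, col1); returns statistics and a flag
          # that could switch the readout to fast, ROI-only monitoring.
          r0, r1, c0, c1 = roi
          patch = frame[r0:r1, c0:c1]
          mean = float(patch.mean())
          return {
              'min': float(patch.min()),
              'max': float(patch.max()),
              'mean': mean,
              'trigger': mean < low_level or mean > high_level,
          }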

  10. Neutron cameras for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P. [ITER San Diego Joint Work Site, La Jolla, CA (United States)] [and others]

    1998-12-31

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  11. [Photocoagulation guided by fundus camera (new photocoagulation method of the retina with a non-contact and wide field system)].

    Science.gov (United States)

    Fernández-Vigo, J; Macarro, A; Viguera, F J; Calles, C; Usón, J

    2002-10-01

    To design a device that integrates a laser photocoagulator into a fundus camera so that the functions of both are incorporated for simultaneous use. This system allows visualization of the fundus during photocoagulation, with a non-contact technique and no hand-held lens. To test the device we used a fundus camera, the Fx500 (Kowa), and a diode laser, the Oculight SLx (Iris Medical). We analysed the physical and power laser parameters, performing a comprehensive control of the device's safety. In order to measure the error in the precision of the laser, we designed a micrometric test to evaluate the directionality of the beam and the focusing system. Finally, we tested the size, time of exposure, and intensity necessary to obtain an effective impact. With our system, transpupillary photocoagulation fulfils the main safety requirements on laser radiation and illumination in retinoscopy. After diverse adjustments, the laser impacts were placed in the desired retinal areas. The lesions generated in the pig eyes were quite similar to those obtained by conventional techniques, and they were time and intensity dependent. Photocoagulation with our system is very simple and potentially safe and effective. It may facilitate the photocoagulation process inasmuch as it is more comfortable and user-friendly.

  12. Detection of the dusty torus in AGN with COMIC, the new infrared camera dedicated to the ESO adaptive optics system.

    Science.gov (United States)

    Marco, Olivier

    1997-10-01

    High angular resolution observation has greatly benefited from adaptive optics systems working in the infrared. The COMIC camera, the second camera dedicated to ADONIS, the adaptive optics system of the ESO 3.60-meter telescope, allows observation in the 3-5 micron spectral range at the diffraction limit of the telescope. The characterization of the camera at the Meudon and Grenoble laboratories, and then the determination of its performance on the sky in Chile, constitute the first part of this dissertation. A new method for evaluating the limiting magnitudes is proposed which takes into account the various contributions to the detectivity loss between laboratory testing and real observing conditions. This approach can be transferred to any other case where the observing conditions (Strehl ratio or seeing) are known a priori. Study of the central region (~1 arcsec) of Active Galactic Nuclei requires high angular resolution. In particular, warm and hot dust is emissive in the 1-5 micron spectral region, so adaptive optics observations are well suited to AGN observation. It is predicted that the central engine and its neighboring environment are embedded within an optically thick dusty/molecular torus which may, along some lines of sight, obscure and even fully hide the nuclear emission, leading to distinct observational properties for objects supposed to be identical. Observations of NGC7469 and NGC1068 made with ADONIS & COMIC have shown large amounts of dust located in the torus but also mixed with gas from the narrow-line emission region. The high angular resolution achieved has allowed the determination of the dust temperature, mass, and spatial distribution around the AGN central engine. These results are in agreement with several torus models and could help to constrain them.

  13. An Improved Indoor Positioning System Using RGB-D Cameras and Wireless Networks for Use in Complex Environments.

    Science.gov (United States)

    Duque Domingo, Jaime; Cerrada, Carlos; Valero, Enrique; Cerrada, Jose A

    2017-10-20

    This work presents an Indoor Positioning System to estimate the location of people navigating in complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments, consisting of various connected rooms, where people are freely moving. This is a non-intrusive system in which personal information about subjects is not needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smart-phones. In this article, the methods developed to combine the above-mentioned technologies and the experiments performed to test the system are detailed. The obtained results show a significant improvement in terms of accuracy and performance with respect to previous WiFi-based solutions as well as an extension in the range of operation.

  14. ORIS: the Oak Ridge Imaging System program listings. [Nuclear medicine imaging with rectilinear scanner and gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Bell, P. R.; Dougherty, J. M.

    1978-04-01

    The Oak Ridge Imaging System (ORIS) is a general purpose access, storage, processing and display system for nuclear medicine imaging with rectilinear scanner and gamma camera. This volume contains listings of the PDP-8/E version of ORIS Version 2. The system is designed to run under the Digital Equipment Corporation's OS/8 monitor in 16K or more words of core. System and image file mass storage is on RK8E disk; longer-time image file storage is provided on DECtape. Another version of this program exists for use with the RF08 disk, and a more limited version is for DECtape only. This latter version is intended for non-medical imaging.

  15. An Improved Indoor Positioning System Using RGB-D Cameras and Wireless Networks for Use in Complex Environments

    Directory of Open Access Journals (Sweden)

    Jaime Duque Domingo

    2017-10-01

    Full Text Available This work presents an Indoor Positioning System to estimate the location of people navigating in complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments, consisting of various connected rooms, where people are freely moving. This is a non-intrusive system in which personal information about subjects is not needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smart-phones. In this article, the methods developed to combine the above-mentioned technologies and the experiments performed to test the system are detailed. The obtained results show a significant improvement in terms of accuracy and performance with respect to previous WiFi-based solutions as well as an extension in the range of operation.
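
    The article details its own combination method; purely as an illustration of why fusing the two sources helps, the Python sketch below merges a coarse WiFi fix with a more precise RGB-D detection by inverse-variance weighting. This is a hypothetical, simplified stand-in for the paper's technique, with invented numbers.

      import numpy as np

      def fuse(wifi_xy, wifi_sigma, depth_xy, depth_sigma):
          # Weight each 2D position estimate by the inverse of its variance.
          w_wifi = 1.0 / wifi_sigma ** 2
          w_depth = 1.0 / depth_sigma ** 2
          xy = w_wifi * np.asarray(wifi_xy) + w_depth * np.asarray(depth_xy)
          return xy / (w_wifi + w_depth)

      # WiFi says (3.0, 4.5) m with ~2 m error; the depth camera says
      # (2.2, 4.0) m with ~0.3 m error: the fused fix follows the camera.
      print(fuse((3.0, 4.5), 2.0, (2.2, 4.0), 0.3))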

  16. Image Analysis of OSIRIS-REx Touch-And-Go Camera System (TAGCAMS) Thermal Vacuum Test Images

    Science.gov (United States)

    Everett Gordon, Kenneth; Bos, Brent J.

    2017-01-01

    The objective of NASA’s OSIRIS-REx Asteroid Sample Return Mission, which launched in September 2016, is to travel to the near-Earth asteroid 101955 Bennu, survey and map the asteroid, and return a scientifically interesting sample to Earth in 2023. As a part of its suite of integrated sensors, the OSIRIS-REx spacecraft includes a Touch-And-Go Camera System (TAGCAMS). The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, acquisition of the asteroid sample, and confirmation of the asteroid sample stowage in the spacecraft’s Sample Return Capsule (SRC). After first being calibrated at Malin Space Science Systems (MSSS) at the instrument level, the TAGCAMS were then transferred to Lockheed Martin (LM), where they were put through a progressive series of spacecraft-level environmental tests. These tests culminated in a several-week long, spacecraft-level thermal vacuum (TVAC) test during which hundreds of images were recorded. To analyze the images, custom codes were developed using MATLAB R2016a programming software. For analyses of the TAGCAMS dark images, the codes observed the dark current level for each of the images as a function of the camera-head temperature. Results confirm that the detector dark current noise has not increased and follows similar trends to the results measured at the instrument-level by MSSS. This indicates that the electrical performance of the camera system is stable, even after integration with the spacecraft, and will provide imagery with the required signal-to-noise ratio during spaceflight operations. During the TVAC testing, the TAGCAMS were positioned to view optical dot targets suspended in the chamber. Results for the TAGCAMS light images using a centroid analysis on the positions of the optical target holes indicate that the boresight pointing of the two navigation cameras depend on spacecraft temperature, but will not change by more than ten pixels (approximately 2
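
    For the dark-image trend check described above, one conventional analysis (sketched here in Python rather than the MATLAB the team used, and with synthetic numbers) is to fit the logarithm of the mean dark level against camera-head temperature, since detector dark current grows roughly exponentially with temperature:

      import numpy as np

      temps_c = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])  # head temperatures
      dark_dn = np.array([1.2, 2.4, 4.9, 10.1, 20.5])      # mean dark level (DN)

      slope, intercept = np.polyfit(temps_c, np.log(dark_dn), 1)
      print(f"dark level doubles every {np.log(2) / slope:.1f} deg C")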

  17. The development of a virtual camera system for astronaut-rover planetary exploration.

    Science.gov (United States)

    Platt, Donald W; Boy, Guy A

    2012-01-01

    A virtual assistant is being developed for use by astronauts as they use rovers to explore the surface of other planets. This interactive database, called the Virtual Camera (VC), allows the user to have better situational awareness for exploration. It can be used for training, data analysis, and augmentation of actual surface exploration. This paper describes the development efforts and human-computer interaction considerations for implementing a first-generation VC on a tablet mobile computer device. Scenarios for use will be presented. Evaluation and success criteria, such as efficiency in terms of processing time, precision of situational awareness, learnability, usability, and robustness, will also be presented. Initial testing and the impact of HCI design considerations on manipulation and improvement in situational awareness using a prototype VC will be discussed.

  18. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  19. FPGA-Based HD Camera System for the Micropositioning of Biomedical Micro-Objects Using a Contactless Micro-Conveyor

    Directory of Open Access Journals (Sweden)

    Elmar Yusifli

    2017-03-01

    Full Text Available With recent advancements, micro-object contactless conveyors are becoming an essential part of the biomedical sector. They help avoid any infection and damage that can occur due to external contact. In this context, a smart micro-conveyor is devised. It is a Field Programmable Gate Array (FPGA)-based system that employs a smart surface for conveyance along with an OmniVision complementary metal-oxide-semiconductor (CMOS) HD camera for micro-object position detection and tracking. A specific FPGA-based hardware design and VHSIC Hardware Description Language (VHDL) implementation are realized. It is done without employing any Nios processor or System on a Programmable Chip (SOPC) builder based Central Processing Unit (CPU) core, which keeps the system efficient in terms of resource utilization and power consumption. The micro-object positioning status is captured with an embedded FPGA-based camera driver and is communicated to the Image Processing, Decision Making and Command (IPDC) module. The IPDC is programmed in C++ and can run on a Personal Computer (PC) or on any appropriate embedded system. The IPDC decisions are sent back to the FPGA, which pilots the smart surface accordingly. In this way, an automated closed-loop system is employed to convey the micro-object towards a desired location. The devised system architecture and implementation principle are described. Its functionality is also verified. Results have confirmed the proper functionality of the developed system, along with its outperformance compared to other solutions.

  20. Design of a high-bandwidth data recording and quicklook display system for a photon-counting speckle camera

    Science.gov (United States)

    Eichhorn, Guenther; Hege, E. Keith

    1990-08-01

    The computer system described in this paper is designed to capture event data from a photon-counting speckle camera at photon event rates of up to 1 MHz continuously. The display and quicklook computer uses several single board computers (SBCs) to display the photon events in real time, calculate the centroid of the data for autoguiding of the telescope, and calculate the autocorrelation function. The system is based on the VMEbus architecture. The SBCs operate under the VxWorks real-time operating system. A Sun workstation is used for code development. The SBCs are mostly selected for speed since the computational requirements are very high. Eventually a Sun workstation for near-real-time image processing and image reconstruction will be used to receive quicklook data from the control computer.

  1. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades.

    Science.gov (United States)

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.

  2. Simulation study on a stationary data acquisition SPECT system with multi-pinhole collimators attached to a triple-head gamma camera system.

    Science.gov (United States)

    Ogawa, Koichi; Ichimura, Yuta

    2014-10-01

    The aim of the study was to develop a new SPECT system that makes it possible to acquire projection data with stationary detectors using a triple-head gamma camera system. We evaluated several data acquisition geometries with multi-pinhole collimators attached to a triple-head gamma camera system. The number of pinholes for each camera was three to twelve, and the holes were positioned appropriately on the collimator plates. These collimator holes were tilted by predefined angles to efficiently cover the field of view of the data acquisition system. Acquired data were reconstructed with the OS-EM method. In the simulations, we used a three-dimensional point source phantom, a brain phantom, and a myocardial phantom. Attenuation correction was conducted with the x-ray CT image of the corresponding slice. Reconstructed images of the point source phantom showed that the spatial resolution could be improved with a small number of pinholes. On the other hand, reconstructed images of the brain phantom showed that a large number of pinholes yielded images with fewer artifacts. The results of the simulations with the myocardial phantom showed that more than eight pinholes could yield an accurate distribution of activity when the source was distributed only in the myocardium. The results of the simulations confirmed that more than eight pinholes for each detector were required to reconstruct an artifact-free image with the triple-head SPECT system for imaging of the brain and myocardium.
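
    For reference, the OS-EM update applied to each ordered subset of projection bins multiplies every voxel estimate by a measured-to-modeled ratio backprojected over that subset. A toy Python sketch follows; the system matrix, measurements, and subset split are illustrative, not the paper's simulation geometry.

      import numpy as np

      def osem(A, y, n_subsets=4, n_iter=5):
          # A: (bins x voxels) system matrix, y: measured counts.
          x = np.ones(A.shape[1])
          for _ in range(n_iter):
              for s in np.array_split(np.arange(A.shape[0]), n_subsets):
                  As = A[s]
                  ratio = y[s] / np.maximum(As @ x, 1e-12)  # measured / modeled
                  sens = As.T @ np.ones(len(s))             # subset sensitivity
                  x *= np.where(sens > 0,
                                (As.T @ ratio) / np.maximum(sens, 1e-12), 1.0)
          return x

      # Noiseless toy check: recover a 2-voxel activity from 4 measurements.
      A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.8]])
      print(osem(A, A @ np.array([2.0, 5.0])))              # approaches [2, 5]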

  3. SU-C-207A-03: Development of Proton CT Imaging System Using Thick Scintillator and CCD Camera

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, S; Uesaka, M [The University of Tokyo, Tokyo (Japan); Nishio, T; Tsuneda, M [Hiroshima University, Hiroshima (Japan); Matsushita, K [Rikkyo University, Tokyo (Japan); Kabuki, S [Tokai University, Isehara (Japan)

    2016-06-15

    Purpose: In the treatment planning of proton therapy, the Water Equivalent Length (WEL), which is the parameter for the calculation of dose and proton range, is derived from the x-ray CT (xCT) image by an xCT-WEL conversion. However, an error of a few percent in the accuracy of proton range calculation through this conversion has been reported. The purpose of this study is to construct a proton CT (pCT) imaging system for an evaluation of this error. Methods: The pCT imaging system was constructed with a thick scintillator and a cooled CCD camera, which acquires the two-dimensional image of the integrated value of the scintillation light along the beam direction. The pCT image is reconstructed by the FBP method using a correction between the light intensity and the residual range of the proton beam. An experiment for the demonstration of this system was performed with a 70-MeV proton beam provided by the NIRS cyclotron. The pCT images of several objects reconstructed from the experimental data were evaluated quantitatively. Results: Three-dimensional pCT images of several objects were reconstructed experimentally. A fine structure of approximately 1 mm was clearly observed. The position resolution of the pCT image was almost the same as that of the xCT image, and the error of the pCT pixel value was up to 4%. The deterioration of image quality was caused mainly by the effect of multiple Coulomb scattering. Conclusion: We designed and constructed a pCT imaging system using a thick scintillator and a CCD camera, and evaluated it in an experiment with a 70-MeV proton beam. Three-dimensional pCT images of several objects were acquired by the system. This work was supported by JST SENTAN Grant Number 13A1101 and JSPS KAKENHI Grant Number 15H04912.

  4. Radiation-Tolerant High-Speed Camera

    Science.gov (United States)

    2017-03-01

    Radiation-Tolerant High-Speed Camera. Esko Mikkola, Andrew Levy, Matt Engelman. Alphacore, Inc., Tempe, AZ 85281. Abstract: As part of an... radiation-hardened CMOS image sensor and camera system. Radiation-hardened cameras with frame rates as high as 10 kfps and resolution of 1 Mpixel are not... camera solution that is under development with a similar architecture. It also includes a brief description of the radiation-hardened camera that

  5. Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  6. Development of an Automated System to Test and Select CCDs for the Dark Energy Survey Camera (DECam)

    Science.gov (United States)

    Kubik, Donna; Dark Energy Survey Collaboration

    2009-01-01

    The Dark Energy Survey (DES) is a next generation sky survey aimed directly at understanding why the universe is expanding at an accelerating rate. The survey will use the Dark Energy Camera (DECam), a 3 square degree, 500 Megapixel mosaic camera mounted at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory, to observe 5000 square-degrees of sky through 5 filters (g, r, i, z, Y). DECam will be comprised of 74 CCDs: 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The goal of the DES is to provide a factor of 3-5 improvement in the Dark Energy Task Force Figure of Merit using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon acoustic oscillations, and Type Ia supernovae. This goal sets stringent technical requirements for the CCDs. Testing a large number of CCDs to determine which best meet the DES requirements would be a very time-consuming manual task. We have developed a system to automatically collect and analyze CCD test data. The test results are entered into an online SQL database which facilitates selection of those CCDs that best meet the technical specifications for charge transfer efficiency, linearity, full well, quantum efficiency, noise, dark current, cross talk, diffusion, and cosmetics.

  7. Effect of pit and fissure sealants on caries detection by a fluorescent camera system.

    Science.gov (United States)

    Markowitz, Kenneth; Rosenfeld, Dalia; Peikes, Daniel; Guzy, Gerald; Rosivack, Glenn

    2013-07-01

    The aim of this study was to evaluate the effect of sealant placement on the detection of caries by a fluorescent camera (FC), the Spectra caries detector. In a laboratory study, FC images and readings were obtained from 31 extracted teeth, before and following application of clear sealants (Shofu Clear or Delton unfilled) or opaque sealants (3M Clinpro or Delton FS). Teeth were then sectioned and examined for enamel or dentine caries. Using each tooth's true caries diagnosis, the sensitivity and specificity of the FC measurements in detecting dentine caries was calculated. In the clinical study, FC readings were obtained from 41 molars in children prior to and following application of clear sealants. Following application of Shofu or Delton unfilled there were reductions in the mean FC readings of 10% (p=0.5) and 8.2% (p=0.009), respectively. Application of the two opaque sealants, 3M or Delton FS, significantly reduced mean FC readings by 16.2% and 20.8%, respectively. Following application of opaque sealants there was a significant loss of sensitivity in detecting dentinal caries, whereas clear sealant application caused an insignificant loss of detection sensitivity. Following application of clear sealants to children's molars there was a small (4.01%) but significant reduction in FC readings. Caries can be detected through clear sealants with little loss of sensitivity. Although lesions can be seen through opaque sealants, the loss of sensitivity precludes accurate lesion assessment. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. The design of visualization telemetry system based on camera module of the commercial smartphone

    Science.gov (United States)

    Wang, Chao; Ye, Zhao; Wu, Bin; Yin, Huan; Cao, Qipeng; Zhu, Jun

    2017-09-01

    Satellite telemetry provides the vital indicators for estimating the performance of a satellite. The telemetry data, threshold ranges, and variation tendencies collected during the whole operational life of the satellite can guide and evaluate subsequent satellite designs. The rotating parts on the satellite (e.g., solar arrays, antennas, and oscillating mirrors) affect the collection of solar energy and the other functions of the satellite. Visual telemetry (pictures, video) is captured to interpret the status of the satellite qualitatively in real time, as an important supplement for troubleshooting. The mature technology of commercial off-the-shelf (COTS) products has obvious advantages in terms of construction, electronics, interfaces, and image processing; also considering weight, power consumption, and cost, such products can be used directly in this application or adopted for secondary development. In this paper, simulations of the radiation characteristics of solar arrays in orbit are presented, and a suitable camera module from a commercial smartphone is adopted after precise calculation and a product selection process. Given the advantages of COTS devices in solving both fundamental and complicated satellite problems, the proposed technique is innovative and relevant to future project implementations.

  9. Realization of Vilnius UPXYZVS photometric system for AltaU42 CCD camera at the MAO NAS of Ukraine

    Science.gov (United States)

    Vid'Machenko, A. P.; Andruk, V. M.; Samoylov, V. S.; Delets, O. S.; Nevodovsky, P. V.; Ivashchenko, Yu. M.; Kovalchuk, G. U.

    2005-06-01

    The paper describes the two-inch glass filters of the Vilnius UPXYZVS photometric system, which were made at the Main Astronomical Observatory of NAS of Ukraine for the AltaU42 CCD camera with a format of 2048×2048 pixels. Response curves of the instrumental system are shown. Estimates of the limiting stellar magnitudes for each filter band, in comparison with the visual V band, are obtained. New software for automated CCD frame processing was developed in the LINUX/MIDAS/ROMAFOT program shell. It is planned to carry out observations with the purpose of creating a catalogue of primary UPXYZVS CCD standards in selected fields of the sky for some radio sources, globular and open clusters, etc. Numerical estimates of the astrometric and photometric accuracy are obtained.

  10. The VISTA IR camera

    Science.gov (United States)

    Dalton, Gavin B.; Caldwell, Martin; Ward, Kim; Whalley, Martin S.; Burke, Kevin; Lucas, John M.; Richards, Tony; Ferlet, Marc; Edeson, Ruben L.; Tye, Daniel; Shaughnessy, Bryan M.; Strachan, Mel; Atad-Ettedgui, Eli; Leclerc, Melanie R.; Gallie, Angus; Bezawada, Nagaraja N.; Clark, Paul; Bissonauth, Nirmal; Luke, Peter; Dipper, Nigel A.; Berry, Paul; Sutherland, Will; Emerson, Jim

    2004-09-01

    The VISTA IR Camera has now completed its detailed design phase and is on schedule for delivery to ESO's Cerro Paranal Observatory in 2006. The camera consists of 16 Raytheon VIRGO 2048x2048 HgCdTe arrays in a sparse focal plane sampling a 1.65 degree field of view. A 1.4m diameter filter wheel provides slots for 7 distinct science filters, each comprising 16 individual filter panes. The camera also provides autoguiding and curvature sensing information for the VISTA telescope, and relies on tight tolerancing to meet the demanding requirements of the f/1 telescope design. The VISTA IR camera is unusual in that it contains no cold pupil-stop, but rather relies on a series of nested cold baffles to constrain the light reaching the focal plane to the science beam. In this paper we present a complete overview of the status of the final IR Camera design, its interaction with the VISTA telescope, and a summary of the predicted performance of the system.

  11. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. The camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is possible.

  12. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  13. Scanning laser video camera/ microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  14. EDUCATING THE PEOPLE AS A DIGITAL PHOTOGRAPHER AND CAMERA OPERATOR VIA OPEN EDUCATION SYSTEM STUDIES FROM TURKEY: Anadolu University Open Education Faculty Case

    Directory of Open Access Journals (Sweden)

    Huseyin ERYILMAZ

    2010-04-01

    Full Text Available Today, photography and the visual arts are very important in modern life, especially for mass communication, where visual images carry great weight. In modern societies, people need knowledge about visual materials such as photographs, cartoons, drawings, and typography; briefly, people need education in visual literacy. Most people today own a digital still or video camera, yet it is not possible to give all of them visual literacy education within the classic school system. Camera users nevertheless need a teaching medium for using their cameras effectively, so they turn to internet opportunities, websites, and pages as information sources. As is well known, however, not all websites provide correct learning or know-how, and there are many mistakes and falsehoods on the internet. For the reasons given above, Anadolu University Open Education Faculty started a new education program in 2009 to train people as digital photographers and camera operators, and this program has importance as a case study. The terminology of photography and digital technology is largely in English, and of course not all camera users understand that language. Owing to this program, many camera users, and especially people working as studio operators, will learn about photography, digital technology, and camera systems, as well as composition, the history of the visual image, and related topics. For these reasons, this program is especially important for developing countries. This paper discusses this subject.

  15. Body worn camera

    Science.gov (United States)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main characteristics of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format with 1080p resolution at 30 frames per second. One more important aspect to consider while designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video, audio for the video, combining both audio and video and saving the result in .mp4 format, the battery size required for 8 hours of continuous recording, and security. For prototyping, this system is implemented using a Raspberry Pi model B.
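
    The paper's own software is not reproduced in the abstract; as a minimal sketch of the capture path on a Raspberry Pi, the Python below records a 1080p/30 fps H.264 stream with the picamera library and then wraps it into an .mp4 container with ffmpeg. The container step and all file names are assumptions, since picamera itself writes a raw H.264 stream and does not capture audio.

      import subprocess
      import picamera

      with picamera.PiCamera(resolution=(1920, 1080), framerate=30) as camera:
          camera.start_recording('clip.h264')  # raw H.264 elementary stream
          camera.wait_recording(60)            # record for 60 seconds
          camera.stop_recording()

      # Wrap into .mp4 without re-encoding (assumes ffmpeg is installed).
      subprocess.run(['ffmpeg', '-y', '-framerate', '30', '-i', 'clip.h264',
                      '-c', 'copy', 'clip.mp4'], check=True)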

  16. The simulated spectrum of the OGRE X-ray EM-CCD camera system

    Science.gov (United States)

    Lewis, M.; Soman, M.; Holland, A.; Lumb, D.; Tutt, J.; McEntaffer, R.; Schultz, T.; Holland, K.

    2017-12-01

    The X-ray astronomical telescopes in use today, such as Chandra and XMM-Newton, use X-ray grating spectrometers to probe the high energy physics of the Universe. These instruments typically use reflective optics for focussing onto gratings that disperse incident X-rays across a detector, often a Charge-Coupled Device (CCD); the X-ray energy is determined from the position at which it was detected on the CCD. Improved technology for the next generation of X-ray grating spectrometers has been developed and will be tested on a sounding rocket experiment known as the Off-plane Grating Rocket Experiment (OGRE). OGRE aims to capture the highest resolution soft X-ray spectrum of Capella, a well-known astronomical X-ray source, during an observation period lasting between 3 and 6 minutes, whilst proving the performance and suitability of three key components: a telescope made from silicon mirrors, gold-coated silicon X-ray diffraction gratings, and a camera comprising four Electron-Multiplying (EM)-CCDs arranged to observe the soft X-rays dispersed by the gratings. EM-CCDs have an architecture similar to standard CCDs, with the addition of an EM gain register where the electron signal is amplified so that the effective signal-to-noise ratio of the imager is improved. The devices also have highly favourable quantum efficiency values for detecting soft X-ray photons. On OGRE, this improved detector performance allows for easier identification of low energy X-rays and fast readouts, since the amplified signal charge makes readout noise almost negligible. A simulation that applies the OGRE instrument performance to the Capella soft X-ray spectrum has been developed that allows the distribution of X-rays onto the EM-CCDs to be predicted. A proposed optical model is also discussed which would enable the mission's minimum success criterion for photon counts to have a high chance of being met with the shortest possible

  17. Gas piston activity of the Nyiragongo lava lake: First insights from a Stereographic Time-Lapse Camera system

    Science.gov (United States)

    Smets, Benoît; d'Oreye, Nicolas; Kervyn, Matthieu; Kervyn, François

    2017-10-01

    Nyiragongo volcano (D.R. Congo), in the western branch of the East African Rift System, is one of the most active volcanoes on Earth. Its eruptive activity is mainly characterized by the presence of a persistent lava lake in its main crater. As observed at other persistent lava lakes, the Nyiragongo lava lake level exhibits metric vertical variations in the form of minutes- to hour-long cycles, which we infer to be gas piston activity. To study this activity, we developed and tested a Stereographic Time-Lapse Camera (STLC) system, which takes stereo-pairs of photographs of the Nyiragongo crater at regular intervals. Each pair of gas- and steam-free images taken during daytime allows the production of a 3D point cloud. The comparison of the point clouds provides a measurement of topographic changes related to variations in lava lake level. The processing of a first dataset acquired between 18 and 20 September 2011, at an acquisition rate of 1 pair of images every 2 min, revealed cycles of vertical lava lake level variations reaching up to 3.8 m. Lava lake level variations >0.5 m are reliably detected and are interpreted to result from gas accumulation and release in the lava lake itself. The limitations of the STLC approach are related to the number of cameras used and the atmospheric masking by steam and volcanic gas in the Nyiragongo crater. The proposed photogrammetric approach could be applied elsewhere, or in other disciplines where frequent topographic changes occur.

  18. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …

  19. Development of a Gastric Cancer Diagnostic Support System with a Pattern Recognition Method Using a Hyperspectral Camera

    Directory of Open Access Journals (Sweden)

    Hiroyuki Ogihara

    2016-01-01

    Full Text Available Gastric cancer is completely curable when it can be detected at an early stage; because early detection is so important, cancer screening by gastroscopy is performed. Recently, the hyperspectral camera (HSC), which can observe gastric cancer at a variety of wavelengths, has received attention as a gastroscope. The HSC permits discerning the slight color variations of gastric cancer, and we considered its applicability to a gastric cancer diagnostic support system. In this paper, after correcting reflectance to absorb individual variations in the reflectance measured by the HSC, a gastric cancer diagnostic support system was designed using the corrected reflectance. In the system design, the problems of selecting the optimum wavelength and optimizing the cutoff value of a classifier are solved as a pattern recognition problem using training samples alone. Using the hold-out method with 104 cases of gastric cancer as samples, design and evaluation of the system were independently repeated 30 times. Over the 30 trials, the mean sensitivity was 72.2% and the mean specificity was 98.8%. The results showed that the proposed system was effective in supporting gastric cancer screening.
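
    The reported figures are the standard hold-out statistics; for clarity, the Python sketch below shows how sensitivity and specificity are computed from predicted and true labels. The labels here are illustrative, not the study's data.

      import numpy as np

      def sens_spec(y_true, y_pred):
          # 1 = cancer, 0 = normal; sensitivity = TP rate, specificity = TN rate.
          y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
          tp = np.sum((y_true == 1) & (y_pred == 1))
          fn = np.sum((y_true == 1) & (y_pred == 0))
          tn = np.sum((y_true == 0) & (y_pred == 0))
          fp = np.sum((y_true == 0) & (y_pred == 1))
          return tp / (tp + fn), tn / (tn + fp)

      sens, spec = sens_spec([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
      print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")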

  20. A GPRS Based Monitoring and Management System for Classification Results of Image by CCD Camera

    Directory of Open Access Journals (Sweden)

    Cheng Kun Guo

    2014-03-01

    Full Text Available Data acquisition plays an important role in modern industry. In many cases, remote data should be transferred to a monitoring center far away from the manufacturing field. This paper presents a remote transmission system for the classification results of images acquired by a CCD sensor. The use of a web application framework gives this system the advantage of minimal development work on the monitoring center. A GPRS DTU, treated as an information transmitting terminal, was used to transmit the classified images in a custom format. Users can log on to the system's website from a browser anywhere to view and manage the experimental stations, sensors, and user information of the system. The results show that the system can transmit and manage figures classified by Support Vector Machines (SVM); figures classified by other methods will be tested in the future.

  1. Investigations of some aspects of the spray process in a single wire arc plasma spray system using high speed camera.

    Science.gov (United States)

    Tiwari, N; Sahasrabudhe, S N; Tak, A K; Barve, D N; Das, A K

    2012-02-01

    A high speed camera has been used to record and analyze the evolution of the spray as well as particle behavior in a single wire arc plasma spray torch. Commercially available systems (SprayWatch, DPV 2000, etc.) focus onto a small area in the spray jet; they are not designed for tracking a single particle from the torch to the substrate. Using the high speed camera, individual particles were tracked and their velocities were measured at various distances from the spray torch. Particle velocity information at different distances from the nozzle of the torch is very important for deciding the correct substrate position for good coating quality. The analysis of the images has revealed the details of the process of arc attachment to the wire, melting of the wire, and detachment of the molten mass from the tip. Images of the wire and the arc have been recorded for different wire feed rates, gas flow rates, and torch powers, to determine compatible wire feed rates. High speed imaging of particle trajectories has been used for particle velocity determination using the time-of-flight method. It was observed that the ripple in the power supply of the torch leads to a large variation of the instantaneous power fed to the torch. This affects the velocity of the spray particles generated at different times within one cycle of the ripple. It is shown that the velocity of a spray particle depends on the instantaneous torch power at the time of its generation; this correlation was established by experimental evidence in this paper. Once the particles leave the plasma jet, their forward speeds were found to be more or less invariant beyond 40 mm and up to 500 mm from the nozzle exit.
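
    The time-of-flight estimate named above reduces to a simple relation: a particle displaced by d pixels between consecutive frames recorded at f frames per second, with an image scale of s metres per pixel, moves at v = d * s * f. A short Python sketch with illustrative numbers:

      def particle_velocity(displacement_px, m_per_px, fps):
          # Velocity in m/s from the inter-frame displacement in pixels.
          return displacement_px * m_per_px * fps

      # e.g. 12 px between frames at 10,000 fps and 0.1 mm/px -> 12 m/s
      print(particle_velocity(12, 1e-4, 10_000))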

  2. Planetcam: A Visible And Near Infrared Lucky-imaging Camera To Study Planetary Atmospheres And Solar System Objects

    Science.gov (United States)

    Sanchez-Lavega, Agustin; Rojas, J.; Hueso, R.; Perez-Hoyos, S.; de Bilbao, L.; Murga, G.; Ariño, J.; Mendikoa, I.

    2012-10-01

    PlanetCam is a two-channel fast-acquisition and low-noise camera designed for multispectral studies of the atmospheres of the planets (Venus, Mars, Jupiter, Saturn, Uranus, and Neptune) and the satellite Titan at high temporal and spatial resolutions, simultaneously in visible (0.4-1 μm) and NIR (1-2.5 μm) channels. This is accomplished by means of a dichroic beam splitter that separates both beams, directing them onto two different detectors. Each detector has filter wheels corresponding to the characteristic absorption bands of each planetary atmosphere. Images are acquired and processed using the "lucky imaging" technique, in which several thousand images of the same object are obtained in a short time interval, co-registered, and ranked in terms of image quality to reconstruct a high-resolution, ideally diffraction-limited image of the object. These images will also be calibrated in terms of intensity and absolute reflectivity. The camera will be tested at the 50.2 cm telescope of the Aula EspaZio Gela (Bilbao) and then commissioned at the 1.05 m telescope at Pic-du-Midi Observatory (France) and at the 1.23 m telescope at Calar Alto Observatory in Spain. Among the initially planned research targets are: (1) the vertical structure of the clouds and hazes in the planets and their scales of variability; (2) the meteorology, dynamics, and global winds and their scales of variability in the planets. PlanetCam is also expected to perform studies of other Solar System and astrophysical objects. Acknowledgments: This work was supported by the Spanish MICIIN project AYA2009-10701 with FEDER funds, by Grupos Gobierno Vasco IT-464-07, and by Universidad País Vasco UPV/EHU through program UFI11/55.
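
    The frame-selection step of lucky imaging can be sketched compactly: rank the short exposures by a sharpness proxy, keep the best few percent, and stack them. The Python below uses the variance of a Laplacian as the proxy and omits the co-registration step for brevity; it is an illustration, not PlanetCam's actual pipeline.

      import numpy as np
      from scipy.ndimage import laplace

      def lucky_stack(frames, keep_fraction=0.05):
          # frames: (N, H, W) float array of short exposures of one target.
          sharpness = np.array([laplace(f).var() for f in frames])
          n_keep = max(1, int(keep_fraction * len(frames)))
          best = np.argsort(sharpness)[-n_keep:]  # indices of sharpest frames
          return frames[best].mean(axis=0)        # stack (alignment omitted)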

  3. Using a depth-sensing infrared camera system to access and manipulate medical imaging from within the sterile operating field.

    Science.gov (United States)

    Strickland, Matt; Tremaine, Jamie; Brigley, Greg; Law, Calvin

    2013-06-01

    As surgical procedures become increasingly dependent on equipment and imaging, the need for sterile members of the surgical team to have unimpeded access to the nonsterile technology in their operating room (OR) is of growing importance. To our knowledge, our team is the first to use an inexpensive infrared depth-sensing camera (a component of the Microsoft Kinect) and software developed in-house to give surgeons a touchless, gestural interface with which to navigate their picture archiving and communication systems intraoperatively. The system was designed and developed with feedback from surgeons and OR personnel and with the principles of aseptic technique and gestural controls in mind. Simulation was used for basic validation before trialing in a pilot series of 6 hepatobiliary-pancreatic surgeries. The interface was used extensively in 2 laparoscopic and 4 open procedures. Surgeons primarily used the system for anatomic correlation, for real-time comparison of intraoperative ultrasound with preoperative computed tomography and magnetic resonance imaging scans, and for teaching residents and fellows. The system worked well in a wide range of lighting conditions and procedures. It led to a perceived increase in the use of intraoperative image consultation. Further research should focus on investigating the usefulness of touchless gestural interfaces in different types of surgical procedures and their effects on operative time.
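
    The article does not describe the recognition code, so as a purely hypothetical sketch of the kind of primitive such an interface builds on, the following checks whether a hand has been pushed toward a Kinect-style depth sensor inside a watched region (function name, ROI convention, and threshold are all invented for illustration):

    ```python
    import numpy as np

    def detect_push_gesture(depth_mm, roi, trigger_mm=900):
        """Toy depth-map trigger: True when the nearest valid point inside
        the region of interest comes closer than `trigger_mm` millimetres,
        i.e. a 'push' toward the sensor.

        depth_mm : 2-D array of per-pixel depths in mm (0 = no reading).
        roi      : (row0, row1, col0, col1) window to watch.
        """
        r0, r1, c0, c1 = roi
        window = depth_mm[r0:r1, c0:c1]
        valid = window[window > 0]          # Kinect reports 0 for dropouts
        return valid.size > 0 and valid.min() < trigger_mm
    ```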

  4. Novel Airborne Video Sensors. Super-Resolution Multi-Camera Panoramic Imaging System for UAVs

    National Research Council Canada - National Science Library

    Negahdaripour, Shahriar

    2004-01-01

    ... by computer simulations, with/without supplementary gyro and GPS. How various system parameters impact the achievable precision of panoramic system in 3-D terrain feature localization and UAV motion estimation is determined for the A=0.5-2 KM...

  5. The influence of experience and camera holding on laparoscopic instrument movements measured with the TrEndo tracking system

    NARCIS (Netherlands)

    Chmarra, M. K.; Kolkman, W.; Jansen, F. W.; Grimbergen, C. A.; Dankelman, J.

    2007-01-01

    Background: Eye-hand coordination problems occur during laparoscopy. This study aimed to investigate the difference in instrument movements between the surgeon him- or herself holding the camera and an assistant holding the camera during performance of a laparoscopic task and to check whether

  6. Validation of attenuation correction using transmission truncation compensation with a small field of view dedicated cardiac SPECT camera system.

    Science.gov (United States)

    Noble, Gavin L; Ahlberg, Alan W; Kokkirala, Aravind Rao; Cullom, S James; Bateman, Timothy M; Cyr, Giselle M; Katten, Deborah M; Tadeo, Glenn D; Case, James A; O'Sullivan, David M; Heller, Gary V

    2009-01-01

    Although attenuation correction (AC) has been successfully applied to large field of view (LFOV) cameras, applicability to small field of view (SFOV) cameras is a concern due to truncation. This study compared perfusion images between a LFOV and a SFOV camera with truncation compensation, using the same AC solution. Seventy-eight clinically referred patients underwent rest-stress single-photon emission computed tomography (SPECT) using both a SFOV and a LFOV camera in a randomized sequence. Blinded images were interpreted by a consensus of three experienced readers. The percentage of normal images for SFOV and LFOV was significantly higher with than without AC (72% vs 44% and 72% vs 49%). Interpretive agreement between cameras was better with than without AC (kappa = 0.736 to 0.847 vs 0.545 to 0.774), and correlation for the summed stress score was higher with than without AC (r² = 0.892 vs 0.851). AC with truncation compensation on a SFOV camera thus yields similar results to a LFOV camera. The higher interpretive agreement between cameras after attenuation correction suggests that such images are preferable to non-attenuation-corrected images.

  7. Comparison of Near-Infrared Imaging Camera Systems for Intracranial Tumor Detection.

    Science.gov (United States)

    Cho, Steve S; Zeh, Ryan; Pierce, John T; Salinas, Ryan; Singhal, Sunil; Lee, John Y K

    2017-07-24

    Distinguishing neoplasm from normal brain parenchyma intraoperatively is critical for the neurosurgeon. 5-Aminolevulinic acid (5-ALA) has been shown to improve gross total resection and progression-free survival but has limited availability in the USA. Near-infrared (NIR) fluorescence has advantages over visible-light fluorescence, with greater tissue penetration and reduced background fluorescence. In order to prepare for the increasing number of NIR fluorophores that may be used in molecular imaging trials, we chose to compare a state-of-the-art neurosurgical microscope (System 1) to one of the commercially available NIR visualization platforms (System 2). Serial dilutions of indocyanine green (ICG) were imaged with both systems in the same environment. Each system's sensitivity and dynamic range for NIR fluorescence were documented and analyzed. In addition, brain tumors from six patients were imaged with both systems and analyzed. In vitro, System 2 demonstrated greater ICG sensitivity and detection range (System 1: 1.5-251 μg/l versus System 2: 0.99-503 μg/l). Similarly, in vivo, System 2 demonstrated a signal-to-background ratio (SBR) of 2.6 ± 0.63 before dura opening, 5.0 ± 1.7 after dura opening, and 6.1 ± 1.9 after tumor exposure. In contrast, System 1 could not easily detect ICG fluorescence prior to dura opening, with an SBR of 1.2 ± 0.15. After the dura was reflected, the SBR increased to 1.4 ± 0.19, and upon exposure of the tumor the SBR increased to 1.8 ± 0.26. Dedicated NIR imaging platforms can outperform conventional microscopes in intraoperative NIR detection. Future microscopes with improved NIR detection capabilities could enhance the use of NIR fluorescence to detect neoplasm and improve patient outcomes.
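
    The signal-to-background ratios quoted above follow directly from ROI statistics; a minimal sketch, assuming boolean masks for the tumor and background regions:

    ```python
    import numpy as np

    def signal_to_background(image, tumor_mask, background_mask):
        """SBR as commonly reported in NIR fluorescence imaging: mean
        intensity inside the tumor ROI over mean intensity in a nearby
        background ROI (masks are boolean arrays of the image shape)."""
        return float(image[tumor_mask].mean() / image[background_mask].mean())
    ```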

  8. High speed television camera system processes photographic film data for digital computer analysis

    Science.gov (United States)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  9. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Toan Minh Hoang

    2017-10-01

    Full Text Available Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.

  10. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    Science.gov (United States)

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods. PMID:29143764

  11. A novel smartphone camera-LED Communication for clinical signal transmission in mHealth-rehabilitation system.

    Science.gov (United States)

    Rachim, Vega Pradana; Jinyoung An; Pham Ngoc Quan; Wan-Young Chung

    2017-07-01

    In this paper, an implementation of mobile visible light communication (mVLC) technology for clinical data transmission in a home-based mobile-health (mHealth) rehabilitation system is introduced. Mobile remote rehabilitation programs are a solution for improving the quality of care that clinicians provide to patients with chronic conditions and disabilities. Typically, such a program requires routine exercises that obligate patients to wear wearable electronic sensors for hours at a time. This motivated us to develop a harmless biomedical communication system, since most existing device protocols are based on RF communication technology, which carries risks for the human body in long-term use due to RF exposure and electromagnetic interference (EMI). The proposed system is designed to use visible light as a medium for hazard-free communication between wearable sensors and a mobile interface device (smartphone). Multiple clinical data streams, such as photoplethysmogram (PPG), electrocardiogram (ECG), and respiration signals, are transmitted through an LED and received by a smartphone camera. The smartphone also serves as the local interface and data analyzer, and forwards the data to the cloud for further clinical supervision.

  12. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.

    Science.gov (United States)

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-10-28

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.
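
    As a crude stand-in for the detection stage described above (the paper pairs a line segment detector with a fuzzy inference system; this sketch substitutes Canny edges, a probabilistic Hough transform, and a crisp angle gate):

    ```python
    import cv2
    import numpy as np

    def candidate_lane_segments(bgr_frame, min_deg=20, max_deg=70):
        """Crude lane-marking candidate detector: keep only segments
        whose slope looks lane-like, discarding near-horizontal clutter."""
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=10)
        keep = []
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
                if min_deg <= angle <= max_deg:
                    keep.append((x1, y1, x2, y2))
        return keep
    ```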

  13. Fast subsurface fingerprint imaging with full-field optical coherence tomography system equipped with a silicon camera

    Science.gov (United States)

    Auksorius, Egidijus; Boccara, A. Claude

    2017-09-01

    Images recorded below the surface of a finger can have more detail and be of higher quality than conventional surface fingerprint images. This is particularly true when the quality of the surface fingerprints is compromised by, for example, moisture or surface damage. However, there is an unmet need for an inexpensive fingerprint sensor that is able to acquire high-quality images deep below the surface in a short time. To this end, we report on a cost-effective full-field optical coherence tomography system comprising a silicon camera and a powerful near-infrared LED light source. The system, for example, is able to record 1.7 cm×1.7 cm en face images in 0.12 s with a spatial sampling rate of 2116 dots per inch and a sensitivity of 93 dB. We show that the system can be used to image internal fingerprints and sweat ducts with good contrast. Finally, to demonstrate its biometric performance, we acquired subsurface fingerprint images from 240 individual fingers and estimated the equal-error-rate to be ~0.8%. The developed instrument could also be used in other en face deep-tissue imaging applications, such as in vivo skin imaging, because of its high sensitivity.
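
    The quoted ~0.8% equal-error-rate is the operating point where false rejections of genuine comparisons balance false acceptances of impostor comparisons; a minimal estimator over match-score arrays, assuming higher scores mean better matches:

    ```python
    import numpy as np

    def equal_error_rate(genuine, impostor):
        """Sweep candidate thresholds and return the rate at the point
        where the false-reject and false-accept rates are closest."""
        genuine = np.asarray(genuine, dtype=float)
        impostor = np.asarray(impostor, dtype=float)
        best_gap, eer = 1.0, 0.0
        for t in np.sort(np.concatenate([genuine, impostor])):
            frr = np.mean(genuine < t)    # genuine pairs rejected
            far = np.mean(impostor >= t)  # impostor pairs accepted
            if abs(frr - far) < best_gap:
                best_gap, eer = abs(frr - far), (frr + far) / 2
        return eer
    ```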

  14. Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy

    Science.gov (United States)

    Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, legal representative, Alicia (Inventor); Gursel, Yekta (Inventor)

    2012-01-01

    An optical system comprising two lens cells, each lens cell comprising multiple lens elements, to provide imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field-of-view of the instrument.

  15. How to Generate Security Cameras: Towards Defence Generation for Socio-Technical Systems

    NARCIS (Netherlands)

    Gadyatskaya, Olga

    2016-01-01

    Recently security researchers have started to look into automated generation of attack trees from socio-technical system models. The obvious next step in this trend of automated risk analysis is automating the selection of security controls to treat the detected threats. However, the existing

  16. Digital camera self-calibration

    Science.gov (United States)

    Fraser, Clive S.

    Over the 25 years since the introduction of analytical camera self-calibration there has been a revolution in close-range photogrammetric image acquisition systems. High-resolution, large-area 'digital' CCD sensors have all but replaced film cameras. Throughout the period of this transition, self-calibration models have remained essentially unchanged. This paper reviews the application of analytical self-calibration to digital cameras. Computer vision perspectives are touched upon, the quality of self-calibration is discussed, and an overview is given of each of the four main sources of departures from collinearity in CCD cameras. Practical issues are also addressed and experimental results are used to highlight important characteristics of digital camera self-calibration.
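
    Of the departures from collinearity reviewed here, radial and decentering distortion are the ones conventionally carried as additional parameters in self-calibration; a minimal sketch of the classic Brown correction on image coordinates reduced to the principal point (the k1, k2, p1, p2 names follow the usual convention; this is illustrative, not the paper's full model):

    ```python
    def brown_distortion(x, y, k1, k2, p1, p2):
        """Radial-plus-decentering ('Brown') correction terms. (x, y) are
        image coordinates reduced to the principal point; returns the
        corrected coordinates."""
        r2 = x * x + y * y
        radial = k1 * r2 + k2 * r2 * r2
        dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
        dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
        return x + dx, y + dy
    ```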

  17. Synchrotron radiation protein data collection system using the newly developed Weissenberg camera and imaging plate for crystal structure analysis (abstract)

    Science.gov (United States)

    Sakabe, N.; Nakagawa, A.; Sasaki, K.; Sakabe, K.; Watanabe, N.; Kondo, H.; Shimomura, M.

    1989-07-01

    It has been an earnest desire of protein crystallographers to collect fast, accurate, high-resolution diffraction data from protein crystals, preferably with an exposure time as short as possible. In order to meet this challenge, a new type of Weissenberg camera has been developed for recording diffraction intensities from protein crystals using synchrotron radiation. The BL6A2 line has a plane-bending mirror designed by Y. Sato. The optical bench with a triangular tilt-cut Si crystal monochromator was designed by N. Kamiya and was installed in the BL6A2 hutch. The Weissenberg camera was set on the 2θ arm of the optical bench. This camera can be used with the Fuji Imaging Plate (IP) as an x-ray detector, and the reading out of the image from the IP is carried out using the BA100. The characterization of this system was carried out using the native crystal of the chicken gizzard G-actin DNase I complex and its Yb3+, PCMB, indium, and FMA derivatives. Since these crystals are very sensitive to x rays, the resolution limit of the diffraction was 5 Å with a 4-circle diffractometer on a rotating-anode x-ray generator (N. Sakabe et al., J. Biochem. 95, 887). This complex was crystallized in space group P2₁2₁2₁ with a=42.0, b=225.3, and c=77.4 Å. The data were collected with this system with the 430-mm radius cassette when the Photon Factory was operated at 2.5 GeV and 270 mA and the wavelength λ=1.004 Å was chosen. In order to avoid overlapping of diffraction spots, the oscillation angle range and coupling constant (degree/mm) were set on the basis of simulated patterns of diffraction spots up to the maximum resolution to be measured, considering the direction of the crystal axes, wavelength, radius of the camera, and mosaicness of the crystal. When the oscillation axis was the a axis, the oscillation angle range was selected at either 10° (1°/mm) or 5° (0.5°/mm) depending on the density of reciprocal lattice points along the incident beam, and typical exposure time in each IP

  18. Comparison of a three-dimensional and two-dimensional camera system for automated measurement of back posture in dairy cows

    NARCIS (Netherlands)

    Viazzi, S.; Bahr, C.; Hertem, van T.; Schlageter-Tello, A.; Romanini, C.E.B.; Halachmi, I.; Lokhorst, C.; Berckmans, D.

    2014-01-01

    In this study, two different computer vision techniques to automatically measure the back posture in dairy cows were tested and evaluated. A two-dimensional and a three-dimensional camera system were used to extract the back posture from walking cows, which is one measurement used by experts to

  19. Time and wavelength-resolved luminescence evaluation of several types of scintillators using streak camera system equipped with pulsed X-ray source

    Czech Academy of Sciences Publication Activity Database

    Furuya, Y.; Yanagida, T.; Fujimoto, Y.; Yokota, Y.; Kamada, K.; Kawaguchi, N.; Ishizu, S.; Uchiyama, K.; Mori, K.; Kitano, K.; Nikl, Martin; Yoshikawa, A.

    2011-01-01

    Roč. 634, č. 1 (2011), s. 59-63 ISSN 0168-9002 Institutional research plan: CEZ:AV0Z10100521 Keywords : streak camera system * scintillator * pulsed X-ray source Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.207, year: 2011

  20. High-temperature dual-band thermal imaging by means of high-speed CMOS camera system

    Science.gov (United States)

    Hauer, W.; Zauner, G.

    2013-03-01

    When measuring rapid temperature changes as well as high temperatures, pyrometers reach the limits of their performance very quickly. Thus a novel type of high-temperature measurement system using a high-speed camera as a two-color pyrometer is introduced. In addition to the high temporal resolution, ranging between 10 μs and 100 μs, the presented system also allows the determination of the radiation temperature distribution at a very high spatial resolution. The principle of operation, including various image processing algorithms and filters, is explained by means of a concrete example, in which the surface temperature decay of a carbon electrode heated by an electric arc is measured. The measurement results yield a hot-spot temperature on the contact surface of 3100 K, which declines to approx. 1800 K within 105 ms. The spatial distribution of surface temperatures reveals local temperature variations on the contact. These variations might result from surface irregularities, such as protrusions or micro-peaks, due to inhomogeneous evaporation. An error analysis is given for evaluating the potential accuracy inherent in practical temperature measurements.
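
    The two-color principle used above can be written in a few lines: under the Wien approximation and a gray-body assumption, the intensity ratio at two wavelengths fixes the temperature independently of emissivity and spot size. A minimal sketch, not the authors' calibrated pipeline:

    ```python
    import numpy as np

    C2 = 1.4388e-2  # second radiation constant, m*K

    def two_color_temperature(i1, i2, lam1, lam2):
        """Ratio (two-color) pyrometry: i1, i2 are measured intensities
        at wavelengths lam1, lam2 (in metres). Wien approximation,
        gray body assumed (equal emissivity at both wavelengths)."""
        denom = np.log(i1 / i2) - 5.0 * np.log(lam2 / lam1)
        return C2 * (1.0 / lam2 - 1.0 / lam1) / denom

    # Intensity ratio of ~0.766 between 700 nm and 900 nm -> ~3000 K
    print(two_color_temperature(0.766, 1.0, 700e-9, 900e-9))
    ```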

  1. The NEAT Camera Project

    Science.gov (United States)

    Newburn, Ray L., Jr.

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chrétien type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.

  2. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    Science.gov (United States)

    Majewski, Stanislaw [Morgantown, VA]; Umeno, Marc M. [Woodinville, WA]

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel-by-corresponding-pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies, blood-pool studies including planar, gated, and non-gated EKG studies, planar EKG perfusion studies, and planar hot-spot imaging.
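
    A sketch of the pixel-by-corresponding-pixel fusion step for opposed views. The patent text does not name the fusion operator, so the geometric mean used here, a common choice for conjugate views because it reduces the depth dependence of attenuation, is an assumption:

    ```python
    import numpy as np

    def fuse_opposed_views(head_a, head_b):
        """Fuse two opposed gamma-camera views from the same time bin.
        head_b is mirrored left-right so pixels correspond geometrically;
        the geometric mean is one illustrative fusion choice."""
        a = head_a.astype(float)
        b = np.fliplr(head_b.astype(float))
        return np.sqrt(a * b)
    ```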

  3. Area X-ray or UV camera system for high-intensity beams

    Science.gov (United States)

    Chapman, Henry N.; Bajt, Sasa; Spiller, Eberhard A.; Hau-Riege, Stefan; Marchesini, Stefano

    2010-03-02

    A system in one embodiment includes a source for directing a beam of radiation at a sample; a multilayer mirror having a face oriented at an angle of less than 90 degrees from an axis of the beam from the source, the mirror reflecting at least a portion of the radiation after the beam encounters a sample; and a pixellated detector for detecting radiation reflected by the mirror. A method in a further embodiment includes directing a beam of radiation at a sample; reflecting at least some of the radiation diffracted by the sample; not reflecting at least a majority of the radiation that is not diffracted by the sample; and detecting at least some of the reflected radiation. A method in yet another embodiment includes directing a beam of radiation at a sample; reflecting at least some of the radiation diffracted by the sample using a multilayer mirror; and detecting at least some of the reflected radiation.

  4. Evaluation of a gamma camera system for the RITS-6 accelerator using the self-magnetic pinch diode

    Science.gov (United States)

    Webb, Timothy J.; Kiefer, Mark L.; Gignac, Raymond; Baker, Stuart A.

    2015-08-01

    The self-magnetic pinch (SMP) diode is an intense radiographic source fielded on the Radiographic Integrated Test Stand (RITS-6) accelerator at Sandia National Laboratories in Albuquerque, NM. The accelerator is an inductive voltage adder (IVA) that can operate from 2-10 MV with currents up to 160 kA (at 7 MV). The SMP diode consists of an annular cathode separated from a flat anode, holding the bremsstrahlung conversion target, by a vacuum gap. Until recently, the primary imaging diagnostic utilized image plates (storage phosphors), which have generally low DQE at these photon energies along with other problems. The benefits of using image plates include a high dynamic range, good spatial resolution, and ease of use. A scintillator-based X-ray imaging system or "gamma camera" has been fielded in front of RITS and the SMP diode, and it has been able to provide vastly superior images in terms of signal-to-noise with similar resolution and acceptable dynamic range.

  5. Systems approach to the design of the CCD sensors and camera electronics for the AIA and HMI instruments on solar dynamics observatory

    Science.gov (United States)

    Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.

    2017-11-01

    Solar Dynamics Observatory (SDO) is imaging the Sun in many wavelengths near-simultaneously and with a resolution ten times higher than that of the average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share use of a custom-designed 16-million-pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than before, motivated by our wish to simplify the design of the camera readout electronics. Here, the challenge lies in the design of circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and to high precision. The challenge is greatly exacerbated when forced to work with only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.

  6. OH Planar Laser Induced Fluorescence (PLIF) Measurements for the Study of High Pressure Flames: An Evaluation of a New Laser and a New Camera System

    Science.gov (United States)

    Tedder, Sarah; Hicks, Yolanda

    2012-01-01

    Planar laser-induced fluorescence (PLIF) is used by the Combustion Branch at the NASA Glenn Research Center (NASA Glenn) to assess the characteristics of the flowfield produced by aircraft fuel injectors. To improve and expand the capabilities of the PLIF system, new equipment was installed. The new capabilities of the modified PLIF system are assessed by collecting OH PLIF in a methane/air flame produced by a flat-flame burner. Specifically, the modifications characterized are the addition of an injection seeder to a Nd:YAG laser pumping an optical parametric oscillator (OPO) and the use of a new camera with an interline CCD. OH fluorescence results using the injection-seeded OPO laser are compared to results using a Nd:YAG-pumped dye laser with ultraviolet extender (UVX). The best settings of the new camera for maximum detection of the PLIF signal are reported for the controller gain and microchannel plate (MCP) bracket pulsing. Results are also reported from tests of the Dual Image Feature (DIF) mode of the new camera, which allows image pairs to be acquired in rapid succession, enabling acquisition of a PLIF image and a background image almost simultaneously. Saturation effects in the new camera were also investigated and are reported.

  7. Three-dimensional camera

    Science.gov (United States)

    Bothe, Thorsten; Gesierich, Achim; Legarda-Saenz, Ricardo; Jueptner, Werner P. O.

    2003-05-01

    Industrial and multimedia applications need cost-effective, compact and flexible 3D profiling instruments. In the talk we will show the principle of, applications for, and results from a new miniaturized 3-D profiling system for macroscopic scenes. The system uses a compact housing and is usable like a camera, with minimal stabilization such as a tripod. The system is based on the common fringe projection technique. Camera and projector are assembled with parallel optical axes and coplanar projection and imaging planes. The distance between their axes is comparable to the distance between human eyes, giving a complete system of 21x20x11 cm size and allowing the measurement of high-gradient objects such as the interior of tubes. The fringe projector uses an LCD, which enables fast and flexible pattern projection. Camera and projector have a short focal length and a high system aperture as well as a large depth of focus. Thus, objects can be measured from a shorter distance compared to common systems (e.g., 1 m sized objects at 80 cm distance). In fact, objects with diameters up to 4 m can be profiled, because the set-up allows working with a completely opened aperture combined with bright lamps, giving a large amount of available light and a high signal-to-noise ratio. Normally a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate for the reduced sensitivity via the setup and enhanced evaluation methods. For measurement we use synthetic wavelengths. The developed algorithms are completely adaptable to the user's needs in terms of speed and accuracy. The 3D camera is built from low-cost components, is robust and nearly handheld, and delivers insights also into difficult technical objects such as tubes and interior volumes. Besides the realized high-resolution phase measurement, system calibration is an important task for usability. While calibrating with common photogrammetric models (which are typically used for current fringe projection systems) problems were found that
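
    For reference, the phase evaluation at the heart of such fringe projection systems is compact; a standard four-step (90° shift) version, with the synthetic-wavelength idea mentioned above noted in the comment:

    ```python
    import numpy as np

    def four_step_phase(i0, i1, i2, i3):
        """Standard 4-step phase-shifting evaluation: four fringe images
        (2-D arrays) shifted by 90 degrees each yield the wrapped phase,
        which maps to height after unwrapping and calibration. Measuring
        with two fringe periods lam1, lam2 gives a synthetic wavelength
        lam1*lam2/abs(lam1-lam2) that extends the unambiguous range."""
        return np.arctan2(i3 - i1, i0 - i2)   # wrapped phase in (-pi, pi]
    ```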

  8. New generation of meteorology cameras

    Science.gov (United States)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of the new generation of weather-monitoring cameras responds to the demand for monitoring of sudden weather changes. The new WILLIAM cameras are able to process acquired image data immediately, issue warnings against sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from image data, and the results of image processing are complemented by data from sensors of temperature, humidity, and atmospheric pressure. In this paper, we present the architecture and image data processing algorithms of this monitoring camera, together with a spatially-variant model of imaging-system aberrations based on Zernike polynomials.
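
    The aberration model mentioned above builds on Zernike polynomials; for concreteness, a sketch of the radial component R_n^m on the unit disk (the paper's full spatially-variant model, including its field dependence, is not reproduced here):

    ```python
    import numpy as np
    from math import factorial

    def zernike_radial(n, m, rho):
        """Radial part R_n^m of the Zernike polynomial on the unit disk,
        commonly used to model optical aberrations."""
        m = abs(m)
        out = np.zeros_like(rho, dtype=float)
        for k in range((n - m) // 2 + 1):
            c = ((-1) ** k * factorial(n - k)
                 / (factorial(k) * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
            out += c * rho ** (n - 2 * k)
        return out

    # Defocus term Z_2^0 has R(rho) = 2*rho^2 - 1
    print(zernike_radial(2, 0, np.array([0.0, 1.0])))  # -> [-1.  1.]
    ```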

  9. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    Science.gov (United States)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    The objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, and technology aspects and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  10. The DRAGO gamma camera.

    Science.gov (United States)

    Fiorini, C; Gola, A; Peloso, R; Longoni, A; Lechner, P; Soltau, H; Strüder, L; Ottobrini, L; Martelli, C; Lui, R; Madaschi, L; Belloli, S

    2010-04-01

    In this work, we present the results of the experimental characterization of the DRAGO (DRift detector Array-based Gamma camera for Oncology), a detection system developed for high-spatial-resolution gamma-ray imaging. This camera is based on a monolithic array of 77 silicon drift detectors (SDDs), with a total active area of 6.7 cm², coupled to a single 5-mm-thick CsI(Tl) scintillator crystal. The use of an array of SDDs provides a high quantum efficiency for the detection of the scintillation light together with a very low electronics noise. A very compact detection module based on the use of integrated readout circuits was developed. The performances achieved in gamma-ray imaging using this camera are reported here. When imaging a 0.2 mm collimated ⁵⁷Co source (122 keV) over different points of the active area, a spatial resolution ranging from 0.25 to 0.5 mm was measured. The depth-of-interaction capability of the detector, thanks to the use of a maximum-likelihood reconstruction algorithm, was also investigated by imaging a collimated beam tilted to an angle of 45 degrees with respect to the scintillator surface. Finally, the imager was characterized with in vivo measurements on mice, in a real preclinical environment.

  11. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E., E-mail: eduardo.barrera@upm.es [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid (UPM) (Spain); Ruiz, M.; Sanz, D. [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid (UPM) (Spain); Vega, J.; Castro, R. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Juárez, E.; Salvador, R. [Centro de Investigación en Tecnologías Software y Sistemas Multimedia para la Sostenibilidad, Universidad Politécnica de Madrid (UPM) (Spain)

    2014-05-15

    Highlights: • The test bed allows for the validation of real-time image processing techniques. • Offers FPGA (FlexRIO) image processing that does not require CPU intervention. • Is fully compatible with the architecture of the ITER Fast Controllers. • Provides flexibility and easy integration in distributed experiments based on EPICS. - Abstract: Image diagnostics are becoming standard in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, for hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame grabber, and a PXIe chassis, and offers a software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced with diagnostic cameras. The Camera Link frame grabber (FlexRIO solution from National Instruments) includes a field-programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with a system architecture similar to that of the ITER Fast Controllers, and it provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers.

  12. THE HUBBLE WIDE FIELD CAMERA 3 TEST OF SURFACES IN THE OUTER SOLAR SYSTEM: SPECTRAL VARIATION ON KUIPER BELT OBJECTS

    Energy Technology Data Exchange (ETDEWEB)

    Fraser, Wesley C. [Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada)]; Brown, Michael E. [California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91101 (United States)]; Glass, Florian, E-mail: wesley.fraser@nrc.ca [Observatoire de Genève, Université de Genève, 51 chemin des Maillettes, CH-1290 Sauverny (Switzerland)]

    2015-05-01

    Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in optical and NIR wavebands designed to complement those used during the first visit. Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlated optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a difference in color between the two epochs sufficiently broad to span the full range of colors of the neutral class. This strongly argues that the neutral class is one single class with a broad range of colors, rather than the combination of multiple overlapping classes.

  13. Ice crystal characterization in cirrus clouds: a sun-tracking camera system and automated detection algorithm for halo displays

    Directory of Open Access Journals (Sweden)

    L. Forster

    2017-07-01

    Full Text Available Halo displays in the sky contain valuable information about ice crystal shape and orientation: e.g., the 22° halo is produced by randomly oriented hexagonal prisms while parhelia (sundogs) indicate oriented plates. HaloCam, a novel sun-tracking camera system for the automated observation of halo displays, is presented. An initial visual evaluation of the frequency of halo displays for the ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) field campaign from October to mid-November 2014 showed that sundogs were observed more often than 22° halos. Thus, the majority of halo displays was produced by oriented ice crystals. During the campaign about 27 % of the cirrus clouds produced 22° halos, sundogs or upper tangent arcs. To evaluate the HaloCam observations collected from regular measurements in Munich between January 2014 and June 2016, an automated detection algorithm for 22° halos was developed, which can be extended to other halo types as well. This algorithm detected 22° halos about 2 % of the time for this dataset. The frequency of cirrus clouds during this time period was estimated by co-located ceilometer measurements using temperature thresholds of the cloud base. About 25 % of the detected cirrus clouds occurred together with a 22° halo, which implies that these clouds contained a certain fraction of smooth, hexagonal ice crystals. HaloCam observations complemented by radiative transfer simulations and measurements of aerosol and cirrus cloud optical thickness (AOT and COT) provide a possibility to retrieve more detailed information about ice crystal roughness. This paper demonstrates the feasibility of a completely automated method to collect and evaluate a long-term database of halo observations and shows the potential to characterize ice crystal properties.
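
    A 22° halo detector can start from a radial brightness profile around the sun position, looking for a local peak near 22°; the sketch below assumes a linear angle-per-pixel scale, which a real wide-angle lens would need a proper geometric calibration to replace, and is a simplified stand-in for HaloCam's algorithm.

    ```python
    import numpy as np

    def radial_profile(image, sun_rc, deg_per_px, max_deg=45.0, step=0.5):
        """Mean image intensity versus angular distance from the sun
        pixel (row, col); a 22-degree halo appears as a peak near 22."""
        rows, cols = np.indices(image.shape)
        ang = np.hypot(rows - sun_rc[0], cols - sun_rc[1]) * deg_per_px
        edges = np.arange(0.0, max_deg + step, step)
        idx = np.digitize(ang.ravel(), edges)
        vals = image.ravel().astype(float)
        prof = [vals[idx == i].mean() if np.any(idx == i) else np.nan
                for i in range(1, len(edges))]
        return edges[:-1] + step / 2, np.array(prof)
    ```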

  14. Gigavision - A weatherproof, multibillion pixel resolution time-lapse camera system for recording and tracking phenology in every plant in a landscape

    Science.gov (United States)

    Brown, T.; Borevitz, J. O.; Zimmermann, C.

    2010-12-01

    We have developed a camera system that can record hourly, gigapixel (multi-billion pixel) scale images of an ecosystem in a 360x90 degree panorama. The "Gigavision" camera system is solar-powered and can wirelessly stream data to a server. Quantitative data collection from multi-year time-lapse gigapixel images is facilitated through an innovative web-based toolkit for recording time-series data on developmental stages (phenology) from any plant in the camera's field of view. Gigapixel images enable time-series recording of entire landscapes with a resolution sufficient to record phenology from a majority of individuals in entire populations of plants. When coupled with next-generation sequencing, quantitative population genomics can be performed in a landscape context, linking ecology and evolution in situ and in real time. The Gigavision camera system achieves gigapixel image resolution by recording rows and columns of overlapping megapixel images. These images are stitched together into a single gigapixel-resolution image using commercially available panorama software. Hardware consists of a 5-18 megapixel resolution DSLR or network IP camera mounted on a pair of heavy-duty servo motors that provide pan-tilt capabilities. The servos and camera are controlled with a low-power Windows PC. Servo movement, power switching, and system status monitoring are enabled with Phidgets-brand sensor boards. System temperature, humidity, power usage, and battery voltage are all monitored at 5-minute intervals. All sensor data are uploaded via cellular or 802.11 wireless to an interactive online interface for easy remote monitoring of system status. Systems with direct internet connections upload the full-sized images directly to our automated stitching server, where they are stitched and available online for viewing within an hour of capture. Systems with cellular wireless upload an 80-megapixel "thumbnail" of each larger panorama, and full-sized images are manually
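
    The row-and-column capture pattern described above amounts to tiling the panorama with overlapping shots; a sketch that generates the servo positions, with the 30% overlap chosen arbitrarily as stitching margin:

    ```python
    def pan_tilt_grid(hfov_deg, vfov_deg, overlap=0.3,
                      pan_range=(0.0, 360.0), tilt_range=(0.0, 90.0)):
        """Pan/tilt positions needed to tile a panorama with single shots
        of the given field of view and fractional overlap for stitching."""
        pan_step = hfov_deg * (1.0 - overlap)
        tilt_step = vfov_deg * (1.0 - overlap)
        positions = []
        tilt = tilt_range[0]
        while tilt < tilt_range[1]:
            pan = pan_range[0]
            while pan < pan_range[1]:
                positions.append((pan, tilt))
                pan += pan_step
            tilt += tilt_step
        return positions

    # e.g. a 10 x 7.5 degree view needs ~650 shots for 360 x 90 degrees
    print(len(pan_tilt_grid(10.0, 7.5)))
    ```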

  15. NFC - Narrow Field Camera

    Science.gov (United States)

    Koukal, J.; Srba, J.; Gorková, S.

    2015-01-01

    We have introduced a low-cost CCTV video system for faint meteor monitoring, and here we describe the first results from 5 months of two-station operations. Our system, called NFC (Narrow Field Camera), with a meteor limiting magnitude around +6.5 mag, allows research on trajectories of less massive meteoroids within individual parent meteor showers and the sporadic background. At present 4 stations (2 pairs with coordinated fields of view) of the NFC system are operated in the frame of CEMeNt (Central European Meteor Network). The heart of each NFC station is a sensitive CCTV camera Watec 902 H2 and a fast cinematographic lens Meopta Meostigmat 1/50 - 52.5 mm (50 mm focal length and fixed aperture f/1.0). In this paper we present the first results based on 1595 individual meteors, 368 of which were recorded from two stations simultaneously. This data set allows the first empirical verification of theoretical assumptions about the NFC system's capabilities (stellar and meteor magnitude limit, meteor apparent brightness distribution, and accuracy of single-station measurements) and the first low-mass meteoroid trajectory calculations. Our experimental data clearly demonstrate the capabilities of the proposed system for low-mass meteor registration and show that calculations based on NFC data lead to a significant refinement of the orbital elements of low-mass meteoroids.

  16. The measurement of in vivo joint angles during a squat using a single camera markerless motion capture system as compared to a marker based system.

    Science.gov (United States)

    Schmitz, Anne; Ye, Mao; Boggess, Grant; Shapiro, Robert; Yang, Ruigang; Noehren, Brian

    2015-02-01

    Markerless motion capture may have the potential to make motion capture technology widely clinically practical. However, the ability of a single markerless camera system to quantify clinically relevant lower-extremity joint angles has not been studied in vivo. Therefore, the goal of this study was to compare in vivo joint angles calculated using a marker-based motion capture system and a Microsoft Kinect during a squat. Fifteen individuals participated in the study: 8 male, 7 female; height 1.702±0.089 m, mass 67.9±10.4 kg, age 24±4 years, BMI 23.4±2.2 kg/m². Marker trajectories and Kinect depth-map data of the leg were collected while each subject performed a slow squat motion. Custom code was used to export virtual marker trajectories for the Kinect data. Each set of marker trajectories was used to calculate Cardan knee and hip angles. The patterns of motion were similar between systems, with average absolute differences of 0.9 for both systems. The peak angles calculated by the marker-based and Kinect systems were largely correlated (r>0.55). These results suggest that the Kinect data can be post-processed in a way that makes it a feasible markerless motion capture system for clinical use. Copyright © 2015 Elsevier B.V. All rights reserved.
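
    In its simplest form, the knee angle comparison above reduces to an angle between thigh and shank vectors built from three joint centres; the sketch below is a simplified stand-in for the full Cardan-angle computation used in the study.

    ```python
    import numpy as np

    def knee_flexion_deg(hip, knee, ankle):
        """Knee angle from three joint centres (each a 3-vector, e.g.
        from Kinect skeleton data or marker trajectories). Returns the
        flexion angle in degrees (0 = fully extended)."""
        thigh = np.asarray(hip, float) - np.asarray(knee, float)
        shank = np.asarray(ankle, float) - np.asarray(knee, float)
        cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh)
                                         * np.linalg.norm(shank))
        return 180.0 - np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    ```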

  17. Structure monitor system by using optical fiber sensor and watching camera in utility tunnel in urban area

    Science.gov (United States)

    Nakano, Masahiro; Torigoe, Toshihiko; Kawano, Masaru

    2011-09-01

    This paper reports measurement results for a utility tunnel (electric power and communication) obtained during adjacent expressway construction, using an optical fiber sensor system. In addition, a surveillance camera for monitoring, installed in the shared premises and connected to the network for this measurement campaign, is described, and the overall observations made with it are reported.

  18. a Modified Projective Transformation Scheme for Mosaicking Multi-Camera Imaging System Equipped on a Large Payload Fixed-Wing Uas

    Science.gov (United States)

    Jhan, J. P.; Li, Y. T.; Rau, J. Y.

    2015-03-01

    In recent years, Unmanned Aerial Systems (UAS) have been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. A UAS is a platform with higher mobility and lower risk for human operators, but its low payload and short operation time reduce image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is mounted on a large-payload UAS, which is designed to collect images with large ground coverage in an effective way. The field of view (FOV) is increased to 127 degrees, which is thus suitable for collecting disaster images in mountainous areas. The five synchronously acquired images are registered and mosaicked into a larger-format virtual image to reduce the number of images and the post-processing time, and to ease stereo plotting. Instead of traditional image matching and bundle adjustment to estimate transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated and used to derive the coefficients of a modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs, owing to differing environmental conditions as well as the vibration of the UAS, which causes misregistration in the initial MPT results. Remaining residuals are analysed through tie-point matching in the overlapping areas of the initial MPT results, in which displacement and scale differences are introduced and corrected to modify the ROPs and IOPs for finer registration. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. Comparisons between separate-camera and mosaic images through rigorous aerial triangulation are conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. This proves that the designed imaging system and the proposed scheme
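
    A projective transformation is, at its core, a homography applied per camera; as a generic illustration (not the authors' MPT model with its displacement and scale corrections), OpenCV can warp an oblique image into the virtual mosaic frame given a 3x3 matrix H derived from the calibrated IOPs/ROPs:

    ```python
    import cv2
    import numpy as np

    def warp_to_mosaic(image, H, mosaic_size):
        """Warp one camera's image into the virtual mosaic frame with a
        3x3 projective transformation H; mosaic_size is (width, height)
        of the output canvas. Warped images from the five cameras would
        then be blended into a single virtual frame."""
        return cv2.warpPerspective(image, H.astype(np.float64), mosaic_size)
    ```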

  19. Model based scattering correction in time-of-flight cameras.

    Science.gov (United States)

    Schäfer, Henrik; Lenzen, Frank; Garbe, Christoph S

    2014-12-01

    In-camera light scattering is a systematic error of Time-of-Flight depth cameras that significantly reduces the accuracy of the systems. A completely new model is presented, based on raw data calibration and only one additional intrinsic camera parameter. It is shown that the approach effectively removes the errors of in-camera light scattering.

  20. Hand-Camera Coordination Varies over Time in Users of the Argus® II Retinal Prosthesis System

    Directory of Open Access Journals (Sweden)

    Michael P Barry

    2016-05-01

    Full Text Available Introduction: Most visual neuroprostheses use an external camera for image acquisition. This adds two complications to phosphene perception: (1) the stimulation locus will not change with eye movements; and (2) external cameras can be aimed in directions different from the user's intended direction of gaze. Little is known about the stability of where users perceive light sources to be, or whether they will adapt to changes in camera orientation. Methods: Three end-stage retinitis pigmentosa patients implanted with the Argus II participated in this study. This prosthesis stimulated the retina based on an 18° x 11° area selected within the camera's 66° x 49° field of view. The center of the electrode array's field of view mapped onto the camera's field of view is the camera alignment position (CAP). Proper camera alignments minimize errors in localizing visual percepts in space. Subjects touched single white squares in random locations on a darkened touchscreen 40 or more times. To study adaptation, subjects were given intentional CAP misalignments of 15°–40° for 5–6 months. Subjects performed this test with auditory feedback during bi-weekly lab sessions. Misaligned CAPs were maintained for another 5–6 months without auditory feedback. Touch alignment was tracked to detect any adaptation. To estimate localization stability, data for when CAPs were set to minimize errors were tracked. The same localization test as above was used. Localization errors were tracked every 1–2 weeks for up to 40 months. Results: Two of three subjects used auditory feedback to improve accuracy with misaligned CAPs at an average rate of 0.02°/day (p < 0.05, bootstrap analysis of linear regression). The rates observed here were ~4000 times slower than those seen in normally-sighted subjects adapting to prism glasses. Removal of auditory feedback precipitated error increases for all subjects. Optimal CAPs varied significantly across test sessions (p < 10−4

  1. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  2. Development of a safe ultraviolet camera system to enhance awareness by showing effects of UV radiation and UV protection of the skin (Conference Presentation)

    Science.gov (United States)

    Verdaasdonk, Rudolf M.; Wedzinga, Rosaline; van Montfrans, Bibi; Stok, Mirte; Klaessens, John; van der Veen, Albert

    2016-03-01

    The significant increase of skin cancer occurring in the western world is attributed to longer sun exposure during leisure time. For prevention, people should become aware of the risks of UV light exposure by being shown skin damage and the protective effect of sunscreen with a UV camera. A UV awareness imaging system optimized for 365 nm (UV-A) was developed using consumer components, making it interactive, safe, and mobile. A Sony NEX5t camera was adapted to the full spectral range. In addition, UV-transparent lenses and filters were selected based on measured spectral characteristics (Schott S8612 and Hoya U-340 filters) to obtain the highest contrast for, e.g., melanin spots and wrinkles on the skin. For uniform UV illumination, 2 facial tanner units were fitted with UV 365 nm black-light fluorescent tubes. The safety of the UV illumination was determined relative to the sun and with absolute irradiance measurements at the working distance. A maximum exposure time of over 15 minutes was calculated according to the international safety standards. The UV camera was successfully demonstrated during the Dutch National Skin Cancer day and was well received by dermatologists and the participating public. In particular, the 'black paint' effect of putting sunscreen on the face was dramatic and contributed to awareness of regions of the face that are likely to be missed when applying sunscreen. The UV imaging system shows promise for diagnostics and clinical studies in dermatology and potentially in other areas (dentistry and ophthalmology).
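
    The safety calculation mentioned above reduces to one line once the effective (spectrally weighted) irradiance at the working distance is known; the 30 J/m² daily limit below is the ICNIRP/ACGIH actinic UV value and is an assumption, since the abstract does not name the standard it applied.

    ```python
    def max_uv_exposure_minutes(effective_irradiance_w_m2,
                                daily_limit_j_m2=30.0):
        """Permissible exposure time (minutes) from measured effective
        irradiance, assuming the 30 J/m^2 actinic daily limit."""
        return daily_limit_j_m2 / effective_irradiance_w_m2 / 60.0

    # e.g. 0.03 W/m^2 effective at the working distance -> ~16.7 minutes
    print(max_uv_exposure_minutes(0.03))
    ```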

  3. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal-plane hip, knee, and ankle angles with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). The reliability of a single-camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g., lack of digitization practice and marker placement), participants (e.g., loose-fitting clothing), and camera systems (e.g., frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
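
    ICC values like those above can be computed from a subjects-by-raters matrix with a standard ANOVA decomposition; a sketch of ICC(2,1) after Shrout and Fleiss (the study does not state which ICC form it used, so this form is an assumption):

    ```python
    import numpy as np

    def icc_2_1(data):
        """Two-way random, single-measures intraclass correlation
        ICC(2,1) for an n-subjects x k-raters matrix."""
        data = np.asarray(data, dtype=float)
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
        resid = (data - data.mean(axis=1, keepdims=True)
                 - data.mean(axis=0, keepdims=True) + grand)
        ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    ```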

  4. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  5. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  6. AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images

    Science.gov (United States)

    Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.

    2017-01-01

    Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce the personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results against manual-only review for white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time, but even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Abundance estimates for white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase the efficiency of camera trapping surveys.

  7. Compact 3D camera

    Science.gov (United States)

    Bothe, Thorsten; Osten, Wolfgang; Gesierich, Achim; Jueptner, Werner P. O.

    2002-06-01

    A new, miniaturized fringe projection system is presented which has a size and handling that approximate those of common 2D cameras. The system is based on the fringe projection technique. A miniaturized fringe projector and camera are assembled into a housing of 21x20x11 cm size with a triangulation basis of 10 cm. The advantage of the small triangulation basis is the possibility of measuring difficult objects with high gradients. Normally a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate for the reduced sensitivity via the setup and enhanced evaluation methods. Special hardware features are a high-quality, bright light source (and components to handle the high luminous flux) as well as adapted optics to gain a large aperture angle and a focus scan unit to increase the usable measurement volume. Adaptable synthetic wavelengths and integration times were used to increase the measurement quality and allow robust measurements that are adaptable to the desired speed and accuracy. Algorithms were developed to generate automatic focus positions to completely cover extended measurement volumes. Principles, setup, measurement examples and applications are shown.
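
    The sensitivity penalty of a small triangulation basis can be made explicit with a common first-order relation from phase-measuring fringe projection; the constant depends on the exact projection geometry, so this is a generic sketch rather than the authors' formula:

        \Delta z \;\approx\; \frac{p}{2\pi\,\tan\theta}\,\Delta\phi

    Here p is the fringe period on the object, θ the triangulation angle set by the basis, and Δφ the phase noise: halving the basis roughly halves tan θ and doubles the height error Δz, which is what the enhanced evaluation methods mentioned above must win back.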

  8. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  9. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation, as required in the method, is much more maneuverable, compared with some strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems.
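
    The decomposition step referred to above lends itself to a compact linear implementation. The following numpy sketch assumes the infinite homography H_inf (mapping the uncalibrated image to the reference image) and the reference intrinsics K_ref have already been estimated; the function name and inputs are illustrative, not the authors' code:

        import numpy as np

        def intrinsics_from_infinite_homography(H_inf, K_ref):
            """Recover unknown intrinsics K, assuming H_inf ~ K_ref @ R @ inv(K)."""
            Hp = np.linalg.inv(K_ref) @ H_inf   # ~ R @ inv(K), up to scale
            M = Hp.T @ Hp                       # rotation cancels: ~ inv(K).T @ inv(K)
            L = np.linalg.cholesky(M)           # lower-triangular factor ~ inv(K).T
            K = np.linalg.inv(L).T              # upper-triangular intrinsics
            return K / K[2, 2]                  # remove the arbitrary projective scale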

  10. The effectiveness of detection of splashed particles using a system of three integrated high-speed cameras

    Science.gov (United States)

    Ryżak, Magdalena; Beczek, Michał; Mazur, Rafał; Sochan, Agata; Bieganowski, Andrzej

    2017-04-01

    The phenomenon of splash, which is one of the factors causing erosion of the soil surface, is the subject of research by various scientific teams. One of the efficient methods for observing and analysing this phenomenon is the use of high-speed cameras recording particles at 2000 frames per second or higher. Analysis of the splash phenomenon with high-speed cameras and specialized software can reveal, among other things, the number of ejected particles, their speeds, trajectories, and the distances over which they were transferred. The paper presents an attempt to evaluate the efficiency of detection of splashed particles with a set of 3 cameras (Vision Research MIRO 310) and the software Dantec Dynamics Studio, using a 3D module (Volumetric PTV). To assess the effectiveness of estimating the number of particles, the experiment was performed on glass beads with a diameter of 0.5 mm (corresponding to the sand fraction). Water droplets with a diameter of 4.2 mm fell on a sample from a height of 1.5 m. Two types of splashed particles were observed: particles with a low range (up to 18 mm) splashed at larger angles, and particles with a high range (up to 118 mm) splashed at smaller angles. The detection efficiency for the number of splashed particles estimated by the software was 45-65% for particles with a large range. The effectiveness of particle detection by the software was calculated by comparison with the number of beads that fell on the adhesive surface around the sample. This work was partly financed by the National Science Centre, Poland; project no. 2014/14/E/ST10/00851.

  11. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  12. High Speed Digital Camera Technology Review

    Science.gov (United States)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  13. INT prime focus mosaic camera

    Science.gov (United States)

    Ives, Derek J.; Tulloch, Simon; Churchill, John

    1996-03-01

    The INT Prime Focus Mosaic Camera (INT PFC) is designed to provide a large field survey and supernovae search capability for the prime focus of the 2.5 m Isaac Newton Telescope (INT). It is a joint collaboration between the Royal Greenwich Observatory (UK), Kapteyn Sterrenwacht Werkgroep (Netherlands), and the Lawrence Berkeley Laboratories (USA). The INT PFC consists of a 4 chip mosaic utilizing thinned and anti-reflection coated CCDs. These are LORAL devices of the LICK3 design. They will be operated cryogenically in a purpose built camera assembly. A fifth CCD, of the same type, is co-mounted with the science array in the cryostat to provide autoguider functions. This cryostat then mounts to the main camera assembly at the prime focus. This assembly will include standard filters and a novel shutter wheel which has been specifically designed for this application. The camera will have an unvignetted field of 40 arcminutes and a focal ratio of f/3.3. This results in a very tight mechanical specification for the co-planarity and flatness of the array of CCDs and also quite stringent flexure tolerances for the camera assembly. A method of characterizing the co-planarity and flatness of the array will be described. The overall system architecture will also be described. One of the main requirements is to read the whole array out within 100 s, with less than 10 e- rms noise and very low CCD cross talk.

  14. A versatile photogrammetric camera automatic calibration suite for multi-spectral fusion and optical helmet tracking

    CSIR Research Space (South Africa)

    De Villiers, J

    2014-05-01

    Full Text Available This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length and camera six degree of freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra...

  15. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  16. Infrared Camera Characterization of Bi-Propellant Reaction Control Engines during Auxiliary Propulsion Systems Tests at NASA's White Sands Test Facility in Las Cruces, New Mexico

    Science.gov (United States)

    Holleman, Elizabeth; Sharp, David; Sheller, Richard; Styron, Jason

    2007-01-01

    This paper describes the application of a FLIR Systems A40M infrared (IR) digital camera for thermal monitoring of a Liquid Oxygen (LOX) and Ethanol bi-propellant Reaction Control Engine (RCE) during Auxiliary Propulsion System (APS) testing at the National Aeronautics & Space Administration's (NASA) White Sands Test Facility (WSTF) near Las Cruces, New Mexico. Typically, NASA has relied mostly on the use of ThermoCouples (TC) for this type of thermal monitoring due to the variability of constraints required to accurately map rapidly changing temperatures from ambient to glowing hot chamber material. Obtaining accurate real-time temperatures in the IR spectrum is made even more elusive by the changing emissivity of the chamber material as it begins to glow. The parameters evaluated prior to APS testing included: (1) remote operation of the A40M camera using fiber optic Firewire signal sender and receiver units; (2) operation of the camera inside a Pelco explosion proof enclosure with a germanium window; (3) remote analog signal display for real-time monitoring; (4) remote digital data acquisition of the A40M's sensor information using FLIR's ThermaCAM Researcher Pro 2.8 software; and (5) overall reliability of the system. An initial characterization report was prepared after the A40M characterization tests at Marshall Space Flight Center (MSFC) to document controlled heat source comparisons to calibrated TCs. Summary IR digital data recorded from WSTF's APS testing is included within this document along with findings, lessons learned, and recommendations for further usage as a monitoring tool for the development of rocket engines.

  17. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  18. Omnidirectional Underwater Camera Design and Calibration

    Directory of Open Access Journals (Sweden)

    Josep Bosch

    2015-03-01

    Full Text Available This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.

  19. Omnidirectional Underwater Camera Design and Calibration

    Science.gov (United States)

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
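
    The core operation behind such a ray-tracing FOV simulator is refraction at each housing interface. A minimal numpy sketch of the vector form of Snell's law, with nominal refractive indices; the actual simulator's handling of the housing geometry is not reproduced here:

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit direction d at an interface with unit normal n
            (pointing toward the incoming ray), from index n1 into n2.
            Returns None on total internal reflection."""
            eta = n1 / n2
            cos_i = -np.dot(n, d)
            sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
            if sin2_t > 1.0:
                return None
            return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

        # A camera ray crossing a flat port: air -> glass -> water.
        d = np.array([0.3, 0.0, 1.0]); d /= np.linalg.norm(d)
        n = np.array([0.0, 0.0, -1.0])          # port normal, toward the camera
        d_water = refract(refract(d, n, 1.000, 1.52), n, 1.52, 1.333)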

  20. A quasi-static method for determining the characteristics of a motion capture camera system in a "split-volume" configuration

    Science.gov (United States)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2002-01-01

    The purpose of this study was to determine the accuracy, repeatability and resolution of a six-camera Motion Analysis system in a vertical split-volume configuration using a unique quasi-static methodology. The position of a reflective marker was recorded while it was moved quasi-statically over a range of 2.54 mm (0.100 inches) via a linearly-translating table. The table was placed at five different heights to cover both sub-volumes and the overlapping region. Data analysis showed that accuracy, repeatability and resolution values were consistent across all regions of the split-volume, including the overlapping section.

  1. Streak camera meeting summary

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bliss, David E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Streak cameras are important for high-speed data acquisition in single event experiments, where the total recorded information (I) is shared between the number of measurements (M) and the number of samples (S). Topics of this meeting included: streak camera use at the national laboratories; current streak camera production; new tube developments and alternative technologies; and future planning. Each topic is summarized in the following sections.

  2. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  3. An automated SO2 camera system for continuous, real-time monitoring of gas emissions from Kīlauea Volcano's summit Overlook Crater

    Science.gov (United States)

    Kern, Christoph; Sutton, Jeff; Elias, Tamar; Lee, Robert Lopaka; Kamibayashi, Kevan P.; Antolik, Loren; Werner, Cynthia A.

    2015-01-01

    SO2 camera systems allow rapid two-dimensional imaging of sulfur dioxide (SO2) emitted from volcanic vents. Here, we describe the development of an SO2 camera system specifically designed for semi-permanent field installation and continuous use. The integration of innovative but largely “off-the-shelf” components allowed us to assemble a robust and highly customizable instrument capable of continuous, long-term deployment at Kīlauea Volcano's summit Overlook Crater. Recorded imagery is telemetered to the USGS Hawaiian Volcano Observatory (HVO) where a novel automatic retrieval algorithm derives SO2 column densities and emission rates in real-time. Imagery and corresponding emission rates displayed in the HVO operations center and on the internal observatory website provide HVO staff with useful information for assessing the volcano's current activity. The ever-growing archive of continuous imagery and high-resolution emission rates in combination with continuous data from other monitoring techniques provides insight into shallow volcanic processes occurring at the Overlook Crater. An exemplary dataset from September 2013 is discussed in which a variation in the efficiency of shallow circulation and convection, the processes that transport volatile-rich magma to the surface of the summit lava lake, appears to have caused two distinctly different phases of lake activity and degassing. This first successful deployment of an SO2 camera for continuous, real-time volcano monitoring shows how this versatile technique might soon be adapted and applied to monitor SO2 degassing at other volcanoes around the world.
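
    The automatic retrieval algorithm itself is specific to HVO, but the two-filter step that SO2 cameras generally build on is simple to sketch. In the sketch below, the band choices, image names and the effective cross section sigma_eff (obtained from calibration cells) are illustrative assumptions, not the USGS implementation:

        import numpy as np

        def so2_column_density(on, off, bg_on, bg_off, sigma_eff):
            """Apparent SO2 column density from an SO2-sensitive 'on' band
            (~310 nm) and an SO2-blind 'off' band (~330 nm), each referenced
            to a gas-free sky image (Beer-Lambert ratio)."""
            tau_on = -np.log(on / bg_on)      # on-band optical depth
            tau_off = -np.log(off / bg_off)   # broadband/aerosol optical depth
            return (tau_on - tau_off) / sigma_eff   # column density map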

  4. In-flight measurements of propeller blade deformation on a VUT100 cobra aeroplane using a co-rotating camera system

    Science.gov (United States)

    Boden, F.; Stasicki, B.; Szypuła, M.; Ružička, P.; Tvrdik, Z.; Ludwikowski, K.

    2016-07-01

    Knowledge of propeller or rotor blade behaviour under real operating conditions is crucial for optimizing the performance of a propeller or rotor system. A team of researchers, technicians and engineers from Avia Propeller, DLR, EVEKTOR and HARDsoft developed a rotating stereo camera system dedicated to in-flight blade deformation measurements. The whole system, co-rotating with the propeller at its full speed and hence exposed to high centrifugal forces and strong vibration, had been successfully tested on an EVEKTOR VUT 100 COBRA aeroplane in Kunovice (CZ) within the project AIM2—advanced in-flight measurement techniques funded by the European Commission (contract no. 266107). This paper will describe the work, starting from drawing the first sketch of the system up to performing the successful flight test. Apart from a description of the measurement hardware and the applied IPCT method, the paper will give some impressions of the flight test activities and discuss the results obtained from the measurements.

  5. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system to make stereo photographs or videos based on just two mirrors was made in 1989 and recently adapted to a digital camera setup. A simple system for making stereo photographs or videos, based on two mirrors that split the image field, was created in 1989 and recently adapted to a digital camera.

  6. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained while working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  7. Eye pupil detection system using an ensemble of regression forest and fast radial symmetry transform with a near infrared camera

    Science.gov (United States)

    Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul

    2017-09-01

    In this paper, we focus on pupil center detection in video sequences that include varying head poses and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye by using cascaded local regression based on a regression forest. Based on the rough location of the pupil, a fast radial symmetry transform is then applied at the previously found pupil location to refine the pupil center. As the final step, the pupil displacement is estimated between the previous frame and the current frame to maintain accuracy against a false localization result occurring in a particular frame. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with an infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that its pupil center detection capability is better than that of other methods, with a shorter processing time.

  8. Epifauna of the Sea of Japan collected via a new epibenthic sledge equipped with camera and environmental sensor systems

    Science.gov (United States)

    Brandt, A.; Elsner, N.; Brenke, N.; Golovan, O.; Malyutina, M. V.; Riehl, T.; Schwabe, E.; Würzberg, L.

    2013-02-01

    Faunistic data from a newly designed camera-epibenthic sledge (C-EBS) are presented. These were collected during the joint Russian-German expedition SoJaBio (Sea of Japan Biodiversity Studies) on board the R.V. Akademik Lavrentyev from four transects (A-D) between 460 and 3660 m depth. In total, 244,531 macro- and megafaunal individuals were sampled with the classes Malacostraca (80,851 individuals), Polychaeta (36,253 ind.) and Ophiuroidea (34,004 ind.) being most abundant. Within the Malacostraca, Peracarida (75,716 ind.) were most abundant and within these, the Isopoda were the dominant taxon (27,931 ind.), followed by Amphipoda (21,403 ind.), Cumacea (13,971 ind.) and Tanaidacea (10,830 ind.). Mysidacea (1581 ind.) were least frequent. Bivalvia, Amphipoda, Cumacea and Mysidacea as well as inbenthic meiofaunal Nematoda occurred in higher numbers at the shallower stations and their numbers decreased with increasing depth. Polychaeta, Isopoda, and Tanaidacea, on the contrary, increased in abundance with increasing depth. Only one isopod species was sampled at abyssal depths in the Sea of Japan but at very high abundance: Eurycope spinifrons Gurjanova, 1933 (Asellota: Munnopsidae). Echinoderms occurred frequently at the shallower slope stations. Ophiuroids were dominating, followed by holothurians, and echinoids and asteroids which occurred in lower numbers and primarily at the shallower stations of transects A and B. Only 2163 individual anthozoans were recorded and these were mostly confined to the lower slope. The technical design of a new C-EBS is described. Next to temperature-insulated epi- and suprabenthic samplers, it is equipped with still and video cameras, which deliver information on seabed topography and megafaunal occurrence. Furthermore, Aanderaa CTD and SEAGUARD RCM allow for collection of physical parameters, such as near bottom oxygen composition, temperature and conductivity.

  9. The Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, J.; Thiessen, D.; Pourangi, A.; Kobzeff, P.; Litwin, T.; Scherr, L.; Elliott, S.; Dingizian, A.; Maimone, M.

    2012-09-01

    NASA's Mars Science Laboratory (MSL) Rover is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover cameras described in Maki et al. (J. Geophys. Res. 108(E12): 8071, 2003). Images returned from the engineering cameras will be used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The Navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The Hazard Avoidance Cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a 1024×1024 pixel detector and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer "A" and the other set is connected to rover computer "B". The Navcams and Front Hazcams each provide similar views from either computer. The Rear Hazcams provide different views from the two computers due to the different mounting locations of the "A" and "B" Rear Hazcams. This paper provides a brief description of the engineering camera properties, the locations of the cameras on the vehicle, and camera usage for surface operations.
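
    The quoted pixel scales are roughly consistent with the stated FOVs over a 1024-pixel detector side, if one assumes a perspective (rectilinear) mapping for the Navcams and an equal-angle (fisheye-like) mapping for the wide-angle Hazcams; both lens models are assumptions made for this back-of-the-envelope check:

        import math

        n = 1024                                           # pixels per detector side
        navcam = 2 * math.tan(math.radians(45) / 2) / n    # rectilinear, 45-deg FOV
        hazcam = math.radians(124) / n                     # equal-angle, 124-deg FOV
        print(f"Navcam ~{navcam * 1e3:.2f} mrad/px")       # ~0.81, quoted 0.82
        print(f"Hazcam ~{hazcam * 1e3:.2f} mrad/px")       # ~2.11, quoted 2.1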

  10. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, in detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure, and deploying ad-hoc solutions based on the curren...

  11. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...

  12. Uav Cameras: Overview and Geometric Calibration Benchmark

    Science.gov (United States)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus is laid on geometry here, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially in such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  13. SPEIR: A Ge Compton Camera

    Energy Technology Data Exchange (ETDEWEB)

    Mihailescu, L; Vetter, K M; Burks, M T; Hull, E L; Craig, W W

    2004-02-11

    The SPEctroscopic Imager for γ-Rays (SPEIR) is a new concept of a compact γ-ray imaging system of high efficiency and spectroscopic resolution with a 4π field-of-view. The system behind this concept employs double-sided segmented planar Ge detectors accompanied by the use of list-mode photon reconstruction methods to create a sensitive, compact Compton scatter camera.
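
    List-mode Compton reconstruction of this kind rests on the standard Compton kinematics: each event constrains the source direction to a cone whose half-angle θ follows from the deposited energies,

        \cos\theta \;=\; 1 - m_e c^2 \left( \frac{1}{E_\gamma - E_1} - \frac{1}{E_\gamma} \right),
        \qquad E_\gamma = E_1 + E_2,

    where E_1 is the energy deposited in the scattering interaction and E_γ the fully absorbed photon energy. The segmented planar Ge detectors supply the interaction positions that fix each cone's axis.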

  14. VLSI-distributed architectures for smart cameras

    Science.gov (United States)

    Wolf, Wayne H.

    2001-03-01

    Smart cameras use video/image processing algorithms to capture images as objects, not as pixels. This paper describes architectures for smart cameras that take advantage of VLSI to improve the capabilities and performance of smart camera systems. Advances in VLSI technology aid in the development of smart cameras in two ways. First, VLSI allows us to integrate large amounts of processing power and memory along with image sensors. CMOS sensors are rapidly improving in performance, allowing us to integrate sensors, logic, and memory on the same chip. As we become able to build chips with hundreds of millions of transistors, we will be able to include powerful multiprocessors on the same chip as the image sensors. We call these image sensor/multiprocessor systems image processors. Second, VLSI allows us to put a large number of these powerful sensor/processor systems on a single scene. VLSI factories will produce large quantities of these image processors, making it cost-effective to use a large number of them in a single location. Image processors will be networked into distributed cameras that use many sensors as well as the full computational resources of all the available multiprocessors. Multiple cameras make a number of image recognition tasks easier: we can select the best view of an object, eliminate occlusions, and use 3D information to improve the accuracy of object recognition. This paper outlines approaches to distributed camera design: architectures for image processors and distributed cameras; algorithms to run on distributed smart cameras; and applications of VLSI distributed camera systems.

  15. Outcomes of road traffic injuries before and after the implementation of a camera ticketing system: a retrospective study from a large trauma center in Saudi Arabia.

    Science.gov (United States)

    Alghnam, Suliman; Alkelya, Muhamad; Alfraidy, Moath; Al-Bedah, Khalid; Albabtain, Ibrahim Tawfiq; Alshenqeety, Omar

    2017-01-01

    Road traffic injuries (RTIs) are the third leading cause of death in Saudi Arabia. Because speed is a major risk factor for severe crash-related injuries, a camera ticketing system was implemented countrywide in mid-2010 by the traffic police in an effort to improve traffic safety. There are no published studies on the effects of the system in Saudi Arabia. To examine injury severity and associated mortality at a large trauma center before and after the implementation of the ticketing system. Retrospective, analytical. Trauma center of a tertiary care center in Riyadh. The study included all trauma registry patients seen in the emergency department for a crash-related injury (automobile occupants, pedestrians, or motorcyclists) between January 2005 and December 2014. Associations with outcome measures were assessed by univariate and multivariate methods. Injury severity score (ISS), Glasgow coma scale (GCS) and mortality. All health outcomes improved in the period following implementation of the ticketing system. Following implementation, ISS scores decreased (-3.1, 95% CI -4.6, -1.6) and GCS increased (0.47, 95% CI 0.08, 0.87) after adjusting for other covariates. The odds of death were 46% lower following implementation than before implementation. When the data were log-transformed to account for skewed data distributions, the results remained statistically significant. This study suggests positive health implications following the implementation of the camera ticketing system. Further investment in public health interventions is warranted to reduce preventable RTIs. The study findings represent a trauma center at a single hospital in Riyadh, which may not generalize to the Saudi population.

  16. A compact single-camera system for high-speed, simultaneous 3-D velocity and temperature measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Louise; Sick, Volker; Frank, Jonathan H.

    2013-09-01

    The University of Michigan and Sandia National Laboratories collaborated on the initial development of a compact single-camera approach for simultaneously measuring 3-D gas-phase velocity and temperature fields at high frame rates. A compact diagnostic tool is desired to enable investigations of flows with limited optical access, such as near-wall flows in an internal combustion engine. These in-cylinder flows play a crucial role in improving engine performance. Thermographic phosphors were proposed as flow and temperature tracers to extend the capabilities of a novel, compact 3D velocimetry diagnostic to include high-speed thermometry. Ratiometric measurements were performed using two spectral bands of laser-induced phosphorescence emission from BaMg2Al10O17:Eu (BAM) phosphors in a heated air flow to determine the optimal optical configuration for accurate temperature measurements. The originally planned multi-year research project ended prematurely after the first year due to the Sandia-sponsored student leaving the research group at the University of Michigan.
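
    The ratiometric step itself is generic: the two-band phosphorescence intensity ratio is a monotonic function of temperature and is inverted through a calibration curve. A minimal sketch, with illustrative calibration points rather than measured BAM data:

        import numpy as np

        cal_T = np.array([300.0, 400.0, 500.0, 600.0])  # temperature, K
        cal_R = np.array([1.00, 0.80, 0.55, 0.30])      # band ratio I1/I2, decreasing

        def temperature_from_ratio(i1, i2):
            # np.interp needs increasing x, so reverse the decreasing curve
            return np.interp(i1 / i2, cal_R[::-1], cal_T[::-1])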

  17. Dynamics of the shallow plumbing system investigated from borehole strainmeters and cameras during the 15 March, 2007 Vulcanian paroxysm at Stromboli volcano

    Science.gov (United States)

    Bonaccorso, Alessandro; Calvari, Sonia; Linde, Alan; Sacks, Selwyn; Boschi, Enzo

    2012-12-01

    The 15 March, 2007 Vulcanian paroxysm at Stromboli volcano was recorded by several instruments that allowed description of the eruptive sequence and unraveling of the processes in the upper feeding system. Among the devices installed on the island, two borehole strainmeters recorded unique signals not fully explored before. Here we present an analysis of these signals together with the time-lapse images from a monitoring system comprising both infrared and visual cameras. The two strainmeter signals display an initial phase of pressure growth in the feeding system lasting ~2 min. This is followed by 25 s of low-amplitude oscillations of the two signals, which we interpret as a strong step-like overpressure building up in the uppermost conduit by the gas-rich magma accumulating below a thick pile of rock produced by crater rim collapses. This overpressure caused shaking of the ground and triggered a number of small landslides of the inner crater rim recorded by the monitoring cameras. When the plug obstructing the crater was removed by the initial Vulcanian blast, the two strainmeter signals showed opposite sign, compatible with a depressurizing source at ~1.5 km depth, at the junction between the intermediate and shallow feeding system inferred by previous studies. The sudden depressurization accompanying the Vulcanian blast caused an oscillation of the source composed of three cycles of about 20 s each with decreasing amplitude, also well recorded by the strainmeters. The visible effect of this behavior was the initial Vulcanian blast and a 2-3 km high eruptive column followed by two lava fountaining episodes of decreasing intensity and height. To our knowledge, this is the first time that such behavior has been observed on an open-conduit volcano.

  18. Kitt Peak speckle camera.

    Science.gov (United States)

    Breckinridge, J B; McAlister, H A; Robinson, W G

    1979-04-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  19. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique starts at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  20. New readout and data-acquisition system in an electron-tracking Compton camera for MeV gamma-ray astronomy (SMILE-II)

    Energy Technology Data Exchange (ETDEWEB)

    Mizumoto, T., E-mail: mizumoto@cr.scphys.kyoto-u.ac.jp [Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Matsuoka, Y. [Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Mizumura, Y. [Unit of Synergetic Studies for Space, Kyoto University, 606-8502 Kyoto (Japan); Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Tanimori, T. [Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Unit of Synergetic Studies for Space, Kyoto University, 606-8502 Kyoto (Japan); Kubo, H.; Takada, A.; Iwaki, S.; Sawano, T.; Nakamura, K.; Komura, S.; Nakamura, S.; Kishimoto, T.; Oda, M.; Miyamoto, S.; Takemura, T.; Parker, J.D.; Tomono, D.; Sonoda, S. [Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Miuchi, K. [Department of Physics, Kobe University, 658-8501 Kobe (Japan); Kurosawa, S. [Institute for Materials Research, Tohoku University, 980-8577 Sendai (Japan)

    2015-11-11

    For MeV gamma-ray astronomy, we have developed an electron-tracking Compton camera (ETCC) as a MeV gamma-ray telescope capable of rejecting the radiation background and attaining the high sensitivity of near 1 mCrab in space. Our ETCC comprises a gaseous time-projection chamber (TPC) with a micro pattern gas detector for tracking recoil electrons and a position-sensitive scintillation camera for detecting scattered gamma rays. After the success of a first balloon experiment in 2006 with a small ETCC (using a 10×10×15 cm³ TPC) for measuring diffuse cosmic and atmospheric sub-MeV gamma rays (Sub-MeV gamma-ray Imaging Loaded-on-balloon Experiment I; SMILE-I), a (30 cm)³ medium-sized ETCC was developed to measure MeV gamma-ray spectra from celestial sources, such as the Crab Nebula, with single-day balloon flights (SMILE-II). To achieve this goal, a 100-times-larger detection area compared with that of SMILE-I is required without changing the weight or power consumption of the detector system. In addition, the event rate is also expected to dramatically increase during observation. Here, we describe both the concept and the performance of the new data-acquisition system with this (30 cm)³ ETCC to manage 100 times more data while satisfying the severe restrictions regarding the weight and power consumption imposed by a balloon-borne observation. In particular, to improve the detection efficiency of the fine tracks in the TPC from ~10% to ~100%, we introduce a new data-handling algorithm in the TPC. Therefore, for efficient management of such large amounts of data, we developed a data-acquisition system with parallel data flow.

  1. Potential applications of a dual-sweep streak camera system for characterizing particle and photon beams of VUV, XUV, and x-ray FELS

    Energy Technology Data Exchange (ETDEWEB)

    Lumpkin, A. [Argonne National Lab., IL (United States)

    1995-12-31

    The success of time-resolved imaging techniques in the characterization of particle beams and photon beams of the recent generation of L-band linac-driven or storage ring FELs in the infrared, visible, and ultraviolet wavelength regions can be extended to the VUV, XUV, and x-ray FELs. Tests and initial data have been obtained with the Hamamatsu C5680 dual-sweep streak camera system which includes a demountable photocathode (thin Au) assembly and a flange that allows windowless operation with the transport vacuum system. This system can be employed at wavelengths shorter than 100 nm and down to 1 Å. First tests of such a system at 248-nm wavelengths have been performed on the Argonne Wakefield Accelerator (AWA) drive laser source. A quartz window was used at the tube entrance aperture. A preliminary test using a Be window mounted on a different front flange of the streak tube to look at an x-ray bremsstrahlung source at the AWA was limited by photon statistics. This system's limiting resolution of σ ≈ 1.1 ps observed at 248 nm would increase with higher incoming photon energies to the photocathode. This effect is related to the fundamental spread in energies of the photoelectrons released from the photocathodes. Possible uses of the synchrotron radiation sources at the Advanced Photon Source and emerging short-wavelength FELs to test the system will be presented.

  2. An optical metasurface planar camera

    CERN Document Server

    Arbabi, Amir; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are 2D arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optical design by enabling complex low cost systems where multiple metasurfaces are lithographically stacked on top of each other and are integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here, we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has an f-number of 0.9, an angle-of-view larger than 60°×60°, and operates at 850 nm wavelength with large transmission. The camera exhibits high image quality, which indicates the potential of this technology to produce a paradigm shift in future designs of imaging systems for microscopy, photograp...

  3. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potentialities of the proposed methodology.
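
    The target-free correspondence step (natural points, feature-based matching, robust estimation) can be sketched with standard tools. The detector choice, file paths and thresholds below are illustrative, and the bundle adjustment that actually recovers the interior orientation parameters is not reproduced:

        import cv2
        import numpy as np

        img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # illustrative paths
        img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=5000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                if m.distance < 0.75 * n.distance]              # Lowe ratio test
        pts1 = np.float32([k1[m.queryIdx].pt for m in good])
        pts2 = np.float32([k2[m.trainIdx].pt for m in good])
        # Robust estimation rejects outliers before the bundle adjustment
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)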

  4. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    Science.gov (United States)

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040
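
    The 3D LUT is what makes the classifier cheap enough for a Cortex-M4: all color-model evaluation happens offline when the table is built, and runtime classification is a single indexed read per pixel. A minimal numpy sketch, with a toy 'redness' rule standing in for the paper's linear color models and fruit histograms:

        import numpy as np

        BITS = 5                                  # 32x32x32 table
        N = 1 << BITS
        r, g, b = np.meshgrid(np.arange(N), np.arange(N), np.arange(N),
                              indexing="ij")
        lut = (r > 2 * g) & (r > 2 * b)           # illustrative decision rule

        def classify(img):                        # img: HxWx3 uint8, RGB
            idx = img >> (8 - BITS)               # quantize each channel
            return lut[idx[..., 0], idx[..., 1], idx[..., 2]]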

  5. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    Science.gov (United States)

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.

  6. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    Directory of Open Access Journals (Sweden)

    Marcel Tresanchez

    2012-10-01

    Full Text Available This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.

  7. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
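
    The same transfer-function reasoning yields the classic pinhole trade-off: geometric blur grows with the pinhole diameter d while diffraction blur shrinks with it, giving an optimum of the familiar form below. The constant varies between roughly 1.5 and 2 in the literature depending on the optimality criterion, so this is one common choice rather than necessarily the paper's:

        d_{\mathrm{opt}} \;\approx\; \sqrt{2.44\,\lambda f}

    For example, λ = 550 nm and a focal length f = 100 mm give d ≈ 0.37 mm.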

  8. Analysis of red blood cells' dynamic status in a simulated blood circulation system using an ultrahigh-speed simultaneous framing optical electronic camera.

    Science.gov (United States)

    Zhang, Qiang; Li, Zeren; Zhao, Shuming; Wen, Weifeng; Chang, Lihua; Yu, Helian; Jiang, Tianlun

    2017-02-01

    Alterations in the morphologic and mechanical properties of red blood cells (RBCs) are considered direct indicators of blood quality. Current methods for characterizing these properties in vivo are limited by the complicated hemodynamic environment. To better evaluate the quality of fresh and stored blood, a new research platform was constructed to evaluate the hemodynamic characteristics of RBCs. The research platform consists mostly of a microfluidic chip, a microscope, and an ultrahigh-speed simultaneous framing optical electronic camera (USFOEC). The microfluidic chip was designed to simplify the complicated hemodynamic environment. The RBCs were diluted in erythrocyte preservative fluid and infused into the microfluidic channels. After approximately 600× magnification using the microscope and camera, the RBCs' dynamic images were captured by the USFOEC. Eight sequential and blur-free images were simultaneously captured by the USFOEC system. Results showed that RBC deformation changed with flow velocity and that stored RBCs were less sensitive to deformation (Kfresh < Kstored). The frozen-stored RBCs were better able to sustain hydrodynamic stress (DI49day = 0.128 vs. DIfrozen = 0.118) than cold-stored RBCs but more sensitive to variations in flow speed (K49day = 1626.2 vs. Kfrozen = 1318.2). Results showed that the stored RBCs had worse deformability than fresh RBCs, but frozen-stored RBCs may incur less damage during storage than those stored at merely cold temperatures. This USFOEC imaging system can serve as a platform for direct observation of cell morphological and mechanical properties in a medium similar to a physiologic environment. © 2016 International Society for Advancement of Cytometry.
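
    The deformation index quoted above usually follows the ellipse-fit convention DI = (a - b)/(a + b); whether the authors use exactly this definition is an assumption. A minimal OpenCV sketch for a segmented cell contour:

        import cv2

        def deformation_index(contour):
            """DI from an ellipse fitted to a cell contour (a: major axis,
            b: minor axis); DI = 0 for a circle, approaching 1 for strong
            elongation."""
            (_, _), (w, h), _ = cv2.fitEllipse(contour)
            a, b = max(w, h), min(w, h)
            return (a - b) / (a + b)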

  9. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out...... on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...

  10. Development of a camera casing suited for cryogenic and vacuum applications

    Science.gov (United States)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.
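The temperature regulation mentioned above is a textbook control problem; a discrete PID loop of the following shape would drive the casing heater, though the actual gains, sample period, and heater interface of the published design are not given in the record and the values below are placeholders.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, fixed sample period)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Example: compute a heater drive level to hold the casing at 20 degC
# while the outside of the casing sits in cryogenic vapour.
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)
drive = pid.update(setpoint=20.0, measured=-5.0)  # large positive output -> heat
```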

  11. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

Full Text Available Countless applications today are using mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have been recently developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) and laser-range data to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is the pruning of the region of interest of the three-dimensional point clouds to reduce the computational burden involved in the stereo process. Therefore, we call the proposed system a multi-sensor, multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.
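The accuracy gain from combining a narrow and a wide baseline follows from the pinhole stereo relation: depth resolution improves with baseline, while a short baseline keeps correspondence matching unambiguous. A minimal sketch of the underlying conversion (illustrative numbers, not the paper's calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d, with f in pixels and B in metres."""
    return focal_px * baseline_m / disparity_px

# The same scene point seen by the narrow and the wide pair: the wide baseline
# produces a ~3x larger disparity, so the same matching error costs less depth error.
z_narrow = depth_from_disparity(700.0, 0.10, 23.3)  # ~3.0 m
z_wide = depth_from_disparity(700.0, 0.30, 70.0)    # ~3.0 m
```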

  12. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    National Research Council Canada - National Science Library

    Toan Minh Hoang; Na Rae Baek; Se Woon Cho; Ki Wan Kim; Kang Ryoung Park

    2017-01-01

    .... In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi...

  13. Keck Autoguider and Camera Server Architecture

    Science.gov (United States)

    Lupton, W. F.

On the Keck telescope, autoguiders are not tightly integrated into the telescope control system; an autoguider is just an instrument which happens to have been asked to send guide star positions to the telescope. A standard message interface has been defined, and any source of guide star positions which adheres to this interface can play the role of the autoguider. This means that it would be easy for science instruments with fast readout rates (this of course includes all thermal infra-red instruments) to provide guide star positions. Much of an autoguider's user interface and control logic is independent of the actual source of the guide star positions. Accordingly the Keck telescope has defined an internal "camera server" protocol which is used by camera-independent high-level autoguider software to control physical cameras. As yet this protocol is only supported by one type of camera (the Photometrics camera which is used for all Keck autoguiders). Support for other types of camera, for example an infra-red camera, is planned. The poster display will illustrate the Keck approach to autoguiding, will show some of the advantages and disadvantages of the Keck approach, and will discuss future plans.

  14. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
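For the time-of-flight family mentioned here, the underlying range equation is elementary; the hard part, and the subject of the talk, is measuring the round-trip time robustly under sunlight and scattering:

$$ d = \frac{c\,\Delta t}{2}, $$

where c is the speed of light and Δt the round-trip time of the emitted light. A 10 ns round trip corresponds to 1.5 m of range, and the quoted <100 micron resolution implies resolving Δt to well under a picosecond of effective timing precision.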

  15. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  16. Upgrades to NDSF Vehicle Camera Systems and Development of a Prototype System for Migrating and Archiving Video Data in the National Deep Submergence Facility Archives at WHOI

    Science.gov (United States)

    Fornari, D.; Howland, J.; Lerner, S.; Gegg, S.; Walden, B.; Bowen, A.; Lamont, M.; Kelley, D.

    2003-12-01

In recent years, considerable effort has been made to improve the visual recording capabilities of Alvin and ROV Jason. This has culminated in the routine use of digital cameras, both internal and external on these vehicles, which has greatly expanded the scientific recording capabilities of the NDSF. The UNOLS National Deep Submergence Facility (NDSF) archives maintained at Woods Hole Oceanographic Institution (WHOI) are the repository for the diverse suite of photographic still images (both 35mm and recently digital), video imagery, vehicle data and navigation, and near-bottom side-looking sonar data obtained by the facility vehicles. These data comprise a unique set of information from a wide range of seafloor environments over the more than 25 years of NDSF operations in support of science. Included in the holdings are Alvin data plus data from the tethered vehicles: ROV Jason, Argo II, and the DSL-120 side scan sonar. This information conservatively represents an outlay in facilities and science costs well in excess of $100 million. Several archive-related improvement issues have become evident over the past few years. The most critical are: 1. migration and better access to the 35mm Alvin and Jason still images through digitization and proper cataloging with relevant meta-data, 2. assessing Alvin data logger data, migrating data on older media no longer in common use, and properly labeling and evaluating vehicle attitude and navigation data, 3. migrating older Alvin and Jason video data, especially data recorded on Hi-8 tape that is very susceptible to degradation on each replay, to newer digital format media such as DVD, 4. improving the capabilities of the NDSF archives to better serve the increasingly complex needs of the oceanographic community, including researchers involved in focused programs like Ridge2000 and MARGINS, where viable distributed databases in various disciplinary topics will form an important component of the data management structure

  17. An open-source, FireWire camera-based, Labview-controlled image acquisition system for automated, dynamic pupillometry and blink detection.

    Science.gov (United States)

    de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio

    2013-12-01

The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable modes of video recording: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in Labview. An offline suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be done in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
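As a rough illustration of the offline pupil-estimation step, a circular-feature detector over each video frame can yield pupil centre and diameter; this OpenCV-based sketch is generic and not the authors' Labview implementation, and the parameters and millimetre scale factor are placeholders.

```python
import cv2

def estimate_pupil(frame_gray, mm_per_px=0.03):
    """Estimate pupil centre and diameter from one grayscale eye image.
    Returns None when no circular pupil is found (e.g. during a blink)."""
    blurred = cv2.medianBlur(frame_gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
        param1=60, param2=30, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    return {"centre_px": (x, y), "diameter_mm": 2 * r * mm_per_px}
```

Flagging frames where the detector returns None is one simple route to the blink detection the title refers to.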

  18. Wired and Wireless Camera Triggering with Arduino

    Science.gov (United States)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suited to detecting the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  19. Toward a miniaturized fundus camera.

    Science.gov (United States)

    Gliss, Christine; Parel, Jean-Marie; Flynn, John T; Pratisto, Hans; Niederer, Peter

    2004-01-01

    Retinopathy of prematurity (ROP) describes a pathological development of the retina in prematurely born children. In order to prevent severe permanent damage to the eye and enable timely treatment, the fundus of the eye in such children has to be examined according to established procedures. For these examinations, our miniaturized fundus camera is intended to allow the acquisition of wide-angle digital pictures of the fundus for on-line or off-line diagnosis and documentation. We designed two prototypes of a miniaturized fundus camera, one with graded refractive index (GRIN)-based optics, the other with conventional optics. Two different modes of illumination were compared: transscleral and transpupillary. In both systems, the size and weight of the camera were minimized. The prototypes were tested on young rabbits. The experiments led to the conclusion that the combination of conventional optics with transpupillary illumination yields the best results in terms of overall image quality. (c) 2004 Society of Photo-Optical Instrumentation Engineers.

  20. Wide angle pinhole camera

    Science.gov (United States)

    Franke, J. M.

    1978-01-01

    Hemispherical refracting element gives pinhole camera 180 degree field-of-view without compromising its simplicity and depth-of-field. Refracting element, located just behind pinhole, bends light coming in from sides so that it falls within image area of film. In contrast to earlier pinhole cameras that used water or other transparent fluids to widen field, this model is not subject to leakage and is easily loaded and unloaded with film. Moreover, by selecting glass with different indices of refraction, field at film plane can be widened or reduced.

  1. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  2. Using Single-Camera 3-D Imaging to Guide Material Handling Robots in a Nuclear Waste Package Closure System

    Energy Technology Data Exchange (ETDEWEB)

    Rodney M. Shurtliff

    2005-09-01

    Nuclear reactors for generating energy and conducting research have been in operation for more than 50 years, and spent nuclear fuel and associated high-level waste have accumulated in temporary storage. Preparing this spent fuel and nuclear waste for safe and permanent storage in a geological repository involves developing a robotic packaging system—a system that can accommodate waste packages of various sizes and high levels of nuclear radiation. During repository operation, commercial and government-owned spent nuclear fuel and high-level waste will be loaded into casks and shipped to the repository, where these materials will be transferred from the casks into a waste package, sealed, and placed into an underground facility. The waste packages range from 12 to 20 feet in height and four and a half to seven feet in diameter. Closure operations include sealing the waste package and all its associated functions, such as welding lids onto the container, filling the inner container with an inert gas, performing nondestructive examinations on welds, and conducting stress mitigation. The Idaho National Laboratory is designing and constructing a prototype Waste Package Closure System (WPCS). Control of the automated material handling is an important part of the overall design. Waste package lids, welding equipment, and other tools must be moved in and around the closure cell during the closure process. These objects are typically moved from tool racks to a specific position on the waste package to perform a specific function. Periodically, these objects are moved from a tool rack or the waste package to the adjacent glovebox for repair or maintenance. Locating and attaching to these objects with the remote handling system, a gantry robot, in a loosely fixtured environment is necessary for the operation of the closure cell. Reliably directing the remote handling system to pick and place the closure cell equipment within the cell is the major challenge.

  3. A Study on Social Issue Solutions Using the “Internet of Things” (Focusing on a Crime Prevention Camera System)

    OpenAIRE

    Lee, Hong Joo

    2015-01-01

Many researchers in various fields are seeking solutions to such problems through political and administrative measures. However, concrete solutions have yet to be presented. Research that seeks solutions to these problems using new technologies is increasing. In this study, a solution for changing and improving society was proposed by investigating social issue solutions and technologies centered on social organizations and systems. To do that, the types of social issues were analyzed b...

  4. Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates

    Energy Technology Data Exchange (ETDEWEB)

Mo, Se Hyun [Amotech, Seoul (Korea, Republic of); Jeon, Young Pil [Samsung Electronics Co., Ltd. Suwon (Korea, Republic of); Park, Jong Ho [Seonam Univ., Namwon (Korea, Republic of); Chong, Kil To [Chon-buk Nat'l Univ., Junju (Korea, Republic of)

    2017-07-15

With the development of ICT technology, the indoor utilization of robots is increasing. Research on transportation, cleaning, and guidance robots that can be used now, or whose scope of use will grow in the future, will be advanced. To facilitate the use of mobile robots in indoor spaces, the problem of self-location recognition is an important research area to be addressed. If an unexpected collision occurs during the motion of a mobile robot, the position of the mobile robot deviates from the initially planned navigation path. In this case, the mobile robot needs a robust controller that enables it to accurately navigate toward the goal. This research addresses the issues related to self-location of the mobile robot. A robust position recognition system was implemented; the system estimates the position of the mobile robot using a combination of encoder information from the mobile robot and absolute space coordinate transformation information obtained from external video sources, such as the large number of CCTV cameras installed in the room. Furthermore, the vector field histogram method of the path-traveling algorithm of the mobile robot system was applied, and the results of the research were confirmed through experiments.
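A minimal way to combine the two position sources the abstract names, dead-reckoned encoder odometry and absolute fixes from room CCTV, is a complementary filter: integrate odometry between fixes and blend each fix in with a fixed gain. The estimator and gain below are illustrative assumptions; the paper's actual fusion scheme is not specified in the record.

```python
import numpy as np

ALPHA = 0.2                  # assumed weight given to each absolute CCTV fix
pose = np.array([0.0, 0.0])  # x, y estimate in the room frame

def predict(delta_odom):
    """Dead-reckoning update from the encoder displacement (room frame)."""
    global pose
    pose = pose + delta_odom

def correct(cctv_fix):
    """Blend in an absolute fix so that odometry drift stays bounded."""
    global pose
    pose = (1.0 - ALPHA) * pose + ALPHA * np.asarray(cctv_fix)

predict(np.array([0.10, 0.02]))  # encoders report a 10 cm advance
correct([0.12, 0.00])            # a CCTV observation pulls the estimate back
```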

  5. Video Head Impulse Tests with a Remote Camera System: Normative Values of Semicircular Canal Vestibulo-Ocular Reflex Gain in Infants and Children

    Directory of Open Access Journals (Sweden)

    Sylvette R. Wiener-Vacher

    2017-09-01

Full Text Available The video head impulse test (VHIT) is widely used to identify semicircular canal function impairments in adults. But classical VHIT testing systems attach goggles tightly to the head, which is not tolerated by infants. Remote video detection of head and eye movements resolves this issue and, here, we report VHIT protocols and normative values for children. Vestibulo-ocular reflex (VOR) gain was measured for all canals of 303 healthy subjects, including 274 children (aged 2.6 months–15 years) and 26 adults (aged 16–67). We used the Synapsys® (Marseilles, France) VHIT Ulmer system, whose remote camera measures head and eye movements. HITs were performed at high velocities. Testing typically lasts 5–10 min. In infants as young as 3 months old, VHIT yielded good inter-measure replicability. VOR gain increases rapidly until about the age of 6 years (with variation among canals), then progresses more slowly to reach adult values by the age of 16. Values are more variable among very young children and for the vertical canals, but showed no difference for right versus left head rotations. Normative values of VOR gain are presented to help detect vestibular impairment in patients. VHIT testing prior to cochlear implants could help prevent total vestibular loss and the resulting grave impairments of motor and cognitive development in patients with residual unilateral vestibular function.
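VOR gain itself is a simple ratio of eye velocity to head velocity over the impulse; definitions vary between devices (regression slope, peak ratio, or area ratio), so the area-based version below is just one plausible reading, with synthetic data.

```python
import numpy as np

def vor_gain(head_velocity, eye_velocity):
    """VOR gain as the ratio of areas under the de-saccaded eye- and
    head-velocity curves; one common definition among several in use."""
    return np.trapz(np.abs(eye_velocity)) / np.trapz(np.abs(head_velocity))

# Synthetic head impulse (deg/s) and an ideal compensatory eye response.
t = np.linspace(0.0, 0.15, 150)
head = 200.0 * np.sin(2.0 * np.pi * t / 0.3)
eye = -0.9 * head
print(vor_gain(head, eye))  # ~0.9
```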

  6. Remote sensing of multiple vital signs using a CMOS camera-equipped infrared thermography system and its clinical application in rapidly screening patients with suspected infectious diseases.

    Science.gov (United States)

    Sun, Guanghao; Nakayama, Yosuke; Dagdanpurev, Sumiyakhand; Abe, Shigeto; Nishimura, Hidekazu; Kirimoto, Tetsuo; Matsui, Takemi

    2017-02-01

Infrared thermography (IRT) is used to screen febrile passengers at international airports, but it suffers from low sensitivity. This study explored the application of a combined visible and thermal image processing approach that uses a CMOS camera equipped with IRT to remotely sense multiple vital signs and screen patients with suspected infectious diseases. An IRT system that produced visible and thermal images was used for image acquisition. The subjects' respiration rates were measured by monitoring temperature changes around the nasal areas on thermal images; facial skin temperatures were measured simultaneously. Facial blood circulation causes tiny color changes in visible facial images that enable the determination of the heart rate. A logistic regression discriminant function predicted the likelihood of infection within 10 s, based on the measured vital signs. Sixteen patients with an influenza-like illness and 22 control subjects participated in a clinical test at a clinic in Fukushima, Japan. The vital-sign-based IRT screening system had a sensitivity of 87.5% and a negative predictive value of 91.7%; these values are higher than those of conventional fever-based screening approaches. Multiple vital-sign-based screening efficiently detected patients with suspected infectious diseases. It offers a promising alternative to conventional fever-based screening. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
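The discriminant step is standard logistic regression over a small feature vector; a sketch of that step is below, assuming three features per subject (facial skin temperature, respiration rate, heart rate). The training values and coefficients here are invented for illustration, not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: facial skin temperature (degC), respiration rate (breaths/min),
# heart rate (beats/min); labels: 1 = influenza-like illness, 0 = control.
X = np.array([[37.8, 24, 98], [36.4, 14, 66], [38.1, 26, 104], [36.6, 15, 72]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

subject = np.array([[37.5, 22, 95]])
p_infected = model.predict_proba(subject)[0, 1]  # predicted likelihood of infection
```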

  7. HIGH SPEED KERR CELL FRAMING CAMERA

    Science.gov (United States)

    Goss, W.C.; Gilley, L.F.

    1964-01-01

The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  8. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  9. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  10. The canopy camera

    Science.gov (United States)

    Harry E. Brown

    1962-01-01

    The canopy camera is a device of new design that takes wide-angle, overhead photographs of vegetation canopies, cloud cover, topographic horizons, and similar subjects. Since the entire hemisphere is photographed in a single exposure, the resulting photograph is circular, with the horizon forming the perimeter and the zenith the center. Photographs of this type provide...

  11. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

ROVs (Remotely Operated Vehicles) are used for the installation and maintenance of underwater exploration systems in the oil industry. These systems are operated in remote areas, so the use of cameras for visualization of the work area is essential. Synchronizing the movement of the camera with the operation of the manipulator is a complex task for the operator. To achieve this synchronization, this work presents an analysis of the interconnection of the two systems. The systems are concatenated by interconnecting the electric signals of the proportional valves of the manipulator's actuators with the signals of the proportional valves of the camera's actuators. With this interconnection, the camera approximately follows the movement of the manipulator, keeping the object of interest within the operator's field of view. (author)

  12. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  13. Development of SED Camera for Quasars in Early Universe (SQUEAN)

    OpenAIRE

    Kim, Sanghyuk; Jeon, Yiseul; Lee, Hye-In; Park, Woojin; Ji, Tae-Geun; Hyun, Minhee; Choi, Changsu; Im, Myungshin; Pak, Soojong

    2016-01-01

We describe the characteristics and performance of a camera system, Spectral energy distribution Camera for Quasars in Early Universe (SQUEAN). It was developed to measure SEDs of high redshift quasar candidates (z ≳ 5) and other targets, e.g., young stellar objects, supernovae, and gamma-ray bursts, and to trace the time variability of SEDs of objects such as active galactic nuclei (AGNs). SQUEAN consists of an on-axis focal plane camera module, an auto-guiding system, and mechanical...

  14. Low light performance of digital still cameras

    Science.gov (United States)

    Wueller, Dietmar

    2013-03-01

The major difference between a dSLR camera, a consumer camera, and a camera in a mobile device is the sensor size. The sensor size is also related to the overall system size, including the lens. With sensors getting smaller, the individual light-sensitive areas also get smaller, leaving less light falling onto each pixel. This effect requires higher signal amplification, which leads to higher noise levels or to artifacts introduced by denoising algorithms. These problems become more visible in low light conditions because of the lower signal levels. The fact that the sensitivity of cameras decreases makes customers ask for a standardized way to measure the low light performance of cameras. The CEA (Consumer Electronics Association) together with ANSI has addressed this for camcorders in the CEA-639 [1] standard. The ISO technical committee 42 (photography) is currently also considering a potential standard on this topic for still picture cameras. This paper is part of the preparation work for this standardization activity and addresses the differences compared to camcorders, as well as potential additional problems with noise reduction that have emerged over the past few years. The result of this paper is a proposed test procedure, with a few open questions that have to be answered in future work.

  15. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still have the same quality issues to tackle as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and especially technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid for presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range create the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features must also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors that remain valid for presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view, with consideration of how well current measurement methods can be applied to presence capture cameras.

  16. Streak camera techniques

    Energy Technology Data Exchange (ETDEWEB)

    Avara, R.

    1977-06-01

An introduction to streak camera geometry, experimental techniques, and limitations is presented. Equations, graphs, and charts are included to provide useful data for optimizing the associated optics to suit each experiment. A simulated analysis is performed on simultaneity and velocity measurements. An error analysis is also performed for these measurements, utilizing the Monte Carlo method to simulate the distribution of uncertainties associated with simultaneity-time measurements.
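The Monte Carlo step mentioned at the end is straightforward to reproduce in outline: draw the individual timing contributions from their assumed distributions and look at the spread of the derived quantity. The noise levels below are placeholders, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Jitter of two streak-record arrival times (ns); sigma values are illustrative.
N = 100_000
t1 = rng.normal(0.00, 0.05, N)
t2 = rng.normal(0.02, 0.05, N)

dt = t2 - t1  # simultaneity measurement for each simulated trial
print(f"mean offset {dt.mean():.3f} ns, 1-sigma spread {dt.std():.3f} ns")
```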

  17. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, Victor

    1997-01-01

A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the p-type upper layer, the intermediate layer, and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  18. Hemispherical Laue camera

    Science.gov (United States)

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of sphere of a hemispherical, X-radiation sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.
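The geometry the patent exploits follows from Bragg's law,

$$ n\lambda = 2d\sin\theta, $$

where d is the lattice-plane spacing and θ the glancing angle. Because the Laue method uses a continuous spectrum, each set of planes picks out its own satisfying wavelengths, and the hemispherical film makes every such diffracted beam travel the same source-to-film distance, which is what preserves the relative intensities.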

  19. Feasibility of a lateral region sentinel node biopsy of lower rectal cancer guided by indocyanine green using a near-infrared camera system.

    Science.gov (United States)

    Noura, Shingo; Ohue, Masayuki; Seki, Yosuke; Tanaka, Koji; Motoori, Masaaki; Kishi, Kentaro; Miyashiro, Isao; Ohigashi, Hiroaki; Yano, Masahiko; Ishikawa, Osamu; Miyamoto, Yasuhide

    2010-01-01

A lateral pelvic lymph node dissection (LPLD) for lower rectal cancer may be beneficial for a limited number of patients. If sentinel node (SN) navigation surgery could be applied to lower rectal cancer, then unnecessary LPLDs could be avoided. The aim of this study was to investigate the feasibility of lateral region SN biopsy by means of indocyanine green (ICG) visualized with a near-infrared camera system (Photodynamic Eye, PDE). This study investigated the existence of lateral region SNs in 25 patients with lower rectal cancer. ICG was injected around the tumor, and the lateral pelvic region was observed with PDE. With PDE, the lymph nodes and lymph vessels that received ICG appeared as shining fluorescent spots and streams in the fluorescence image. This allowed the detection of not only tumor-negative SNs but also tumor-positive SNs as shining spots. The lateral SNs were detected in 6 of 6 patients with T1 and T2 disease and in 17 of 19 with T3 disease. The lateral SNs were successfully identified in 23 (92%) of the 25 patients. The mean number of lateral SNs per patient was 2.1. Of the 23 patients, 6 underwent LPLD. In the 3 of these patients who had a tumor-negative SN, all dissected lateral non-SNs were also negative. We could detect the lateral SNs not only in T1 and T2 disease, but also in T3 disease. Although this is only a preliminary study, the detection of lateral SNs in lower rectal cancer by means of the ICG fluorescence imaging system is considered to be a promising technique that may be used for determining the indications for performing LPLD.

  20. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

Full Text Available A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow as well, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the IMU beside the camera, two lever arms have to be measured to mm accuracy. Important are the lever arm from the GPS antenna to the IMU's calibrated centre and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used, in which case a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is floating. In fact, an additional data stream, the values of the movement of the stabilizer, must be handled to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and

  1. Preliminary experience for the evaluation of the intraoperative graft patency with real color charge-coupled device camera system: an advanced device for simultaneous capturing of color and near-infrared images during coronary artery bypass graft.

    Science.gov (United States)

    Handa, Takemi; Katare, Rajesh G; Sasaguri, Shiro; Sato, Takayuki

    2009-08-01

We developed a new color charge-coupled device (CCD) camera for intraoperative indocyanine green (ICG) angiography. This device combines custom-made optical filters with an ultra-high-sensitivity CCD image sensor, which can simultaneously detect color and near-infrared (NIR) rays from 380 to 1200 nm. As a preliminary experience, we compared our system with other devices. We routinely performed both transit-time flowmetry (TFM) and color imaging for intraoperative assessment, thallium scintigraphy for early postoperative assessment, and angiography 1 year after surgery. We obtained intraoperative graft flows and images in 116 grafts. Although TFM indicated graft patency, the CCD camera suggested perfusion failures in four grafts. The analysis of the ICG fluorescence intensity also showed significant hypoperfusion in the perfusion territory distal to the anastomosis (graft vs. perfusion territory: 230 ± 26 vs. 156 ± 13 a.u., P = 0.02). When the CCD camera suggested a graft failure, angiography showed a comparable graft failure. This unique device, which visualizes ICG-enhanced structures against a background of natural myocardial color, improved the visibility of abnormalities in flow and perfusion. Our findings show that this device may become a standard intraoperative graft and perfusion assessment tool in coronary artery bypass grafting (CABG).

  2. Quantitative analysis of digital outcrop data obtained from stereo-imagery using an emulator for the PanCam camera system for the ExoMars 2020 rover

    Science.gov (United States)

    Barnes, Robert; Gupta, Sanjeev; Gunn, Matt; Paar, Gerhard; Balme, Matt; Huber, Ben; Bauer, Arnold; Furya, Komyo; Caballo-Perucha, Maria del Pilar; Traxler, Chris; Hesina, Gerd; Ortner, Thomas; Banham, Steven; Harris, Jennifer; Muller, Jan-Peter; Tao, Yu

    2017-04-01

A key focus of planetary rover missions is to use panoramic camera systems to image outcrops along rover traverses, in order to characterise their geology in search of ancient life. These data can be processed to create 3D point clouds of rock outcrops for quantitative analysis. The Mars Utah Rover Field Investigation (MURFI 2016) is a Mars rover field analogue mission run by the UK Space Agency (UKSA) in collaboration with the Canadian Space Agency (CSA). It took place between 22nd October and 13th November 2016 and consisted of a science team based in Harwell, UK, and a field team including an instrumented rover platform at the field site near Hanksville (Utah, USA). The Aberystwyth University PanCam Emulator 3 (AUPE3) camera system was used to collect stereo panoramas of the terrain the rover encountered during the field trials. Stereo imagery processed in PRoViP is rendered as Ordered Point Clouds (OPCs) in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features, including grain size. Dip and strike of bedding planes, stratigraphic and sedimentological boundaries, and fractures are calculated within PRo3D from mapped bedding contacts and fracture traces. Rover-derived imagery can also be merged with UAV and orbital datasets to build semi-regional, multi-resolution 3D models of the area of operations for immersive analysis and contextual understanding. In-simulation, AUPE3 was mounted onto the rover mast, collecting 16 stereo panoramas over 9 'sols'. 5 out-of-simulation datasets were collected in the Hanksville-Burpee Quarry. Stereo panoramas were processed using an automated pipeline and data transfer through an ftp server. PRo3D has been used for visualisation and analysis of this stereo data. Features of interest in the area could be annotated, and their distances to the rover

  3. Evaluation of dynamic range for LLNL streak cameras using high contrast pulses and "pulse podiatry" on the Nova laser system

    Energy Technology Data Exchange (ETDEWEB)

    Richards, J.B.; Weiland, T.L.; Prior, J.A.

    1990-07-01

    A standard LLNL streak camera has been used to analyze high contrast pulses on the Nova laser facility. These pulses have a plateau at their leading edge (foot) with an amplitude which is approximately 1% of the maximum pulse height. Relying on other features of the pulses and on signal multiplexing, we were able to determine how accurately the foot amplitude was being represented by the camera. Results indicate that the useful single channel dynamic range of the instrument approaches 100:1. 1 ref., 4 figs., 1 tab.

  4. A study on the sensitivity of photogrammetric camera calibration and stitching

    CSIR Research Space (South Africa)

    De

    2014-11-01

    Full Text Available This paper presents a detailed simulation study of an automated robotic photogrammetric camera calibration system. The system performance was tested for sensitivity with regard to noise in the robot movement, camera mounting and image processing...

  5. New developments to improve SO2 cameras

    Science.gov (United States)

    Luebcke, P.; Bobrowski, N.; Hoermann, C.; Kern, C.; Klein, A.; Kuhn, J.; Vogel, L.; Platt, U.

    2012-12-01

The SO2 camera is a remote sensing instrument that measures the two-dimensional distribution of SO2 (column densities) in volcanic plumes using scattered solar radiation as a light source. From these data SO2 fluxes can be derived. The high time resolution of the order of 1 Hz allows correlating SO2 flux measurements with other traditional volcanological measurement techniques, i.e., seismology. In the last years the application of SO2 cameras has increased; however, there is still potential to improve the instrumentation. First of all, the influence of aerosols and ash in the volcanic plume can lead to large errors in the calculated SO2 flux, if not accounted for. We present two different concepts to deal with the influence of ash and aerosols. The first approach uses a co-axial DOAS system that was added to a two-filter SO2 camera. The camera used Filter A (peak transmission centred around 315 nm) to measure the optical density of SO2 and Filter B (centred around 330 nm) to correct for the influence of ash and aerosol. The DOAS system simultaneously performs spectroscopic measurements in a small area of the camera's field of view and gives additional information to correct for these effects. Comparing the optical densities for the two filters with the SO2 column density from the DOAS allows not only a much more precise calibration, but also conclusions to be drawn about the influence of ash and aerosol scattering. Measurement examples from Popocatépetl, Mexico in 2011 are shown and interpreted. Another approach combines the SO2 camera measurement principle with the extremely narrow and periodic transmission of a Fabry-Pérot interferometer. The narrow transmission window allows individual SO2 absorption bands (or series of bands) to be selected as a substitute for Filter A. Measurements are therefore more selective to SO2. Instead of Filter B, as in classical SO2 cameras, the correction for aerosol can be performed by shifting the transmission window of the Fabry
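The two-filter correction at the heart of the classical instrument is compactly expressed in code: both filters see broadband ash/aerosol extinction, but only Filter A sees SO2 absorption, so subtracting the apparent optical densities isolates the gas. The pixel values below are invented for illustration.

```python
import numpy as np

def apparent_optical_density(i_plume, i_background):
    """Per-pixel apparent optical density: tau = -ln(I / I0)."""
    return -np.log(i_plume / i_background)

def so2_optical_density(tau_a, tau_b):
    """Filter B (no SO2 absorption) removes the broadband aerosol/ash part
    of the Filter A signal: tau_SO2 ~= tau_A - tau_B."""
    return tau_a - tau_b

# One plume pixel: part of the Filter-A extinction is aerosol, not SO2.
tau_a = apparent_optical_density(0.60, 1.00)  # ~0.511
tau_b = apparent_optical_density(0.85, 1.00)  # ~0.163
print(so2_optical_density(tau_a, tau_b))      # ~0.348
```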

  6. Digital camera in ophthalmology

    Directory of Open Access Journals (Sweden)

    Ashish Mitra

    2015-01-01

Full Text Available Ophthalmology is an expensive field, and imaging is an indispensable modality in it; in developing countries, including India, it is not possible for every ophthalmologist to afford a slit-lamp photography unit. We present here our experience of slit-lamp photography using a digital camera. Good-quality pictures of anterior and posterior segment disorders were captured using readily available devices. It can be used as a good teaching tool for residents learning ophthalmology and can also be a method to document lesions, which is often necessary for medicolegal purposes. It is a technique that is simple and inexpensive and has a short learning curve.

  7. SU-D-201-05: On the Automatic Recognition of Patient Safety Hazards in a Radiotherapy Setup Using a Novel 3D Camera System and a Deep Learning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Santhanam, A; Min, Y; Beron, P; Agazaryan, N; Kupelian, P; Low, D [UCLA, Los Angeles, CA (United States)

    2016-06-15

Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point-cloud of fraxels (fragment pixels with 3D depth information). Each camera was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient’s body needed to be identified and tracked in real-time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real-time. Changes in the object’s position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was effectively able to recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards in an automatic manner, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
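Of the components named in the abstract, the CamShift tracking stage is the most self-contained to sketch; OpenCV ships an implementation that tracks a colour histogram model from frame to frame. The window coordinates, camera index, and histogram choice below are placeholders, not the authors' configuration.

```python
import cv2

cap = cv2.VideoCapture(0)          # placeholder: colour stream of one 3D camera
ok, frame = cap.read()
track_window = (200, 150, 80, 80)  # x, y, w, h of the initially segmented object

# Hue histogram of the object region serves as the appearance model.
roi = frame[150:230, 200:280]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rect, track_window = cv2.CamShift(backproj, track_window, term)
    ok, frame = cap.read()
```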

  8. Performance evaluation and clinical applications of 3D plenoptic cameras

    Science.gov (United States)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  9. Computational imaging for miniature cameras

    Science.gov (United States)

    Salahieh, Basel

Miniature cameras play a key role in numerous imaging applications ranging from endoscopy and metrology inspection devices to smartphones and head-mount acquisition systems. However, due to the physical constraints, the imaging conditions, and the low quality of small optics, their imaging capabilities are limited in terms of the delivered resolution, the acquired depth of field, and the captured dynamic range. Computational imaging jointly addresses the imaging system and the reconstruction algorithms to bypass the traditional limits of optical systems and deliver better restorations for various applications. The scene is encoded into a set of efficient measurements which could then be computationally decoded to output a richer estimate of the scene as compared with the raw images captured by conventional imagers. In this dissertation, three task-based computational imaging techniques are developed to make low-quality miniature cameras capable of delivering realistic high-resolution reconstructions, providing full-focus imaging, and acquiring depth information for high dynamic range objects. For the superresolution task, a non-regularized direct superresolution algorithm is developed to achieve realistic restorations without being penalized by improper assumptions (e.g., optimizers, priors, and regularizers) made in the inverse problem. An adaptive frequency-based filtering scheme is introduced to upper bound the reconstruction errors while still producing more fine details as compared with previous methods under realistic imaging conditions. For the full-focus imaging task, a computational depth-based deconvolution technique is proposed to bring a scene captured by an ordinary fixed-focus camera to full focus based on a depth-variant point spread function prior. The ringing artifacts are suppressed on three levels: block tiling to eliminate boundary artifacts, adaptive reference maps to reduce ringing initiated by sharp edges, and block-wise deconvolution or

  10. A GRAPH BASED BUNDLE ADJUSTMENT FOR INS-CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    D. Bender

    2013-08-01

    Full Text Available In this paper, we present a graph based approach for performing the system calibration of a sensor suite containing a fixed mounted camera and an inertial navigation system. The aim of the presented work is to obtain accurate direct georeferencing of camera images collected with small unmanned aerial systems. Prerequisite for using the pose measurements from the inertial navigation system as exterior orientation for the camera is the knowledge of the static offsets between these devices. Furthermore, the intrinsic parameters of the camera obtained in a laboratory tend to deviate slightly from the values during flights. This induces an in-flight calibration of the intrinsic camera parameters in addition to the mounting offsets between the two devices. The optimization of these values can be done by introducing them as parameters into a bundle adjustment process. We show how to solve this by exploiting a graph optimization framework, which is designed for the least square optimization of general error functions.

  11. a Graph Based Bundle Adjustment for Ins-Camera Calibration

    Science.gov (United States)

    Bender, D.; Schikora, M.; Sturm, J.; Cremers, D.

    2013-08-01

    In this paper, we present a graph based approach for performing the system calibration of a sensor suite containing a fixed mounted camera and an inertial navigation system. The aim of the presented work is to obtain accurate direct georeferencing of camera images collected with small unmanned aerial systems. Prerequisite for using the pose measurements from the inertial navigation system as exterior orientation for the camera is the knowledge of the static offsets between these devices. Furthermore, the intrinsic parameters of the camera obtained in a laboratory tend to deviate slightly from the values during flights. This induces an in-flight calibration of the intrinsic camera parameters in addition to the mounting offsets between the two devices. The optimization of these values can be done by introducing them as parameters into a bundle adjustment process. We show how to solve this by exploiting a graph optimization framework, which is designed for the least square optimization of general error functions.

  12. Vibration factors impact analysis on aerial film camera imaging quality

    Science.gov (United States)

    Xie, Jun; Han, Wei; Xu, Zhonglin; Tan, Haifeng; Yang, Mingquan

    2017-08-01

    An aerial film camera can advantageously acquire image information about ground targets, but changes in aircraft attitude, the film characteristics, and the operation of the camera's internal mechanisms can induce vibration that greatly degrades image quality. This paper presents a design basis for a vibration-mitigating stabilized platform based on the vibration characteristics of the aerial film camera, and analyzes how such a platform can support the camera in meeting multi-angle, large-scale imaging requirements. Given the technical characteristics of stabilized platforms, their development is trending toward higher precision, greater agility, miniaturization, and lower power consumption.

  13. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  14. Simultaneous in-plane and out-of-plane displacement measurement based on a dual-camera imaging system and its application to inspection of large-scale space structures

    Science.gov (United States)

    Ri, Shien; Tsuda, Hiroshi; Yoshida, Takeshi; Umebayashi, Takashi; Sato, Akiyoshi; Sato, Eiichi

    2015-07-01

    Optical methods providing full-field deformation data are of potentially enormous interest to mechanical engineers. In this study, an in-plane and out-of-plane displacement measurement method based on a dual-camera imaging system is proposed. The in-plane and out-of-plane displacements are determined simultaneously from two in-plane displacement fields observed by two digital cameras at different view angles. The fundamental measurement principle and experimental results confirming its accuracy are presented. In addition, we applied this method to displacement measurement in a static loading and bending test of a solid rocket motor case (CFRP material; 2.2 m diameter and 2.3 m long) for the up-to-date Epsilon rocket developed by JAXA. The effectiveness and measurement accuracy are confirmed by comparison with a conventional displacement sensor. This method could be useful for diagnosing the reliability of large-scale space structures in rocket development.
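
    The core geometry can be sketched with a simplified projection model in which each camera, viewing the surface at angle θᵢ from the normal, measures an apparent in-plane displacement mᵢ = u + w·tan θᵢ; this linear model and its angle convention are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def decompose(m1, m2, theta1, theta2):
    """Recover in-plane u and out-of-plane w from the apparent in-plane
    displacements m1, m2 seen at viewing angles theta1, theta2 (radians),
    assuming m_i = u + w * tan(theta_i)."""
    A = np.array([[1.0, np.tan(theta1)],
                  [1.0, np.tan(theta2)]])
    u, w = np.linalg.solve(A, [m1, m2])
    return u, w

# Example: two cameras at +/-15 degrees from the surface normal
print(decompose(1.05, 0.95, np.radians(15), np.radians(-15)))
```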

  15. Monitoring of Wheat Growth Status and Mapping of Wheat Yield’s within-Field Spatial Variations Using Color Images Acquired from UAV-camera System

    Directory of Open Access Journals (Sweden)

    Mengmeng Du

    2017-03-01

    Full Text Available Applications of remote sensing using unmanned aerial vehicles (UAV) in agriculture have proved to be an effective and efficient way of obtaining field information. In this study, we validated the feasibility of utilizing multi-temporal color images acquired from a low-altitude UAV-camera system to monitor real-time wheat growth status and to map within-field spatial variations of wheat yield for smallholder wheat growers, which could serve as references for site-specific operations. Firstly, eight orthomosaic images covering a small winter wheat field were generated to monitor wheat growth status from heading stage to ripening stage in Hokkaido, Japan. The multi-temporal orthomosaic images conveyed a straightforward sense of canopy color changes and of spatial variations in tiller density. In addition, the last two orthomosaic images, taken about two weeks prior to harvesting, also revealed the occurrence of lodging by visual inspection, which could be used to generate navigation maps guiding drivers or autonomous harvesting vehicles to adjust operation speed according to the specific lodging situation for less harvesting loss. Subsequently, the orthomosaic images were geo-referenced so that a stepwise regression analysis between nine wheat yield samples and five color vegetation indices (CVI) could be conducted, which showed that wheat yield correlated with four accumulative CVIs: the visible-band difference vegetation index (VDVI), normalized green-blue difference index (NGBDI), green-red ratio index (GRRI), and excess green vegetation index (ExG), with a coefficient of determination and RMSE of 0.94 and 0.02, respectively. The average value of the sampled wheat yield was 8.6 t/ha. The regression model was also validated using the leave-one-out cross validation (LOOCV) method, for which the root-mean-square error of prediction (RMSEP) was 0.06. Finally, based on the stepwise regression model, a map of estimated wheat yield was generated, so that within
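
    The color vegetation indices named in the abstract have commonly used visible-band definitions; a sketch of computing them per plot and running the leave-one-out validation (the index formulas are the common forms and the data are placeholders, not the authors' exact processing chain):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def color_indices(rgb):
    """Plot-mean visible-band vegetation indices from an RGB image."""
    r, g, b = (rgb[..., i].astype(float).mean() for i in range(3))
    vdvi = (2 * g - r - b) / (2 * g + r + b)   # visible-band difference VI
    ngbdi = (g - b) / (g + b)                  # normalized green-blue diff.
    grri = g / r                               # green-red ratio index
    exg = 2 * g - r - b                        # excess green
    return [vdvi, ngbdi, grri, exg]

# Hypothetical accumulated indices for 9 sample plots and measured yields
rng = np.random.default_rng(0)
X = rng.random((9, 4))
y = 8.6 + 0.4 * rng.standard_normal(9)         # t/ha, around the paper's mean

pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("RMSEP:", np.sqrt(np.mean((pred - y) ** 2)))
```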

  16. Junocam: Juno's Outreach Camera

    Science.gov (United States)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  17. Development of camera technology for monitoring nests. Chapter 15

    Science.gov (United States)

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  18. A study of fish behaviour in the extension of a demersal trawl using a multi-compartment separator frame and SIT camera system

    DEFF Research Database (Denmark)

    Krag, Ludvig Ahm; Madsen, Niels; Karlsen, Junita

    2009-01-01

    A rigid separator frame with three vertically stacked codends was used to study fish behaviour in the extension piece of a demersal trawl. A video camera recorded fish as they encountered the separator frame. Ten hauls were conducted in a mixed species fishery in the northern North Sea. Fish behaviour was analysed using the camera observations from several of these hauls by assigning seven descriptive attributes and also using catch data. Gadoids, in particular haddock (Melanogrammus aeglefinus), whiting (Merlangius merlangus), and saithe (Pollachius virens), were caught in the upper codend... The camera observations complemented the catch data in describing fish behaviour within the trawl, and together the two methods provided a more complete picture of the catching process. Behavioural observations, vertical distribution, and the methodology are discussed, as is the potential for improving species separation in demersal trawls.

  19. A Real Time Coincidence System for High Count-Rate TOF or Non-TOF PET Cameras Using Hybrid Method Combining AND-Logic and Time-Mark Technology.

    Science.gov (United States)

    Wang, Chao; Li, Hongdi; Ramirez, Rocio A; Zhang, Yuxuan; Baghaei, Hossain; Liu, Shitao; An, Shaohui; Wong, Wai-Hoi

    2010-04-01

    A fully digital FPGA-based high count-rate coincidence system has been developed for TOF (Time of Flight) and non-TOF PET cameras. The hybrid of AND-logic and time-mark technology produces both excellent timing resolution and high processing speed. In this hybrid architecture, every gamma event is synchronized to a 125 MHz system clock and generates a trigger associated with a time mark given by an 8-bit high-resolution TDC (68.3 ps/bin). AND-logic is applied to the synchronized triggers for the real-time raw sorting of coincident events. An efficient FPGA-based time-mark fine-sort algorithm then selects all possible coincidence events within the preset coincidence time window. This FPGA-based coincidence system for a modular PET camera offers reprogrammable flexibility and expandability, so the coincidence system is easily deployed regardless of differences in the scale of the PET camera detector setup. A distributed processing method and pipeline technology were adopted in the design to obtain very high processing speed. In this design, both prompt and time-delayed accidental coincidences are processed simultaneously in real time. The real-time digital coincidence system supports coincidence in setups of 2 to 12 detector modules, is capable of processing 72 million single events per second with no digital data loss, and captures multiple-event coincidences for better imaging performance evaluation. The coincidence time window size and the time offset of each coincidence event pair can be programmed independently in 68.3 ps increments (TDC LSB) during data acquisition in different applications to optimize the signal-to-noise ratio. The complete coincidence system is integrated on one circuit board with a 1.5 Gbps fiber optic interface. We demonstrated the system performance using the actual circuit and Monte Carlo simulations.
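
    A toy software version of the window-based coincidence sort (the FPGA pipeline is far more involved; the timestamps below stand in for the TDC time marks and are in arbitrary nanosecond-like units):

```python
import numpy as np

def find_coincidences(t_a, t_b, window):
    """Return index pairs (i, j) of events from modules A and B whose
    time marks differ by less than `window` (same units as timestamps)."""
    t_a, t_b = np.sort(t_a), np.sort(t_b)
    lo = np.searchsorted(t_b, t_a - window, side="left")
    hi = np.searchsorted(t_b, t_a + window, side="right")
    return [(i, j) for i, (l, h) in enumerate(zip(lo, hi))
            for j in range(l, h)]

# A ~0.41 ns window corresponds to six 68.3 ps TDC bins
print(find_coincidences([100.0, 900.0], [99.8, 500.0], window=0.41))
```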

  20. The Digital Camera Application in the Taiwan Light Source

    CERN Document Server

    Kuo, C H; Hsu, K T; Hsu, S Y; Hu, K H; Lee, D; Wang, C J; Yang, Y T

    2005-01-01

    Digital cameras have recently been adopted for booster, storage ring, and transport-line diagnostics at the Taiwan Light Source. The system provides low image distortion over long transmission distances and is integrated with the control system. Each screen monitor is equipped with a digital camera. These screen monitors are used for beam profile measurement and help optimize injection conditions. The wide dynamic range and high flexibility of the digital gated camera enable various functional enhancements. The system configuration and present status are summarized in this report.

  1. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, stereo calibration has received little attention in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed through distance measurements to both visible-light and gamma sources. The experimental results show that the measurement error is about 3%.
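
    A sketch of the homography step with OpenCV, mapping points seen by the vision camera into the radiation camera's view (the point correspondences below are hypothetical placeholders):

```python
import numpy as np
import cv2

# Hypothetical correspondences: calibration-pattern corners in the vision
# image and the same physical points located in the radiation image
pts_vis = np.array([[10, 10], [200, 12], [198, 150], [8, 148]], np.float32)
pts_rad = np.array([[5, 8], [95, 9], [94, 76], [4, 75]], np.float32)

H, _ = cv2.findHomography(pts_vis, pts_rad)

# Transfer any vision-camera pixel into radiation-camera coordinates
p = np.array([[[100.0, 80.0]]], np.float32)
print(cv2.perspectiveTransform(p, H))
```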

  2. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    Science.gov (United States)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations, and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which changes the way the beam passing through the lens reaches the sensor. In this study, two digital cameras, one with a mirror (Nikon D700) and one without (Sony a6000), were used in a close-range photogrammetric application on a rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with the two cameras was compared using the differences between field and model coordinates obtained after alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources; the maximum area difference between them is quite small because the sections almost overlap. The mirrored camera was more self-consistent with respect to changes in model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  3. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Full Text Available Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  4. Geometric Stability and Lens Decentering in Compact Digital Cameras

    Science.gov (United States)

    Sanz-Ablanedo, Enoc; Rodríguez-Pérez, José Ramón; Armesto, Julia; Taboada, María Flor Álvarez

    2010-01-01

    A study on the geometric stability and decentering present in sensor-lens systems of six identical compact digital cameras has been conducted. With regard to geometrical stability, the variation of internal geometry parameters (principal distance, principal point position and distortion parameters) was considered. With regard to lens decentering, the amount of radial and tangential displacement resulting from decentering distortion was related with the precision of the camera and with the offset of the principal point from the geometric center of the sensor. The study was conducted with data obtained after 372 calibration processes (62 per camera). The tests were performed for each camera in three situations: during continuous use of the cameras, after camera power off/on and after the full extension and retraction of the zoom-lens. Additionally, 360 new calibrations were performed in order to study the variation of the internal geometry when the camera is rotated. The aim of this study was to relate the level of stability and decentering in a camera with the precision and quality that can be obtained. An additional goal was to provide practical recommendations about photogrammetric use of such cameras. PMID:22294886

  5. Establishing a common coordinate view in multiple moving aerial cameras

    Science.gov (United States)

    Sheikh, Yaser; Gritai, Alexei; Junejo, Imran; Muise, Robert; Mahalanobis, Abhijit; Shah, Mubarak

    2005-05-01

    A camera mounted on an aerial vehicle provides an excellent means of monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. The underlying concept of such co-operative sensing is to use inter-camera relationships to give global context to 'locally' obtained information at each camera. It is desirable, therefore, that the data collected at each camera and the inter-camera relationship discerned by the system be presented in a coherent visualization. Since the cameras are mounted on UAVs, large swaths of area may be traversed in a short period of time, so coherent visualization is indispensable for applications like surveillance and reconnaissance. While most visualization approaches have hitherto focused on data from a single camera at a time, we show that, as a consequence of tracking objects across cameras, widely separated mosaics can be aligned, both in space and color, for concurrent visualization. Results are shown on a number of real sequences, validating our qualitative models.

  6. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  7. Measuring joint kinematics of treadmill walking and running: Comparison between an inertial sensor based system and a camera-based system.

    Science.gov (United States)

    Nüesch, Corina; Roos, Elena; Pagenstert, Geert; Mündermann, Annegret

    2017-05-24

    Inertial sensor systems are becoming increasingly popular for gait analysis because their use is simple and time efficient. This study aimed to compare joint kinematics measured by the inertial sensor system RehaGait® with those of an optoelectronic system (Vicon®) for treadmill walking and running. Additionally, the test-retest repeatability of kinematic waveforms and discrete parameters for the RehaGait® was investigated. Twenty healthy runners participated in this study. Inertial sensors and reflective markers (PlugIn Gait) were attached according to the respective guidelines. The two systems were started manually at the same time. Twenty consecutive strides for walking and running were recorded, and each system's software calculated sagittal-plane ankle, knee and hip kinematics. Measurements were repeated after 20 min. Ensemble means were analyzed by calculating coefficients of multiple correlation for waveforms and root mean square errors (RMSE) for waveforms and discrete parameters. After correcting the offset between waveforms, the two systems/models showed good agreement with coefficients of multiple correlation above 0.950 for walking and running. RMSE of the waveforms were below 5° for walking and below 8° for running. RMSE for ranges of motion were between 4° and 9° for walking and running. Repeatability analysis of waveforms showed very good to excellent coefficients of multiple correlation (>0.937) and RMSE of 3° for walking and 3-7° for running. These results indicate that in healthy subjects sagittal plane joint kinematics measured with the RehaGait® are comparable to those using a Vicon® system/model and that the measured kinematics have a good repeatability, especially for walking. Copyright © 2017 Elsevier Ltd. All rights reserved.
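
    The offset correction and error metrics above can be reproduced for a pair of joint-angle waveforms along these lines (Pearson correlation stands in for the coefficient of multiple correlation, and the data are synthetic):

```python
import numpy as np

def compare_waveforms(ref, test):
    """Offset-corrected RMSE and a simple similarity coefficient between
    two kinematic waveforms sampled over the same gait cycle (degrees)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    test_corr = test - (test.mean() - ref.mean())   # remove constant offset
    rmse = np.sqrt(np.mean((ref - test_corr) ** 2))
    r = np.corrcoef(ref, test)[0, 1]
    return rmse, r

t = np.linspace(0, 1, 101)                          # one gait cycle
knee_vicon = 30 + 25 * np.sin(2 * np.pi * t)
knee_imu = knee_vicon + 4 + np.random.default_rng(0).normal(0, 2, t.size)
print(compare_waveforms(knee_vicon, knee_imu))
```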

  8. Camera for Quasars in Early Universe (CQUEAN)

    Science.gov (United States)

    Park, Won-Kee; Pak, Soojong; Im, Myungshin; Choi, Changsu; Jeon, Yiseul; Chang, Seunghyuk; Jeong, Hyeonju; Lim, Juhee; Kim, Eunbin

    2012-08-01

    We describe the overall characteristics and the performance of an optical CCD camera system, Camera for Quasars in Early Universe (CQUEAN), which has been used at the 2.1 m Otto Struve Telescope of the McDonald Observatory since 2010 August. CQUEAN was developed for follow-up imaging observations of red sources such as high-redshift quasar candidates (z ≳ 5), gamma-ray bursts, brown dwarfs, and young stellar objects. For efficient observations of the red objects, CQUEAN has a science camera with a deep-depletion CCD chip, which boasts a higher quantum efficiency at 0.7-1.1 μm than conventional CCD chips. The camera was developed on a short timescale and has been working reliably. By employing an autoguiding system and a focal reducer to enhance the field of view on the classical Cassegrain focus, we achieve stable guiding in 20-minute exposures, an imaging quality with FWHM ≥ 0.6″ over the whole field (4.8′ × 4.8′), and a limiting magnitude of z = 23.4 AB mag at 5-σ with 1 hr total integration time. This article includes data taken at the McDonald Observatory of The University of Texas at Austin.

  9. NIR Camera/spectrograph: TEQUILA

    Science.gov (United States)

    Ruiz, E.; Sohn, E.; Cruz-Gonzalez, I.; Salas, L.; Parraga, A.; Torres, R.; Perez, M.; Cobos, F.; Tejada, C.; Iriarte, A.

    1998-11-01

    We describe the configuration and operation modes of the IR camera/spectrograph called TEQUILA, based on a 1024×1024 HgCdTe FPA (HAWAII). The optical system will allow three possible modes of operation: direct imaging, low- and medium-resolution spectroscopy, and polarimetry. The basic system is designed to consist of the following: 1) an LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder; 2) control and readout electronics based on DSP modules linked to a workstation through fiber optics; 3) an optomechanical assembly cooled to -30 °C that provides efficient operation of the instrument in its various modes; and 4) a control module for the moving parts of the instrument. The optomechanical assembly will have the necessary provisions to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. Final image acquisition and control of the whole instrument are carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope at the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican Infrared-Optical Telescope (TIM).

  10. CCD camera for an autoguider

    Science.gov (United States)

    Schempp, William V.

    1991-06-01

    The requirements of a charge coupled device (CCD) autoguider camera and the specifications of a camera that we propose to build to meet those requirements will be discussed. The design goals of both the package and the electronics will be considered.

  11. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  12. A novel fully integrated handheld gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Massari, R.; Ucci, A.; Campisi, C. [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy); Scopinaro, F. [University of Rome “La Sapienza”, S. Andrea Hospital, Rome (Italy); Soluri, A., E-mail: alessandro.soluri@ibb.cnr.it [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy)

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine in a single device the gamma-ray detector, the display, and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low-power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery operated device. We have applied this detection device in the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and an adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could be easily integrated into surgical navigation systems.

  13. A YAP camera for the biodistribution of ¹⁸⁸Re conjugated with Hyaluronic-Acid in 'in vivo' systems

    Energy Technology Data Exchange (ETDEWEB)

    Antoccia, A. [Department of Biology, Roma3 University (Italy); INFN, Roma3 (Italy); Baldazzi, G. [Department of Physics, Bologna University (Italy); INFN, Bologna (Italy); Banzato, A. [Department of Oncology and Surgical Sciences, Padova University (Italy); Bello, M. [INFN, National Laboratories, Legnaro (Italy); Department of Physics, Padova University (Italy); Boccaccio, P. [INFN, National Laboratories, Legnaro (Italy); Bollini, D. [Department of Physics, Bologna University (Italy); INFN, Bologna (Italy); De Notaristefani, F. [INFN, Roma3 (Italy); Department of Electronic Engineering, Roma3 University and INFN (Italy); Mazzi, U. [Department of Pharmaceutical Sciences, Padova University (Italy); Alafort, L.M. [Department of Pharmaceutical Sciences, Padova University (Italy); Moschini, G. [INFN, National Laboratories, Legnaro (Italy); Department of Physics, Padova University (Italy); Navarria, F.L. [Department of Physics, Bologna University (Italy); INFN, Bologna (Italy); Pani, R. [Department of Experimental Medecine and Pathology, Roma1 University (Italy); INFN, Roma1 (Italy); Perrotta, A. [INFN, Bologna (Italy)]. E-mail: perrotta@bo.infn.it; Rosato, A. [Department of Oncology and Surgical Sciences, Padova University (Italy); Istituto Oncologico Veneto, Padova (Italy); Tanzarella, C. [Department of Biology, Roma3 University (Italy); Uzunov, N.M. [INFN, National Laboratories, Legnaro (Italy); Dept. Natural Sciences, Shumen Univ. (Bulgaria)

    2007-02-01

    The aim of the SCINTIRAD experiment is to determine the radio-response of ¹⁸⁸Re (rhenium) in in vitro cells and the biodistribution in different organs of in vivo mice, and subsequently to assess the therapeutic effect on liver tumours induced in mice. Both the γ- and β-emissions of ¹⁸⁸Re have been exploited in the experiment. The in vivo biodistribution in mice was studied also with a γ-camera using different parallel-hole collimators. In the ¹⁸⁸Re spectrum, while the 155 keV γ-peak is useful for imaging, the photons emitted at larger energies and the β-particles act as noise in the image reconstruction. The γ-cameras previously used to image biodistributions obtained with ⁹⁹Tc are, therefore, not optimized for use with ¹⁸⁸Re. A new setup of the γ-camera has been studied for ¹⁸⁸Re: 66×66 YAP:Ce crystals (0.6×0.6×10 mm³, 5 μm optical insulation) guarantee a FOV of 40×40 mm²; a Hamamatsu R2486 PSPMT, 3 in. in diameter, converts their light into an electrical signal and allows reconstruction of the spatial coordinates of the light spot; incoming photon directions are selected through a lead collimator with 1.5 mm diameter hexagonal holes, 0.18 mm septa, and 40 mm thickness. Using this setup, results have been obtained with both ⁹⁹Tc-filled and ¹⁸⁸Re-filled capillaries and wells. The energy spectrum of the collected photons and the spatial resolutions obtainable with the ¹⁸⁸Re source will be presented.

  14. Relating vanishing points to catadioptric camera calibration

    Science.gov (United States)

    Duan, Wenting; Zhang, Hui; Allinson, Nigel M.

    2013-01-01

    This paper presents the analysis and derivation of the geometric relation between vanishing points and camera parameters of central catadioptric camera systems. These vanishing points correspond to the three mutually orthogonal directions of the 3D real-world coordinate system (i.e., the X, Y and Z axes). Compared to vanishing points (VPs) under perspective projection, VPs under central catadioptric projection have the advantage that there are normally two vanishing points for each set of parallel lines, since lines are projected to conics in the catadioptric image plane. Moreover, the vanishing points are usually located inside the image frame. We show that knowledge of the VPs corresponding to the XYZ axes from a single image leads to a simple derivation of both the intrinsic and extrinsic parameters of the central catadioptric system. The derived theory is demonstrated and tested on both synthetic and real data with respect to noise sensitivity.

  15. Modulated CMOS camera for fluorescence lifetime microscopy.

    Science.gov (United States)

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in constructing such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we test the camera and provide operational procedures to calibrate it and to improve accuracy using the corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and is needed for every camera setting, e.g., modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high speed acquisition. © 2015 Wiley Periodicals, Inc.
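
    The phasor quantities used in this kind of analysis are the first-harmonic Fourier coefficients of each pixel's phase stack; a minimal sketch assuming N images acquired at N equally spaced modulation phases (a generic phasor computation, not the SimFCS implementation):

```python
import numpy as np

def phasor(stack):
    """Per-pixel first-harmonic phasor (g, s) from a stack of N images
    acquired at N equally spaced phase steps of the modulation period."""
    stack = np.asarray(stack, float)          # shape (N, rows, cols)
    n = stack.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    dc = stack.sum(axis=0)                    # total intensity per pixel
    g = (stack * np.cos(2 * np.pi * k / n)).sum(axis=0) / dc
    s = (stack * np.sin(2 * np.pi * k / n)).sum(axis=0) / dc
    return g, s

# Example: 8 phase images of a 64x64 field
g, s = phasor(np.random.default_rng(0).random((8, 64, 64)))
```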

  16. GAMPIX: A new generation of gamma camera

    Science.gov (United States)

    Gmar, M.; Agelou, M.; Carrel, F.; Schoepff, V.

    2011-10-01

    Gamma imaging is a technique of great interest in several fields, such as homeland security or decommissioning/dismantling of nuclear facilities, for localizing hot spots of radioactivity. In the nineties, earlier work led by CEA LIST resulted in the development of a first-generation gamma camera called CARTOGAM, now commercialized by AREVA CANBERRA. Even if its performance can be adapted to many applications, its weight of 15 kg can be an issue. For several years, CEA LIST has been developing a new generation of gamma camera, called GAMPIX. This system is mainly based on the Medipix2 chip, hybridized to a 1 mm thick CdTe substrate. A coded mask replaces the pinhole collimator in order to increase the sensitivity of the gamma camera. The result is a very compact device (global weight less than 1 kg without any shielding) which is easy to handle and to use. In this article, we present the main characteristics of GAMPIX and report the first experimental results illustrating the performance of this new generation of gamma camera.

  17. Camera Layout Design for the Upper Stage Thrust Cone

    Science.gov (United States)

    Wooten, Tevin; Fowler, Bart

    2010-01-01

    Engineers in the Integrated Design and Analysis Division (EV30) use a variety of different tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer. Pro-Engineer is a computer-aided design (CAD) software package that allows designers to create computer-generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stage of the Ares I vehicle. For the Ares I-X, one standard-speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout and observe how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with respect to other hardware present on the thrust cone.

  18. Raspberry Pi camera with intervalometer used as crescograph

    Science.gov (United States)

    Albert, Stefan; Surducan, Vasile

    2017-12-01

    An intervalometer is an attachment or facility on a photo camera that operates the shutter regularly at set intervals over a period. Professional cameras with built-in intervalometers are expensive and quite difficult to find. The Canon CHDK open-source operating system allows intervalometer implementation on Canon cameras only. However, finding a Canon camera with a near-infrared (NIR) photographic lens at an affordable price is impossible. In experiments requiring several cameras (used to measure growth in plants - the crescographs - but also for coarse evaluation of the water content of leaves), the cost of the equipment is often over budget. Using two Raspberry Pi modules, each equipped with a low-cost NIR camera and a WIFI adapter (for downloading pictures stored on the SD card), and some freely available software, we have implemented two low-budget intervalometer cameras. The shooting interval, the number of pictures to be taken, the image resolution and some other parameters can be fully programmed. The cameras have been in continuous use for three months (July-October 2017) in a relevant environment (outside), proving the functionality of the concept.
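
    On a Raspberry Pi the intervalometer itself reduces to a few lines; a minimal sketch assuming the legacy `picamera` library, with the interval, shot count, resolution, and output path all being illustrative placeholders:

```python
import time
from picamera import PiCamera  # legacy Raspberry Pi camera stack

INTERVAL_S = 60                 # shooting interval (assumed)
NUM_SHOTS = 1440                # e.g. one day of frames at 60 s spacing

camera = PiCamera(resolution=(2592, 1944))
time.sleep(2)                   # let exposure and gain settle
for i in range(NUM_SHOTS):
    camera.capture(f"/home/pi/growth_{i:05d}.jpg")
    time.sleep(INTERVAL_S)
```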

  19. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  20. Seeing elements by visible-light digital camera.

    Science.gov (United States)

    Zhao, Wenyang; Sakurai, Kenji

    2017-03-31

    A visible-light digital camera is used for taking ordinary photos, but with new operational procedures it can measure photon energy in the X-ray wavelength region and therefore see chemical elements. This report describes how one can observe X-rays by means of such an ordinary camera: the front cover of the camera is replaced by an opaque X-ray window to block visible light and allow X-rays to pass; the camera takes many snapshots (called single-photon-counting mode) to record every photon event individually; and a newly proposed integrated-filtering method correctly retrieves the energy of photons from the raw camera images. The retrieved X-ray energy-dispersive spectra show fine energy resolution and great accuracy in energy calibration, and therefore the visible-light digital camera can be applied to routine X-ray fluorescence measurement to analyze the elemental composition of unknown samples. In addition, the visible-light digital camera is promising in that it could serve as a position-sensitive X-ray energy detector. It may become possible to measure element maps or chemical diffusion in a multi-element system if the camera is combined with external X-ray optic devices. Owing to the camera's low cost and fine pixel size, the present method will be widely applied to the analysis of chemical elements as well as imaging.
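
    In single-photon-counting mode, each photon's energy is recovered by re-integrating the charge it spreads over neighbouring pixels; a rough sketch of that event-summing step (a generic cluster sum rather than the paper's integrated-filtering method, with the threshold and data as placeholders):

```python
import numpy as np
from scipy import ndimage

def photon_energies(frame, dark, threshold):
    """Dark-subtract a single-photon-counting frame, find isolated photon
    events, and sum each pixel cluster so split charge is re-integrated."""
    img = frame.astype(float) - dark
    labels, n = ndimage.label(img > threshold)
    return ndimage.sum(img, labels, index=np.arange(1, n + 1))

# The per-event sums can then be histogrammed into an X-ray spectrum:
# counts, edges = np.histogram(photon_energies(frame, dark, 10), bins=256)
```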

  1. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) is looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB was asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR, the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building it into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  2. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  3. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism characterized by one or more enlarged parathyroid glands has become one of the most common endocrine diseases in the world affecting about 1 per 1000 in the United States. Standard treatment is highly invasive exploratory neck surgery called Parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~ 1 mm) and high sensitivity (10x conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  4. Time-of-Flight Microwave Camera.

    Science.gov (United States)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-05

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
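
    Depth in an FMCW receiver comes from the beat frequency between the transmitted and received chirps, R = c·f_beat·T/(2B); a numeric sketch over the X band (the sweep time and sample rate below are assumptions, not the paper's parameters):

```python
import numpy as np

c, B, T = 3e8, 4e9, 1e-3        # speed of light, 8-12 GHz sweep, sweep time
fs, R_true = 2e6, 2.0           # sample rate, target range (m)

t = np.arange(0, T, 1 / fs)
f_beat_true = 2 * R_true * B / (c * T)          # ~53 kHz for a 2 m target
sig = np.cos(2 * np.pi * f_beat_true * t)       # idealized beat signal

# Locate the beat tone and convert it back to range
spec = np.abs(np.fft.rfft(sig))
f_beat = np.fft.rfftfreq(t.size, 1 / fs)[spec.argmax()]
print("estimated range:", c * f_beat * T / (2 * B), "m")
```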

  5. A comparative study of microscopic images captured by a box type digital camera versus a standard microscopic photography camera unit.

    Science.gov (United States)

    Desai, Nandini J; Gupta, B D; Patel, Pratik Narendrabhai; Joshi, Vani Santosh

    2014-10-01

    Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. We used a NIKON Coolpix S6150 camera (a box-type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. We obtained comparable results for capturing images in light microscopy, but the results were not as satisfactory for fluorescent microscopy. A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit.

  6. An Inexpensive Digital Infrared Camera

    Science.gov (United States)

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  7. Geometrical camera calibration with diffractive optical elements.

    Science.gov (United States)

    Bauer, M; Griessbach, D; Hermerschmidt, A; Krüger, S; Scheele, M; Schischmanow, A

    2008-12-08

    Traditional methods for geometrical camera calibration are based on calibration grids or single pixel illumination by collimated light. A new method for geometrical sensor calibration by means of diffractive optical elements (DOE) in connection with a laser beam equipment is presented. This method can be especially used for 2D-sensor array systems but in principle also for line scanners. (c) 2008 Optical Society of America

  8. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology.  It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well, as the authors have worked as a team under the auspices of GFP (Global Frontier Project), the largest-scale funded research in Korea.  This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies.  The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  9. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  10. 3-D Flow Visualization with a Light-field Camera

    Science.gov (United States)

    Thurow, B.

    2012-12-01

    Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer. [Figure captions: schematic illustrating the concept of a plenoptic camera, in which each pixel records both the position and angle of light rays entering the camera, allowing an image to be computationally refocused after acquisition; instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.]
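
    The final velocity step named above is an ordinary cross-correlation between interrogation windows of the two reconstructed particle fields; a 2D sketch of the FFT-based correlation peak search (the volumetric case adds one axis):

```python
import numpy as np

def displacement(win_later, win_earlier):
    """Integer-pixel shift of `win_later` relative to `win_earlier`
    from the peak of their FFT-based cross-correlation."""
    a = win_later - win_later.mean()
    b = win_earlier - win_earlier.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(corr.argmax(), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2

# Example: a random particle pattern shifted by (3, -2) pixels
rng = np.random.default_rng(0)
f1 = rng.random((64, 64))
f2 = np.roll(f1, (3, -2), axis=(0, 1))
print(displacement(f2, f1))                     # -> [ 3 -2]
```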

  11. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parametric equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parametric equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and offers good flexibility, especially for on-site calibration of multiple cameras without a common field of view.

  12. New calibration technique for a novel stereo camera

    Science.gov (United States)

    Tu, Xue; S