WorldWideScience

Sample records for macho camera system

  1. Evaluating intensified camera systems

    Energy Technology Data Exchange (ETDEWEB)

    S. A. Baker

    2000-07-01

    This paper describes image evaluation techniques used to standardize camera system characterizations. Key areas of performance include resolution, noise, and sensitivity. This team has developed a set of analysis tools, in the form of image processing software used to evaluate camera calibration data, to aid an experimenter in measuring a set of camera performance metrics. These performance metrics identify capabilities and limitations of the camera system, while establishing a means for comparing camera systems. Analysis software is used to evaluate digital camera images recorded with charge-coupled device (CCD) cameras. Several types of intensified camera systems are used in the high-speed imaging field. Electro-optical components are used to provide precise shuttering or optical gain for a camera system. These components, including microchannel-plate or proximity-focused diode image intensifiers, electrostatic image tubes, and electron-bombarded CCDs, affect system performance. It is important to quantify camera system performance in order to qualify a system as meeting experimental requirements. The camera evaluation tool is designed to provide side-by-side camera comparison and system modeling information.
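
    A hedged sketch of how such metrics are commonly derived: the classic photon-transfer method estimates gain, read noise, and dynamic range from two bias frames and two flat-field frames. This is a generic NumPy illustration of that technique, not the team's actual tool; all names and inputs are illustrative.

    ```python
    import numpy as np

    def photon_transfer_metrics(bias1, bias2, flat1, flat2):
        """Estimate gain (e-/ADU), read noise (e-), and dynamic range (dB)
        from two bias frames and two near-saturation flat-field frames,
        using the classic photon-transfer method."""
        bias1, bias2 = bias1.astype(float), bias2.astype(float)
        flat1, flat2 = flat1.astype(float), flat2.astype(float)

        # Read noise in ADU: std of a bias difference; dividing by sqrt(2)
        # removes the doubling introduced by subtracting two frames.
        read_noise_adu = np.std(bias1 - bias2) / np.sqrt(2.0)

        # Mean signal above bias, and shot+read variance from the flat pair.
        bias_level = 0.5 * (bias1.mean() + bias2.mean())
        signal_adu = 0.5 * (flat1.mean() + flat2.mean()) - bias_level
        var_adu = np.var(flat1 - flat2) / 2.0

        # Photon-transfer relation in ADU: var = signal / gain + read_var.
        gain = signal_adu / (var_adu - read_noise_adu**2)
        read_noise_e = read_noise_adu * gain
        full_well_e = signal_adu * gain  # crude proxy from a bright flat
        dynamic_range_db = 20.0 * np.log10(full_well_e / read_noise_e)
        return gain, read_noise_e, dynamic_range_db
    ```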

  2. The MACHO data pipeline

    CERN Document Server

    Axelrod, T S; Quinn, P J; Bennett, D P; Freeman, K C; Peterson, B A; Rodgers, A W; Alcock, C B; Cook, K H; Griest, K; Marshall, S L; Pratt, M R; Stubbs, C W; Sutherland, W

    1995-01-01

    The MACHO experiment is searching for dark matter in the halo of the Galaxy by monitoring more than 20 million stars in the LMC and Galactic bulge for gravitational microlensing events. The hardware consists of a 50 inch telescope, a two-color 32 megapixel CCD camera, and a network of computers. On clear nights the system generates up to 8 GB of raw data and 1 GB of reduced data. The computer system is responsible for all real-time control tasks, for data reduction, and for storing all data associated with each observation in a database. The subject of this paper is the software system that handles these functions. It is an integrated system controlled by Petri nets that consists of multiple processes communicating via mailboxes and a bulletin board. The system is highly automated, readily extensible, and incorporates flexible error recovery capabilities. It is implemented with C++ in a Unix environment.

  3. Combustion pinhole camera system

    Science.gov (United States)

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  4. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
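
    The abstract does not give the transformation itself; as a hedged illustration of the general technique, the sketch below remaps a log-polar (retina-like) pixel layout onto a Cartesian grid using bilinear sub-pixel interpolation. The log-polar model and all names are assumptions standing in for the sensor's actual pixel distribution, and Python/NumPy is used instead of the paper's VC++.

    ```python
    import numpy as np

    def bilinear_sample(img, x, y):
        """Sample img at fractional (x, y) with bilinear interpolation."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, img.shape[1] - 1)
        y1 = min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
        bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
        return (1 - fy) * top + fy * bot

    def logpolar_to_cartesian(src, out_size=256):
        """Map a (rings x sectors) retina-like image onto a square
        Cartesian grid; rows of src are log-spaced rings, columns are
        angular sectors (angular wraparound ignored for brevity)."""
        rings, sectors = src.shape
        out = np.zeros((out_size, out_size))
        c = (out_size - 1) / 2.0              # image center
        for v in range(out_size):
            for u in range(out_size):
                r = np.hypot(u - c, v - c)
                if r < 1.0 or r > c:
                    continue                  # fovea hole or outside the rim
                ring = (rings - 1) * np.log(r) / np.log(c)
                theta = np.arctan2(v - c, u - c) % (2 * np.pi)
                sector = theta / (2 * np.pi) * (sectors - 1)
                out[v, u] = bilinear_sample(src, sector, ring)
        return out
    ```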

  5. Automatic tracking sensor camera system

    Science.gov (United States)

    Tsuda, Takao; Kato, Daiichiro; Ishikawa, Akio; Inoue, Seiki

    2001-04-01

    We are developing a sensor camera system for automatically tracking and determining the positions of subjects moving in three dimensions. The system is intended to operate even within areas as large as soccer fields. The system measures the 3D coordinates of the object while driving the pan and tilt movements of the camera heads and the degree of zoom of the lenses. Its principal feature is that it automatically zooms in as the object moves farther away and zooms out as the object moves closer, keeping the apparent size of the object in the image approximately constant. This feature makes stable detection by the image processing possible. We are planning to use the system to detect the position of a soccer ball during a soccer game. In this paper, we describe the configuration of the automatic tracking sensor camera system under development. We then give an analysis of the movements of the ball within images of games, the results of experiments on the image processing method used to detect the ball, and the results of other experiments to verify the accuracy of an experimental system. These results show that the system is sufficiently accurate in terms of obtaining positions in three dimensions.
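
    The zoom law is not spelled out in the abstract, but under the pinhole model the image size of an object is s = f·S/d for object size S at distance d, so keeping s constant means scaling the focal length f with distance. A minimal sketch of that idea; the reference values and limits are illustrative, not the paper's:

    ```python
    def zoom_for_constant_size(distance_m, ref_distance_m=20.0,
                               ref_focal_mm=50.0, f_min_mm=10.0,
                               f_max_mm=500.0):
        """Focal length that keeps an object's image size equal to its
        size at the reference distance (pinhole model: image size is
        proportional to focal_length / distance)."""
        f = ref_focal_mm * distance_m / ref_distance_m
        return max(f_min_mm, min(f_max_mm, f))  # clamp to the zoom range

    # An object twice as far away needs twice the focal length:
    print(zoom_for_constant_size(40.0))  # -> 100.0
    ```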

  6. Combustion pinhole-camera system

    Science.gov (United States)

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  7. Illumination box and camera system

    Science.gov (United States)

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  8. Characterization of the Series 1000 Camera System

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network addressable scientific grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and a power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electrons of read noise at a 1 MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and its performance characterization is reported.

  9. Radiation camera motion correction system

    Science.gov (United States)

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  10. Development of biostereometric experiments. [stereometric camera system

    Science.gov (United States)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  11. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  12. New camera tube improves ultrasonic inspection system

    Science.gov (United States)

    Berger, H.; Collis, W. J.; Jacobs, J. E.

    1968-01-01

    Electron multiplier, incorporated into the camera tube of an ultrasonic imaging system, improves resolution, effectively shields low level circuits, and provides a high level signal input to the television camera. It is effective for inspection of metallic materials for bonds, voids, and homogeneity.

  13. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
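
    The paper's actual rule base and membership functions are not given in the abstract; the sketch below is a generic zero-order Sugeno-style fuzzy controller mapping horizontal pixel error to a pan rate, with all memberships, rule consequents, and gains illustrative. The tilt axis would mirror it with vertical error.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_pan_rate(err_px):
        """Fuzzify the target's horizontal pixel error, fire three rules
        with singleton pan-rate consequents (deg/s), and defuzzify by
        weighted average."""
        mu = {
            "neg_large": tri(err_px, -400.0, -200.0, 0.0),
            "zero":      tri(err_px, -200.0,    0.0, 200.0),
            "pos_large": tri(err_px,    0.0,  200.0, 400.0),
        }
        rate = {"neg_large": -10.0, "zero": 0.0, "pos_large": 10.0}
        den = sum(mu.values())
        if den == 0.0:
            return -10.0 if err_px < 0 else 10.0  # saturate outside range
        return sum(mu[k] * rate[k] for k in mu) / den

    print(fuzzy_pan_rate(100.0))  # halfway between "zero" and "pos_large"
    ```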

  14. Mini gamma camera, camera system and method of use

    Science.gov (United States)

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high resolution, position sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
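
    The patent's specific algorithm is not reproduced in the abstract; in general, center-of-gravity localization of a hot spot is an intensity-weighted centroid over the pixels above a threshold. A minimal NumPy sketch of that generic computation (the threshold is illustrative):

    ```python
    import numpy as np

    def center_of_gravity(img, threshold=0.0):
        """Intensity-weighted centroid (x, y) of pixels above threshold,
        as used to localize a hot spot in a gamma-camera image."""
        img = np.asarray(img, dtype=float)
        weights = np.where(img > threshold, img, 0.0)
        total = weights.sum()
        if total == 0.0:
            return None  # nothing above threshold
        ys, xs = np.indices(img.shape)
        return (xs * weights).sum() / total, (ys * weights).sum() / total
    ```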

  15. Camera Systems Rapidly Scan Large Structures

    Science.gov (United States)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  16. National Guidelines for Digital Camera Systems Certification

    Science.gov (United States)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capture. Today, we see a proliferation of imaging sensors collecting photographs at different ground resolutions and in different spectral bands, swath sizes, radiometric characteristics and accuracies, carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanners, and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product, including: maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of detail (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves of the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points, and process). The study examines all aspects of the final product including its accuracy, the product pixel size

  17. Imaging characteristics of photogrammetric camera systems

    Science.gov (United States)

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were established, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities.

  18. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA)

    Directory of Open Access Journals (Sweden)

    Veena G.S

    2013-12-01

    The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case. Spread among the campus, at arbitrary locations, are certain valuable biometric identification systems. The application monitors these systems (hereafter referred to as "objects") using our smart camera system based on an OpenCV platform. Using OpenCV Haar training, which employs the Viola-Jones algorithm, we teach the machine to identify the object under environmental conditions. An added face recognition feature is based on Principal Component Analysis (PCA) to generate eigenfaces, and test images are verified against the eigenfaces using a distance-based algorithm, such as the Euclidean distance or the Mahalanobis distance. If the object is misplaced, or an unauthorized user is in the extreme vicinity of the object, an alarm signal is raised.
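
    As a hedged sketch of the eigenface step the abstract describes (not the authors' code): training faces are flattened, a PCA basis is computed via SVD, and a probe face is accepted or rejected by Euclidean distance in eigenface space. The component count and distance threshold below are illustrative.

    ```python
    import numpy as np

    def train_eigenfaces(faces, n_components=20):
        """faces: (n_samples, h*w) array of flattened training images.
        Returns the mean face, the top principal components, and the
        gallery coordinates of the enrolled faces."""
        mean = faces.mean(axis=0)
        _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
        eigenfaces = vt[:n_components]          # each row is one eigenface
        coords = (faces - mean) @ eigenfaces.T  # enrolled-face coordinates
        return mean, eigenfaces, coords

    def is_authorized(probe, mean, eigenfaces, gallery_coords,
                      max_dist=2500.0):
        """Project a probe face and compare it to enrolled faces by
        Euclidean distance; below the threshold counts as a match."""
        c = eigenfaces @ (probe - mean)
        return np.linalg.norm(gallery_coords - c, axis=1).min() < max_dist
    ```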

  19. Calibration method for a central catadioptric-perspective camera system.

    Science.gov (United States)

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    Central catadioptric-perspective camera systems are widely used nowadays. A critical problem is that current calibration methods cannot effectively determine the extrinsic parameters between the central catadioptric camera and a perspective camera. We present a novel calibration method for a central catadioptric-perspective camera system in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera, facing the calibration pattern. The transformation between the virtual camera and the calibration pattern can be computed first, and the extrinsic parameters between the central catadioptric camera and the calibration pattern then obtained. Three-dimensional reconstruction results of the calibration pattern show high accuracy and validate the feasibility of our method.

  20. Heliostat kinematic system calibration using uncalibrated cameras

    Science.gov (United States)

    Burisch, Michael; Gomez, Luis; Olasolo, David; Villasante, Cristobal

    2017-06-01

    The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. Controlling the heliostats with such precision requires accurate knowledge of the motion of each one, modeled as a kinematic system. Determining the parameters of this system for each heliostat through calibration is crucial for the efficient operation of the solar field. For small heliostats, being able to perform such a calibration in a fast and automatic manner is imperative, as the solar field may contain tens or even hundreds of thousands of them. A calibration system which can rapidly recalibrate a whole solar field would also allow costs to be reduced: heliostats are generally designed to provide stability over a long period of time, and if this requirement can be relaxed and any resulting error compensated by adapting the model parameters, the cost of each heliostat can be lowered. The presented method describes such an automatic calibration system using uncalibrated cameras rigidly attached to each heliostat. The cameras are used to observe targets spread out through the solar field; based on this, the kinematic system of the heliostat can be estimated with high precision. A comparison of this approach to similar solutions shows the viability of the proposed solution.

  1. Multi-band infrared camera systems

    Science.gov (United States)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.

  2. Process simulation in digital camera system

    Science.gov (United States)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
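
    The color-processing blocks named at the end are standard; as a hedged NumPy illustration (not the paper's implementation), here are three of them: gray-world white balance, the standard XYZ-to-linear-sRGB matrix (D65), and the sRGB transfer curve that plays the role of gamma correction.

    ```python
    import numpy as np

    # Standard XYZ -> linear sRGB matrix (D65 white point).
    XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                            [-0.9689,  1.8758,  0.0415],
                            [ 0.0557, -0.2040,  1.0570]])

    def gray_world_white_balance(rgb):
        """Scale each channel so all channel means match the global mean
        (the gray-world assumption); rgb is an (h, w, 3) float image."""
        means = rgb.reshape(-1, 3).mean(axis=0)
        return rgb * (means.mean() / means)

    def xyz_to_srgb(xyz):
        """Convert an (h, w, 3) XYZ image to gamma-encoded sRGB in [0, 1]."""
        lin = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)
        # Piecewise sRGB transfer curve ("gamma correction").
        return np.where(lin <= 0.0031308,
                        12.92 * lin,
                        1.055 * np.power(lin, 1.0 / 2.4) - 0.055)
    ```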

  3. Situational Awareness from a Low-Cost Camera System

    Science.gov (United States)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. In a newly developed system, the cameras are located along a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, through using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  4. Flexible high-performance IR camera systems

    Science.gov (United States)

    Hoelter, Theodore R.; Petronio, Susan M.; Carralejo, Ronald J.; Frank, Jeffery D.; Graff, John H.

    1999-07-01

    Indigo Systems Corporation has developed a family of standard readout integrated circuits (ROICs) for use in IR focal plane array (FPA) imaging systems. These standard ROICs are designed to provide a complete set of operating features for camera-level FPA control, while also providing high performance capability with any of several detector materials. By creating a uniform electrical interface for FPAs, these standard ROICs simplify the task of FPA integration with imaging electronics and physical packages. This paper begins with a brief description of the features of four Indigo standard ROICs and continues with a description of the features, design, and measured performance of indium antimonide, quantum-well IR photodetector, and indium gallium arsenide imaging systems built using the described standard ROICs.

  5. Evryscope Robotilter automated camera / ccd alignment system

    Science.gov (United States)

    Ratzloff, Jeff K.; Law, Nicholas M.; Fors, Octavi; Ser, Daniel d.; Corbett, Henry T.

    2016-08-01

    We have deployed a new class of telescope, the Evryscope, which opens a new parameter space in optical astronomy - the ability to detect short time scale events across the entire sky simultaneously. The system is a gigapixel-scale array camera with an 8000 sq. deg. field of view, 13 arcsec per pixel sampling, and the ability to detect objects brighter than g = 16 in each 2-minute exposure. The Evryscope is designed to find transiting exoplanets around exotic stars, as well as detect nearby supernovae and provide continuous records of distant relativistic explosions like gamma-ray-bursts. The Evryscope uses commercially available CCDs and optics; the machine and assembly tolerances inherent in the mass production of these parts introduce problematic variations in the lens / CCD alignment which degrades image quality. We have built an automated alignment system (Robotilters) to solve this challenge. In this paper we describe the Robotilter system, mechanical and software design, image quality improvement, and current status.

  6. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    The Human Motion Analysis (HMA) system has been one of the major interests among researchers in the fields of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and the analysis of biomedical signals and images for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration and camera configuration. The review focuses on camera system considerations of the HMA system specifically for biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system of the HMA system for biomedical applications.

  7. Optimal Camera Placement for Motion Capture Systems.

    Science.gov (United States)

    Rahimian, Pooya; Kearney, Joseph K

    2017-03-01

    Optical motion capture is based on estimating the three-dimensional positions of markers by triangulation from multiple cameras. Successful performance depends on points being visible from at least two cameras and on the accuracy of the triangulation. Triangulation accuracy is strongly related to the positions and orientations of the cameras. Thus, the configuration of the camera network has a critical impact on performance. A poor camera configuration may result in a low quality three-dimensional (3D) estimation and consequently low quality of tracking. This paper introduces and compares two methods for camera placement. The first method is based on a metric that computes target point visibility in the presence of dynamic occlusion from cameras with "good" views. The second method is based on the distribution of views of target points. Efficient algorithms, based on simulated annealing, are introduced for estimating the optimal configuration of cameras for the two metrics and a given distribution of target points. The accuracy and robustness of the algorithms are evaluated through both simulation and empirical measurement. Implementations of the two methods are available for download as tools for the community.
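
    The two visibility metrics are summarized only loosely above, but the simulated-annealing search itself follows the generic pattern below; `cost` and `perturb` are placeholders for a metric over a camera configuration and a move that nudges one camera's pose, and the schedule constants are illustrative.

    ```python
    import math
    import random

    def simulated_annealing(initial, cost, perturb,
                            t0=1.0, cooling=0.995, n_iter=20000):
        """Generic annealing loop: cost(config) scores a camera layout
        (lower is better); perturb(config) proposes a nearby layout."""
        current = best = initial
        c_cur = c_best = cost(initial)
        t = t0
        for _ in range(n_iter):
            cand = perturb(current)
            c_cand = cost(cand)
            # Always accept improvements; accept worse layouts with
            # probability exp(-delta/T) so the search escapes local minima.
            if c_cand < c_cur or random.random() < math.exp((c_cur - c_cand) / t):
                current, c_cur = cand, c_cand
                if c_cur < c_best:
                    best, c_best = current, c_cur
            t *= cooling
        return best, c_best
    ```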

  8. The ITER Radial Neutron Camera Detection System

    Science.gov (United States)

    Marocco, D.; Belli, F.; Bonheure, G.; Esposito, B.; Kaschuck, Y.; Petrizzi, L.; Riva, M.

    2008-03-01

    A multichannel neutron detection system (Radial Neutron Camera, RNC) will be installed on the ITER equatorial port plug 1 for total neutron source strength, neutron emissivity/ion temperature profiles and nt/nd ratio measurements [1]. The system is composed of two fan-shaped collimating structures: an ex-vessel structure, looking at the plasma core, containing three sets of 12 collimators (each set lying on a different toroidal plane), and an in-vessel structure, containing 9 collimators, for plasma edge coverage. The RNC detecting system will work in a harsh environment (neutron flux up to 10^8-10^9 n/cm^2·s, magnetic field >0.5 T for in-vessel detectors), should provide both counting and spectrometric information, and should be flexible enough to cover the high neutron flux dynamic range expected during the different ITER operation phases. ENEA has been involved in several activities related to RNC design and optimization [2,3]. In the present paper the up-to-date design and the neutron emissivity reconstruction capabilities of the RNC will be described. Different options for detectors suitable for spectrometry and counting (e.g. scintillators and diamonds), focusing on the implications in terms of overall RNC performance, will be discussed. The increase of the RNC capabilities offered by the use of new digital data acquisition systems will also be addressed.

  9. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  10. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  11. Disposition of camera parameters in vehicle navigation system

    Science.gov (United States)

    Yu, Houyun; Zhang, Weigong

    2010-10-01

    To address the calibration of the onboard camera in a machine-vision-based vehicle navigation system, a method that treats the camera's intrinsic and extrinsic parameters separately is presented. Since the intrinsic parameters are essentially invariant while the car is moving, they can be calibrated first with a planar pattern as soon as the camera is installed. The installation attitude of the onboard camera can then be adjusted in real time according to the slope and vanishing point of the lane lines in the picture, driving extrinsic parameters such as the direction angle, incline angle and lateral translation to zero. This separate treatment of the camera parameters is applied to lane departure detection on structured roads, simplifying camera calibration and decreasing the measurement error due to extrinsic parameters. The correctness and feasibility of the method are demonstrated by theoretical calculation and practical experiment.
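
    The alignment cue the abstract relies on, the vanishing point of the lane lines, can be computed as the intersection of two detected image lines; with the camera correctly oriented, it should coincide with the principal point. A hedged sketch of that generic step (homogeneous line representation assumed; all names are illustrative):

    ```python
    import numpy as np

    def vanishing_point(line1, line2):
        """Intersection of two image lines given as (a, b, c) with
        a*x + b*y + c = 0, via the homogeneous cross product."""
        p = np.cross(np.asarray(line1, float), np.asarray(line2, float))
        if abs(p[2]) < 1e-9:
            return None  # lines are parallel in the image
        return p[0] / p[2], p[1] / p[2]

    def alignment_error(left_lane, right_lane, principal_point):
        """Pixel offset between the lane vanishing point and the
        principal point; driving it to zero levels yaw and pitch."""
        vp = vanishing_point(left_lane, right_lane)
        if vp is None:
            return None
        return vp[0] - principal_point[0], vp[1] - principal_point[1]
    ```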

  12. THE FLY’S EYE CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    László Mészáros

    2014-01-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended for time-domain astronomy. This camera-system design will provide data sets complementary to other synoptic surveys such as LSST or Pan-STARRS. The effective field of view is obtained with 19 cameras arranged in a mosaic on a spherical surface. The cameras are supported by a hexapod mount that is fully capable of sidereal tracking for consecutive exposures. This platform has many advantages. First, it requires only a single moving component and includes no unique parts; hence this design not only eliminates the problems caused by unique elements, but the redundancy of the hexapod also allows trouble-free operation even if one or two of the legs are stuck. Another advantage is that the system can calibrate itself using the observed stars, independently of its geographic location and of the polar alignment of the mount. All mechanical and electronic elements have been designed at our institute, Konkoly Observatory. Our instrument is currently in the testing phase with an operational hexapod and a reduced number of cameras.

  13. Theodolite-camera videometrics system based on total station

    Science.gov (United States)

    Zhu, Zhao-kun; Yuan, Yun; Zhang, Xiao-hu

    2011-08-01

    A novel measuring system, named the Theodolite-camera Videometrics System (TVS) and based on a total station, is introduced in this paper, along with the concept of the theodolite-camera, which is the key component of TVS. It generally consists of a non-metric camera and a rotation platform, and can rotate horizontally and vertically. TVS based on a total station is free of field control points, and the fields of view of its theodolite-cameras are not fixed, so TVS is suitable for targets with a wide moving range or large structures. The theodolite-camera model is analyzed and presented in detail in this paper. The adopted calibration strategy is demonstrated to be accurate and feasible by both simulated and real data, and TVS is shown to be a valid, reliable and precise measuring system that lives up to expectations.

  14. Priming Macho Attitudes and Emotions.

    Science.gov (United States)

    Beaver, Erik D.; And Others

    1992-01-01

    Investigated the effects of reading one of four priming stimuli stories (control, consenting sex, rape, or family) on males' evaluations of, and emotional reactions to, two videotaped date-rape scenarios. Results supported the concepts of a macho personality and revealed interactive effects for both the rape and family prime. (RJM)

  15. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera currently being tested so that I could make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Designer to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options for camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  16. Automatic Traffic Monitoring from an Airborne Wide Angle Camera System

    OpenAIRE

    Rosenbaum, Dominik; Charmette, Baptiste; Kurz, Franz; Suri, Sahil; Thomas, Ulrike; Reinartz, Peter

    2008-01-01

    We present an automatic traffic monitoring approach using data from an airborne wide angle camera system. This camera, namely the “3K-Camera”, was recently developed at the German Aerospace Center (DLR). It covers 8 km perpendicular to the flight direction at a flight height of 3000 m with a resolution of 45 cm, and is capable of taking images at a frame rate of up to 3 fps. Based on georeferenced images obtained from this camera system, a near real-time processing chain containing roa...

  17. Integrated radar-camera security system: range test

    Science.gov (United States)

    Zyczkowski, M.; Szustakowski, M.; Ciurapinski, W.; Karol, M.; Markowski, P.

    2012-06-01

    The paper presents the test results of a mobile system for the protection of large-area objects, which consists of a radar and thermal and visual cameras. Radar is used for early detection and localization of an intruder and the cameras with narrow field of view are used for identification and tracking of a moving object. The range evaluation of an integrated system is presented as well as the probability of human detection as a function of the distance from radar-camera unit.

  18. Integrated mobile radar-camera system in airport perimeter security

    Science.gov (United States)

    Zyczkowski, M.; Szustakowski, M.; Ciurapinski, W.; Dulski, R.; Kastek, M.; Trzaskawka, P.

    2011-11-01

    The paper presents the test results of a mobile system for the protection of large-area objects, which consists of a radar and thermal and visual cameras. Radar is used for early detection and localization of an intruder and the cameras with narrow field of view are used for identification and tracking of a moving object. The range evaluation of an integrated system is presented as well as the probability of human detection as a function of the distance from the radar-camera unit.

  19. A stereo camera system for autonomous maritime navigation (AMN) vehicles

    Science.gov (United States)

    Zhang, Weihong; Zhuang, Ping; Elkins, Les; Simon, Rick; Gore, David; Cogar, Jeff; Hildebrand, Kevin; Crawford, Steve; Fuller, Joe

    2009-05-01

    Spatial Integrated System (SIS), Rockville, Maryland, in collaboration with NSWC Combatant Craft Division (NSWCCD), is applying 3D imaging technology, artificial intelligence, sensor fusion, behaviors-based control, and system integration to a prototype 40 foot, high performance Research and Development Unmanned Surface Vehicle (USV). This paper focuses on the development of the stereo camera system for USV navigation, which currently consists of two high-resolution cameras and will incorporate an array of cameras in the near future. The objectives of the camera system are to reconstruct 3D objects and detect them on the sea surface. The paper reviews two critical technological components, namely camera calibration and stereo matching. In stereo matching, a comprehensive study is presented to compare the algorithmic performance resulting from the various information sources (intensity, RGB values, Gaussian gradients and Gaussian Laplacians), patching schemes (single windows, and multiple windows with the same or different centers), and correlation metrics (convolution, absolute difference, and histogram). To enhance system performance, a sub-pixel edge detection technique has been introduced to address the precision requirement, and a noise removal post-processing step has been added to eliminate noisy points from the reconstructed 3D point clouds. Finally, experimental results are reported to demonstrate the performance of the stereo camera system.
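
    Of the correlation metrics compared, absolute difference is the simplest to sketch: single-window block matching along the scanline of a rectified pair, choosing the disparity that minimizes the sum of absolute differences (SAD). Window size and disparity range below are illustrative; the paper's multi-window and sub-pixel refinements are omitted.

    ```python
    import numpy as np

    def sad_disparity(left, right, y, x, window=5, max_disp=64):
        """Disparity at (y, x) in the rectified left image, found by
        minimizing SAD over a square window against candidate positions
        along the same scanline of the right image. Assumes (y, x) lies
        at least window//2 pixels from the image border."""
        h = window // 2
        ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
        best_d, best_cost = 0, np.inf
        for d in range(0, min(max_disp, x - h) + 1):
            cand = right[y - h:y + h + 1,
                         x - d - h:x - d + h + 1].astype(float)
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best_d, best_cost = d, cost
        return best_d
    ```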

  20. Epipolar rectification method for a stereovision system with telecentric cameras

    Science.gov (United States)

    Liu, Haibo; Zhu, Zhaokun; Yao, Linshen; Dong, Jin; Chen, Shengyi; Zhang, Xiaohu; Shang, Yang

    2016-08-01

    3D metrology with a stereovision system requires epipolar rectification to be performed before dense stereo matching. In this study, we propose an epipolar rectification method for a stereovision system with two telecentric lens-based cameras. Given the orthographic projection matrices of each camera, the new projection matrices are computed by determining the new camera coordinate system in affine space and imposing some constraints on the intrinsic parameters. Then, the transformation that maps the old image planes onto the new image planes is obtained. Experiments are performed to validate the performance of the proposed rectification method. The test results show that the perpendicular distance and 3D reconstruction deviation obtained from the rectified images are not significantly higher than the corresponding values obtained from the original images. Considering the roughness of the extracted corner points and calibrated camera parameters, we can conclude that the proposed method provides sufficiently accurate rectification results.

  1. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  2. Multi-camera system for 3D forensic documentation.

    Science.gov (United States)

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional documentation tool is photogrammetry. This article introduces the botscan© (botspot GmbH, Berlin, Germany) multi-camera system for the forensic markerless photogrammetric whole body 3D surface documentation of living persons in standing posture. We used the botscan© multi-camera system to document a person in 360°. The system has a modular design and works with 64 digital single-lens reflex (DSLR) cameras. The cameras were evenly distributed in a circular chamber. We generated 3D models from the photographs using the PhotoScan© (Agisoft LLC, St. Petersburg, Russia) software. Our results revealed that the botscan© and PhotoScan© produced 360° 3D models with detailed textures. The 3D models had very accurate geometries and could be scaled to full size with the help of scale bars. In conclusion, this multi-camera system provided a rapid and simple method for documenting the whole body of a person to generate 3D data with Photoscan©.

  3. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
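
    The mixed integer linear program itself is too large to sketch here, but the filter it is coupled with is a standard linear Kalman filter. Below is a hedged constant-velocity predict/update pair in NumPy; the motion model, noise levels, and the idea of using the covariance trace to flag under-observed targets are illustrative of, not taken from, the report.

    ```python
    import numpy as np

    DT = 0.1  # control-loop period in seconds (illustrative)
    F = np.array([[1, 0, DT, 0],      # constant-velocity model:
                  [0, 1, 0, DT],      # state = [x, y, vx, vy]
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)
    H = np.array([[1, 0, 0, 0],       # cameras measure position only
                  [0, 1, 0, 0]], float)
    Q = 0.01 * np.eye(4)              # process noise
    R = 1.0 * np.eye(2)               # measurement noise

    def predict(x, P):
        """Propagate state and covariance one step ahead."""
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        """Fold in a position measurement z = [x, y]."""
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P   # a growing trace of P marks a target to revisit
    ```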

  4. Single port access surgery with a novel Port Camera system.

    Science.gov (United States)

    Terry, Benjamin S; Schoen, Jonathan; Mills, Zachary; Rentschler, Mark E

    2012-06-01

    In this work, the authors designed, built, and tested a novel port camera system for single port access (SPA) laparoscopic surgery. This SPA Port Camera device integrates the monitor, laparoscopic camera, and light source into an inexpensive, portable cannula port. The device uses a 2-channel SPA port inserted through an umbilical incision, similar to traditional SPA. After insertion into a channel, the device deploys a small camera module and LED lamp in vivo. An integrated, on-patient LCD provides the view of the surgical site. The design intent of the port camera is to enhance SPA by (a) reducing the size of the SPA port through the elimination of the dedicated laparoscope channel; (b) reducing equipment cost by integrating an inexpensive CMOS sensor and LED lamp at the port tip; (c) eliminating the need for an assistant who operates the laparoscope; and (d) mechanically coupling the camera, tool port, and on-patient LCD screen. The effectiveness of the device was evaluated by comparing the video performance with a leading industry laparoscope and by performing a user evaluation study and live porcine surgery with the device. Effectiveness of the device was mixed. Overall video system performance of the device is better than an industry standard high-definition laparoscope, implying that significant cost savings over a traditional system are possible. Participant study results suggest that simulated laparoscopic tasks are as efficient with the SPA Port Camera as they are with a typical SPA configuration. However, live surgery revealed several shortcomings of the SPA Port Camera.

  5. Calibration and investigation of infrared camera systems applying blackbody radiation

    Science.gov (United States)

    Hartmann, Juergen; Fischer, Joachim

    2001-03-01

    An experimental facility is presented which allows calibration and detailed investigation of infrared camera systems. Various blackbodies operating in the temperature range from -60 °C up to 3000 °C serve as standard radiation sources, enabling calibration of camera systems in a wide temperature and spectral range with highest accuracy. Quantitative results and precise long-term investigations, especially in detecting climatic trends, require accurate traceability to the International Temperature Scale of 1990 (ITS-90). For the blackbodies used, traceability to ITS-90 is either realized by standard platinum resistance thermometers (in the temperature range below 962 °C) or by absolute and relative radiometry (in the temperature range above 962 °C). This traceability is fundamental for the implementation of quality assurance systems and the realization of different standardizations, for example according to ISO 9000. For investigation of the angular and the temperature resolution, our set-up enables minimum resolvable (MRTD) and minimum detectable temperature difference (MDTD) measurements in the various temperature ranges. A collimator system may be used to image the MRTD and MDTD targets to infinity. As the internal calibration of infrared camera systems critically depends on the temperature of the surroundings, the calibration and investigation of the cameras is performed in a climate box, which allows detailed control of environmental parameters like humidity and temperature. Experimental results obtained for different camera systems are presented and discussed.
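
    With traceable blackbody sources, the expected in-band radiance seen by a camera under test follows directly from Planck's law. A hedged numerical sketch (the 8-14 μm band is illustrative of a typical LWIR camera; the physical constants are standard CODATA values):

    ```python
    import numpy as np

    H = 6.62607015e-34   # Planck constant (J s)
    C = 2.99792458e8     # speed of light (m/s)
    KB = 1.380649e-23    # Boltzmann constant (J/K)

    def planck_radiance(wavelength_m, temp_k):
        """Blackbody spectral radiance L(lambda, T) in W m^-3 sr^-1."""
        x = H * C / (wavelength_m * KB * temp_k)
        return 2.0 * H * C**2 / wavelength_m**5 / np.expm1(x)

    def band_radiance(temp_k, lo_um=8.0, hi_um=14.0, n=2000):
        """In-band radiance (W m^-2 sr^-1) by trapezoidal integration."""
        lam = np.linspace(lo_um * 1e-6, hi_um * 1e-6, n)
        rad = planck_radiance(lam, temp_k)
        return float(np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(lam)))

    print(band_radiance(300.0))  # radiance of a 300 K blackbody, 8-14 um
    ```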

  6. Stereo Calibration and Rectification for Omnidirectional Multi-camera Systems

    Directory of Open Access Journals (Sweden)

    Yanchang Wang

    2012-10-01

    Stereo vision has been studied for decades as a fundamental problem in the field of computer vision. In recent years, computer vision and image processing with a large field of view, especially using omnidirectional vision and panoramic images, have been receiving increasing attention. An important problem for stereo vision is calibration. Although various kinds of calibration methods for omnidirectional cameras have been proposed, most of them are limited to calibrating catadioptric cameras or fish‐eye cameras and cannot be applied directly to multi‐camera systems. In this work, we propose an easy calibration method with closed‐form initialization and iterative optimization for omnidirectional multi‐camera systems. The method only requires image pairs of the 2D target plane in a few different views. A method based on the spherical camera model is also proposed for rectifying omnidirectional stereo pairs. Using real data captured by Ladybug3, we carry out experiments including stereo calibration, rectification and 3D reconstruction. Statistical analyses and comparisons of the experimental results are also presented. As the experimental results show, the calibration results are precise and the effect of rectification is promising.

  7. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    Science.gov (United States)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional imagery for users for visualizations, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuation assessment, and for the buying and selling of residential/commercial property, enabling better decisions in a more timely manner. Oblique imagery is also used for infrastructure monitoring, ensuring the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 MPixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing workflow, system calibration and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for vertical can be achieved. Remaining systematic

  8. Space camera optical axis pointing precision measurement system

    Science.gov (United States)

    Chen, Gang; Meng, Fanbo; Yang, Zijun; Guo, Yubo; Ye, Dong

    2016-01-01

    In order to measure the optical axis pointing precision of a space camera on a satellite, a monocular vision measurement system based on object-image conjugation is established. In this system, algorithms such as the object-image conjugate vision model and a point-by-point calibration method are applied and have been verified. First, the space camera axis controller projects a laser beam onto a standard screen to simulate the space camera's optical axis. The laser beam forms a target point, which is captured by the monocular vision camera. Then the two-dimensional coordinates of the target point on the screen are calculated by a new vision measurement model based on a look-up and matching table; the table has been generated by the object-image conjugate algorithm through point-by-point calibration. Finally, the coordinates calculated by the measurement system are compared with the theoretical coordinates provided by the optical axis controller, so that the optical axis pointing precision can be evaluated. Experimental results indicate that the absolute precision of the measurement system reaches 0.15 mm in a 2 m x 2 m FOV. This measurement system overcomes the nonlinear distortion near the edge of the FOV and can meet the requirements of high-precision measurement and evaluation of a space camera's optical axis.
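
    The look-up-and-matching table maps calibrated pixel locations to screen coordinates. A minimal sketch of how such a table might be queried between calibration points by bilinear interpolation (the table layout and all names are assumptions, not the paper's implementation):

        import numpy as np

        def screen_xy_from_pixel(table_x, table_y, u, v):
            # table_x / table_y: screen coordinates sampled on the pixel grid
            # during point-by-point calibration (hypothetical layout).
            u0, v0 = int(np.floor(u)), int(np.floor(v))
            du, dv = u - u0, v - v0
            w = [(1 - du) * (1 - dv), du * (1 - dv), (1 - du) * dv, du * dv]
            idx = [(v0, u0), (v0, u0 + 1), (v0 + 1, u0), (v0 + 1, u0 + 1)]
            x = sum(wi * table_x[i] for wi, i in zip(w, idx))
            y = sum(wi * table_y[i] for wi, i in zip(w, idx))
            return x, y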

  9. A luminescence imaging system based on a CCD camera

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.; Markey, B.G.

    1997-01-01

    ...to photographic systems, in order to obtain spatially resolved data. However, the former option is extremely expensive and it is difficult to obtain quantitative data from the latter. This paper describes the use of a CCD camera for imaging both thermoluminescence and optically stimulated luminescence. The system described here has a maximum spatial resolution of 17 μm, though this may be varied under software control to alter the signal-to-noise ratio. The camera has been mounted on a Risø automated TL/OSL reader, and both the reader and the CCD are under computer control. In the near UV and blue part...

  10. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Full Text Available Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways used for coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  11. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Cooke, William

    2016-01-01

    Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BV RI filters. In particular we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.

  12. Design and implementation of digital airborne multispectral camera system

    Science.gov (United States)

    Lin, Zhaorong; Zhang, Xuguo; Wang, Li; Pan, Deai

    2012-10-01

    The multispectral imaging equipment is a new-generation remote sensor that can obtain target images and spectral information simultaneously. A digital airborne multispectral camera system using the discrete filter method has been designed and implemented for unmanned aerial vehicle (UAV) and manned aircraft platforms. The digital airborne multispectral camera system has the advantages of a larger frame, higher resolution, and panchromatic and multispectral imaging. It also has great potential for applications in the fields of environmental and agricultural monitoring and target detection and discrimination. In order to enhance the measurement precision and accuracy of position and orientation, an Inertial Measurement Unit (IMU) is integrated in the digital airborne multispectral camera. Meanwhile, a Temperature Control Unit (TCU) guarantees that the camera can operate in the normal state at different altitudes, avoiding window fogging and frosting, which would degrade the imaging quality greatly. Finally, flight experiments were conducted to demonstrate the functionality and performance of the digital airborne multispectral camera. The resolution capability, positioning accuracy, and classification and recognition ability were validated.

  13. Overview of a Hybrid Underwater Camera System

    Science.gov (United States)

    2014-07-01

    ...integrated HUC system. As part of the HUC system, the Navigator display is also transmitted to a monocular display installed on a diver's helmet. ... The flume is ... feet in length and 6.5 feet in width, with a maximum depth of 8 feet. Pumps are used to generate a current in the flume. The water holds particulate matter...

  14. Smart Camera System for Aircraft and Spacecraft

    Science.gov (United States)

    Delgado, Frank; White, Janis; Abernathy, Michael F.

    2003-01-01

    This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SC3D), has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and flight test results. Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used under all weather and visibility conditions. While the advantages of a synthetic vision only system are considerable, the major disadvantage of such a system is that it displays a synthetic scene created using "static" data acquired by an aircraft or satellite at some point in the past. The SC3D system we are presenting in this paper is a hybrid synthetic vision system that fuses a live video stream with a computer generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system, see figure 1. The SC3D system has been flight tested on several X-38 flight tests performed over the last several years and on an Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year.

  15. TENTACLE: Multi-Camera Immersive Surveillance System

    Science.gov (United States)

    2011-12-01

    ...Google Earth was chosen for development due to our past experience developing with it, and the maturity of the Tentacle user interface mockup we created (located at ...).

  16. Target-Tracking Camera for a Metrology System

    Science.gov (United States)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
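
    For a one-dimensional PSD, the spot position follows from the two end-contact photocurrents alone, which is why no pixel readout or centroid computation is needed. A minimal statement of the standard PSD relation (names are ours):

        def psd_position(i_a, i_b, active_length):
            # Spot position on a 1-D PSD, measured from the detector center:
            # the photocurrents divide in proportion to the spot's distance
            # from the two end contacts.
            return 0.5 * active_length * (i_b - i_a) / (i_a + i_b)

        assert abs(psd_position(1.0, 1.0, 10.0)) < 1e-12  # equal currents -> center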

  17. Integrated radar-camera security system: experimental results

    Science.gov (United States)

    Zyczkowski, M.; Palka, N.; Trzcinski, T.; Dulski, R.; Kastek, M.; Trzaskawka, P.

    2011-06-01

    The nature of recent military conflicts and terrorist attacks, along with the necessity to protect bases, convoys and patrols, has had a serious impact on the development of more effective security systems. Current widely-used perimeter protection systems with zone sensors will soon be replaced with multi-sensor systems. Multi-sensor systems can utilize day/night cameras, uncooled IR thermal cameras, and millimeter-wave radars which detect radiation reflected from targets. Ranges of detection, recognition and identification for all targets depend on the parameters of the sensors used and on the observed scene itself. In this paper two essential issues connected with multispectral systems are described. We focus on describing the autonomous operation of the system regarding object detection, tracking, identification, localization and alarm notification. We also present the possibility of configuring the system as a stationary, mobile or portable device, as shown in our experimental results.

  18. Crisis Management Using Multiple Camera Surveillance Systems

    OpenAIRE

    Rothkrantz , L.J.M.

    2013-01-01

    During recent disasters such as tsunamis, floods, hurricanes, nuclear accidents and earthquakes, people have had to leave their living areas for their own safety. In practice, however, some people are not informed about the evacuation, are not willing or able to leave, or do not know how to leave the hazardous areas. The topic of this paper is how to adapt current video surveillance systems along highways and streets into semi-automatic surveillance systems. When a suspicious event is detected, a human operator...

  19. Design of microcontroller based system for automation of streak camera

    Science.gov (United States)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-08-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomenon. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for various electrodes of the tubes are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LABVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  1. Metrology Camera System of Prime Focus Spectrograph for Subaru Telescope

    CERN Document Server

    Wang, Shiang-Yu; Chang, Yin-Chang; Huang, Pin-Jie; Hu, Yen-Sang; Chen, Hsin-Yo; Tamura, Naoyuki; Takato, Naruhisa; Ling, Hung-Hsu; Gunn, James E; Karr, Jennifer; Yan, Chi-Hung; Mao, Peter; Ohyama, Youichi; Karoji, Hiroshi; Sugai, Hajime; Shimono, Atsushi

    2014-01-01

    The Prime Focus Spectrograph (PFS) is a new optical/near-infrared multi-fiber spectrograph designed for the prime focus of the 8.2m Subaru telescope. The metrology camera system of PFS serves as the optical encoder of the COBRA fiber motors for the configuring of fibers. The 380mm diameter aperture metrology camera will be located at the Cassegrain focus of the Subaru telescope to cover the whole focal plane with one 50M pixel Canon CMOS sensor. The metrology camera is designed to provide the fiber position information within a 5 μm error over the 45 cm focal plane. The positions of all fibers can be obtained within 1 s after the exposure is finished. This enables the overall fiber configuration to take less than 2 minutes.

  2. Automatic surveillance system using fish-eye lens camera

    Institute of Scientific and Technical Information of China (English)

    Xue Yuan; Yongduan Song; Xueye Wei

    2011-01-01

    This letter presents an automatic surveillance system using a fish-eye lens camera. Our system achieves wide-area automatic surveillance without a dead angle using only one camera. We propose a new human detection method to select the most adaptive classifier based on the locations of the human candidates. Human regions are detected from the fish-eye image effectively and are corrected to perspective views. An experiment is performed on indoor video sequences with different illumination and crowded conditions, with results demonstrating the efficiency of our algorithm.

  3. Design, development and verification of the HIFI Alignment Camera System

    NARCIS (Netherlands)

    Boslooper, E.C.; Zwan, B.A. van der; Kruizinga, B.; Lansbergen, R.

    2005-01-01

    This paper presents the TNO share of the development of the HIFI Alignment Camera System (HACS), covering the opto-mechanical and thermal design. The HACS is an Optical Ground Support Equipment (OGSE) that is specifically developed to verify proper alignment of different modules of the HIFI instrument.

  4. Accuracy determination of camera system used for sport motion analysis

    Directory of Open Access Journals (Sweden)

    Bergün Meriç

    2008-10-01

    Full Text Available The aim of this study is to determine the accuracy of camera systems often used for motion analysis. To accomplish this, an industrial robot was moved along three different known trajectories, and these motions were captured using three 100 Hz cameras located at three different angles. Video data were digitized and analyzed using the Simi Motion Analysis Program. With this program, angular kinematics were computed from the video data and compared with the data obtained from the robot. For the angle, the average errors computed from the mean absolute error and the root mean square error are 0.92° and 1.33°, respectively. Similarly, for the angular velocity, the average errors computed from the mean absolute error and the root mean square error are 0.77° and 0.96°, respectively. These errors may result from the image processing technique, the frame rate of the camera system, and the limited hand sensitivity of the users during digitization. When sports motions are analyzed with camera systems, these errors must be taken into account in kinematic computations.
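
    The two statistics quoted above are the mean absolute error and the root mean square error against the robot reference; a small sketch of how they can be computed from matched samples (NumPy; names are ours):

        import numpy as np

        def mae_rmse(measured, reference):
            # Mean absolute error and root-mean-square error between the
            # camera-derived series and the robot reference (same sampling).
            err = np.asarray(measured) - np.asarray(reference)
            return np.abs(err).mean(), np.sqrt((err ** 2).mean())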

  6. System Architecture of the Dark Energy Survey Camera Readout Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Theresa; /FERMILAB; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; /Barcelona, IFAE; Chappa, Steve; /Fermilab; de Vicente, Juan; /Madrid, CIEMAT; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; /Fermilab; Martinez, Gustavo; /Madrid, CIEMAT; Moore, Todd; /Madrid, CIEMAT /Fermilab /Illinois U., Urbana /Fermilab

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4M telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2Kx4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2Kx2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  7. Design of the iLocater Acquisition Camera Demonstration System

    CERN Document Server

    Bechter, Andrew; Ketterer, Ryan; Crepp, Justin R; King, David; Zhao, Bo; Reynolds, Robert; Hinz, Philip; Brooks, Jack; Bechter, Eric

    2015-01-01

    Existing planet-finding spectrometers are limited by systematic errors that result from their seeing-limited design. Of particular concern is the use of multi-mode fibers (MMFs), which introduce modal noise and accept significant amounts of background radiation from the sky. We present the design of a single-mode fiber-based acquisition camera for a diffraction-limited spectrometer named "iLocater." By using the "extreme" adaptive optics (AO) system of the Large Binocular Telescope (LBT), iLocater will overcome the limitations that prevent Doppler instruments from reaching their full potential, allowing precise radial velocity (RV) measurements of terrestrial planets around nearby bright stars. The instrument presented in this paper, which we refer to as the acquisition camera "demonstration system," will measure on-sky single-mode fiber (SMF) coupling efficiency using one of the 8.4m primaries of the LBT in fall 2015.

  8. System Construction of the Stilbene Compact Neutron Scatter Camera

    Energy Technology Data Exchange (ETDEWEB)

    Goldsmith, John E. M. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Gerling, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brennan, James S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Throckmorton, Daniel J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Helm, Jonathan Ivers [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2016-10-01

    This report documents the construction of a stilbene-crystal-based compact neutron scatter camera. This system is essentially identical to the MINER (Mobile Imager of Neutrons for Emergency Responders) system previously built and deployed under DNN R&D funding, but with the liquid scintillator in the detection cells replaced by stilbene crystals. The availability of these two systems for side-by-side performance comparisons will enable us to unambiguously identify the performance enhancements provided by the stilbene crystals, which have only recently become commercially available in the large size required (3" diameter, 3" deep).

  9. Video-Camera-Based Position-Measuring System

    Science.gov (United States)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white ...

  10. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Science.gov (United States)

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  11. Pothole Detection System Using a Black-box Camera

    Directory of Open Access Journals (Sweden)

    Youngtae Jo

    2015-11-01

    Full Text Available Aging roads and poor road-maintenance systems result in a large number of potholes, and their number increases over time. Potholes jeopardize road safety and transportation efficiency. Moreover, they are often a contributing factor in car accidents. To address the problems associated with potholes, their locations and sizes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a specific pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection efforts. Recent automatic detection systems, such as those based on vibration or laser scanning, are insufficient to detect potholes correctly and inexpensively, owing to the unstable detection of vibration-based methods and the high costs of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work within the embedded computing environments of black-box cameras. Experimental results are presented for our proposed system, showing that potholes can be detected accurately in real time.
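
    The paper's embedded algorithm is not given in the abstract; purely as an illustration of the general idea (dark blobs on the road surface found by thresholding and contour filtering), a toy OpenCV sketch with placeholder parameters:

        import cv2

        def candidate_potholes(frame_bgr, min_area=500):
            # Toy pothole-candidate detector: find dark blobs on the road
            # surface. Illustrative only; not the paper's algorithm.
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            blur = cv2.GaussianBlur(gray, (5, 5), 0)
            _, mask = cv2.threshold(blur, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            return [cv2.boundingRect(c) for c in contours
                    if cv2.contourArea(c) > min_area]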

  12. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher-level systems and monitoring and making decisions in real time, it must meet a set of requirements, such as time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  13. Calibration and Testing of Digital Zenith Camera System Components

    Science.gov (United States)

    Ulug, Rasit; Halicioglu, Kerem; Tevfik Ozludemir, M.; Albayrak, Muge; Basoglu, Burak; Deniz, Rasim

    2017-04-01

    Starting from the beginning of the new millennium, thanks to Charge-Coupled Device (CCD) technology, fully or partly automatic zenith camera systems have been designed and used to determine astro-geodetic deflection-of-the-vertical components in several countries, including Germany, Switzerland, Serbia, Latvia, Poland, Austria, China and Turkey. The Digital Zenith Camera System (DZCS) of Turkey has performed successful observations, yet it needs to be improved in terms of automating the system and increasing observation accuracy. In order to optimize the observation time and improve the system, some modifications have been implemented. Through the modification process that started at the beginning of 2016, some DZCS components have been replaced with new ones and some new additional components have been installed. In this presentation, the ongoing calibration and testing process of the DZCS is summarized in general. In particular, one of the tested system components, the High Resolution Tiltmeter (HRTM), which enables orthogonal orientation of the DZCS to the direction of the plumb line, is discussed. For the calibration of these components, two tiltmeters with different accuracies (1 nrad and 0.001 mrad) recorded data for nearly 30 days. The data, recorded under different environmental conditions, were divided into hourly, daily, and weekly subsets. In addition to the effects of temperature and humidity, the interoperability of the two tiltmeters was also investigated. Results show that with the integration of the HRTM and the other implementations, the modified DZCS provides higher accuracy for the determination of vertical deflections.

  14. How a Star Cluster Ruled Out MACHOs

    Science.gov (United States)

    Kohler, Susanna

    2016-08-01

    Are massive black holes hiding in the halos of galaxies, making up the majority of the universe's mysterious dark matter? This possibility may have been ruled out by a star cluster in a small galaxy recently discovered orbiting the Milky Way. Dark Matter Candidates: Roughly 27% of the mass and energy in the observable universe is made up of dark matter, matter invisible to us, which is neither accounted for by observable baryonic matter nor dark energy. [Figure: the relative amounts of the different constituents of the universe; dark matter makes up ~27%. ESA/Planck] What makes up this dark matter? Among the many proposed candidates, one of the least exotic is that of massive compact halo objects, or MACHOs. MACHOs are hypothesized to be black holes that formed in the early universe and now hide in galactic halos. We can't detect light from these objects, but their mass adds to the gravitational pull of galaxies. So far, MACHOs' prospects aren't looking great. They have not been detected in gravitational lensing surveys, ruling out MACHOs between 10^-7 and 30 solar masses as the dominant component of dark matter in our galaxy. MACHOs over 100 solar masses have also been ruled out, due to the existence of fragile wide halo binaries that would have been disrupted by the presence of such large black holes. But what about MACHOs between 30 and 100 solar masses? In a new study, Timothy Brandt (NASA Sagan Postdoctoral Fellow at the Institute for Advanced Study, in Princeton, NJ) uses a recently discovered faint galaxy, Eridanus II, to place constraints on MACHOs in this mass range. [Figure: MACHO constraints from the survival of a star cluster in Eri II, assuming a cluster age of 3 Gyr (a lower bound; constraints increase when assuming an age of 12 Gyr). Adapted from Brandt 2016.] A Star Cluster in Eri II: Eridanus II is an ultra-faint dwarf galaxy that lies roughly 1.2 million light-years away from us. This dim object is a satellite galaxy of the Milky Way, discovered as part of the Dark Energy Survey...

  15. Usability of a Wearable Camera System for Dementia Family Caregivers

    Directory of Open Access Journals (Sweden)

    Judith T. Matthews

    2015-01-01

    Full Text Available Health care providers typically rely on family caregivers (CG of persons with dementia (PWD to describe difficult behaviors manifested by their underlying disease. Although invaluable, such reports may be selective or biased during brief medical encounters. Our team explored the usability of a wearable camera system with 9 caregiving dyads (CGs: 3 males, 6 females, 67.00 ± 14.95 years; PWDs: 2 males, 7 females, 80.00 ± 3.81 years, MMSE 17.33 ± 8.86 who recorded 79 salient events over a combined total of 140 hours of data capture, from 3 to 7 days of wear per CG. Prior to using the system, CGs assessed its benefits to be worth the invasion of privacy; post-wear privacy concerns did not differ significantly. CGs rated the system easy to learn to use, although cumbersome and obtrusive. Few negative reactions by PWDs were reported or evident in resulting video. Our findings suggest that CGs can and will wear a camera system to reveal their daily caregiving challenges to health care providers.

  16. Development of a stereo camera system for road surface assessment

    Science.gov (United States)

    Su, D.; Nagayama, T.; Irie, M.; Fujino, Y.

    2013-04-01

    In Japan, a large number of road structures built in the period of high economic growth have deteriorated due to heavy traffic and severe conditions, especially in the metropolitan area. In particular, the poor condition of bridge expansion joints, caused by the frequent impacts of passing vehicles, significantly influences vehicle safety. In recent years, stereo vision has been a widely researched and implemented monitoring approach in the object recognition field. This paper introduces the development of a stereo camera system for road surface assessment. In this study, static photos taken by a calibrated stereo camera system are first utilized to reconstruct the three-dimensional coordinates of targets on the pavement. Subsequently, to align the various coordinates obtained from different view meshes, a modified Iterative Closest Point (ICP) method is proposed, which provides appropriate initial conditions and uses an image correlation method. Several field tests have been carried out to evaluate the capabilities of this system. After aligning all the measured coordinates, the system can offer not only accurate information on local deficiencies such as patching, cracks or potholes, but also the global fluctuation of the road surface over a long distance range.
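
    The core of each ICP iteration is a least-squares rigid alignment between matched point sets. A compact sketch of that step via the SVD (Kabsch) solution; this is the generic building block, not the paper's modified variant:

        import numpy as np

        def rigid_align(P, Q):
            # Least-squares rotation R and translation t mapping the Nx3
            # point set P onto its matched set Q (one ICP iteration core).
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cq - R @ cp
            return R, t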

  17. A two-camera imaging system for pest detection and aerial application

    Science.gov (United States)

    This presentation reports on the design and testing of an airborne two-camera imaging system for pest detection and aerial application assessment. The system consists of two digital cameras with 5616 x 3744 effective pixels. One camera captures normal color images with blue, green and red bands, while the other captures near-infrared (NIR) images...

  18. FPGA-based data acquisition system for a Compton camera

    Science.gov (United States)

    Nurdan, K.; Çonka-Nurdan, T.; Besch, H. J.; Freisleben, B.; Pavel, N. A.; Walenta, A. H.

    2003-09-01

    A data acquisition (DAQ) system with custom back-plane and custom readout boards has been developed for a Compton camera prototype. The DAQ system consists of two layers. The first layer has units for parallel high-speed analog-to-digital conversion and online data pre-processing. The second layer has a central board to form a general event trigger and to build the data structure for the event. This modularity and the use of field programmable gate arrays make the whole DAQ system highly flexible and adaptable to modified experimental setups. The design specifications, the general architecture of the Trigger and DAQ system and the implemented readout protocols are presented in this paper.

  19. Comparison of two real-time hand gesture recognition systems involving stereo cameras, depth camera, and inertial sensor

    Science.gov (United States)

    Liu, Kui; Kehtarnavaz, Nasser; Carlsohn, Matthias

    2014-05-01

    This paper presents a comparison of two real-time hand gesture recognition systems. One system utilizes a binocular stereo camera set-up while the other system utilizes a combination of a depth camera and an inertial sensor. The latter system is a dual-modality system as it utilizes two different types of sensors. These systems have been previously developed in the Signal and Image Processing Laboratory at the University of Texas at Dallas and the details of the algorithms deployed in these systems are reported in previous papers. In this paper, a comparison is carried out between these two real-time systems in order to examine which system performs better for the same set of hand gestures under realistic conditions.

  20. Metrology Camera System of Prime Focus Spectrograph for Subaru Telescope

    CERN Document Server

    Wang, Shiang-Yu; Huang, Pin-Jie; Ling, Hung-Hsu; Karr, Jennifer; Chang, Yin-Chang; Hu, Yen-Shan; Hsu, Shu-Fu; Chen, Hsin-Yo; Gunn, James E; Reiley, Dan J; Tamura, Naoyuki; Takato, Naruhisa; Shimono, Atsushi

    2016-01-01

    The Prime Focus Spectrograph (PFS) is a new optical/near-infrared multi-fiber spectrograph designed for the prime focus of the 8.2m Subaru telescope. PFS will cover a 1.3 degree diameter field with 2394 fibers to complement the imaging capabilities of Hyper Suprime-Cam. To retain high throughput, the final positioning accuracy between the fibers and observing targets of PFS is required to be less than 10 μm. The metrology camera system (MCS) serves as the optical encoder of the fiber motors for the configuring of fibers. MCS provides the fiber positions within a 5 μm error over the 45 cm focal plane. The information from MCS will be fed into the fiber positioner control system for closed-loop control. MCS will be located at the Cassegrain focus of the Subaru telescope in order to cover the whole focal plane with one 50M pixel Canon CMOS camera. It is a 380mm Schmidt-type telescope which generates a uniform spot size with a 10 μm FWHM across the field for reasonable sampling of the PSF. Carbon fiber tubes are ...

  1. Vision system for driving control using camera mounted on an automatic vehicle. Jiritsu sokosha no camera ni yoru shikaku system

    Energy Technology Data Exchange (ETDEWEB)

    Nishimori, K.; Ishihara, K.; Tokutaka, H.; Kishida, S.; Fujimura, K. (Tottori University, Tottori (Japan). Faculty of Engineering); Okada, M. (Mazda Corp., Hiroshima (Japan)); Hirakawa, S. (Fujitsu Corp., Tokyo (Japan))

    1993-11-30

    The present report explains a vision system in which a CCD camera, mounted on a model vehicle that travels automatically under fuzzy control, is used as the vision sensor. The vision system is composed of an input image processing module, a situation recognition/analysis module that three-dimensionally recovers the road, a route-selecting navigation module that avoids obstacles, and a vehicle control module. With these modules, the CCD camera serves as the vision sensor that allows the model vehicle to travel automatically under fuzzy control. In the present research, traveling is controlled by treating the position and shape of the objective in the image as fuzzy inference variables. Traveling simulations based on this method gave the following findings: even with image information from the vision system alone, the application of fuzzy control facilitates traveling; and if the objective is clearly known, control can be performed even from vague images that do not provide exact location information. 4 refs., 11 figs.
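
    As a toy illustration of fuzzy inference on an image measurement (the membership functions and rule outputs below are invented, not the report's rule base), a steering command could be derived from the horizontal offset of the objective in the image:

        def steer(offset):
            # offset: objective position in the image, normalized to [-1, 1].
            def tri(x, a, b, c):  # triangular membership function
                return max(0.0, min((x - a) / (b - a + 1e-9),
                                    (c - x) / (c - b + 1e-9)))
            rules = [  # (membership degree, steering output)
                (tri(offset, -1.5, -1.0, 0.0), -1.0),  # objective left
                (tri(offset, -1.0,  0.0, 1.0),  0.0),  # centered
                (tri(offset,  0.0,  1.0, 1.5),  1.0),  # objective right
            ]
            w = sum(m for m, _ in rules)
            return sum(m * out for m, out in rules) / w if w > 0 else 0.0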

  2. An experimental study of reconstruction accuracy using a 12-Camera Tomo-PIV system

    NARCIS (Netherlands)

    Lynch, K.; Scarano, F.

    2013-01-01

    A tomographic PIV system composed of a large number of cameras is used to experimentally investigate the relation between image particle density, the number of cameras and the reconstruction quality. The large number of cameras allows the determination of an asymptotic behavior for the object reconstruction...

  3. Tape measuring system using linear encoder and digital camera

    Science.gov (United States)

    Eom, Tae Bong; Jeong, Don Young; Kim, Myung Soon; Kim, Jae Wan; Kim, Jong Ahn

    2013-04-01

    We have designed and constructed a calibration system for line standards such as tapes and rules for secondary calibration laboratories. The system consists of a main body with a linear stage and linear encoder, an optical microscope with a digital camera, and a computer. The base of the system is an aluminum profile 2.9 m in length, 0.09 m in height and 0.18 m in width. The linear stage and the linear encoder are fixed on the aluminum profile. A micro-stage driven by a micrometer is fixed on the carriage of the long linear stage, and the optical microscope with digital camera and a tablet PC are on this stage. The linear encoder counts the moving distance of the linear stage with a resolution of 1 μm, and its count value is transferred to the tablet PC. The image of the scale mark of the tape is captured by the CCD camera of the optical microscope and transferred to the PC through a USB interface. The computer automatically determines the center of the scale mark by an image processing technique and at the same time reads the moving distance of the linear stage. As a result, the computer can calculate the interval between the scale marks of the tape. In order to achieve high accuracy, the linear encoder should be calibrated using a laser interferometer or a rigid steel rule. The calibration data of the linear encoder are stored in the computer, and the computer corrects the readings of the linear encoder. To determine the center of a scale mark, we use three different algorithms. First, the part of the image profile above a specified threshold level is fitted with an even-order polynomial, and the axis of the polynomial is used as the center of the line. Second, the left-side and right-side areas about the center of the image profile are computed so that the two areas are the same. Third, the left and right edges of the image profile are determined at every intensity level of the image, and the center of the graduation is calculated as the average of the centers of the left and right edges.
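
    A sketch of the second center-finding algorithm, the equal-area balance, with sub-pixel interpolation of the cumulative intensity (our implementation; pixel i is taken to cover [i, i+1)):

        import numpy as np

        def mark_center_area_balance(profile):
            # Center of a graduation-mark intensity profile: the position at
            # which the areas to the left and to the right are equal.
            p = np.asarray(profile, dtype=float)
            csum = np.cumsum(p)
            half = 0.5 * csum[-1]
            i = int(np.searchsorted(csum, half))   # half-area falls in pixel i
            left = csum[i - 1] if i > 0 else 0.0   # area left of pixel i
            return i + (half - left) / (csum[i] - left + 1e-12)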

  4. A Walkthrough Remote View System with an Omnidirectional Camera

    Directory of Open Access Journals (Sweden)

    Tatsuhiro Yonekura

    2012-10-01

    Full Text Available A remote view system is a system for the delivery and presentation of information such as video and audio to users in remote locations. Methods for the construction of virtual environments based on video information have been applied to remote view technology. When a system that supports remote viewing is applied to situations such as viewing scenery or engaging in collaborative work, it is useful to provide users with a function that enables them to walk through a remote location environment with a sense of immersion. Although walkthroughs are possible in remote view systems that use pre-prepared images to construct virtual environments, these systems are lacking in terms of real-time performance. In this study, we built a virtual walkthrough environment and created a system that allows multiple users to view images sent in real time from an omnidirectional camera installed in a remote location. In this system, when presenting an object in the environment on a real-world image, the information of the object can be shown in a window separate from the walkthrough environment if necessary. By using an improved map projection method, the system produces less image distortion than the conventional projection method. We have confirmed the walkthrough capabilities of this system, evaluated the performance of the improved map projection method, and subjected it to user trials.

  5. Galvanometer control system design of aerial camera motion compensation

    Science.gov (United States)

    Qiao, Mingrui; Cao, Jianzhong; Wang, Huawei; Guo, Yunzeng; Hu, Changchang; Tang, Hong; Niu, Yuefeng

    2015-10-01

    Aerial cameras suffer from image motion during flight. Image motion seriously degrades image quality, blurring edges and causing loss of gray-scale detail. In applications where high quality and high precision are required, image motion compensation (IMC) should therefore be adopted. This paper presents the design of a galvanometer control system for IMC. A voice coil motor serves as the actuator; it has a simple structure, fast dynamic response and high positioning accuracy. Double-loop feedback is used: a PI algorithm with Hall-sensor feedback for the current loop, and a fuzzy-PID algorithm with optical-encoder feedback for the speed loop. Compared with a conventional PID controller, the simulation results show that the control system has fast response and high control accuracy.
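
    A minimal sketch of one sample of the PI current loop described above (gains, sample time and output limit are placeholders; anti-windup and the fuzzy-PID speed loop are omitted):

        def pi_step(setpoint, measured, integral, kp=0.8, ki=50.0,
                    dt=1e-4, limit=1.0):
            # One sample of a PI controller with output clamping; returns
            # the command and the updated integral state.
            err = setpoint - measured
            integral += err * dt
            out = kp * err + ki * integral
            out = max(-limit, min(limit, out))
            return out, integral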

  6. Metrology camera system of prime focus spectrograph for Subaru telescope

    Science.gov (United States)

    Wang, Shiang-Yu; Chou, Richard C. Y.; Huang, Pin-Jie; Ling, Hung-Hsu; Karr, Jennifer; Chang, Yin-Chang; Hu, Yen-Sang; Hsu, Shu-Fu; Chen, Hsin-Yo; Gunn, James E.; Reiley, Dan J.; Tamura, Naoyuki; Takato, Naruhisa; Shimono, Atsushi

    2016-08-01

    The Prime Focus Spectrograph (PFS) is a new optical/near-infrared multi-fiber spectrograph designed for the prime focus of the 8.2m Subaru telescope. PFS will cover a 1.3 degree diameter field with 2394 fibers to complement the imaging capabilities of Hyper Suprime-Cam. To retain high throughput, the final positioning accuracy between the fibers and observing targets of PFS is required to be less than 10 microns. The metrology camera system (MCS) serves as the optical encoder of the fiber motors for the configuring of fibers. MCS provides the fiber positions within a 5 micron error over the 45 cm focal plane. The information from MCS will be fed into the fiber positioner control system for closed-loop control. MCS will be located at the Cassegrain focus of the Subaru telescope in order to cover the whole focal plane with one 50M pixel Canon CMOS camera. It is a 380mm Schmidt-type telescope which generates a uniform spot size with a 10 micron FWHM across the field for reasonable sampling of the point spread function. Carbon fiber tubes are used to provide a stable structure over the operating conditions without focus adjustments. The CMOS sensor can be read in 0.8s to reduce the overhead for the fiber configuration. The positions of all fibers can be obtained within 0.5s after the readout of the frame. This enables the overall fiber configuration to take less than 2 minutes. MCS will be installed inside a standard Subaru Cassegrain Box. All components that generate heat are located inside a glycol-cooled cabinet to reduce possible image motion due to heat. The optics and camera for MCS have been delivered and tested. The mechanical parts and supporting structure are ready as of spring 2016. The integration of MCS will start in the summer of 2016. In this report, the performance of the MCS components, the alignment and testing procedure, as well as the status of the PFS MCS are presented.

  7. Practical assessment of veiling glare in camera lens system

    Directory of Open Access Journals (Sweden)

    Ivana Tomić

    2014-12-01

    Full Text Available Veiling glare can be defined as unwanted or stray light in an optical system caused by internal reflections between elements of the camera lens. It leads to image fogging and degradation of both image density and contrast, diminishing overall image quality. Each lens is susceptible to veiling glare to some extent - sometimes it is negligible, but in most cases it leads to visible defects in an image. Unlike other flaws and errors, lens flare is not easy to correct. Hence, it is highly recommended to prevent it during the capture phase, if possible. For some applications, it can also be useful to estimate the susceptibility to lens glare, i.e., the degree of glare in the lens system. A few methods are commonly used for this type of testing. Some of the methods are hard to implement and often do not lead to consistent results. In this paper, we assessed one relatively easy method for the practical evaluation of veiling glare. The method contains three steps: creating an appropriate scene, capturing the target image and analyzing it. In order to evaluate its applicability, we tested four lenses for the Nikon D700 digital camera. The lenses used had fixed focal lengths of 35 and 85 mm and differed in the coatings of their elements. Furthermore, we evaluated the influence of aperture on the veiling glare value. It was shown that the presented method is not applicable for testing lenses with short focal lengths, and that the new generation of lenses, equipped with nano-crystal coatings, is less susceptible to veiling glare. Aperture did not affect the veiling glare value significantly.

  8. Bundle Adjustment for Multi-Camera Systems with Points at Infinity

    Science.gov (United States)

    Schneider, J.; Schindler, F.; Läbe, T.; Förstner, W.

    2012-07-01

    We present a novel approach for a rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or - like omnidirectional cameras - to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally, multi-camera systems gain in importance for the acquisition of complex 3D structures. For stabilizing camera orientations - especially rotations - one should generally use points at the horizon over long periods of time within the bundle adjustment, which classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with a single viewpoint, such as fisheye cameras, and scene points which are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug 3 from Point Grey.
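
    The key device is spherical rather than Euclidean normalization of homogeneous coordinates, which keeps points at infinity representable. A short illustration:

        import numpy as np

        def spherical_normalize(x):
            # Normalize a homogeneous vector to the unit sphere instead of
            # dividing by its last coordinate.
            x = np.asarray(x, dtype=float)
            return x / np.linalg.norm(x)

        near = spherical_normalize([10.0, 2.0, 1.0])     # ordinary scene point
        horizon = spherical_normalize([10.0, 2.0, 0.0])  # point at infinity
        # Euclidean normalization ([x/w, y/w, 1]) would fail for the second.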

  9. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment contributes to the potential field that is used to determine the position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves on real-time motion planning of the camera. Moreover, the recasting of camera constraints into potential fields is visually more accessible to game designers and has the potential to be implemented as a plug-in to 3D level design and editing tools currently available with games.
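
    A toy version of the APF idea: the camera position descends the gradient of an attractive potential toward the desired viewpoint plus repulsive potentials around scene geometry (all weights and thresholds here are arbitrary, not CamOn's):

        import numpy as np

        def apf_step(cam, target, obstacles, step=0.05):
            # One gradient step: attraction to the target viewpoint plus
            # short-range repulsion from each obstacle point.
            force = target - cam
            for ob in obstacles:
                d = cam - ob
                dist = np.linalg.norm(d) + 1e-9
                if dist < 2.0:
                    force += d / dist ** 3
            return cam + step * force

        cam = np.array([0.0, 0.0, 5.0])
        cam = apf_step(cam, target=np.array([3.0, 1.0, 2.0]),
                       obstacles=[np.array([1.5, 0.5, 3.0])])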

  10. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration; compared with MATLAB, it offers higher precision, does not need manual intervention, and can be widely used in various computer vision systems.
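
    A minimal sketch of the kind of OpenCV calibration pipeline the paper describes, using the standard chessboard functions (the board size and file names are placeholders):

        import cv2
        import numpy as np

        pattern = (9, 6)  # inner corners per chessboard row and column
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points, size = [], [], None
        for fname in ["view_%02d.png" % i for i in range(10)]:
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            if gray is None:
                continue
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)
                size = gray.shape[::-1]

        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, size, None, None)
        print("RMS reprojection error:", rms)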

  11. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras, and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculation of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 to 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
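
    A sketch of the synthetic-magnitude idea: the reference-star spectrum is integrated against the camera bandpass, photon-weighted, with an assumed zero point (NumPy; the names and weighting convention are ours):

        import numpy as np

        def synthetic_mag(wl, flux, band_wl, band_resp, zero_point=0.0):
            # Response-weighted mean flux of the spectrum through the band
            # (band_wl must be increasing; wl in the same units).
            resp = np.interp(wl, band_wl, band_resp, left=0.0, right=0.0)
            f = np.trapz(flux * resp * wl, wl) / np.trapz(resp * wl, wl)
            return -2.5 * np.log10(f) + zero_point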

  12. A 90GHz Bolometer Camera Detector System for the Green Bank Telescope

    Science.gov (United States)

    Benford, Dominic J.; Allen, Christine A.; Buchanan, Ernest; Chen, Tina C.; Chervenak, James A.; Devlin, Mark J.; Dicker, Simon R.; Forgione, Joshua B.

    2004-01-01

    We describe a close-packed, two-dimensional imaging detector system for operation at 90 GHz (3.3 mm) for the 100m Green Bank Telescope (GBT). This system will provide high-sensitivity (less than 1 mJy in 1 s), rapid imaging (15'x15' to 150 μJy in 1 hr) at the world's largest steerable aperture. The heart of this camera is an 8x8 close-packed, Nyquist-sampled array of superconducting transition edge sensor (TES) bolometers. We have designed and are producing a functional superconducting bolometer array system using a monolithic planar architecture and high-speed multiplexed readout electronics. With an NEP of approximately 2 x 10^-17 W/√Hz, the TES bolometers will provide fast, linear, sensitive response for high-performance imaging. The detectors are read out by an 8x8 time-domain SQUID multiplexer. A digital/analog electronics system has been designed to enable readout by SQUID multiplexers. First light for this instrument on the GBT is expected within a year.

  13. The readout system for the ArTeMis camera

    Science.gov (United States)

    Doumayrou, E.; Lortholary, M.; Dumaye, L.; Hamon, G.

    2014-07-01

    During ArTeMiS observations at the APEX telescope (Chajnantor, Chile), 5760 bolometric pixels from 20 arrays at 300 mK, corresponding to 3 submillimeter focal planes at 450 μm, 350 μm and 200 μm, have to be read out simultaneously at 40 Hz. The readout system, made of electronics and software, is the full chain from the cryostat to the telescope. The readout electronics consists of cryogenic buffers at 4 K (NABU), based on CMOS technology, and of warm electronic acquisition systems called BOLERO. The bolometric signal given by each pixel has to be amplified, sampled, converted, time-stamped and formatted into data packets by the BOLERO electronics. The time-stamping is obtained by decoding an IRIG-B signal provided by APEX and is key to ensuring the synchronization of the data with the telescope. Specifically developed for ArTeMiS, BOLERO is an assembly of analogue and digital FPGA boards connected directly on top of the cryostat. Two detector arrays (18×16 pixels), one NABU and one BOLERO interconnected by ribbon cables constitute the unit of the electronic architecture of ArTeMiS. In total, the 20 detectors for the three focal planes are read by 10 BOLEROs. The software runs on a Linux operating system, on 2 back-end computers (called BEAR), which are small and robust PCs with solid-state disks. They gather the 10 BOLERO data fluxes and reconstruct the focal-plane images. When the telescope scans the sky, the acquisitions are triggered by a specific network protocol. This interface with APEX makes it possible to synchronize the acquisition with the observations on sky: the time-stamped data packets are sent during the scans to the APEX software that builds the observation FITS files. A graphical user interface enables the setting of the camera and the real-time display of the focal-plane images, which is essential in laboratory and commissioning phases. The software is a set of C++, Labview and Python, the qualities of which are respectively used
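
    For a sense of scale, a back-of-envelope throughput estimate follows; the bytes-per-sample figure is an assumption, since the abstract does not give the packet format:

        # Rough readout throughput for the ArTeMiS chain.
        pixels = 5760          # bolometric pixels across the 20 arrays
        rate_hz = 40           # simultaneous readout rate
        bytes_per_sample = 4   # assumed ADC word plus time-stamp overhead

        samples_per_s = pixels * rate_hz                   # 230,400 samples/s
        mb_per_s = samples_per_s * bytes_per_sample / 1e6  # ~0.9 MB/s before headers
        print(samples_per_s, mb_per_s)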

  14. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    Science.gov (United States)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

    An eyegaze interface is one of the key technologies for input devices in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most conventional eyegaze tracking algorithms require specific light sources, equipment and devices. In this study, a simple eyegaze detection algorithm is proposed that uses a single monocular video camera. The proposed algorithm works under the condition of a fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size, based on physiological eyeball models. However, we succeeded in calibrating the physiological movement of the eyeball center depending on the gazing direction by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame by using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We apply our system to an eyegaze communication interface and verify its performance through key-typing experiments with a visual keyboard on a display.
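
    A minimal sketch of the two stages (Hough-based iris extraction, then an angle from the iris-center displacement) is given below; the radii, file names and pixel-scale eyeball radius are assumptions:

        import cv2
        import numpy as np

        def iris_center(gray):
            """Locate the iris as the strongest circle in an eye-region image."""
            blurred = cv2.medianBlur(gray, 5)
            circles = cv2.HoughCircles(
                blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                param1=100, param2=30, minRadius=8, maxRadius=40)  # assumed radii (px)
            if circles is None:
                return None
            x, y, _ = circles[0, 0]
            return np.array([x, y])

        # Gaze angle from the iris displacement between a calibration (reference)
        # frame and the current frame, with an assumed eyeball radius in pixels.
        R_px = 120.0  # hypothetical eyeball radius at the working distance
        c_ref = iris_center(cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE))
        c_cur = iris_center(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
        d = np.linalg.norm(c_cur - c_ref)
        gaze_deg = np.degrees(np.arcsin(min(d / R_px, 1.0)))
        print(gaze_deg)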

  15. Detection of Toxoplasma gondii in the reproductive system of male goats

    Directory of Open Access Journals (Sweden)

    Luís Fernando Santana

    2010-09-01

    Full Text Available Male goats of mating age serologically negative for Toxoplasma gondii were divided into three groups: GI - controls (placebo) (n = 2); GII - infected with 1 × 10^6 tachyzoites (RH strain) (n = 2); and GIII - infected with 2 × 10^5 oocysts (P strain) (n = 2). Clinical, hematology, parasite and serology tests, studies of parasites in the semen through bioassay and polymerase chain reaction (PCR), and studies in reproductive organs (bioassay) were performed to assess toxoplasma infection. Serological titers peaked at 4096 in the two groups infected with the protozoan. The bioassays allowed an early detection of protozoa in semen samples of tachyzoite-inoculated animals. T. gondii DNA was identified through PCR in the semen on five (days 5, 7, 28, 49, and 70) and two (both at day 56) different days post-inoculation in GII and GIII animals, respectively. It was also possible to detect T. gondii DNA in reproductive organs (prostate pool, testicles, seminal vesicle and epididymis) of goats inoculated with either tachyzoites or oocysts. The present study suggests the possibility of venereal transmission of T. gondii among goats, which should be further assessed.

  16. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
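
    A minimal time-lapse sketch in the spirit of this design is shown below; the picamera API, cadence, resolution and output path are assumptions, and the deployed system stamps time from its GPS module rather than plain system time:

        import time
        from datetime import datetime, timezone

        from picamera import PiCamera  # assumed library for the Pi camera module

        INTERVAL_S = 300  # hypothetical 5-minute cadence

        camera = PiCamera(resolution=(2592, 1944))  # 5-megapixel sensor
        time.sleep(2)  # let gain and white balance settle

        while True:
            # Filename carries a UTC timestamp; the real system disciplines
            # its clock with the on-board GPS module.
            stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
            camera.capture(f"/data/images/{stamp}.jpg")
            time.sleep(INTERVAL_S)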

  17. A camera space control system for an automated forklift

    Energy Technology Data Exchange (ETDEWEB)

    Miller, R.K.; Stewart, D.G.; Brockman, W.H. (Iowa State Univ., Ames, IA (United States)); Skaar, S.B. (Univ. of Notre Dame, IN (United States). Dept. of Aerospace and Mechanical Engineering)

    1994-10-01

    The authors present experimental results on a method of camera space control applied to a mobile cart with an on-board robot, operated as a forklift. The objective is to extend earlier results to the task of precise and robust three-dimensional object placement. The method is illustrated with a box-stacking task. Camera space control does not rely on producing absolute position measurements: all measurements, estimates and control criteria are expressed relative to camera images in units of pixels. The resulting ''camera space'' technique is found to be very robust, i.e., extremely accurate modeling and calibration are not needed in order to achieve a precise result.

  18. Development and implementation of a camera system for faster area reduction

    NARCIS (Netherlands)

    Jong, W. de; Schavemaker, J.G.M.; Breuers, M.G.J.; Baan, J.; Schleijpen, H.M.A.

    2004-01-01

    This paper describes the development and implementation of a low cost camera system that uses polarisation features of visible light for faster area reduction. The camera system will be mounted on a mechanical minefield area reduction asset, namely an AT mine roller of The HALO Trust. The automatic

  19. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    Energy Technology Data Exchange (ETDEWEB)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  20. Metrology Camera System Using Two-Color Interferometry

    Science.gov (United States)

    Dubovitsky, Serge; Liebe, Carl Christian; Peters, Robert; Lay, Oliver

    2007-01-01

    A metrology system that contains no moving parts simultaneously measures the bearings and ranges of multiple reflective targets in its vicinity, enabling determination of the three-dimensional (3D) positions of the targets with submillimeter accuracy. The system combines a direction-measuring metrology camera and an interferometric range-finding subsystem. Because the system is based partly on a prior instrument denoted the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor and because of its 3D capability, the system is denoted the MSTAR3D. Developed for use in measuring the shape (for the purpose of compensating for distortion) of large structures like radar antennas, it can also be used to measure positions of multiple targets in the course of conventional terrestrial surveying. A diagram of the system is shown in the figure. One of the targets is a reference target having a known, constant distance with respect to the system. The system comprises a laser for generating local and target beams at a carrier frequency; a frequency shifting unit to introduce a frequency shift offset between the target and local beams; a pair of high-speed modulators that apply modulation to the carrier frequency in the local and target beams to produce a series of modulation sidebands, the high-speed modulators having modulation frequencies of FL and FM; a target beam launcher that illuminates the targets with the target beam; optics and a multi-pixel photodetector; a local beam launcher that launches the local beam towards the multi-pixel photodetector; a mirror for projecting to the optics a portion of the target beam reflected from the targets, the optics being configured to focus the portion of the target beam at the multi-pixel photodetector; and a signal-processing unit connected to the photodetector. The portion of the target beam reflected from the targets produces spots on the multi-pixel photodetector corresponding to the targets, respectively, and the signal

  1. Multi-Kinect v2 Camera Based Monitoring System for Radiotherapy Patient Safety.

    Science.gov (United States)

    Santhanam, Anand P; Min, Yugang; Kupelian, Patrick; Low, Daniel

    2016-01-01

    3D Kinect camera systems are essential for real-time imaging of the 3D treatment space, which consists of both the patient anatomy and the treatment equipment setup. In this paper, we present the technical details of a 3D treatment room monitoring system that employs a scalable number of calibrated and co-registered Kinect v2 cameras. The monitoring system tracks radiation gantry and treatment couch positions, and tracks the patient and immobilization accessories. The number and positions of the cameras were selected to avoid line-of-sight issues and to adequately cover the treatment setup. The cameras were calibrated with a calibration error of 0.1 mm. Our tracking system evaluation shows that both gantry and patient motion could be acquired at a rate of 30 frames per second. The transformations between the cameras yielded a 3D treatment space accuracy of <2 mm error within 500 mm around the isocenter in a radiotherapy setup.
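
    A minimal sketch of fusing per-camera measurements into one treatment-room frame via the calibrated, co-registered transforms is shown below; the matrices and camera names are placeholders:

        import numpy as np

        # Calibrated 4x4 camera-to-room transforms (placeholder values).
        T_cam_to_room = {
            "kinect0": np.eye(4),
            "kinect1": np.array([[0.0, -1.0, 0.0, 2.5],
                                 [1.0,  0.0, 0.0, 0.0],
                                 [0.0,  0.0, 1.0, 0.0],
                                 [0.0,  0.0, 0.0, 1.0]]),
        }

        def to_room(points_xyz, camera):
            """Map an Nx3 array from the camera frame to the room frame."""
            homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
            return (homo @ T_cam_to_room[camera].T)[:, :3]

        print(to_room(np.array([[0.0, 0.0, 1.0]]), "kinect1"))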

  2. BroCam: a versatile PC-based CCD camera system

    Science.gov (United States)

    Klougart, Jens

    1995-03-01

    At the Copenhagen University, we have developed a compact CCD camera system for single and mosaic CCDs. The camera control and data acquisition are performed by a 486-type PC via a frame buffer located in one ISA-bus slot, communicating with the camera electronics over two optical fibers. The PC can run special-purpose DOS programs as well as operate in a more general mode under Linux, a Unix-like operating system. In the latter mode, standard software packages such as SAOimage and Gnuplot are utilized extensively, thereby reducing the amount of camera-specific software. At the same time, the observer feels at ease with the system in an IRAF-like environment. Finally, the Linux version enables the camera to be remotely controlled.

  3. TOWARDS THE INFLUENCE OF A CAR WINDSHIELD ON DEPTH CALCULATION WITH A STEREO CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    A. Hanel

    2016-06-01

    Full Text Available Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important to improve road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target. In a standard bundle adjustment procedure, the relative orientation of the cameras is estimated. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths are derived from the relative orientation in both cases and compared. Distance values are calculated and analyzed. It can be shown that the difference between the base-length values in the two cases is highly significant. Resulting effects on the calculated distance of up to half a meter occur.
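
    The sensitivity behind this result follows from the stereo depth relation Z = f·B/d: a relative error in the base length maps one-to-one into depth. A small illustration with assumed numbers (not the paper's calibration values):

        # Depth from disparity: Z = f * B / d.
        f_px = 1200.0   # focal length in pixels (assumed)
        B = 0.30        # calibrated base length in metres (assumed)
        d_px = 9.0      # measured disparity in pixels (assumed)

        Z = f_px * B / d_px          # 40 m to the object

        # A windshield-induced base-length change dB shifts depth as dZ/Z = dB/B.
        dB = 0.00375                 # hypothetical 1.25% base-length change
        dZ = Z * dB / B              # 0.5 m at 40 m, the order of effect reported
        print(Z, dZ)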

  4. A universal and flexible theodolite-camera system for making accurate measurements over large volumes

    Science.gov (United States)

    Zhang, Xiaohu; Zhu, Zhaokun; Yuan, Yun; Li, Lichun; Sun, Xiangyi; Yu, Qifeng; Ou, Jianliang

    2012-11-01

    Typically, optical measurement systems can achieve high accuracy over a limited volume, or cover a large volume with low accuracy. In this paper, we propose a universal way of integrating a camera with a theodolite to construct a theodolite-camera (TC) measurement system that can make measurements over a large volume with high accuracy. The TC inherits the advantages of high flexibility and precision from theodolite and camera, but it avoids the need to perform elaborate adjustments on the camera and theodolite. The TC provides a universal and flexible approach to the camera-on-theodolite system. We describe three types of TC based separately on: (i) a total station; (ii) a theodolite; and (iii) a general rotation frame. We also propose three corresponding calibration methods for the different TCs. Experiments have been conducted to verify the measuring accuracy of each of the three types of TC.

  5. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Science.gov (United States)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  6. A low-cost dual-camera imaging system for aerial applicators

    Science.gov (United States)

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  7. Single-port-access surgery with a novel magnet camera system.

    Science.gov (United States)

    Terry, Benjamin S; Mills, Zachary C; Schoen, Jonathan A; Rentschler, Mark E

    2012-04-01

    In this paper, we designed, built, and tested a novel single-port access laparoscopic surgery (SPA) specific camera system. This device (magnet camera) integrates a light source and video camera into a small, inexpensive, portable package that does not compete for space with the surgical tools during SPA. The device is inserted through a 26-mm incision in the umbilicus, followed by the SPA port, which is used to maintain an insufflation seal and support the insertion of additional tools. The camera, now in vivo, remains separate from the SPA port, thereby removing the need for a dedicated laparoscope, and, thus, allowing for an overall reduction in SPA port size or the use of a third tool through the insertion port regularly reserved for the traditional laparoscope. The SPA camera is mounted to the abdominal ceiling using one of the two methods: fixation to the SPA port through the use of a rigid ring and cantilever bar, or by an external magnetic handle. The purpose of the magnet camera system is to improve SPA by: 1) eliminating the laparoscope SPA channel; 2) increasing the field of view through enhanced camera system mobility; and 3) reducing interference between the camera system and the surgical tools at the port, both in vivo and ex vivo.

  8. DC drive system for cine/pulse cameras

    Science.gov (United States)

    Gerlach, R. H.; Sharpsteen, J. T.; Solheim, C. D.; Stoap, L. J.

    1977-01-01

    Camera-drive functions are separated mechanically into two groups which are driven by two separate dc brushless motors. First motor, a 90 deg stepper, drives rotating shutter; second electronically commutated motor drives claw and film transport. Shutter is made of one piece but has two openings for slow and fast exposures.

  9. Incremental Real-Time Bundle Adjustment for Multi-Camera Systems with Points at Infinity

    Science.gov (United States)

    Schneider, J.; Läbe, T.; Förstner, W.

    2013-08-01

    This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras by taking the rigid transformation between the cameras into account, (2) omnidirectional cameras as it can handle arbitrary bundles of rays and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment w.r.t. time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.
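
    A minimal incremental pose-graph sketch using iSAM2 through the GTSAM Python bindings is given below; it omits the paper's multi-camera rigs, ray bundles and points at infinity, and all poses and noise values are invented:

        import numpy as np
        import gtsam

        isam = gtsam.ISAM2()
        graph = gtsam.NonlinearFactorGraph()
        values = gtsam.Values()

        prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.1))
        odo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.2))

        # Anchor the first keyframe pose with a prior factor.
        x0 = gtsam.symbol('x', 0)
        graph.add(gtsam.PriorFactorPose3(x0, gtsam.Pose3(), prior_noise))
        values.insert(x0, gtsam.Pose3())
        isam.update(graph, values)

        # Each new keyframe set adds factors and variables, then re-solves
        # incrementally instead of running a periodic batch adjustment.
        for k in range(1, 5):
            graph = gtsam.NonlinearFactorGraph()
            values = gtsam.Values()
            step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
            graph.add(gtsam.BetweenFactorPose3(
                gtsam.symbol('x', k - 1), gtsam.symbol('x', k), step, odo_noise))
            values.insert(gtsam.symbol('x', k),
                          gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(float(k), 0.0, 0.0)))
            isam.update(graph, values)

        print(isam.calculateEstimate().atPose3(gtsam.symbol('x', 4)))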

  10. Decision Support System to Choose Digital Single Lens Camera with Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Tri Pina Putri

    2016-11-01

    Full Text Available One of the technologies that evolves today is the Digital Single Lens Reflex (DSLR) camera. The number of products makes it difficult for users to choose the appropriate camera based on their criteria. Users may utilize several aids in choosing the intended camera, such as magazines, the internet, and other media. This paper discusses a web-based decision support system for choosing cameras by using the SAW (Simple Additive Weighting) method in order to make the decision process more effective and efficient. This system is expected to give recommendations about the camera which is appropriate to the user's needs and criteria based on the cost, the resolution, the features, the ISO, and the sensor. The system was implemented using PHP and MySQL. Based on the result of a questionnaire distributed to 20 respondents, 60% of respondents agree that this decision support system can help users to choose the appropriate DSLR camera in accordance with the user's needs, 60% of respondents agree that this decision support system makes choosing a DSLR camera more effective, and 75% of respondents agree that this system is more efficient. In addition, 60.55% of respondents agree that this system has met the 5 Es Usability Framework.
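
    The SAW scoring itself is compact; a sketch with invented cameras, weights and criteria (cost treated as a cost criterion, the rest as benefit criteria) follows:

        import numpy as np

        # Rows: candidate cameras; columns: cost, resolution (MP),
        # feature score, max ISO, sensor score (all values invented).
        X = np.array([
            [550.0, 24.2, 7.0, 25600.0, 8.0],
            [700.0, 26.1, 8.0, 51200.0, 7.0],
            [480.0, 18.0, 6.0, 12800.0, 6.0],
        ])
        weights = np.array([0.30, 0.25, 0.15, 0.15, 0.15])
        is_cost = np.array([True, False, False, False, False])

        # Benefit criteria: r = x / max(x); cost criteria: r = min(x) / x.
        R = np.where(is_cost, X.min(axis=0) / X, X / X.max(axis=0))
        scores = R @ weights
        print("best camera index:", int(scores.argmax()), scores.round(3))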

  12. Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230

    Energy Technology Data Exchange (ETDEWEB)

    Pardini, A.F., Westinghouse Hanford

    1996-07-16

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, is a stereoscopic camera system that will be used as an end effector on the LDUA to perform surveillance and inspection activities within Hanford waste tanks. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feedthrough for all electrical and pneumatic utilities needed by the end effector to operate.

  13. A Video Camera Road Sign System of the Early Warning from Collision with the Wild Animals

    Directory of Open Access Journals (Sweden)

    Matuska Slavomir

    2016-05-01

    Full Text Available This paper proposes a camera-based road-sign system for early warning that can help avoid vehicle collisions with wild animals. The system consists of camera modules placed along a chosen route and intelligent road signs. Each camera module consists of a camera device and a computing unit. The video stream is captured from the video camera by the computing unit, and object detection algorithms are deployed. Machine-learning algorithms are then used to classify the moving objects. If a moving object is classified as an animal that could endanger vehicle safety, a warning is displayed on the intelligent road signs.
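
    A minimal detection front end in this spirit is sketched below; the stream source is a placeholder, the classifier is stubbed, and the thresholds are assumptions:

        import cv2
        import numpy as np

        def is_dangerous_animal(patch):
            return False  # stub for the trained machine-learning classifier

        cap = cv2.VideoCapture("camera_module.mp4")  # assumed stream source
        bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
        kernel = np.ones((3, 3), np.uint8)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = bg.apply(frame)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                if w * h > 2000 and is_dangerous_animal(frame[y:y + h, x:x + w]):
                    print("warn road sign")  # stands in for the sign protocol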

  14. Sensory evaluation of meat from male and female Texel × Corriedale lambs finished in different systems

    Directory of Open Access Journals (Sweden)

    Marlice Salete Bonacina

    2011-08-01

    Full Text Available The objective of this work was to evaluate the effect of sex and of three finishing systems on the sensory traits of Texel × Corriedale lamb meat and on consumer acceptance of the meat. Ninety animals were used, 45 non-castrated male lambs and 45 females, kept on pasture until weaning (70 days) and finished in three production systems: pasture; pasture with the dam; and pasture with supplementation (soybean hulls at a level corresponding to 1% of the lambs' live weight). After slaughter, the carcasses were stored in a forced-air cold chamber at 1°C for 24 hours for removal of the longissimus dorsi muscle, which was frozen at -18°C for sensory analysis. Sensory characterization of the meat was performed by quantitative descriptive analysis: 22 descriptive terms were developed by a panel of selected judges, who also generated the definition of each term and the reference samples. An acceptance test was performed using a nine-point hybrid hedonic scale. Meat from males and from animals finished on pasture with the dam was characterized by milder residual odor and flavor of sheep meat and fat, lower tenderness and greater chewiness compared with that of females and of animals finished in the other systems. Meat from lambs finished in the pasture and pasture-with-supplementation systems is similar in its sensory aspects. The meat is equally well accepted by consumers, regardless of sex and finishing system, showing good acceptance.

  15. Detecting method of subjects' 3D positions and experimental advanced camera control system

    Science.gov (United States)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcast cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcast cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements in studios and during sports broadcasts. We have now developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates; (2) the system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
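
    Once the subject is detected in both sensor cameras, its 3D position follows from standard two-view triangulation; a sketch with placeholder calibration values (not the paper's) is given below:

        import cv2
        import numpy as np

        # Two calibrated sensor cameras; intrinsics and baseline are placeholders.
        K = np.array([[800.0, 0.0, 640.0],
                      [0.0, 800.0, 360.0],
                      [0.0, 0.0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # 0.5 m baseline

        # Image coordinates of the colour-detected subject in each camera (2xN).
        pts1 = np.array([[700.0], [360.0]])
        pts2 = np.array([[660.0], [360.0]])

        Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4xN
        X = (Xh[:3] / Xh[3]).ravel()
        print("subject 3D position (m):", X)            # ~10 m from camera 1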

  16. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the object's luminance more faithfully. This compensates for the limitation of stitched images that otherwise look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
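
    A minimal per-module correction in this spirit is sketched below; the response model and frames are stand-ins for the calibration products described above:

        import numpy as np

        def radiometric_correct(raw, dark, flat, inverse_response):
            """Correct one sensor module's frame before blending/stitching."""
            linear = inverse_response(raw.astype(np.float64)) - dark
            gain = (flat - dark) / np.mean(flat - dark)  # normalized vignetting map
            return linear / np.clip(gain, 1e-6, None)

        # Example with an assumed power-law (gamma-like) response inversion.
        inv_gamma = lambda x: 255.0 * (x / 255.0) ** 2.2
        frame = radiometric_correct(np.full((4, 4), 128.0), np.zeros((4, 4)),
                                    np.full((4, 4), 200.0), inv_gamma)
        print(frame.mean())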

  17. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System.

    Science.gov (United States)

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-04-11

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the object's luminance more faithfully. This compensates for the limitation of stitched images that otherwise look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.

  18. Research of aerial camera focal pane micro-displacement measurement system based on Michelson interferometer

    Science.gov (United States)

    Wang, Shu-juan; Zhao, Yu-liang; Li, Shu-jun

    2014-09-01

    The correct positioning of the aerial camera focal plane is critical to imaging quality. In order to measure the aerial camera focal-plane displacement caused in the process of maintenance, a new micro-displacement measurement system based on the Michelson interferometer has been designed, which uses the phase-modulation principle and interference effects to measure the focal-plane micro-displacement. The system takes a He-Ne laser as the light source and uses the Michelson interference mechanism to produce interference fringes; as the aerial camera focal plane moves, the interference fringes change periodically, and the system records the period of this change to obtain the focal-plane displacement. Taking a linear CCD and its driving system as the fringe pick-up tool, and relying on a frequency-conversion and differentiating system, the system determines the moving direction of the focal plane. After data collection, filtering, amplification, threshold comparison and counting, the CCD video signals of the interference fringes are sent to the computer and processed automatically, and the focal-plane micro-displacement results are output. As a result, the focal-plane micro-displacement can be measured automatically by this system. Using a linear CCD as the fringe pick-up tool greatly improves the counting accuracy and almost eliminates manual counting error, improving the measurement accuracy of the system. The results of the experiments demonstrate that the focal-plane displacement measurement accuracy is 0.2 nm, while laboratory tests and flights show that the focal-plane positioning is accurate and can satisfy the requirements of aerial camera imaging.
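
    The fringe-to-displacement conversion at the heart of the method is d = N·λ/2 for a Michelson arm; a small sketch follows (the sub-fringe interpolation that would yield the quoted accuracy is omitted):

        # Michelson fringe counting: one full fringe period corresponds to a
        # focal-plane displacement of lambda / 2.
        WAVELENGTH_NM = 632.8  # He-Ne laser

        def displacement_nm(fringe_periods, direction):
            """fringe_periods: count from the linear-CCD signal;
            direction: +1/-1 from the frequency-differentiating stage."""
            return direction * fringe_periods * WAVELENGTH_NM / 2.0

        print(displacement_nm(10, +1))  # 3164 nm of focal-plane travel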

  19. Genotyping of polymorphisms associated with male-sterility systems in onion accessions adapted for cultivation in Brazil

    Directory of Open Access Journals (Sweden)

    Carlos Francisco Ragassi

    2012-09-01

    Full Text Available Commercial-scale production of hybrid onion (Allium cepa) seed has been carried out using two genetic-cytoplasmic male-sterility systems (CMS-S and CMS-T) in association with the normal (male-fertile) cytoplasm. However, the molecular analysis of these cytoplasm types is not yet available for a large number of onion accessions adapted for cultivation in tropical regions. Besides adaptation to Brazilian soil and climate conditions, many of these accessions show tolerance to diseases and are therefore of potential value as parents of hybrids. The present work aimed to identify the cytoplasm types of onion accessions from distinct morpho-agronomic groups of interest for breeding programs in Brazil, using the polymerase chain reaction (PCR) with primers specific to polymorphic regions of the onion mitochondrial genome. Among the 66 accessions sampled, the three main cytoplasm types described for onion (S, N and T) were observed. Cytoplasm S was the most frequent (56%), followed by cytoplasm T (25.8%). In 18.2% of the samples, only the N cytoplasm was found. This characterization can be useful in guiding the choice of genetic materials within breeding programs aimed at developing hybrid cultivars adapted to tropical conditions.

  20. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with a likewise variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  1. Systems and methods for maintaining multiple objects within a camera field-of-view

    Science.gov (United States)

    Gans, Nicholas R.; Dixon, Warren

    2016-03-15

    In one embodiment, a system and method for maintaining objects within a camera field of view include identifying constraints to be enforced, each constraint relating to an attribute of the viewed objects; identifying a priority rank for the constraints such that more important constraints have a higher priority than less important constraints; and determining the set of solutions that satisfy the constraints relative to the order of their priority rank, such that solutions that satisfy lower-ranking constraints are only considered viable if they also satisfy any higher-ranking constraints, each solution providing an indication as to how to control the camera to maintain the objects within the camera field of view.
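
    A sketch of the prioritized filtering logic (names and predicates invented for illustration) follows:

        def viable_solutions(solutions, constraints):
            """Filter candidate camera controls by constraints in priority order.

            constraints: predicates sorted from highest to lowest priority; a
            solution is kept for a lower-ranked constraint only if it already
            satisfies every higher-ranked one.
            """
            remaining = list(solutions)
            for satisfies in constraints:
                filtered = [s for s in remaining if satisfies(s)]
                if not filtered:  # this rank unsatisfiable: keep higher ranks
                    break
                remaining = filtered
            return remaining

        # Example: keep every target in frame first, then prefer small motion.
        in_view = lambda u: u["targets_in_view"]
        small_motion = lambda u: abs(u["pan_rate"]) < 0.2
        candidates = [{"targets_in_view": True, "pan_rate": 0.5},
                      {"targets_in_view": True, "pan_rate": 0.1}]
        print(viable_solutions(candidates, [in_view, small_motion]))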

  2. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  3. Implementation of spatial touch system using time-of-flight camera

    Institute of Scientific and Technical Information of China (English)

    AHN Yang-Keun; PARK Young-Choong; CHOI Kwang-Soon; PARK Woo-Choo; SEO Hae-Moon; JUNG Kwang-Mo

    2009-01-01

    Recently developed depth-sensing video camera technologies based on the time-of-flight principle provide precise per-pixel range data in addition to color video. Such cameras will find application in robotics and in vision-based human-computer interaction scenarios such as games and gesture input systems. Time-of-flight range cameras are becoming more and more available. They promise to make the 3D reconstruction of scenes easier, avoiding the practical issues of 3D imaging techniques based on triangulation or disparity estimation. A spatial touch system that uses a depth-sensing camera to touch spatial objects is presented, details of its implementation are given, and it is discussed how this technology will enable new spatial interactions.

  4. Report on the Radiation Effects Testing of the Infrared and Optical Transition Radiation Camera Systems

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-04-20

    Presented in this report are the results of tests performed at Argonne National Lab in collaboration with Los Alamos National Lab to assess the reliability of the critical 99Mo production facility beam monitoring diagnostics. The main components of the beam monitoring systems are two cameras that will be exposed to radiation during accelerator operation. The purpose of this test is to assess the reliability of the cameras and related optical components when exposed to operational radiation levels. Both X-ray and neutron radiation could potentially damage camera electronics as well as optical components such as lenses and windows. This report covers results of the testing of component reliability when exposed to X-ray radiation. With the information from this study, we provide recommendations for implementing protective measures for the camera systems in order to minimize the occurrence of radiation-induced failure within a ten-month production run cycle.

  5. Auto-Guiding System for CQUEAN (Camera for QUasars in EArly uNiverse)

    CERN Document Server

    Kim, Eunbin; Jeong, Hyenju; Kim, Jinyoung; Kuehne, John; Kim, Dong Han; Kim, Han Geun; Odons, Peter S; Chang, Seunghyuk; Im, Myungshin; Pak, Soojong

    2011-01-01

    To perform imaging observations of optically red objects such as high-redshift quasars and brown dwarfs, the Center for the Exploration of the Origin of the Universe (CEOU) recently developed an optical CCD camera, the Camera for QUasars in EArly uNiverse (CQUEAN), which is sensitive at 0.7-1.1 um. To enable observations with long exposures, we developed an auto-guiding system for CQUEAN. This system consists of an off-axis mirror, a baffle, a CCD camera, a motor and a differential decelerator. To increase the number of available guiding stars, we designed a rotating mechanism for the off-axis guiding camera. The guiding field can be scanned along the 10 arcmin ring offset from the optical axis of the telescope. Combined with the auto-guiding software of the McDonald Observatory, we confirmed that a stable image can be obtained with an exposure time as long as 1200 seconds.

  6. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    Full Text Available This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files in CompactFlash cards. The second-order transformation was used to align the color and NIR images to achieve subpixel alignment in four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
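
    A minimal band-alignment sketch using a second-order polynomial warp is given below (scikit-image used here as a stand-in; the control points are invented, and the transform is fitted in the inverse direction because skimage warps by inverse mapping):

        import numpy as np
        from skimage import transform

        # Matched control points in the colour and NIR frames (hypothetical).
        color_pts = np.array([[12, 15], [503, 44], [947, 702], [57, 684],
                              [481, 363], [208, 122]], dtype=np.float64)
        nir_pts = np.array([[10, 12], [500, 40], [950, 700], [60, 680],
                            [478, 360], [205, 119]], dtype=np.float64)

        # Fit colour -> NIR so warp() can pull NIR pixels onto the colour grid.
        poly = transform.PolynomialTransform()
        poly.estimate(color_pts, nir_pts, order=2)

        nir = np.random.rand(720, 960)       # stand-in for the NIR band
        aligned = transform.warp(nir, poly)  # NIR resampled into the colour frame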

  7. Radiation damage of the PCO Pixelfly VGA CCD camera of the BES system on KSTAR tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Náfrádi, Gábor, E-mail: nafradi@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Kovácsik, Ákos, E-mail: kovacsik.akos@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Pór, Gábor, E-mail: por@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Lampert, Máté, E-mail: lampert.mate@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Un Nam, Yong, E-mail: yunam@nfri.re.kr [NFRI, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon 305-806 (Korea, Republic of); Zoletnik, Sándor, E-mail: zoletnik.sandor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary)

    2015-01-11

    A PCO Pixelfly VGA CCD camera, which is part of the Beam Emission Spectroscopy (BES) diagnostic system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device and is used for spatial calibrations, suffered serious radiation damage: white pixel defects were generated in it. The main goal of this work was to identify the origin of the radiation damage and to give solutions to avoid it. A Monte Carlo N-Particle eXtended (MCNPX) model was built using the Monte Carlo Modeling Interface Program (MCAM), and calculations were carried out to predict the neutron and gamma-ray fields at the camera position. Besides the MCNPX calculations, pure gamma-ray irradiations of the CCD camera were carried out in the Training Reactor of BME. Before, during and after the irradiations, numerous frames were taken with the camera with 5 s long exposure times. The evaluation of these frames showed that at the applied high gamma-ray dose (1.7 Gy) and dose-rate levels (up to 2 Gy/h) the number of white pixels did not increase. We found that the origin of the white pixel generation was the neutron-induced thermal hopping of electrons, which means that in the future only neutron shielding is necessary around the CCD camera. Another solution could be to replace the CCD camera with a more radiation-tolerant one, for example a suitable CMOS camera, or to apply both solutions simultaneously.

  8. Radiation damage of the PCO Pixelfly VGA CCD camera of the BES system on KSTAR tokamak

    Science.gov (United States)

    Náfrádi, Gábor; Kovácsik, Ákos; Pór, Gábor; Lampert, Máté; Un Nam, Yong; Zoletnik, Sándor

    2015-01-01

    A PCO Pixelfly VGA CCD camera, which is part of the Beam Emission Spectroscopy (BES) diagnostic system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device and is used for spatial calibrations, suffered serious radiation damage: white pixel defects were generated in it. The main goal of this work was to identify the origin of the radiation damage and to give solutions to avoid it. A Monte Carlo N-Particle eXtended (MCNPX) model was built using the Monte Carlo Modeling Interface Program (MCAM), and calculations were carried out to predict the neutron and gamma-ray fields at the camera position. Besides the MCNPX calculations, pure gamma-ray irradiations of the CCD camera were carried out in the Training Reactor of BME. Before, during and after the irradiations, numerous frames were taken with the camera with 5 s long exposure times. The evaluation of these frames showed that at the applied high gamma-ray dose (1.7 Gy) and dose-rate levels (up to 2 Gy/h) the number of white pixels did not increase. We found that the origin of the white pixel generation was the neutron-induced thermal hopping of electrons, which means that in the future only neutron shielding is necessary around the CCD camera. Another solution could be to replace the CCD camera with a more radiation-tolerant one, for example a suitable CMOS camera, or to apply both solutions simultaneously.

  9. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  10. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  11. ROSA: a high cadence, synchronized multi-camera solar imaging system

    CERN Document Server

    Jess, D B; Christian, D J; Keenan, F P; Ryans, R S I; Crockett, P J

    2009-01-01

    Rapid Oscillations in the Solar Atmosphere (ROSA) is a synchronized, six-camera, high-cadence solar imaging instrument developed by Queen's University Belfast. The system is available on the Dunn Solar Telescope at the National Solar Observatory in Sunspot, New Mexico, USA as a common-user instrument. Consisting of six 1k x 1k Peltier-cooled frame-transfer CCD cameras with very low noise (0.02-15 e/s/pixel), each ROSA camera is capable of full-chip readout speeds in excess of 30 Hz, or 200 Hz when the CCD is windowed. Combining multiple cameras and fast readout rates, ROSA will accumulate approximately 12 TB of data per 8 hours of observing. Following successful commissioning during August 2008, ROSA will allow multi-wavelength studies of the solar atmosphere at high temporal resolution.
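
    The quoted data volume is easy to sanity-check under an assumed 16-bit digitization:

        cameras = 6
        frame_px = 1024 * 1024  # 1k x 1k CCDs
        bytes_per_px = 2        # assumed 16-bit samples
        rate_hz = 30            # full-chip readout rate

        bytes_per_s = cameras * frame_px * bytes_per_px * rate_hz
        tb_per_8h = bytes_per_s * 8 * 3600 / 1e12
        print(round(tb_per_8h, 1))  # ~10.9 TB, consistent with "approximately 12 TB"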

  12. The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, Gordon B.; Langton, Brian J.; /SLAC; Little, William A.; /MMR-Technologies, Mountain View, CA; Powers, Jacob R; Schindler, Rafe H.; /SLAC; Spektor, Sam; /MMR-Technologies, Mountain View, CA

    2014-05-28

    The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 Giga pixels) and its close coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies are considered for this telescope/camera environment. MMR-Technology’s Mixed Refrigerant technology was chosen. A collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants is described. Both the advantages and problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported. Current and future stages of the development program are described. (auth)

  13. A novel IR polarization imaging system designed by a four-camera array

    Science.gov (United States)

    Liu, Fei; Shao, Xiaopeng; Han, Pingli

    2014-05-01

    A novel IR polarization staring imaging system employing a four-camera array is designed for target detection and recognition, especially of man-made targets hidden in a complex battlefield. The design is based on the difference in infrared radiation's polarization characteristics, which is particularly remarkable between artificial objects and the natural environment. The system employs four cameras simultaneously to capture the polarization difference, replacing the commonly used systems that engage only one camera. Since both types of systems have to obtain intensity images in four different directions (I0, I45, I90, I-45), the four-camera design allows better real-time capability and lower error without the mechanical rotating parts that are essential to one-camera systems. Information extraction and detailed analysis demonstrate that the captured polarization images include valuable polarization information which can effectively increase image contrast and make it easier to segment targets, even hidden targets, from various scenes.
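
    From the four intensity images the linear Stokes parameters and derived polarization quantities follow directly; a sketch (array shapes and normalization conventions assumed):

        import numpy as np

        def linear_stokes(i0, i45, i90, i135):
            """Linear Stokes parameters from four polarizer orientations.

            i135 plays the role of the I-45 image. Returns S0, S1, S2 plus the
            degree (DoLP) and angle (AoP) of linear polarization.
            """
            s0 = 0.5 * (i0 + i45 + i90 + i135)
            s1 = i0 - i90
            s2 = i45 - i135
            dolp = np.sqrt(s1**2 + s2**2) / np.clip(s0, 1e-9, None)
            aop = 0.5 * np.arctan2(s2, s1)
            return s0, s1, s2, dolp, aop

        imgs = [np.random.rand(480, 640) for _ in range(4)]  # stand-in frames
        s0, s1, s2, dolp, aop = linear_stokes(*imgs)
        print(dolp.mean())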

  14. Triple-head gamma camera PET: system overview and performance characteristics.

    Science.gov (United States)

    Grosev, D; Loncarić, S; Vandenberghe, S; Dodig, D

    2002-08-01

    Positron emission tomography (PET) is currently performed using either a dedicated PET scanner or a scintillation gamma camera equipped with electronic circuitry for coincidence detection of 511 keV annihilation quanta (gamma camera PET system). Although the resolution limits of these two instruments are comparable, the sensitivity and count-rate performance of the gamma camera PET system are several times lower than those of the PET scanner. Most gamma camera PET systems are manufactured as dual-detector systems capable of performing dual-head coincidence imaging. One possible step towards improving the sensitivity of the gamma camera PET system is to add another detector head. This work investigates the characteristics of one such triple-head gamma camera PET system capable of performing triple-head coincidence imaging. The following performance characteristics of the system were assessed: spatial resolution, sensitivity, and count-rate performance. The spatial resolution, expressed as the full width at half-maximum (FWHM), at 1 cm radius is 5.9 mm; at 10 cm radius, the transverse radial resolution is 5.3 mm, whilst the transverse tangential and axial resolutions are 8.9 mm and 13.3 mm, respectively. The sensitivity for a standard cylindrical phantom is 255 counts s^-1 MBq^-1, using a 30% width photopeak energy window. An increase of 35% in the PET sensitivity is achievable by opening an additional 30% width energy window in the Compton region. The count rate in coincidence mode, at the upper limit of the system's optimal performance, is 45 kcounts s^-1 using the photopeak energy window only, and increases to 60 kcounts s^-1 using the photopeak + Compton windows. Sensitivity results are compared with published data for a similar dual-head detector system.

  15. Whole-field thickness strain measurement using multiple camera digital image correlation system

    Science.gov (United States)

    Li, Junrui; Xie, Xin; Yang, Guobiao; Zhang, Boyang; Siebert, Thorsten; Yang, Lianxiang

    2017-03-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry, especially for strain measurement. The traditional 3D-DIC system can accurately obtain the whole-field 3D deformation. However, the conventional 3D-DIC system can only acquire the displacement field on a single surface, thus lacking information in the depth direction. Therefore, the strain in the thickness direction cannot be measured. In recent years, multiple camera DIC (multi-camera DIC) systems have become a new research topic, offering much more measurement possibility compared to the conventional 3D-DIC system. In this paper, a multi-camera DIC system used to measure the whole-field thickness strain is introduced in detail. Four cameras are used in the system: two of them are placed at the front side of the object, and the other two are placed at the back side. Each pair of cameras constitutes a sub-stereo-vision system and measures the whole-field 3D deformation on one side of the object. A special calibration plate is used to calibrate the system, and the information from these two subsystems is linked by the calibration result. The whole-field thickness strain can be measured using the information obtained from both sides of the object. Additionally, the major and minor strains on the object surface are obtained simultaneously, and a whole-field quasi-3D strain history is acquired. The theoretical derivation for the system, the experimental process, and an application determining the thinning strain limit based on the obtained whole-field thickness strain history are introduced in detail.
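
    A minimal sketch of the final computation, assuming the calibration has already expressed both subsystems' out-of-plane displacement fields in one common thickness direction (function and variable names are illustrative, not from the paper):

    ```python
    import numpy as np

    def thickness_strain(w_front, w_back, t0):
        """Whole-field engineering thickness strain.

        w_front, w_back: out-of-plane displacement fields of the two
        surfaces along the common thickness direction (sign conventions
        depend on how the calibration links the two subsystems).
        t0: initial specimen thickness.
        """
        t = t0 + (w_front - w_back)   # current local thickness
        return (t - t0) / t0
    ```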

  16. Calibration of the Multi-camera Registration System for Visual Navigation Benchmarking

    Directory of Open Access Journals (Sweden)

    Adam Schmidt

    2014-06-01

    Full Text Available This paper presents the complete calibration procedure of a multi-camera system for mobile robot motion registration. Optimization-based, purely visual methods for the estimation of the relative poses of the motion registration system cameras, as well as the relative poses of the cameras and markers placed on the mobile robot, were proposed. The introduced methods were applied to the calibration of the system and the quality of the obtained results was evaluated. The obtained results compare favourably with the state of the art solutions, allowing the considered motion registration system to be used for the accurate reconstruction of the mobile robot trajectory and for registering new datasets suitable for the benchmarking of indoor, visual-based navigation algorithms.

  17. Design of comprehensive general maintenance service system of aerial reconnaissance camera

    Directory of Open Access Journals (Sweden)

    Li Xu

    2016-01-01

    Full Text Available Aiming at the lack of support equipment for airborne reconnaissance cameras and the poor commonality across internal and external field use and across camera models, a design scheme for a comprehensive general maintenance service system based on a PC-104 bus architecture and an ARM wireless test module is proposed, following ATE design principles. The scheme uses embedded technology to design the system, which meets the system requirements. By using the technique of classified switching, the hardware resources are reasonably extended, and general support of the various types of aerial reconnaissance cameras is realized. Using the concept of wireless testing, the test interface is extended to realize comprehensive support of the aerial reconnaissance camera in the field. Application shows that the system works stably, has good generality and practicality, and has broad application prospects.

  18. Edge turbulence measurement in Heliotron J using a combination of hybrid probe system and fast cameras

    Energy Technology Data Exchange (ETDEWEB)

    Nishino, N., E-mail: nishino@hiroshima-u.ac.jp [Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima (Japan); Zang, L. [Kyoto University, Gokasho, Uji, Kyoto (Japan); Takeuchi, M. [JAEA, Naka, Ibaraki (Japan); Mizuuchi, T.; Ohshima, S. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Kasajima, K.; Sha, M. [Kyoto University, Gokasho, Uji, Kyoto (Japan); Mukai, K. [NIFS, Toki, Gifu (Japan); Lee, H.Y. [Kyoto University, Gokasho, Uji, Kyoto (Japan); Nagasaki, K.; Okada, H.; Minami, T.; Kobayashi, S.; Yamamoto, S.; Konoshima, S.; Nakamura, Y.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)

    2013-07-15

    The hybrid probe system (a combination of Langmuir probes and magnetic probes), a fast camera and a gas puffing system were installed at the same toroidal section to study edge plasma turbulence/fluctuations in Heliotron J, especially blobs (intermittent filaments). The fast camera views the location of the probe head, so that the probe system yields the time evolution of the turbulence/fluctuation while the camera images the spatial profile. Gas puffing at the same toroidal section was used to control the plasma density and, simultaneously, for the gas puff imaging technique. Using this combined system, a filamentary structure associated with magnetic fluctuation was found in Heliotron J for the first time. Another kind of fluctuation was also observed in another experiment. This combined measurement enables us to distinguish MHD activity from electrostatic activity.

  19. Development of an XYZ Digital Camera with Embedded Color Calibration System for Accurate Color Acquisition

    Science.gov (United States)

    Kretkowski, Maciej; Jablonski, Ryszard; Shimodaira, Yoshifumi

    Acquisition of accurate colors is important in the modern era of widespread exchange of electronic multimedia. The variety of device-dependent color spaces causes trouble with accurate color reproduction. In this paper we present an overview of the accomplished digital camera system with device-independent output formed from tristimulus XYZ values. The outstanding accuracy and fidelity of the acquired color is achieved in our system by employing an embedded color calibration system based on an emissive device generating reference calibration colors with user-defined spectral distribution and chromaticity coordinates. The system was tested by calibrating the camera using 24 reference colors spectrally reproduced from the 24 color patches of the Macbeth Chart. The average color difference (CIEDE2000) was found to be ΔE = 0.83, which is an outstanding result compared to commercially available digital cameras.
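
    The embedded emissive calibrator is specialized hardware, but the mapping it supports can be sketched generically: fit a linear transform from device RGB to measured XYZ over the reference colors by least squares. This is a simplification of whatever model the actual system uses; the arrays below are stand-ins:

    ```python
    import numpy as np

    rgb = np.random.rand(24, 3)   # stand-in for camera responses to 24 patches
    xyz = np.random.rand(24, 3)   # stand-in for reference tristimulus values

    # Least-squares fit of a 3x3 matrix M such that rgb @ M approximates xyz.
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)

    xyz_estimated = rgb @ M       # device-independent XYZ output
    ```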

  20. High performance CCD camera system for digitalisation of 2D DIGE gels.

    Science.gov (United States)

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as an alternative to a traditionally employed high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to that of the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from the linear range and limit of detection.

  1. Implementation of a Real-time JPEG2000 System Using DSPs for 2 Digital Cameras

    Institute of Scientific and Technical Information of China (English)

    何得平

    2006-01-01

    This paper presents techniques and approaches capable of achieving a real-time JPEG2000 compression system using DSP chips. We propose a three-DSP real-time parallel processing system using efficient memory management for the discrete wavelet transform (DWT) and a parallel-pass architecture for embedded block coding with optimized truncation (EBCOT). This system compresses 1392×1040-pixel monochrome images at 10 fps per camera from 2 digital still cameras and is proven to be a practical and efficient DSP solution.
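
    The DWT stage is easy to reproduce off-DSP for reference. A sketch using PyWavelets, with the 'bior4.4' filter bank standing in for JPEG2000's irreversible CDF 9/7 wavelet (the paper's DSP memory-management scheme is not modeled):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Stand-in for one 1392x1040 monochrome camera frame.
    frame = np.random.rand(1040, 1392).astype(np.float32)

    # One level of the 2D DWT: approximation plus three detail subbands.
    ll, (lh, hl, hh) = pywt.dwt2(frame, 'bior4.4')
    ```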

  2. Development of the radial neutron camera system for the HL-2A tokamak.

    Science.gov (United States)

    Zhang, Y P; Yang, J W; Liu, Yi; Fan, T S; Luo, X B; Yuan, G L; Zhang, P F; Xie, X F; Song, X Y; Chen, W; Ji, X Q; Li, X; Du, T F; Ge, L J; Fu, B Z; Isobe, M; Song, X M; Shi, Z B; Yang, Q W; Duan, X R

    2016-06-01

    A new radial neutron camera system has been developed and operated recently in the HL-2A tokamak to measure the spatially and time-resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera are in good agreement with the results of both a standard (235)U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.

  3. Camera simulation engine enables efficient system optimization for super-resolution imaging

    Science.gov (United States)

    Fullerton, Stephanie; Bennett, Keith; Toda, Eiji; Takahashi, Teruo

    2012-02-01

    Quantitative fluorescent imaging requires optimization of the complete optical system, from the sample to the detector. Such considerations are especially true for precision localization microscopy such as PALM and (d)STORM where the precision of the result is limited by the noise in both the optical and detection systems. Here, we present a Camera Simulation Engine (CSE) that allows comparison of imaging results from CCD, CMOS and EM-CCD cameras under various sample conditions and can accurately validate the quality of precision localization algorithms and camera performance. To achieve these results, the CSE incorporates the following parameters: 1) Sample conditions including optical intensity, wavelength, optical signal shot noise, and optical background shot noise; 2) Camera specifications including QE, pixel size, dark current, read noise, EM-CCD excess noise; 3) Camera operating conditions such as exposure, binning and gain. A key feature of the CSE is that, from a single image (either real or simulated "ideal") we generate a stack of statistically realistic images. We have used the CSE to validate experimental data showing that certain current scientific CMOS technology outperforms EM-CCD in most super-resolution scenarios. Our results support using the CSE to efficiently and methodically select cameras for quantitative imaging applications. Furthermore, the CSE can be used to robustly compare and evaluate new algorithms for data analysis and image reconstruction. These uses of the CSE are particularly relevant to super-resolution precision localization microscopy and provide a faster, simpler and more cost effective means of system optimization, especially camera selection.
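
    A toy version of such an engine fits in a few lines: starting from an ideal photon image, apply shot noise, dark current, optional EM multiplication and read noise to generate a statistically realistic stack. The parameter values are placeholders, and the gamma model of EM gain is one common approximation, not necessarily what the CSE implements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_frame(photons, qe=0.7, read_noise=1.6, dark=0.01,
                       exposure=0.03, em_gain=None):
        """One simulated frame from an ideal photon-rate image.

        photons: expected photons per pixel per second (signal + background).
        qe, read_noise (e-), dark (e-/pixel/s), exposure (s): camera traits.
        em_gain: if set, adds EM-CCD multiplication noise via a gamma model.
        """
        mean_e = (qe * photons + dark) * exposure
        electrons = rng.poisson(mean_e).astype(float)   # shot + dark shot noise
        if em_gain is not None:
            electrons = rng.gamma(np.maximum(electrons, 1e-9), em_gain)
        return electrons + rng.normal(0.0, read_noise, electrons.shape)

    # A statistically realistic stack generated from a single ideal image.
    stack = np.stack([simulate_frame(np.full((64, 64), 200.0)) for _ in range(100)])
    ```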

  4. Method used to test the imaging consistency of binocular camera's left-right optical system

    Science.gov (United States)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and the right optical systems is an important factor influencing the overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method used to measure the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and the right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained based on the multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a constraint on gray level based on the corresponding coordinates of the left and right images is established, and the imaging consistency can be evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for carrying out imaging consistency testing for binocular cameras. When the 3σ distribution of the imaging gray difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, it is believed that the design requirements have been achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
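
    Once the left and right images are registered pixel to pixel, the consistency figure reduces to simple image statistics; a sketch with illustrative names:

    ```python
    import numpy as np

    def imaging_consistency(left, right):
        """Gray-level difference statistics between the two optical systems.

        left, right: corresponding grayscale images, floats scaled to [0, 1].
        Returns the difference map D(x, y), its standard deviation sigma,
        and the 3-sigma figure compared against the 5% design requirement.
        """
        d = left.astype(float) - right.astype(float)
        sigma = d.std()
        return d, sigma, 3.0 * sigma
    ```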

  5. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive surgery (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, a camera mount location compatible with comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  6. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2011-01-01

    Full Text Available A new system to very precisely couple radiation of a moving source into a Fourier Transform Infrared (FTIR) Spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera and a homemade program to process the camera data and to control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10 arc s for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system makes it possible to document residual pointing errors and to point onto the solar disk center even in the case of variable intensity distributions across the source due to cirrus or haze.
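
    The key idea of evaluating the source image on the field stop amounts to measuring the centroid offset of the solar image and feeding it back to the tracker axes. A hedged sketch of that measurement only (the real controller's axis mapping, sign conventions and loop dynamics are not modeled):

    ```python
    import numpy as np

    def pointing_offset(image, arcsec_per_pixel):
        """Centroid offset of the solar image from the field-stop centre.

        image: camera frame of the entrance field stop, as a float array.
        Returns (x, y) offsets in arc seconds; how these map to azimuth
        and elevation corrections depends on the tracker geometry.
        """
        total = image.sum()
        ys, xs = np.indices(image.shape)
        cy = (ys * image).sum() / total
        cx = (xs * image).sum() / total
        ref_y = (image.shape[0] - 1) / 2.0
        ref_x = (image.shape[1] - 1) / 2.0
        return (cx - ref_x) * arcsec_per_pixel, (cy - ref_y) * arcsec_per_pixel
    ```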

  7. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2010-11-01

    Full Text Available A new system to very precisely couple radiation of a moving source into an FTIR-spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera and a homemade program to process the camera data and to control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10" for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system makes it possible to document residual pointing errors and to point onto the solar disc centre even in the case of variable intensity distributions across the source due to cirrus or haze.

  8. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.
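
    For orientation, the basic continuous-wave ToF relation that such systems build on converts per-pixel phase to radial distance (a textbook relation, not code from the paper):

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def tof_depth(phase, modulation_hz):
        """Radial distance from measured phase for a CW ToF camera.

        phase: per-pixel phase shift in radians, in [0, 2*pi).
        The unambiguous range is C / (2 * modulation_hz).
        """
        return C * phase / (4.0 * np.pi * modulation_hz)
    ```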

  9. Analysis of the technical biases of meteor video cameras used in the CILBO system

    Science.gov (United States)

    Albin, Thomas; Koschny, Detlef; Molau, Sirko; Srama, Ralf; Poppe, Björn

    2017-02-01

    In this paper, we analyse the technical biases of two intensified video cameras, ICC7 and ICC9, of the double-station meteor camera system CILBO (Canary Island Long-Baseline Observatory). This is done to thoroughly understand the effects of the camera systems on the scientific data analysis. We expect a number of errors or biases that come from the system: instrumental errors, algorithmic errors and statistical errors. We analyse different observational properties, in particular the detected meteor magnitudes, apparent velocities, estimated goodness-of-fit of the astrometric measurements with respect to a great circle and the distortion of the camera. We find that, due to a loss of sensitivity towards the edges, each camera detects only about 55% of the meteors it could detect if it had a constant sensitivity. This detection efficiency is a function of the apparent meteor velocity. We analyse the optical distortion of the system and the goodness-of-fit of individual meteor position measurements relative to a fitted great circle. The astrometric error is dominated by uncertainties in the measurement of the meteor attributed to blooming, distortion of the meteor image and the development of a wake for some meteors. The distortion of the video images can be neglected. We compare the results of the two identical camera systems and find systematic differences. For example, the peak magnitude distribution for ICC9 is shifted by about 0.2-0.4 mag towards fainter magnitudes. This can be explained by the different pointing directions of the cameras. Since both cameras monitor the same volume in the atmosphere roughly between the two islands of Tenerife and La Palma, one camera (ICC7) points towards the west, the other one (ICC9) to the east. In particular, in the morning hours the apex source is close to the field-of-view of ICC9. Thus, these meteors appear slower, increasing the dwell time on a pixel. This is favourable for the detection of a meteor of a given magnitude.

  10. Novel Intraoperative Near-Infrared Fluorescence Camera System for Optical Image-Guided Cancer Surgery

    Directory of Open Access Journals (Sweden)

    J. Sven D. Mieog

    2010-07-01

    Full Text Available Current methods of intraoperative tumor margin detection using palpation and visual inspection frequently result in incomplete resections, which is an important problem in surgical oncology. Therefore, real-time visualization of cancer cells is needed to increase the number of patients with a complete tumor resection. For this purpose, near-infrared fluorescence (NIRF) imaging is a promising technique. Here we describe a novel, handheld, intraoperative NIRF camera system equipped with a 690 nm laser; we validated its utility in detecting and guiding resection of cancer tissues in two syngeneic rat models. The camera system was calibrated using an activated cathepsin-sensing probe (ProSense, VisEn Medical, Woburn, MA). Fluorescence intensity was strongly correlated with increased activated-probe concentration (R2 = .997). During the intraoperative experiments, a camera exposure time of 10 ms was used, which provided the optimal tumor to background ratio. Primary mammary tumors (n = 20 tumors) were successfully resected under direct fluorescence guidance. The tumor to background ratio was 2.34 using ProSense680 at 10 ms camera exposure time. The background fluorescence of abdominal organs, in particular liver and kidney, was high, thereby limiting the ability to detect peritoneal metastases with cathepsin-sensing probes in these regions. In conclusion, we demonstrated the technical performance of this new camera system and its intraoperative utility in guiding resection of tumors.

  11. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to be able to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The system for fish monitoring is made of two parts: a waterproof box housing the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera, whose advantage is very good sensitivity in low-light conditions, so it can take good quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We decided to use a tablet PC because it is small, cheap, relatively fast and has low power consumption. On the computer we use software with advanced motion detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, also to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating the fish species and the frequency with which they pass through the fish pass.
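
    A minimal stand-in for the motion-detection software described above could look like the following OpenCV loop, saving a frame whenever enough pixels change; the camera index and both thresholds are assumptions to be tuned per site:

    ```python
    import cv2

    cap = cv2.VideoCapture(0)                 # camera index is an assumption
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    saved = 0
    while saved < 100:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)        # frame-to-frame change
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 500:      # tune so small fish still trigger
            cv2.imwrite(f"fish_{saved:05d}.jpg", frame)
            saved += 1
        prev = gray
    cap.release()
    ```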

  12. Lung function assessment using Xe-133 dynamic SPECT in dual-camera system

    Energy Technology Data Exchange (ETDEWEB)

    Sakaji, Katsuyuki; Akiyama, Masayuki; Nakazawa, Yasuo [Showa Univ., Tokyo (Japan). Hospital; Umeda, Hirotaka; Takenaka, Haruki; Shinozuka, Akira

    2001-09-01

    The purpose of this study was to estimate regional lung function using Xe-133 dynamic SPECT. SPECT equipment with a dual camera was used. Fourteen rotation acquisitions were obtained beginning immediately after Xe-133 gas inhalation. The time-activity curve of each pixel was obtained, and the T{sub 1/2} of the washout phase was calculated and mapped. Residual radioactivity was evaluated. Adequate images could be obtained at 30 seconds per rotation even with the dual-camera system. Mapping of T{sub 1/2} allowed temporal changes to be displayed on a single image. Three-dimensional evaluation could be made on a SPECT system using our method. (author)
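
    Assuming mono-exponential washout, the per-pixel T{sub 1/2} map can be obtained from the dynamic series with a log-linear fit; a sketch with illustrative array shapes, not the authors' code:

    ```python
    import numpy as np

    def washout_half_time(counts, frame_times):
        """Per-pixel T1/2 from a dynamic SPECT washout series.

        counts: (n_frames, ny, nx) reconstructed counts in the washout phase.
        frame_times: frame mid-times, in the units desired for T1/2.
        """
        y = np.log(np.maximum(counts, 1.0)).reshape(len(frame_times), -1)
        slope = np.polyfit(np.asarray(frame_times, float), y, 1)[0]
        with np.errstate(divide="ignore"):
            t_half = np.where(slope < 0, np.log(2) / -slope, np.inf)
        return t_half.reshape(counts.shape[1:])
    ```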

  13. The camera of the fifth H.E.S.S. telescope. Part I: System description

    Energy Technology Data Exchange (ETDEWEB)

    Bolmont, J., E-mail: bolmont@in2p3.fr [LPNHE, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, CNRS/IN2P3, 4 Place Jussieu, F-75252 Paris Cedex 5 (France); Corona, P.; Gauron, P.; Ghislain, P.; Goffin, C.; Guevara Riveros, L.; Huppert, J.-F.; Martineau-Huynh, O.; Nayman, P.; Parraud, J.-M.; Tavernet, J.-P.; Toussenel, F.; Vincent, D.; Vincent, P. [LPNHE, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, CNRS/IN2P3, 4 Place Jussieu, F-75252 Paris Cedex 5 (France); Bertoli, W.; Espigat, P.; Punch, M. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, F-75205 Paris Cedex 13 (France); Besin, D.; Delagnes, E.; Glicenstein, J.-F. [CEA Saclay, DSM/IRFU, F-91191 Gif-Sur-Yvette Cedex (France); and others

    2014-10-11

    In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S. (High Energy Stereoscopic System) array reached their tenth year of operation in Khomas Highlands, Namibia, a fifth telescope took its first data as part of the system. This new Cherenkov detector, comprising a 614.5 m{sup 2} reflector with a highly pixelized camera in its focal plane, improves the sensitivity of the current array by a factor two and extends its energy domain down to a few tens of GeV. The present part I of the paper gives a detailed description of the fifth H.E.S.S. telescope's camera, presenting the details of both the hardware and the software, emphasizing the main improvements as compared to previous H.E.S.S. camera technology.

  14. The camera of the fifth H.E.S.S. telescope. Part I: System description

    CERN Document Server

    Bolmont, J; Gauron, P; Ghislain, P; Goffin, C; Riveros, L Guevara; Huppert, J -F; Martineau-Huynh, O; Nayman, P; Parraud, J -M; Tavernet, J -P; Toussenel, F; Vincent, D; Vincent, P; Bertoli, W; Espigat, P; Punch, M; Besin, D; Delagnes, E; Glicenstein, J -F; Moudden, Y; Venault, P; Zaghia, H; Brunetti, L; Dubois, J-M; Fiasson, A; Geffroy, N; Monteiro, I Gomes; Journet, L; Krayzel, F; Lamanna, G; Flour, T Le; Lees, S; Lieunard, B; Maurin, G; Mugnier, P; Panazol, J-L; Prast, J; Chounet, L -M; Edy, E; Fontaine, G; Giebels, B; Hormigos, S; Khélifi, B; Manigot, P; Maritaz, P; de Naurois, M; Compin, M; Feinstein, F; Fernandez, D; Mehault, J; Rivoire, S; Royer, S; Sanguillon, M; Vasileiadis, G

    2013-01-01

    In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S. (High Energy Stereoscopic System) array reached their tenth year of operation in Khomas Highlands, Namibia, a fifth telescope took its first data as part of the system. This new Cherenkov detector, comprising a 614.5 m$^2$ reflector with a highly pixellized camera in its focal plane, improves the sensitivity of the current array by a factor two and extends its energy domain down to a few tens of GeV. The present part I of the paper gives a detailed description of the fifth H.E.S.S. telescope's camera, presenting the details of both the hardware and the software, emphasizing the main improvements as compared to previous H.E.S.S. camera technology.

  15. Robust and accurate visual echo cancellation in a full-duplex projector-camera system.

    Science.gov (United States)

    Liao, Miao; Yang, Ruigang; Zhang, Zhengyou

    2008-10-01

    In this paper we study the problem of "visual echo" in a full-duplex projector-camera system for telecollaboration applications. Visual echo is defined as the appearance of projected contents observed by the camera. It can potentially saturate the projected contents, similar to audio echo in telephone conversation. Our approach to visual echo cancellation includes an offline calibration procedure that records the geometric and photometric transfer between the projector and the camera in a look-up table. During run-time, projected contents in the captured video are identified using the calibration information and suppressed, therefore achieving the goal of cancelling visual echo. Our approach can accurately handle full-color images under arbitrary reflectance of display surfaces and photometric response of the projector or camera. It is robust to geometric registration errors and quantization effects and is therefore particularly effective for high-frequency contents such as texts and hand drawings. We demonstrate the effectiveness of our approach with a variety of real images in a full-duplex projector-camera system.
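
    A simplified rendition of the run-time step, assuming the offline calibration is condensed into a single homography plus a global photometric look-up table (the paper's dense per-pixel table and robustness measures are omitted):

    ```python
    import cv2
    import numpy as np

    def cancel_visual_echo(captured, projected, H, lut):
        """Suppress projected content seen by the camera.

        captured: current camera frame (uint8).
        projected: frame currently sent to the projector (uint8).
        H: 3x3 projector-to-camera homography (geometric calibration).
        lut: 256-entry uint8 table mapping projected values to their
        appearance in the camera (photometric calibration).
        """
        h, w = captured.shape[:2]
        warped = cv2.warpPerspective(projected, H, (w, h))   # geometric transfer
        predicted = cv2.LUT(warped, lut)                     # photometric transfer
        return cv2.subtract(captured, predicted)             # scene minus echo
    ```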

  16. Development of a high resolution gamma camera system using finely grooved GAGG scintillator

    Science.gov (United States)

    Yamamoto, Seiichi; Kataoka, Jun; Oshima, Tsubasa; Ogata, Yoshimune; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Hatazawa, Jun

    2016-06-01

    High resolution gamma cameras require small pixel scintillator blocks with high light output. However, manufacturing a small pixel scintillator block becomes difficult as the pixel size decreases. To overcome this limitation, we developed a high resolution gamma camera system using a finely grooved Ce-doped Gd3Al2Ga3O12 (GAGG) plate. Our gamma camera's detector consists of a 1-mm-thick finely grooved GAGG plate that is optically coupled to a 1-in. position sensitive photomultiplier tube (PSPMT). The grooved GAGG plate has 0.2×0.2 mm pixels with 0.05-mm wide slits (between the pixels) that were manufactured using a dicing saw. We used a Hamamatsu 1-in. square high quantum efficiency (HQE) PSPMT (R8900-100-C12). The energy resolution for the Co-57 gamma photons (122 keV) was 18.5% FWHM. The intrinsic spatial resolution was estimated to be 0.7-mm FWHM. With a 0.5-mm diameter pinhole collimator mounted to its front, we achieved a high resolution, small field-of-view gamma camera. The system spatial resolution for the Co-57 gamma photons was 1.0-mm FWHM, and the sensitivity was 0.0025% at 10 mm from the collimator surface. Images of mice administered Tc-99m HMDP showed the fine structures of the mouse body. Our high resolution small pixel GAGG gamma camera is promising for such small animal imaging.

  17. Data acquisition system based on the Nios II for a CCD camera

    Science.gov (United States)

    Li, Binhua; Hu, Keliang; Wang, Chunrong; Liu, Yangbing; He, Chun

    2006-06-01

    The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol is implemented in NAOC, which is composed of a piece of interface board with an Altera FPGA, 32MB SDRAM and some other accessory devices integrated on it, and two packages of control software used in the Nios II embedded processor and the remote host PC respectively. The system is used to replace a 7200 series image acquisition card which is inserted in a control and data acquisition PC, and to download commands to an existing CCD camera and collect image data from the camera to the PC. The embedded chip in the system is a Cyclone FPGA with a configurable Nios II soft-core processor. The hardware structure of the system, the configuration for the embedded soft-core processor, and the peripherals of the processor in the FPGA are described. The C program run in the Nios II embedded system is built in the Nios II IDE kits and the C++ program used in the PC is developed in Microsoft's Visual C++ environment. Some key techniques in the design and implementation of the C and VC++ programs are presented, including the downloading of camera commands, initialization of the camera, DMA control, TCP/IP communication and UDP data uploading.

  18. EVALUATION OF A METRIC CAMERA SYSTEM TAILORED FOR HIGH PRECISION UAV APPLICATIONS

    Directory of Open Access Journals (Sweden)

    T. Kraft

    2016-06-01

    Full Text Available In this paper we present the further evaluation of DLR's modular airborne camera system MACS-Micro for small unmanned aerial vehicles (UAV). The main focus is on standardized calibration procedures and on photogrammetric workflows. The current prototype consists of an industrial grade frame imaging camera with 12 megapixel resolution and a compact GNSS/IMU solution, which are operated by an embedded computing unit (CPU). The camera was calibrated once pre-flight and several times post-flight over a period of 5 months using a three-dimensional test field. The verification of the radiometric quality of the acquired images has been done under controlled static conditions and kinematic conditions, testing different demosaicing methods. The validation of MACS-Micro is done by comparing a traditional photogrammetric evaluation with the workflows of Agisoft Photoscan and Pix4D Mapper. The analyses are based on an aerial survey of an urban environment using precise ground control points and acquired GNSS observations. Aerial triangulations with different configurations of ground control points (GCPs) were calculated, comparing the results of using camera self-calibration and introducing fixed interior orientation parameters for Agisoft and Pix4D. The results are promising concerning the metric characteristics of the used camera and the accuracies achieved in this test case. Further aspects have to be evaluated by expanded test scenarios.

  19. Evaluation of a Metric Camera System Tailored for High Precision Uav Applications

    Science.gov (United States)

    Kraft, T.; Geßner, M.; Meißner, H.; Cramer, M.; Gerke, M.; Przybilla, H. J.

    2016-06-01

    In this paper we present the further evaluation of DLR's modular airborne camera system MACS-Micro for small unmanned aerial vehicles (UAV). The main focus is on standardized calibration procedures and on photogrammetric workflows. The current prototype consists of an industrial grade frame imaging camera with 12 megapixel resolution and a compact GNSS/IMU solution, which are operated by an embedded computing unit (CPU). The camera was calibrated once pre-flight and several times post-flight over a period of 5 months using a three-dimensional test field. The verification of the radiometric quality of the acquired images has been done under controlled static conditions and kinematic conditions, testing different demosaicing methods. The validation of MACS-Micro is done by comparing a traditional photogrammetric evaluation with the workflows of Agisoft Photoscan and Pix4D Mapper. The analyses are based on an aerial survey of an urban environment using precise ground control points and acquired GNSS observations. Aerial triangulations with different configurations of ground control points (GCPs) were calculated, comparing the results of using camera self-calibration and introducing fixed interior orientation parameters for Agisoft and Pix4D. The results are promising concerning the metric characteristics of the used camera and the accuracies achieved in this test case. Further aspects have to be evaluated by expanded test scenarios.

  20. A Novel Camera Calibration Algorithm as Part of an HCI System: Experimental Procedure and Results

    Directory of Open Access Journals (Sweden)

    Sauer Kristal

    2006-02-01

    Full Text Available Camera calibration is an initial step employed in many computer vision applications for the estimation of camera parameters. Along with images of an arbitrary scene, these parameters allow for inference of the scene's metric information. This is a primary reason for camera calibration's significance to computer vision. In this paper, we present a novel approach to solving the camera calibration problem. The method was developed as part of a Human Computer Interaction (HCI) System for the NASA Virtual GloveBox (VGX) Project. Our algorithm is based on the geometric properties of perspective projections and provides a closed form solution for the camera parameters. Its accuracy is evaluated in the context of the NASA VGX, and the results indicate that our algorithm achieves accuracy similar to other calibration methods which are characterized by greater complexity and computational cost. Because of its reliability and wide variety of potential applications, we are confident that our calibration algorithm will be of interest to many.
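
    The paper's closed-form method is its own contribution; for orientation, the standard direct linear transform (DLT) solves the same estimation problem, recovering a 3x4 projection matrix from known 3D-2D correspondences:

    ```python
    import numpy as np

    def dlt_projection_matrix(points_3d, points_2d):
        """Fit a 3x4 camera projection matrix (up to scale) by DLT.

        points_3d: (N, 3) world coordinates; points_2d: (N, 2) pixels; N >= 6.
        """
        rows = []
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 4)   # right singular vector of smallest value
    ```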

  1. Design of an Optical Character Recognition System for Camera-based Handheld Devices

    CERN Document Server

    Mollah, Ayatullah Faruk; Basu, Subhadip; Nasipuri, Mita

    2011-01-01

    This paper presents a complete Optical Character Recognition (OCR) system for camera-captured image/graphics-embedded textual documents for handheld devices. At first, text regions are extracted and skew corrected. Then, these regions are binarized and segmented into lines and characters. Characters are passed into the recognition module. Experimenting with a set of 100 business card images captured by a cell phone camera, we have achieved a maximum recognition accuracy of 92.74%. Compared to Tesseract, a powerful open-source desktop-based OCR engine, the present recognition accuracy is a worthwhile contribution. Moreover, the developed technique is computationally efficient and consumes little memory, making it applicable to handheld devices.
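
    For a baseline comparison of the kind reported above, Tesseract can be driven from a few lines of Python via pytesseract; the file name is hypothetical, and the paper's own preprocessing (skew correction, binarization) would normally precede this call:

    ```python
    from PIL import Image
    import pytesseract  # Python wrapper around the Tesseract OCR engine

    text = pytesseract.image_to_string(Image.open("business_card.jpg"))
    print(text)
    ```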

  2. Volumetric Diffuse Optical Tomography for Small Animals Using a CCD-Camera-Based Imaging System

    Directory of Open Access Journals (Sweden)

    Zi-Jing Lin

    2012-01-01

    Full Text Available We report the feasibility of three-dimensional (3D) volumetric diffuse optical tomography for small animal imaging by using a CCD-camera-based imaging system with a newly developed depth compensation algorithm (DCA). Our computer simulations and laboratory phantom studies have demonstrated that the combination of a CCD camera and the DCA can significantly improve the accuracy of depth localization and lead to the reconstruction of 3D volumetric images. This approach may be of great interest for noninvasive 3D localization of an anomaly hidden in tissue, such as a tumor or a stroke lesion, for preclinical small animal models.

  3. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    Directory of Open Access Journals (Sweden)

    Cristina Losada

    2010-04-01

    Full Text Available This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space. The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  4. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    Science.gov (United States)

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  5. AXUV bolometer and Lyman-α camera systems on the TCV tokamak

    Science.gov (United States)

    Degeling, A. W.; Weisen, H.; Zabolotsky, A.; Duval, B. P.; Pitts, R. A.; Wischmeier, M.; Lavanchy, P.; Marmillod, Ph.; Pochon, G.

    2004-10-01

    A set of seven twin slit cameras, each containing two 20-element linear absolute extreme ultraviolet photodiode arrays, has been installed on the Tokamak à Configuration Variable. One array in each camera will operate as a bolometer and the second as a Lyman-alpha (Lα) emission monitor for estimating the recycled neutral flux. The camera configuration was optimized by simulations of tomographic reconstructions of the expected Lα emission. The diagnostic will provide spatial and temporal resolution (10 μs) of the radiated power and the Lα emission that is considerably higher than previously achieved. This optimism is justified by extensive experience with prototype systems, which include first measurements of Lα light from the divertor.

  6. A fast 3D reconstruction system with a low-cost camera accessory.

    Science.gov (United States)

    Zhang, Yiwei; Gibson, Graham M; Hay, Rebecca; Bowman, Richard W; Padgett, Miles J; Edgar, Matthew P

    2015-06-09

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
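
    Behind such a system sits the classical Lambertian photometric-stereo solve: per pixel, a least-squares fit of an albedo-scaled normal across the four LED images, with the light directions assumed known from the fixed LED geometry on the lens mount:

    ```python
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """Surface normals and albedo from fixed-view, varying-light images.

        images: (k, h, w) grayscale stack, one image per LED (k = 4 here).
        light_dirs: (k, 3) unit vectors towards each light source.
        """
        k, h, w = images.shape
        i = images.reshape(k, -1)                            # (k, h*w)
        g, *_ = np.linalg.lstsq(light_dirs, i, rcond=None)   # solve L @ g = i
        albedo = np.linalg.norm(g, axis=0)
        normals = g / np.maximum(albedo, 1e-12)
        return normals.reshape(3, h, w), albedo.reshape(h, w)
    ```

    Integrating the recovered normal field then yields the height map on which a reconstruction error such as the quoted <3 mm figure would be assessed.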

  7. Robust Sign Language Recognition System Using ToF Depth Cameras

    CERN Document Server

    Zahedi, Morteza

    2011-01-01

    Sign language recognition is a difficult task, yet required for many applications in real time. Using RGB cameras for the recognition of sign languages is not very successful in practical situations, and accurate 3D imaging requires expensive and complex instruments. With the introduction of Time-of-Flight (ToF) depth cameras in recent years, it has become easier to scan the environment for accurate, yet fast, depth images of objects without the need for any extra calibration object. In this paper, a robust system for sign language recognition using ToF depth cameras is presented, which converts the recorded signs into SiGML, a standard and portable XML sign language, for easy transfer and conversion to real-time 3D virtual character animations. Feature extraction using moments and classification using a nearest neighbor classifier are used to track hand gestures, and a significant result of 100% is achieved for the proposed approach.

  8. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  9. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System.

    Science.gov (United States)

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-06-25

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  10. Adaptive Neural-Sliding Mode Control of Active Suspension System for Camera Stabilization

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-01-01

    Full Text Available The camera always suffers from image instability on a moving vehicle due to unintentional vibrations caused by road roughness. This paper presents a novel adaptive neural network based sliding mode control strategy to stabilize the image-captured area of the camera. The purpose is to suppress the vertical displacement of the sprung mass with the application of an active suspension system. Since the active suspension system has nonlinear and time-varying characteristics, an adaptive neural network (ANN) is proposed to make the controller robust against systematic uncertainties, which relaxes the model-based requirement of sliding mode control; the weighting matrix is adjusted online according to a Lyapunov function. The control system consists of two loops. The outer loop is a position controller designed with the sliding mode strategy, while the PID controller in the inner loop tracks the desired force. Closed-loop stability and asymptotic convergence performance can be guaranteed on the basis of Lyapunov stability theory. Finally, the simulation results show that the employed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
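
    The outer loop's core law can be sketched compactly: a sliding surface on the position error with a boundary-layer switching term to limit chattering (gains are illustrative, and the paper's adaptive neural compensation is not included):

    ```python
    import numpy as np

    def smc_force(e, e_dot, c=8.0, k=400.0, phi=0.01):
        """Sliding mode control force for the outer position loop.

        e, e_dot: sprung-mass displacement error and its rate.
        s = c*e + e_dot defines the sliding surface; sat(s/phi) replaces
        sign(s) inside a boundary layer of width phi.
        """
        s = c * e + e_dot
        return -k * np.clip(s / phi, -1.0, 1.0)
    ```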

  11. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera always suffers from image instability on a moving vehicle due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach mixed with linear quadratic regulator control for a quarter-car active suspension system to stabilize the image-captured area of the camera. An active suspension system provides extra force through the actuator, which allows it to suppress the vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function neural network is proposed to construct the map between the state error and the compensation component, which can correct the optimal state-feedback control law. The weights matrix of the radial basis function neural network is adaptively tuned online. Then, closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.

  12. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    NARCIS (Netherlands)

    Simi, M.; Tolou, N.; Valdastri, P.; Herder, J.L.; Menciassi, A.; Dario, P.

    2012-01-01

    A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The

  14. System for Steam Leak Detection by using CCTV Camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Chul; Lee, Min Soo; Choi, Hui Ju [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Son, Ki Sung; Jeon, Hyeong Seop [SAEAN.Co., Seoul (Korea, Republic of)

    2012-05-15

    There are many pipes in the secondary cooling systems of nuclear power plants and coal-fired power plants. In these pipes, high pressure fluids move at high velocity, which can cause steam leakage due to pipe thinning. Steam leakage is one of the major issues for the structural fracture of pipes. Therefore, a method to inspect a large area of piping systems quickly and accurately is needed. Steam leakage is almost invisible, because the flow has very high velocity and pressure, so it is very difficult to detect. In this paper, we propose a method for detecting steam leakage using image signal processing. Our basic idea comes from heat shimmer, which appears as a soft light that looks as if it is being shaken slightly. To test the performance of this technique, experiments have been performed on a steam generator. Results show that the proposed technique is quite powerful for steam leak detection.
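
    One simple way to turn the shimmer idea into an image-processing test is a temporal-variance map over a short window of frames; regions whose intensity wavers strongly become leak candidates (a generic sketch, not the authors' algorithm):

    ```python
    import numpy as np

    def shimmer_map(frames):
        """Per-pixel temporal standard deviation over a frame window.

        frames: (n, h, w) grayscale sequence from the CCTV camera. Steam
        shimmering produces high temporal variance from refractive index
        fluctuations, while static background stays near zero.
        """
        return np.asarray(frames, dtype=float).std(axis=0)
    ```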

  15. Driving micro-optical imaging systems towards miniature camera applications

    Science.gov (United States)

    Brückner, Andreas; Duparré, Jacques; Dannberg, Peter; Leitel, Robert; Bräuer, Andreas

    2010-05-01

    Up to now, multi channel imaging systems have been increasingly studied and approached from various directions in the academic domain due to their promising large field of view at small system thickness. However, specific drawbacks of each of the solutions prevented the diffusion into corresponding markets so far. Most severe problems are a low image resolution and a low sensitivity compared to a conventional single aperture lens besides the lack of a cost-efficient method of fabrication and assembly. We propose a microoptical approach to ultra-compact optics for real-time vision systems that are inspired by the compound eyes of insects. The demonstrated modules achieve a VGA resolution with 700x550 pixels within an optical package of 6.8mm x 5.2mm and a total track length of 1.4mm. The partial images that are separately recorded within different optical channels are stitched together to form a final image of the whole field of view by means of image processing. These software tools allow to correct the distortion of the individual partial images so that the final image is also free of distortion. The so-called electronic cluster eyes are realized by state-of-the-art microoptical fabrication techniques and offer a resolution and sensitivity potential that makes them suitable for consumer, machine vision and medical imaging applications.

  16. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    Science.gov (United States)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 μs, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
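
    Row-wise digital binning as described is a one-liner in post-processing; a sketch, with the factor of 8 following the quoted tagged-region thickness:

    ```python
    import numpy as np

    def bin_rows(image, n=8):
        """Sum every n adjacent rows to trade resolution for signal-to-noise."""
        h, w = image.shape
        return image[: h - h % n].reshape(h // n, n, w).sum(axis=1)
    ```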

  17. Friendship among adult male black and gold howler monkeys

    Directory of Open Access Journals (Sweden)

    Kowalewski, Martín M.

    2007-01-01

    Full Text Available The reproductive success of males depends fundamentally on the number of females they can fertilize. Males should therefore compete actively for fertilizations. However, in many non-human primate species, males coexist peacefully in social groups. In this study we explore affiliative relationships among males of Alouatta caraya on Isla Brasilera (27º 20' S, 58º 40' W) in northeastern Argentina. Two multi-male groups were studied for 5 days per month throughout 2004. We defined friendship as dyadic affiliative interactions including proximity, tolerance during feeding, grooming interactions, and support in coalitions. For example, we found that association indices between individuals, based on distance and grooming, differ significantly from what would be expected by chance throughout the year and across behavioural contexts (p < 0.05). This suggests the existence of pairs of males that associated more frequently than expected by chance in both groups. The existence of social affinity between related and unrelated males raises new questions in the study of the evolution of friendship in non-human primates. Studies of this kind make it possible to model new ideas about the evolutionary history of friendship in humans.

  18. A Kinect™ camera based navigation system for percutaneous abdominal puncture

    Science.gov (United States)

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-01

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of Kinect™ was released recently; we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture and compared its needle insertion guidance performance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions; the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were punctured under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable.
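
    The surface matching step above relies on standard point-to-point ICP; a minimal sketch of that algorithm (not the authors' implementation, and without their 2D shape-based initialization) is given below, assuming Nx3 NumPy point clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP aligning Nx3 `src` (e.g. a Kinect surface)
    to Mx3 `dst` (e.g. a CT surface). Returns rotation R and translation t
    such that src @ R.T + t approximates dst."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # nearest dst point per src point
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)    # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                  # Kabsch rotation, no reflection
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step            # apply the incremental update
        R, t = R_step @ R, R_step @ t + t_step   # accumulate total transform
    return R, t

# Illustrative call with random clouds standing in for the two surfaces
src = np.random.rand(500, 3)
dst = src + np.array([0.05, 0.0, 0.0])
R, t = icp(src, dst)
```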

  19. Single Camera 3-D Coordinate Measuring System Based on Optical Probe Imaging

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new vision-based coordinate measuring system is presented: a single-camera 3-D coordinate measuring system based on optical probe imaging. A new approach to vision-based coordinate measurement is proposed. A linear model is deduced that can resolve the six degrees of freedom of the optical probe to realize coordinate measurement of the object surface. The effects of several factors on the resolution of the system are analyzed. Simulation experiments have shown that the system model is valid.

  20. Motionless active depth from defocus system using smart optics for camera autofocus applications

    Science.gov (United States)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active Depth from Defocus (DFD) system design suited for long-working-range camera autofocus applications. The design consists of an active illumination module that projects a scene-illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances, allowing an increased DFD working distance range. The imager module of the system, responsible for the actual DFD operation, deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration conducted in the laboratory compares the effectiveness of the coherent conditioned radiation module against a conventional incoherent active light source and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example in the tiny camera housings of smartphones and tablets. Applications for the proposed system include autofocus in modern-day digital cameras.
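
    The no-moving-parts idea can be illustrated with a simple focus-metric sweep over ECVFL settings; note this passive sweep is only a stand-in for the authors' active-illumination DFD processing, and `set_ecvfl_diopter`/`grab_frame` are hypothetical driver hooks.

```python
import cv2
import numpy as np

def sharpness(img_gray):
    """Variance of the Laplacian: a common contrast-based focus metric."""
    return cv2.Laplacian(img_gray, cv2.CV_64F).var()

def autofocus(set_ecvfl_diopter, grab_frame, diopters):
    """Sweep the ECVFL control value (no mechanical motion), grab a frame
    at each setting, and return the setting that maximizes sharpness."""
    scores = []
    for d in diopters:
        set_ecvfl_diopter(d)        # hypothetical lens-driver call
        img = cv2.cvtColor(grab_frame(), cv2.COLOR_BGR2GRAY)
        scores.append(sharpness(img))
    return diopters[int(np.argmax(scores))]
```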

  1. Single Line of Sight CMOS radiation tolerant camera system design overview

    Science.gov (United States)

    Carpenter, A. C.; Dayton, M.; Kimbrough, J.; Datte, P.; Macaraeg, C.; Funsten, B.; Gardner, P.; Kittle, D.; Charron, K.; Bell, P.; Celeste, J.; Sanchez, M.; Mitchell, B.; Claus, L.; Robertson, G.; Porter, J.; Sims, G.; Hilsabeck, T.

    2016-09-01

    This paper covers the preliminary design of a radiation tolerant, nanosecond-gated, multi-frame CMOS camera system for use in the NIF. Electrical component performance data from 14 MeV neutron and cobalt-60 radiation testing will be discussed. The recent development of nanosecond-gated multi-frame hybrid-CMOS (hCMOS) focal plane arrays by the Ultrafast X-ray Imaging (UXI) group at Sandia National Lab has generated a need for custom camera electronics that operate in the pulsed radiation environment of the NIF target chamber. Design requirements and performance data for the prototype camera system will be discussed. The design and testing approach for the radiation tolerant camera system will be covered, along with the evaluation of commercial off-the-shelf (COTS) electronic components such as FPGAs, voltage regulators, ADCs, DACs, and optical transceivers. Performance changes from radiation exposure on select components will be discussed. Integration considerations for x-ray imaging diagnostics on the NIF will also be covered.

  2. Hybrid Compton camera/coded aperture imaging system

    Science.gov (United States)

    Mihailescu, Lucian [Livermore, CA]; Vetter, Kai M. [Alameda, CA]

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  3. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: • Localization of the cask and plug remote handling system with video cameras and markers. • Video cameras already installed in the building for remote operators. • Fiducial markers glued or painted on the cask and plug remote handling system. • Augmented reality content on the video stream as an aid for remote operators. • Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide feedback for the control system and for human supervision. This paper proposes a localization system that uses the video streams captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streams and the libraries of the localization system. The proposed localization system was tested in a mock-up scenario at 1:25 scale of the divertor level of the Tokamak building.
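
    One standard way to recover the pose of a body carrying known fiducial markers from a single calibrated camera is a PnP solve; the sketch below uses OpenCV for this, with all marker coordinates and intrinsics invented for illustration (the paper's actual pipeline fuses multiple cameras).

```python
import cv2
import numpy as np

# Known 3D positions of fiducial markers on the CPRHS (object frame, metres)
# and their detected 2D positions in one wall camera -- all values illustrative.
object_pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                       [2.0, 0.0, 1.5], [0.0, 0.0, 1.5]], dtype=np.float64)
image_pts = np.array([[412., 310.], [645., 305.],
                      [650., 180.], [418., 176.]], dtype=np.float64)
K = np.array([[800., 0., 640.], [0., 800., 360.], [0., 0., 1.]])  # intrinsics
dist = np.zeros(5)                                                # no distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix; tvec gives the translation
```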

  4. Comparison of gamma (Anger) camera systems in terms of detective quantum efficiency using Monte Carlo simulation.

    Science.gov (United States)

    Eriksson, Ida; Starck, Sven-Åke; Båth, Magnus

    2014-04-01

    The aim of the present study was to perform an extensive evaluation of available gamma camera systems in terms of their detective quantum efficiency (DQE) and determine their dependency on relevant parameters such as collimator type, imaging depth, and energy window using the Monte Carlo technique. The modulation transfer function was determined from a simulated (99m)Tc point source and was combined with the system sensitivity and photon yield to obtain the DQE of the system. The simulations were performed for different imaging depths in a water phantom for 13 gamma camera systems from four manufacturers. Except at very low spatial frequencies, the highest DQE values were found with a lower energy window threshold of around 130 keV for all systems. The height and shape of the DQE curves were affected by the collimator design and the intrinsic properties of the gamma camera systems. High-sensitivity collimators gave the highest DQE at low spatial frequencies, whereas the high-resolution and ultrahigh-resolution collimators showed higher DQE values at higher frequencies. The intrinsic resolution of the system mainly affected the DQE curve at superficial depths. The results indicate that the manufacturers have succeeded differently in their attempts to design a system constituting an optimal compromise between sensitivity and spatial resolution.

  5. An ebCMOS camera system for marine bioluminescence observation: The LuSEApher prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dominjon, A., E-mail: a.dominjon@ipnl.in2p3.fr [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Ageron, M. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Billault, M.; Brunner, J. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Calabria, P. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Chabanat, E. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Chaize, D.; Doan, Q.T.; Guerin, C.; Houles, J.; Vagneron, L. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France)

    2012-12-11

    The ebCMOS camera, called LuSEApher, is a marine bioluminescence recorder adapted to extremely low light levels. This prototype is based on the skeleton of the LUSIPHER camera system originally developed for fluorescence imaging. It has been installed at 2500 m depth off the Mediterranean shore on the site of the ANTARES neutrino telescope. The LuSEApher camera is mounted on the Instrumented Interface Module connected to the ANTARES network for environmental science purposes (European Seas Observatory Network). The LuSEApher is a self-triggered photodetection system with photon counting ability. The device is presented, and its performance in terms of single-photon reconstruction, noise, and trigger strategy is described. The first recorded movies of bioluminescence are analyzed. To our knowledge, such events have never before been obtained with this sensitivity and frame rate. We believe that this camera concept could open a new window on bioluminescence studies in the deep sea.

  6. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    Science.gov (United States)

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics such as area and volume can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high-precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement.
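
    For reference, the technical error of measurement quoted above is conventionally computed from paired repeat measurements; a common form (the study may use a variant) is

```latex
\mathrm{TEM} = \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}}, \qquad
\%\mathrm{TEM} = 100 \,\frac{\mathrm{TEM}}{\bar{x}}
```

    where d_i is the difference between the two repeated measurements for participant i, n is the number of participants, and x̄ is the grand mean.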

  7. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    Science.gov (United States)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-François; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-07-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
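
    A common way to turn such imagery into an emission rate is to integrate a calibrated column-density image along a transect perpendicular to transport and multiply by the plume speed; the sketch below assumes this simple scheme with invented numbers, not any specific group's retrieval.

```python
import numpy as np

def so2_emission_rate(column_density, row, pixel_size_m, plume_speed_ms):
    """Integrate a calibrated SO2 column-density image [kg/m^2] along one
    image row (a transect across the plume) and multiply by the plume
    speed to get an emission rate in kg/s."""
    cross_section = column_density[row, :].sum() * pixel_size_m  # kg/m
    return cross_section * plume_speed_ms                        # kg/s

# Illustrative numbers only: 2 m pixels, 10 m/s plume speed
img = np.full((480, 640), 2.0e-3)          # kg/m^2 per pixel
rate_kg_s = so2_emission_rate(img, 240, 2.0, 10.0)
rate_t_d = rate_kg_s * 86400 / 1000.0      # convert to t/d as quoted above
```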

  9. VALIDATION OF A SINGLE CAMERA THREE-DIMENSIONAL MOTION TRACKING SYSTEM

    Science.gov (United States)

    Weinhandl, Joshua T.; Armstrong, Brian S. R.; Kusik, Todd P.; Barrows, Robb T.; O’Connor, Kristian M.

    2010-01-01

    The ability to analyze human movement is an essential tool of biomechanical analysis for both sport and clinical applications. Traditional 3D motion capture technology limits the feasibility of large-scale data collections and therefore the ability to address clinical questions. Ideally, the measurement system/protocol should be non-invasive, mobile, generate nearly instantaneous feedback to the clinician and athlete, and be relatively inexpensive. The Retro-Grate Reflector (RGR) is a new technology that allows for three-dimensional motion capture using a single camera. Previous studies have shown that orientation and position information recorded by the RGR system has high measurement precision and is strongly correlated with a traditional multi-camera system across a series of static poses. The technology has since been refined to record moving pose information from multiple RGR targets at sampling rates adequate for assessment of athletic movements. The purpose of this study was to compare motion data for a standard athletic movement recorded simultaneously with the RGR and multi-camera (Motion Analysis Eagle) systems. Nine subjects performed three single-leg land-and-cut maneuvers. Thigh and shank three-dimensional kinematics were collected with the RGR and Eagle camera systems simultaneously at 100 Hz. Results showed strong agreement between the two systems in all three planes. This demonstrates that the RGR system can record moving pose information from multiple RGR targets at a sampling rate adequate for assessment of human movement, and supports the use of RGR technology as a valid 3D motion capture system. PMID:20207358

  10. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    Science.gov (United States)

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

    A quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions: to verify and to quantify dwell position and time using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software based on a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. Verification of the system showed that the mean step-size error was 0.31±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick verification and quantification of dwell position and time with high accuracy at various dwell positions, independent of the step size.
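
    A minimal sketch of the template-matching localization described above, using OpenCV's normalized cross-correlation and the stated 30 frames per second for dwell timing; the template itself and any thresholds are left to the user.

```python
import cv2

def locate_source(frame_gray, template):
    """Find the source marker in one web-camera frame by normalized
    cross-correlation; returns the top-left corner of the best match
    and its match score."""
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)   # max score is the best match
    return loc, score

def dwell_time(first_frame_idx, last_frame_idx, fps=30.0):
    """Dwell time from the frame indices where the source is stationary,
    assuming the stated 30 frames per second."""
    return (last_frame_idx - first_frame_idx + 1) / fps
```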

  11. Determination of the detective quantum efficiency of gamma camera systems: a Monte Carlo study.

    Science.gov (United States)

    Eriksson, Ida; Starck, Sven-Ake; Båth, Magnus

    2010-01-01

    The purpose of the present work was to investigate the validity of using the Monte Carlo technique for determining the detective quantum efficiency (DQE) of a gamma camera system and to use this technique in investigating the DQE behaviour of a gamma camera system and its dependency on a number of relevant parameters. The Monte Carlo-based software SIMIND, simulating a complete gamma camera system, was used in the present study. The modulation transfer function (MTF) of the system was determined from simulated images of a point source of (99m)Tc, positioned at different depths in a water phantom. Simulations were performed using different collimators and energy windows. The MTF of the system was combined with the photon yield and the sensitivity, obtained from the simulations, to form the frequency-dependent DQE of the system. As figure-of-merit (FOM), the integral of the 2D DQE was used. The simulated DQE curves agreed well with published data. As expected, there was a strong dependency of the shape and magnitude of the DQE curve on the collimator, energy window and imaging position. The highest FOM was obtained for a lower energy threshold of 127 keV for objects close to the detector and 131 keV for objects deeper in the phantom, supporting an asymmetric window setting to reduce scatter. The Monte Carlo software SIMIND can be used to determine the DQE of a gamma camera system from a simulated point source alone. The optimal DQE results in the present study were obtained for parameter settings close to the clinically used settings.
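
    Up to normalization conventions, the combination described above can be written compactly as

```latex
\mathrm{DQE}(f) \;=\; S \, Y \, \mathrm{MTF}^{2}(f)
```

    where S is the system sensitivity, Y the photon yield (the scatter-free fraction of detected counts) and MTF(f) the modulation transfer function; this form is inferred from the description rather than quoted from the paper.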

  12. A robust method for online stereo camera self-calibration in unmanned vehicle system

    Science.gov (United States)

    Zhao, Yu; Chihara, Nobuhiro; Guo, Tao; Kimura, Nobutaka

    2014-06-01

    Self-calibration is a fundamental technology for estimating the relative pose of the cameras used for environment recognition in unmanned systems. We focused on the decrease in recognition accuracy caused by platform vibration and conducted this research to achieve online self-calibration using feature-point registration and robust estimation of the fundamental matrix. Three key factors need improvement. First, feature mismatching degrades the estimation accuracy of the relative pose. Second, conventional estimation methods cannot satisfy both estimation speed and calibration accuracy at the same time. Third, some intrinsic system noises also contribute greatly to the deviation of the estimation results. To improve calibration accuracy, estimation speed, and system robustness for practical implementation, we analyze and improve the algorithms of the stereo camera system to achieve online self-calibration. Based on epipolar geometry and 3D image parallax, two geometric constraints are proposed so that corresponding feature points are searched within a small range, improving both matching accuracy and search speed. Two conventional estimation algorithms are then analyzed and evaluated for estimation accuracy and robustness. Finally, a rigorous pose calculation method is proposed that accounts for the relative pose deviation of the separate parts of the stereo camera system. Validation experiments were performed with the stereo camera mounted on a pan-tilt unit for accurate rotation control, and the evaluation shows that the proposed online self-calibration algorithm is fast, highly accurate, and robust. As the main contribution, we propose methods for fast and accurate online self-calibration and envision the possibility of practical implementation on unmanned systems.
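
    The robust fundamental-matrix step can be sketched with OpenCV's RANSAC estimator, which rejects the feature mismatches identified above as the first problem; the correspondences here are synthetic so the snippet runs stand-alone.

```python
import cv2
import numpy as np

# pts_l, pts_r: Nx2 arrays of matched feature points from the two cameras.
# RANSAC discards outlier matches; F can then be re-estimated from inliers.
pts_l = (np.random.rand(200, 2) * 640).astype(np.float32)
pts_r = pts_l + np.float32([5.0, 0.0])   # fake matches, for a runnable demo

F, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)
inliers_l = pts_l[inlier_mask.ravel() == 1]
inliers_r = pts_r[inlier_mask.ravel() == 1]
```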

  13. Crosswatch: a Camera Phone System for Orienting Visually Impaired Pedestrians at Traffic Intersections.

    Science.gov (United States)

    Ivanchenko, Volodymyr; Coughlan, James; Shen, Huiying

    2008-07-01

    Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel "Crosswatch" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia camera phone in real time, which automatically takes a few images per second, uses the cell phone's built-in computer to analyze each image in a fraction of a second and sounds an audio tone when it detects a crosswalk. Tests with blind subjects demonstrate the feasibility of the system and its ability to provide useful crosswalk alignment information under real-world conditions.

  14. Error control in the set-up of stereo camera systems for 3d animal tracking

    Science.gov (United States)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
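
    A first-order version of the set-up dependence analyzed here: for a rectified stereo pair with baseline b, focal length f (in pixels) and disparity uncertainty δd, the depth error grows quadratically with distance,

```latex
\delta z \;\approx\; \frac{z^{2}}{f\,b}\,\delta d
```

    so doubling the baseline or the focal length halves the reconstruction error at a given distance. This standard relation is given for orientation only and is not the paper's full error model.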

  15. A rehabilitation training system with double-CCD camera and automatic spatial positioning technique

    Science.gov (United States)

    Lin, Chern-Sheng; Wei, Tzu-Chi; Lu, An-Tsung; Hung, San-Shan; Chen, Wei-Lung; Chang, Chia-Chang

    2011-03-01

    This study aimed to develop a computer game for a machine-vision-integrated rehabilitation training system. The main function of the system is to allow users to perform hand grasp-and-place movements through machine vision integration. Images are captured by a double-CCD camera and then positioned on a large screen. After defining the right, left, upper, and lower boundaries of the captured images, an automatic spatial positioning technique is employed to obtain their correlation functions, and lookup tables are defined for the cameras. This system can provide rehabilitation courses and games that allow users to exercise grasp-and-place movements in order to improve their upper limb movement control, trunk control, and balance.

  16. OBLIQUE MULTI-CAMERA SYSTEMS – ORIENTATION AND DENSE MATCHING ISSUES

    Directory of Open Access Journals (Sweden)

    E. Rupnik

    2014-03-01

    Full Text Available The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing the use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating, and administration services. The paper reports an overview of the current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  17. Development of a high resolution gamma camera system using finely grooved GAGG scintillator

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Seiichi [Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine (Japan); Kataoka, Jun; Oshima, Tsubasa [Research Institute for Science and Engineering, Waseda University (Japan); Ogata, Yoshimune [Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine (Japan); Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Hatazawa, Jun [Osaka University Graduate School of Medicine (Japan)

    2016-06-11

    High resolution gamma cameras require small-pixel scintillator blocks with high light output. However, manufacturing a scintillator block becomes difficult as the pixel size shrinks. To overcome this limitation, we developed a high resolution gamma camera system using a finely grooved Ce-doped Gd₃Al₂Ga₃O₁₂ (GAGG) plate. Our gamma camera's detector consists of a 1-mm-thick finely grooved GAGG plate optically coupled to a 1-in. position-sensitive photomultiplier tube (PSPMT). The grooved GAGG plate has 0.2×0.2 mm pixels separated by 0.05-mm-wide slits that were manufactured using a dicing saw. We used a Hamamatsu 1-in. square high-quantum-efficiency (HQE) PSPMT (R8900-100-C12). The energy resolution for Co-57 gamma photons (122 keV) was 18.5% FWHM. The intrinsic spatial resolution was estimated to be 0.7-mm FWHM. By mounting a 0.5-mm-diameter pinhole collimator on its front, we realized a high resolution, small field-of-view gamma camera. The system spatial resolution for Co-57 gamma photons was 1.0-mm FWHM, and the sensitivity was 0.0025% at 10 mm from the collimator surface. Images of a mouse administered Tc-99m HMDP showed the fine structures of the mouse body. Our high resolution small-pixel GAGG gamma camera is promising for such small-animal imaging.

  18. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Tian, Jinshou [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China)

    2015-11-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control the time delay and electric focusing, adjust the image gain, switch the sweep voltage, and acquire environmental parameters. The system consists of 16 A/D converters, 16 D/A converters, a 32-channel general purpose input/output (GPIO), and two sensors. An isolated DC/DC converter with multiple outputs and a single-mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using a graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desired data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and shows a temporal resolution of 11.25 ps, spatial distortion of less than 10%, and a dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of the multi-channel laser on the Inertial Confinement Fusion Facility.

  19. Interconnected network of cameras

    Science.gov (United States)

    Hosseini Kamal, Mahdad; Afshari, Hossein; Leblebici, Yusuf; Schmid, Alexandre; Vandergheynst, Pierre

    2013-02-01

    The real-time development of multi-camera systems is a great challenge. Synchronization and the large data rates of the cameras add to the complexity of these systems as well. The complexity of such systems also increases with the number of cameras they incorporate. The customary approach to implementing such systems is centralized: all raw streams from the cameras are first stored and then processed for the target application. An alternative approach is to embed smart cameras in these systems instead of ordinary cameras with limited or no processing capability. Smart cameras with intra- and inter-camera processing capability, programmable at the software and hardware level, offer the right platform for distributed and parallel processing in real-time multi-camera application development. Inter-camera processing requires the interconnection of smart cameras in a network arrangement. A novel hardware emulation platform is introduced for demonstrating the concept of an interconnected network of cameras. A methodology is demonstrated for the construction and analysis of the camera interconnection network. A sample application is developed and demonstrated.

  20. Portable 3D laser-camera calibration system with color fusion for SLAM

    Directory of Open Access Journals (Sweden)

    Javier Navarrete

    2013-03-01

    Full Text Available Nowadays, the use of RGB-D sensors is the focus of a lot of research in computer vision and robotics. These kinds of sensors, like Kinect, allow 3D data to be obtained together with color information. However, their working range is limited to less than 10 meters, making them useless in some robotics applications, like outdoor mapping. In these environments, 3D lasers, working in ranges of 20-80 meters, are better. But 3D lasers do not usually provide color information. A simple 2D camera can be used to provide color information to the point cloud, but a calibration process between the camera and laser must be performed. In this paper we present a portable calibration system to calibrate any traditional camera with a 3D laser in order to assign color information to the 3D points obtained. Thus, we can exploit laser precision and simultaneously make use of color information. Unlike other techniques that use a three-dimensional body of known dimensions in the calibration process, this system is highly portable because it uses small catadioptrics that can be placed in a simple manner in the environment. We use our calibration system in a 3D mapping system, including Simultaneous Localization and Mapping (SLAM), in order to get a 3D colored map which can be used in different tasks. We show that an additional problem arises: 2D camera information differs when lighting conditions change. So when we merge 3D point clouds from two different views, several points in a given neighborhood could have different color information. A new method for color fusion is presented, obtaining correct colored maps. The system will be tested by applying it to 3D reconstruction.

  1. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  2. The Calibration of High-Speed Camera Imaging System for ELMs Observation on EAST Tokamak

    Science.gov (United States)

    Fu, Chao; Zhong, Fangchuan; Hu, Liqun; Yang, Jianhua; Yang, Zhendong; Gan, Kaifu; Zhang, Bin; East Team

    2016-09-01

    A tangential fast visible camera has been set up on the EAST tokamak for the study of edge MHD instabilities such as ELMs. To determine 3-D information from CCD images, Tsai's two-stage technique was utilized to calibrate the high-speed camera imaging system for ELM studies. By using tiles of the passive stabilizers in the tokamak device as the calibration pattern, transformation parameters from a 3-D world coordinate system to a 2-D image coordinate system were obtained, including the rotation matrix, the translation vector, the focal length and the lens distortion. The calibration errors were estimated and the results indicate the reliability of the method used for the camera imaging system. Through the calibration, information about ELM filaments, such as positions and velocities, was obtained from CCD images of H-mode videos. Supported by the National Natural Science Foundation of China (No. 11275047) and the National Magnetic Confinement Fusion Science Program of China (No. 2013GB102000).
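
    Tsai's two-stage technique is usually hand-implemented; as a runnable stand-in that recovers the same kinds of parameters (rotation matrix, translation vector, focal length, distortion), the sketch below uses OpenCV's planar-target calibration with synthetic corner data in place of the stabilizer tiles.

```python
import cv2
import numpy as np

# Synthetic stand-in for the calibration pattern: a planar grid of known
# world points (z = 0) projected through an assumed "true" camera, from
# two viewpoints, so the snippet runs without real images.
grid = np.zeros((8 * 6, 3), np.float32)
grid[:, :2] = np.mgrid[0:8, 0:6].T.reshape(-1, 2) * 0.5   # 0.5 m spacing
K_true = np.array([[900., 0., 512.], [0., 900., 512.], [0., 0., 1.]])

views_obj, views_img = [], []
for rvec, tvec in [(np.float32([0.1, -0.2, 0.05]), np.float32([0., 0., 3.])),
                   (np.float32([-0.3, 0.1, 0.0]),  np.float32([0.2, 0., 2.5]))]:
    pts, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
    views_obj.append(grid)
    views_img.append(pts.reshape(-1, 1, 2).astype(np.float32))

# Recover intrinsics and extrinsics from the correspondences
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    views_obj, views_img, (1024, 1024), None, None)
R, _ = cv2.Rodrigues(rvecs[0])   # rotation matrix; translation is tvecs[0]
```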

  3. A digital underwater video camera system for aquatic research in regulated rivers

    Science.gov (United States)

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  4. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators moving behind a flying shuttlecock constitute a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by the method of stereo imaging with two high-speed cameras.
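
    The position measurement and landing prediction can be sketched as stereo triangulation followed by a polynomial fit of the trajectory; everything below (projection matrices, tracks, the 1 kHz frame rate, the drag-free fit) is illustrative rather than the authors' algorithm.

```python
import cv2
import numpy as np

# Projection matrices of the two calibrated high-speed cameras and 2xN
# pixel tracks of the shuttlecock (all values invented for the demo).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
x1 = np.random.rand(2, 10)
x2 = x1 + 0.01

Xh = cv2.triangulatePoints(P1, P2, x1, x2)   # 4xN homogeneous coordinates
X = Xh[:3] / Xh[3]                           # 3xN positions in camera frame

# Landing prediction: fit each coordinate against time and find when the
# height coordinate crosses zero (a real shuttlecock needs a drag model).
t = np.arange(X.shape[1]) / 1000.0           # assumed 1 kHz frame rate
t_land = np.roots(np.polyfit(t, X[2], 2)).real.max()
landing = [np.polyval(np.polyfit(t, X[i], 2), t_land) for i in (0, 1)]
```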

  5. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    Directory of Open Access Journals (Sweden)

    Ki-Yeong Park

    2014-01-01

    Full Text Available We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented, and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
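
    The geometry underlying such methods is the flat-road pinhole relation: once the virtual horizon row is known, the range to a vehicle follows from the camera height and focal length. A minimal sketch, with all numeric values assumed:

```python
def range_from_horizon(y_bottom_px, y_horizon_px, focal_px, cam_height_m):
    """Flat-road pinhole model: distance to a vehicle whose bottom edge is
    imaged at row y_bottom, given the (estimated) virtual horizon row."""
    dy = y_bottom_px - y_horizon_px
    if dy <= 0:
        return float('inf')   # at or above the horizon: no ground contact
    return focal_px * cam_height_m / dy

# e.g. 1200 px focal length, camera 1.3 m above the road (values assumed)
d = range_from_horizon(620.0, 400.0, 1200.0, 1.3)   # about 7.1 m
```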

  6. An Infrared Focal Plane Array Camera System for Stereo-based Radiometric Imaging

    Science.gov (United States)

    1999-01-01

    The Focal Plane Array Calibrated System (FPACS) utilizes several features to help ensure radiometric accuracy. Some features help minimize unwanted radiation... possible, and beyond that, the FPACS design ensures that the operator is made aware when operating conditions may lead to radiometric inaccuracies. Primary... components of FPACS are illustrated in Fig. 1. Components are 1) Optics, 2) FPA/Dewar, 3) Camera Electronics, 4) Pan & Tilt platform, and 5) Windows.

  7. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    Science.gov (United States)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  8. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    Science.gov (United States)

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single-leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, and these values improved further following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.

  9. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    Directory of Open Access Journals (Sweden)

    Tao Sun

    2015-03-01

    Full Text Available Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system achieves 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed system demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also believe that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of senior photogrammetric products.

  10. Design of an Optical Character Recognition System for Camera-based Handheld Devices

    Directory of Open Access Journals (Sweden)

    Ayatullah Faruk Mollah

    2011-07-01

    Full Text Available This paper presents a complete Optical Character Recognition (OCR) system for camera-captured, image/graphics-embedded textual documents on handheld devices. At first, text regions are extracted and skew-corrected. Then, these regions are binarized and segmented into lines and characters. Characters are passed into the recognition module. Experimenting with a set of 100 business card images captured by a cell phone camera, we have achieved a maximum recognition accuracy of 92.74%. Compared to Tesseract, a powerful open-source desktop OCR engine, the present recognition accuracy is a worthwhile contribution. Moreover, the developed technique is computationally efficient and consumes little memory, making it applicable to handheld devices.
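
    The binarize-then-recognize flow is easy to prototype; the sketch below pairs OpenCV adaptive thresholding with the Tesseract engine mentioned above (via the pytesseract wrapper). It is not the authors' recognizer, and the file path is illustrative.

```python
import cv2
import pytesseract

def ocr_card(path):
    """Grayscale -> adaptive binarization -> Tesseract recognition."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 10)
    return pytesseract.image_to_string(binary)

print(ocr_card("business_card.jpg"))   # illustrative input image
```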

  11. Conceptual design of a camera system for neutron imaging in low fusion power tokamaks

    Science.gov (United States)

    Xie, X.; Yuan, X.; Zhang, X.; Nocente, M.; Chen, Z.; Peng, X.; Cui, Z.; Du, T.; Hu, Z.; Li, T.; Fan, T.; Chen, J.; Li, X.; Zhang, G.; Yuan, G.; Yang, J.; Yang, Q.

    2016-02-01

    The basic principles for designing a camera system for neutron imaging in low fusion power tokamaks are illustrated for the case of the HL-2A tokamak device. HL-2A has an approximately circular cross section, with total neutron yields of about 10^12 n/s under 1 MW neutral beam injection (NBI) heating. The accuracy in determining the width of the neutron emission profile and the plasma vertical position are chosen as relevant parameters for design optimization. Typical neutron emission profiles and neutron energy spectra are calculated by the Monte Carlo method. A reference design is assumed, for which the direct and scattered neutron fluences are assessed and the neutron count profile of the neutron camera is obtained. Three other designs are presented for comparison. The reference design is found to have the best performance for assessing the width of peaked to broadened neutron emission profiles. It also performs well for the assessment of the vertical position.

  12. The electronics system for the LBNL positron emission mammography (PEM) camera

    CERN Document Server

    Moses, W W; Baker, K; Jones, W; Lenox, M; Ho, M H; Weng, M

    2001-01-01

    Describes the electronics for a high-performance positron emission mammography (PEM) camera. It is based on the electronics for a human brain positron emission tomography (PET) camera (the Siemens/CTI HRRT), modified to use a detector module that incorporates a photodiode (PD) array. An application-specific integrated circuit (ASIC) services the photodiode (PD) array, amplifying its signal and identifying the crystal of interaction. Another ASIC services the photomultiplier tube (PMT), measuring its output and providing a timing signal. Field-programmable gate arrays (FPGAs) and lookup RAMs are used to apply crystal-by-crystal correction factors and measure the energy deposit and the interaction depth (based on the PD/PMT ratio). Additional FPGAs provide event multiplexing, derandomization, coincidence detection, and real-time rebinning. Embedded PC/104 microprocessors provide communication and real-time control, and configure the system. Extensive use of FPGAs makes the overall design extremely flexible, all...

  13. The Fly's Eye Camera System -- an instrument design for large étendue time-domain survey

    CERN Document Server

    Pál, András; Csépány, Gergely; Jaskó, Attila; Schlaffer, Ferenc; Vida, Krisztián; Mező, György; Döbrentei, László; Farkas, Ernő; Kiss, Csaba; Oláh, Katalin; Regály, Zsolt

    2013-01-01

    In this paper we briefly summarize the design concepts of the Fly's Eye Camera System, a proposed high resolution all-sky monitoring device intended to perform high-cadence time-domain astronomy in multiple optical passbands while still accomplishing a high étendue. Funding has already been accepted by the Hungarian Academy of Sciences in order to design and build a Fly's Eye device unit. Beyond the technical details and the actual scientific goals, this paper also discusses the possibilities and yields of a network operation involving ~10 sites distributed geographically in a nearly homogeneous manner. Currently, we expect to finalize the mount assembly -- which performs the sidereal tracking during the exposures -- by the end of 2012 and to have a working prototype with a reduced number of individual cameras sometime in the spring or summer of 2013.

  14. The Fly's Eye Camera System -- an instrument design for large étendue time-domain survey

    CERN Document Server

    Csépány, Gergely; Vida, Krisztián; Regály, Zsolt; Mészáros, László; Oláh, Katalin; Kiss, Csaba; Döbrentei, László; Jaskó, Attila; Mező, György; Farkas, Ernő

    2014-01-01

    In this paper we briefly summarize the design concepts of the Fly's Eye Camera System, a proposed high resolution all-sky monitoring device intended to perform high-cadence time-domain astronomy in multiple optical passbands while still accomplishing a high étendue. Funding has already been accepted by the Hungarian Academy of Sciences in order to design and build a Fly's Eye device unit. Beyond the technical details and the actual scientific goals, this paper also discusses the possibilities and yields of a network operation involving ~10 sites distributed geographically in a nearly homogeneous manner. Currently, we expect to finalize the mount assembly -- which performs the sidereal tracking during the exposures -- by the end of 2012 and to have a working prototype with a reduced number of individual cameras sometime in the spring or summer of 2013.

  15. The Next Generation Microlensing Search: SuperMacho

    Energy Technology Data Exchange (ETDEWEB)

    Drake, A; Cook, K; Hiriart, R; Keller, S; Miknaitis, G; Nikolaev, S; Olsen, K; Prochter, G; Rest, A; Schmidt, B; Smith, C; Stubbs, C; Suntzeff, N; Welch, D; Becker, A; Clocchiati, A; Covarrubias, R

    2003-10-27

    Past microlensing experiments such as the MACHO project have discovered a larger than expected number of microlensing events toward the Large Magellanic Cloud (LMC). These events could represent a large fraction of the dark matter in the halo of our Galaxy, if they are indeed due to halo lenses. However, the locations of most of the lenses are poorly determined. The SuperMacho project will detect and follow up ~60 microlensing events; those exhibiting special properties due to binarity, etc., will allow us to better determine the location and nature of the lenses causing the LMC microlensing events.

  16. Experimental Characterization of Close-Emitter Interference in an Optical Camera Communication System

    Science.gov (United States)

    Chavez-Burbano, Patricia; Rabadan, Jose; Perez-Jimenez, Rafael

    2017-01-01

    Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart cities applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources on Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the transmitted wavelength. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR) has been proposed for easily determining the interference in other implementations, independently of the selected system devices. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for the 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters, in terms of distance and wavelength, can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results to real scenarios. PMID:28677613
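
    The abstract names the NPSIR but does not define it; as a rough illustration of the underlying measurement, the sketch below compares the optical power a camera collects from the desired emitter with the power leaking in from a close emitter over the same region of interest. The ROI convention and the decibel form are assumptions for illustration, not the paper's exact normalization.

    ```python
    import numpy as np

    def roi_power(frame, rows, cols):
        """Mean pixel power inside a rectangular region of interest."""
        (r0, r1), (c0, c1) = rows, cols
        return frame[r0:r1, c0:c1].astype(float).mean()

    def npsir_db(signal_frame, interference_frame, rows, cols):
        """Power ratio (dB) of the desired emitter to a close interferer,
        measured over the same ROI in two otherwise identical captures."""
        p_sig = roi_power(signal_frame, rows, cols)
        p_int = roi_power(interference_frame, rows, cols)
        return 10.0 * np.log10(p_sig / p_int)
    ```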

  17. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    Directory of Open Access Journals (Sweden)

    Tao Yang

    2016-08-01

    Full Text Available This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near infrared laser lamp based cooperative long range optical imaging module; (2) a large scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.

  18. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    Science.gov (United States)

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near infrared laser lamp based cooperative long range optical imaging module; (2) a large scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.

  19. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    Science.gov (United States)

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near infrared laser lamp based cooperative long range optical imaging module; (2) a large scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments. PMID:27589755

  20. Security Camera System can be access into mobile with internet from remote place

    Directory of Open Access Journals (Sweden)

    Dr. Khanna SamratVivekanand Omprakash

    2012-01-01

    Full Text Available This paper describes how a camera system can capture images and video into a database and then deliver them to a mobile phone over the Internet, and how mobile applications can be developed to view the data from a remote place. The storage device is assigned a real IP address by the ISP and connected to the Internet. The mobile applications are developed for Windows Mobile and run only on Windows Mobile devices. Wireless cameras, in groups of 4, 8, 12, or 16, are connected to the system, and a Windows-based desktop application is developed so that 4, 8, 12, or 16 channels can be viewed at a time on a desktop computer. The PC is connected to the Internet and runs a client-server application connected to a Windows web hosting server through the Internet. With the help of the ISP server, an IP address with a domain name is assigned to the Windows web server, so the domain can be accessed from anywhere in the world, and by developing web-based mobile applications we can access it on a mobile phone. A separate Windows executable is developed for Windows Mobile phones to access the information from the server; this client setup is installed on the mobile phone and fetches the data from the server, which has a real IP address with a domain name and is connected to the Internet. The digital wireless cameras store their data in a digital video recorder with a 1-terabyte hard disk and 4, 8, 12, or 16 channels. The video output can be viewed on a mobile phone by installing the client setup or by accessing it directly from a web browser that supports the mobile application. The strength of this software is that the security camera system can be accessed on a mobile phone over the Internet from a remote place.

  1. Infrared Camera System for Visualization of IR-Absorbing Gas Leaks

    Science.gov (United States)

    Youngquist, Robert; Immer, Christopher; Cox, Robert

    2010-01-01

    embodiment would use a ratioed output signal to better represent the gas column concentration. An alternative approach uses a simpler multiplication of the filtered signal to make the filtered signal equal to the unfiltered signal at most locations, followed by a subtraction to remove all but the wavelength-specific absorption in the unfiltered sample. This signal processing can also reveal the net difference signal representing the leaking gas absorption, and allow rapid leak location, but signal intensity would not relate solely to gas absorption, as raw signal intensity would also affect the displayed signal. A second design choice is whether to use one camera with two images closely spaced in time, or two cameras with essentially the same view and time. The figure shows the two-camera version. This choice involves many tradeoffs that are not apparent until some detailed testing is done. In short, the tradeoffs involve the temporal changes in the field picture versus the pixel sensitivity curves and frame alignment differences with two cameras, and which system would lead to the smaller variations from the uncontrolled variables.
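
    A minimal numpy sketch of the scale-and-subtract processing described above: the filtered frame is multiplied by a single least-squares gain so that it matches the unfiltered frame at most locations, and the subtraction then leaves mainly the wavelength-specific gas absorption (co-registered grayscale frames are assumed).

    ```python
    import numpy as np

    def leak_residual(unfiltered, filtered):
        """Scale the filtered frame to match the unfiltered frame overall,
        then subtract; the residual is dominated by the wavelength-specific
        absorption of the leaking gas."""
        u = unfiltered.astype(float)
        f = filtered.astype(float)
        gain = (u * f).sum() / (f * f).sum()  # least-squares scale factor
        return u - gain * f
    ```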

  2. A NEW AUTOMATIC SYSTEM CALIBRATION OF MULTI-CAMERAS AND LIDAR SENSORS

    Directory of Open Access Journals (Sweden)

    M. Hassanein

    2016-06-01

    Full Text Available In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in successfully processing the system measurements, especially given the different types of measurements provided by the LIDAR and the cameras. System calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially for disaster monitoring applications. Also, many of the present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs and standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse image-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the image-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates. This target geometry was chosen to ensure enough constraints for the registration between the 3D point clouds constructed from the two systems to converge. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated

  3. a New Automatic System Calibration of Multi-Cameras and LIDAR Sensors

    Science.gov (United States)

    Hassanein, M.; Moussa, A.; El-Sheimy, N.

    2016-06-01

    In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in successfully processing the system measurements, especially given the different types of measurements provided by the LIDAR and the cameras. System calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially for disaster monitoring applications. Also, many of the present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs and standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse image-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the image-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates. This target geometry was chosen to ensure enough constraints for the registration between the 3D point clouds constructed from the two systems to converge. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated calibration without

  4. 3D digital image correlation using single color camera pseudo-stereo system

    Science.gov (United States)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the centers of the two views coincide with the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
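
    Because the two overlapped views are separated in the color domain, extracting them is a matter of channel selection before running the standard 3D-DIC correlation. A minimal OpenCV sketch, assuming (hypothetically) that one mirror-folded view is carried by the red channel and the other by the blue:

    ```python
    import cv2

    # The two mirror-folded stereo views overlap on one color sensor; the
    # channel assignment below (red = view 1, blue = view 2) is hypothetical.
    frame = cv2.imread("pseudo_stereo_frame.png")  # OpenCV loads as B, G, R
    view_1 = frame[:, :, 2]  # red channel: first stereo view
    view_2 = frame[:, :, 0]  # blue channel: second stereo view
    # Each single-channel view now feeds the standard 3D-DIC evaluation.
    ```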

  5. Study and Monitoring of Itinerant Tourism along the Francigena Route, by Camera Trapping System

    Directory of Open Access Journals (Sweden)

    Gianluca Bambi

    2017-01-01

    Full Text Available Tourism along the Via Francigena is a growing phenomenon. It is important to develop a direct survey of the path's users (pilgrims, travelling tourists, day-trippers, etc.) able to define user profiles, the extent of the phenomenon, and its evolution over time, in order to develop possible actions to promote the socio-economic impact on the rural areas concerned. With this research, we propose the creation of a monitoring network based on a camera trapping system to estimate the number of tourists in a simple and expeditious way. Recently, camera trapping has been finding wide use in population surveys beyond its traditional faunal field. An innovative field of application is the tourist sector, where it is becoming a basis for statistical and planning analysis. To carry out a survey of the pilgrims/tourists, we applied this type of sampling method. It is an interesting method since it yields data on the type and number of users. The application of camera trapping along the Francigena provides information about user profiles, such as sex, age, average length of pilgrimage, and type of journey (on foot, on horseback, or by bike), over a continuous period covering the tourist months of 2014.

  6. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    Science.gov (United States)

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of the human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among the various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from the body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
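
    The final matching step, measuring the distance between the input and enrolled samples with features from both modalities, can be sketched as below; extract_cnn_features() is a hypothetical stand-in for the paper's CNN, and fusing the two modalities by simple feature concatenation is an assumption.

    ```python
    import numpy as np

    def extract_cnn_features(image):
        """Hypothetical stand-in for the paper's CNN: any function returning
        a fixed-length embedding would do (here, a toy pixel projection)."""
        return np.asarray(image, float).ravel()[:512]

    def match_distance(probe_vis, probe_thm, gallery_vis, gallery_thm):
        """Distance between a probe and an enrolled sample, fusing the
        visible-light and thermal body images; smaller is a better match."""
        f_probe = np.concatenate([extract_cnn_features(probe_vis),
                                  extract_cnn_features(probe_thm)])
        f_gallery = np.concatenate([extract_cnn_features(gallery_vis),
                                    extract_cnn_features(gallery_thm)])
        return np.linalg.norm(f_probe - f_gallery)
    ```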

  7. Cryogenic system for the ArTeMiS large sub millimeter camera

    Science.gov (United States)

    Ercolani, E.; Relland, J.; Clerc, L.; Duband, L.; Jourdan, T.; Talvard, M.; Le Pennec, J.; Martignac, J.; Visticot, F.

    2014-07-01

    A new photonic camera has been developed in the framework of the ArTéMis project (Bolometers architecture for large field of view ground based telescopes in the sub-millimeter). This camera scans the sky in the sub-millimeter range at simultaneously three different wavelengths, namely 200 μm, 350 μm, 450 μm, and is installed inside the APEX telescope located at 5100m above sea level in Chile. Bolometric detectors cooled to 300 mK are used in the camera, which is integrated in an original cryostat developed at the low temperature laboratory (SBT) of the INAC institut. This cryostat contains filters, optics, mirrors and detectors which have to be implemented according to mass, size and stiffness requirements. As a result the cryostat exhibits an unusual geometry. The inner structure of the cryostat is a 40 K plate which acts as an optical bench and is bound to the external vessel through two hexapods, one fixed and the other one mobile thanks to a ball bearing. Once the cryostat is cold, this characteristic enabled all the different elements to be aligned with the optical axis. The cryogenic chain is built around a pulse tube cooler (40 K and 4 K) coupled to a double stage helium sorption cooler (300 mK). The cryogenic and vacuum processes are managed by a Siemens PLC and all the data are showed and stored on a CEA SCADA system. This paper describes the mechanical and thermal design of the cryostat, its command control, and the first thermal laboratory tests. This work was carried out in collaboration with the Astrophysics laboratory SAp of the IRFU institut. SAp and SBT have installed the camera in July 2013 inside the Cassegrain cabin of APEX.

  8. Autonomous Gait Event Detection with Portable Single-Camera Gait Kinematics Analysis System

    Directory of Open Access Journals (Sweden)

    Cheng Yang

    2016-01-01

    Full Text Available Laboratory-based nonwearable motion analysis systems have significantly advanced with robust objective measurement of the limb motion, resulting in quantified, standardized, and reliable outcome measures compared with traditional, semisubjective, observational gait analysis. However, the requirement for large laboratory space and operational expertise makes these systems impractical for gait analysis at local clinics and homes. In this paper, we focus on autonomous gait event detection with our bespoke, relatively inexpensive, and portable, single-camera gait kinematics analysis system. Our proposed system includes video acquisition with camera calibration, Kalman filter + Structural-Similarity-based marker tracking, autonomous knee angle calculation, video-frame-identification-based autonomous gait event detection, and result visualization. The only operational effort required is the marker-template selection for tracking initialization, aided by an easy-to-use graphic user interface. The knee angle validation on 10 stroke patients and 5 healthy volunteers against a gold standard optical motion analysis system indicates very good agreement. The autonomous gait event detection shows high detection rates for all gait events. Experimental results demonstrate that the proposed system can automatically measure the knee angle and detect gait events with good accuracy and thus offer an alternative, cost-effective, and convenient solution for clinical gait kinematics analysis.
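
    At its core, the autonomous knee angle calculation reduces to the angle between the thigh and shank segments defined by tracked marker positions. A minimal sketch under that assumption (marker names are illustrative; in the paper the markers are first tracked with the Kalman filter plus structural-similarity step):

    ```python
    import numpy as np

    def knee_angle(hip, knee, ankle):
        """Knee angle (degrees) from 2D marker positions: the angle between
        the thigh (knee->hip) and shank (knee->ankle) vectors."""
        thigh = np.asarray(hip, float) - np.asarray(knee, float)
        shank = np.asarray(ankle, float) - np.asarray(knee, float)
        cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    # e.g. knee_angle((0, 40), (0, 0), (10, -38)) is roughly 165 degrees,
    # i.e. a nearly extended knee.
    ```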

  9. Stereo camera-based intelligent UGV system for path planning and navigation

    Science.gov (United States)

    Lee, Jung-Suk; Ko, Jung-Hwan; Chung, Dal-Do

    2006-08-01

    In this paper, a new real-time, intelligent mobile robot system for path planning and navigation using a stereo camera embedded on a pan/tilt system is proposed. In the proposed system, the face area of a moving person is detected from a sequence of stereo image pairs by using the YCbCr color model, and depth information is extracted from the disparity map obtained from the left and right images captured by the pan/tilt-controlled stereo camera system. The distance between the mobile robot system and the face of the moving person is then calculated from the detected depth information. Accordingly, based on the analysis of these data, three-dimensional objects can be detected. Finally, using these detected data, a 2-D spatial map is constructed for a visually guided robot that can plan paths, navigate around surrounding objects, and explore an indoor environment. Experiments on target tracking with 480 frames of sequential stereo images show that the error between the calculated and measured values of the relative position is very low, 1.4% on average. The proposed target tracking system also achieves a high speed of 0.04 sec/frame for target detection and 0.06 sec/frame for target tracking.
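
    The depth computation from the disparity map follows the standard rectified-stereo relation Z = f*B/d. A minimal sketch, assuming a calibrated focal length in pixels and a baseline in metres (names are illustrative, not from the paper):

    ```python
    import numpy as np

    def depth_from_disparity(disparity, focal_px, baseline_m):
        """Per-pixel depth Z = f * B / d; zero disparity maps to infinity."""
        d = np.asarray(disparity, float)
        with np.errstate(divide="ignore"):
            return np.where(d > 0, focal_px * baseline_m / d, np.inf)
    ```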

  10. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    Science.gov (United States)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.

  11. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    Directory of Open Access Journals (Sweden)

    Idowu Ayoola

    2015-09-01

    Full Text Available A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need for a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation of the estimated results to ground truth produced a variation of 3% from the mean.
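
    Once the ellipsoidal fitting has yielded the cup geometry, the volume drunk between two camera-estimated water levels in a conically shaped glass follows in closed form. A minimal sketch, assuming the radius varies linearly from base to rim (dimensions and names are illustrative):

    ```python
    import numpy as np

    def volume_to_level(h, height, r_bottom, r_top):
        """Water volume in a conical glass filled to level h; the radius is
        interpolated linearly, giving a frustum of height h."""
        r_h = r_bottom + (r_top - r_bottom) * h / height
        return np.pi * h * (r_bottom**2 + r_bottom * r_h + r_h**2) / 3.0

    def intake(level_before, level_after, height, r_bottom, r_top):
        """Volume drunk between two estimated water levels."""
        return (volume_to_level(level_before, height, r_bottom, r_top)
                - volume_to_level(level_after, height, r_bottom, r_top))
    ```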

  12. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging

    Energy Technology Data Exchange (ETDEWEB)

    Andreozzi, Jacqueline M., E-mail: Jacqueline.M.Andreozzi.th@dartmouth.edu; Glaser, Adam K. [Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755 (United States); Zhang, Rongxiao [Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire 03755 (United States); Jarvis, Lesley A.; Gladstone, David J. [Department of Medicine, Geisel School of Medicine and Norris Cotton Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire 03766 (United States); Pogue, Brian W., E-mail: Brian.W.Pogue@dartmouth.edu [Thayer School of Engineering and Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire 03755 (United States)

    2015-02-15

    Purpose: To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Methods: Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Results: Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary

  13. Multi-touch System Using Two Cameras

    Institute of Scientific and Technical Information of China (English)

    赵斌陶; 陈靖; 刘越; 王涌天

    2012-01-01

    To tackle the problems of high cost, low resolution, bulky size, and susceptibility to ambient light in multi-touch systems, a dual-camera multi-touch system is proposed which achieves human-computer interaction by simply clicking and moving the fingers on the interaction surface. The proposed system detects finger input with two cameras, obtains the positions of the touch points in the two camera views by image processing, calculates their coordinates in the screen coordinate system by combining them with calibration data, and finally obtains the multi-point data by tracking the touch points. Experimental results on a photo-browsing interactive platform show that this system can be used for multi-touch interaction with good accuracy.
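
    The screen-coordinate calculation from the two camera observations can be pictured as intersecting two rays. A minimal sketch, assuming the cameras sit at the two ends of a known baseline along one screen edge and each reports the angle between that baseline and the direction to the finger (a simplification of the paper's calibration-and-interpolation step):

    ```python
    import math

    def touch_point(alpha, beta, width):
        """Intersect the two camera rays: cameras at (0, 0) and (width, 0);
        alpha and beta are the angles (radians) each camera measures between
        the baseline and the finger direction."""
        ta, tb = math.tan(alpha), math.tan(beta)
        x = width * tb / (ta + tb)
        return x, x * ta

    # e.g. both cameras seeing the finger at 45 degrees on a 1.0 m baseline
    # gives touch_point(0.7854, 0.7854, 1.0) ~ (0.5, 0.5).
    ```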

  14. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging

    Directory of Open Access Journals (Sweden)

    J. Mejia

    2010-12-01

    Full Text Available The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target’s three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology.

  15. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mejia, J.; Galvis-Alonso, O.Y., E-mail: mejia_famerp@yahoo.com.b [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Biologia Molecular; Castro, A.A. de; Simoes, M.V. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Clinica Medica; Leite, J.P. [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). Fac. de Medicina. Dept. de Neurociencias e Ciencias do Comportamento; Braga, J. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Div. de Astrofisica

    2010-11-15

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology. (author)
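
    The maximum likelihood reconstruction mentioned in both records is commonly implemented as the multiplicative MLEM update. A minimal dense-matrix sketch, assuming a precomputed system matrix relating voxels to pinhole projections (the paper's actual matrix, geometry, and stopping rule are not reproduced here):

    ```python
    import numpy as np

    def mlem(A, projections, n_iter=20):
        """MLEM reconstruction: A is the (n_measurements, n_voxels) system
        matrix, projections the measured counts; returns voxel activities."""
        x = np.ones(A.shape[1])              # flat initial emission estimate
        sens = A.sum(axis=0)                 # per-voxel sensitivity, A^T 1
        for _ in range(n_iter):
            expected = A @ x                 # forward projection
            ratio = projections / np.maximum(expected, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x
    ```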

  16. Introduction of a Photogrammetric Camera System for RPAS with Highly Accurate GNSS/IMU Information for Standardized Workflows

    Science.gov (United States)

    Kraft, T.; Geßner, M.; Meißner, H.; Przybilla, H. J.; Gerke, M.

    2016-03-01

    In this paper we present the evaluation of DLR's modular airborne camera system MACS-Micro for remotely piloted aircraft systems (RPAS) with a maximum takeoff weight (MTOW) of less than 5 kg. The main focus is on standardized calibration and test procedures as well as on standardized photogrammetric workflows as a proof of feasibility for this aerial camera concept. The prototype consists of an industrial grade frame imaging camera and a compact GNSS/IMU solution which are operated by an embedded PC. The camera has been calibrated pre- and post-flight using a three dimensional test field. The validation of the latest prototype is done by a traditional photogrammetric evaluation of an aerial survey using 39 ground control points. The results, concerning the geometric and radiometric features of the present system concept as well as the quality of the aerotriangulation, fulfill many of the targeted key specifications.

  17. STUDY ON THE LINE SCAN CCD CAMERA CALIBRATION OF VEHICLE-BORNE 3D DATA ACQUISITION SYSTEM

    OpenAIRE

    Han, Y; Yang, B.; F. Zhang

    2012-01-01

    Based on the characteristics of the line scan CCD camera and the vehicle-borne 3D data acquisition system, this paper presents a novel method to calibrate the line scan camera (LSC) based on the laser scanner. Using the angle information in the original laser scanner data and combining it with the principle of the line scan camera, it builds a calibration model for the LSC and designs several experimental methods to implement it. Using the new model and the special experimental methods, it computed high precision ...

  18. Absolute phase-assisted three-dimensional data registration for a dual-camera structured light system.

    Science.gov (United States)

    Zhang, Song; Yau, Shing-Tung

    2008-06-10

    For a three-dimensional shape measurement system with a single projector and multiple cameras, registering patches from different cameras is crucial. Registration usually involves a complicated and time-consuming procedure. We propose a new method that can robustly match different patches via the absolute phase without significantly increasing the cost. For the y and z coordinates, the transformations from one camera to the other are approximated as third-order polynomial functions of the absolute phase. The x coordinates involve only translations and scalings. These functions are calibrated and only need to be determined once. Experiments demonstrated that the alignment error is within 0.7 mm RMS.
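
    A minimal sketch of the calibration described above, assuming matched calibration points with known absolute phase are available (numpy's polynomial fitting stands in for whatever solver the authors used):

    ```python
    import numpy as np

    def fit_phase_mapping(phi_cal, delta_cal, deg=3):
        """Calibrate once: a third-order polynomial in the absolute phase
        that maps a y (or z) coordinate from one camera's patch onto the
        other's; delta_cal is, e.g., y2 - y1 over matched points."""
        return np.polyfit(phi_cal, delta_cal, deg)

    def apply_phase_mapping(coeffs, phi, coord):
        """Register a new patch: shift each camera-1 coordinate by the
        polynomial correction evaluated at its absolute phase."""
        return coord + np.polyval(coeffs, phi)
    ```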

  19. 3D MODELLING OF AN INDOOR SPACE USING A ROTATING STEREO FRAME CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    Full Text Available Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred to indoor spaces. Constant development of technology has a significant impact on people's knowledge of services, such as location awareness services, in indoor spaces. Thus, a low-cost system is required to create 3D models of indoor spaces for services based on indoor models. In this paper, we therefore introduce a rotating stereo frame camera system that has two cameras, and we generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. The measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of the data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and to generate a 3D model using images acquired by the system. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  20. 3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System

    Science.gov (United States)

    Kang, J.; Lee, I.

    2016-06-01

    Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred to indoor spaces. Constant development of technology has a significant impact on people's knowledge of services, such as location awareness services, in indoor spaces. Thus, a low-cost system is required to create 3D models of indoor spaces for services based on indoor models. In this paper, we therefore introduce a rotating stereo frame camera system that has two cameras, and we generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. The measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of the data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and to generate a 3D model using images acquired by the system. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  1. Deployable Camera (DCAM3) System for Observation of Hayabusa2 Impact Experiment

    Science.gov (United States)

    Sawada, Hirotaka; Ogawa, Kazunori; Shirai, Kei; Kimura, Shinichi; Hiromori, Yuichi; Mimasu, Yuya

    2017-07-01

    The asteroid exploration probe "Hayabusa2", developed by the Japan Aerospace Exploration Agency (JAXA), was launched on December 3rd, 2014 to carry out complicated and accurate operations during the mission phase around the C-type asteroid 162173 Ryugu (1999 JU3) (Tsuda et al. in Acta Astron. 91:356-362, 2013). An impact experiment on the surface of the asteroid will be conducted using the Small Carry-on Impactor (SCI) system, which will be the world's first artificial crater creation experiment on an asteroid (Saiki et al. in Proc. International Astronautical Congress, IAC-12.A3.4.8, 2012, Acta Astron. 84:227-236, 2013a; Proc. International Symposium on Space Technology and Science, 2013b). We developed a new micro Deployable CAMera (DCAM3) system for remote observation of the impact phenomenon, applying our conventional DCAM technology, which is one of the smallest probes flown in space missions and achieved great success in the past Japanese IKAROS (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) mission. DCAM3 is a miniaturized separable unit that contains two cameras and radio communication devices for transmitting image data to the mothership "Hayabusa2"; it observes the impact experiment from an unsafe region where "Hayabusa2" cannot stay because of the risk of being hit by exploding and impacting debris. In this paper, we report the details of the DCAM3 system and the development results, as well as our mission plan for the DCAM3 observation during the SCI experiment.

  2. KASINICS: Near Infrared Camera System for the BOAO 1.8m Telescope

    Science.gov (United States)

    Moon, Bongkon; Jin, Ho; Yuk, In-Soo; Lee, Sungho; Nam, Uk-Won; Cha, Sang-Mok; Cho, Seoung-Hyun; Kyeong, Jae-Mann; Park, Youngsik; Mock, Seungwon; Han, Jeong-Yeol; Lee, Dea-Hee; Park, Jang-Hyun; Han, Wonyong; Pak, Soojong; Kim, Geon-Hee; Kim, Yong Ha

    2008-08-01

    We developed the Korea Astronomy and Space science Institute (KASI) Near Infrared Camera System (KASINICS), to be installed on the 1.8 m telescope of the Bohyunsan Optical Astronomy Observatory (BOAO) in Korea. We use a 512 × 512 InSb array (Aladdin III Quadrant, Raytheon Co.) to enable L-band observations as well as J, H, and KS bands. The field-of-view of the array is 3.3' × 3.3' with a resolution of 0.39''/pixel. We adopt an Offner relay optical system, which provides a cold stop to eliminate thermal background emission from the telescope structures. Most parts of the camera, including the mirrors, were manufactured from the same ingot of aluminum alloy to ensure a homologous contraction from room temperature to 80 K. We also developed a readout electronics system for the array detector. Based on preliminary results from test observations, the limiting magnitudes are J = 17.6, H = 17.5, KS = 16.1, and L(narrow) = 10.0 mag at a signal-to-noise ratio of 10 for an integration time of 100 s.

  3. Deployable Camera (DCAM3) System for Observation of Hayabusa2 Impact Experiment

    Science.gov (United States)

    Sawada, Hirotaka; Ogawa, Kazunori; Shirai, Kei; Kimura, Shinichi; Hiromori, Yuichi; Mimasu, Yuya

    2017-02-01

    The asteroid exploration probe "Hayabusa2", developed by the Japan Aerospace Exploration Agency (JAXA), was launched on December 3rd, 2014 to carry out complicated and accurate operations during the mission phase around the C-type asteroid 162173 Ryugu (1999 JU3) (Tsuda et al. in Acta Astron. 91:356-362, 2013). An impact experiment on the surface of the asteroid will be conducted using the Small Carry-on Impactor (SCI) system, which will be the world's first artificial crater creation experiment on an asteroid (Saiki et al. in Proc. International Astronautical Congress, IAC-12.A3.4.8, 2012, Acta Astron. 84:227-236, 2013a; Proc. International Symposium on Space Technology and Science, 2013b). We developed a new micro Deployable CAMera (DCAM3) system for remote observation of the impact phenomenon, applying our conventional DCAM technology, which is one of the smallest probes flown in space missions and achieved great success in the past Japanese IKAROS (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) mission. DCAM3 is a miniaturized separable unit that contains two cameras and radio communication devices for transmitting image data to the mothership "Hayabusa2"; it observes the impact experiment from an unsafe region where "Hayabusa2" cannot stay because of the risk of being hit by exploding and impacting debris. In this paper, we report the details of the DCAM3 system and the development results, as well as our mission plan for the DCAM3 observation during the SCI experiment.

  4. Unmanned Aerial Vehicle (UAV) operated spectral camera system for forest and agriculture applications

    Science.gov (United States)

    Saari, Heikki; Pellikka, Ismo; Pesonen, Liisa; Tuominen, Sakari; Heikkilä, Jan; Holmlund, Christer; Mäkynen, Jussi; Ojala, Kai; Antila, Tapani

    2011-11-01

    VTT Technical Research Centre of Finland has developed a Fabry-Perot interferometer (FPI) based hyperspectral imager compatible with light-weight UAV platforms. The concept of the hyperspectral imager has been published in SPIE Proc. 7474 and 7668. In forest and agriculture applications the recording of multispectral images at a few wavelength bands is in most cases adequate. The possibility of calculating a digital elevation model of the forest area and crop fields provides a means to estimate the biomass and perform forest inventory. The full UAS multispectral imaging system will consist of a high resolution false color imager and an FPI-based hyperspectral imager which can be used at resolutions from VGA (480 x 640 pixels) up to 5 Mpix in the wavelength range 500-900 nm, at user-selectable spectral resolutions in the range 10...40 nm @ FWHM. The resolution is determined by the order at which the Fabry-Perot interferometer is used. The overlap between successive images of the false color camera is 70...80%, which makes it possible to calculate the digital elevation model of the target area. The field of view of the false color camera is typically 80 degrees, and the ground pixel size at 150 m flying altitude is around 5 cm. The field of view of the hyperspectral imager is presently 26 x 36 degrees, and the ground pixel size at 150 m flying altitude is around 3.5 cm. The UAS system was tried in summer 2011 in Southern Finland over forest and agricultural areas. During the first test campaigns the false color camera and the hyperspectral imager were flown over the target areas on separate flights. The design and calibration of the hyperspectral imager are briefly explained. The test flight campaigns on forest and crop fields and their preliminary results are also presented in this paper.
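
    The remark that the resolution is determined by the order at which the Fabry-Perot interferometer is used follows from the resonance condition 2d = m*lambda for an air-gap etalon at normal incidence. A small sketch of which transmission peaks land in the imager's 500-900 nm range for a given mirror gap (idealized: unit refractive index, normal incidence):

    ```python
    def fpi_passbands(gap_nm, orders=range(2, 9), band=(500.0, 900.0)):
        """Transmission peaks of an idealized air-gap Fabry-Perot etalon:
        2*d = m*lambda, hence lambda_m = 2*d/m for integer order m."""
        return [2.0 * gap_nm / m for m in orders
                if band[0] <= 2.0 * gap_nm / m <= band[1]]

    # e.g. fpi_passbands(1500) -> [750.0, 600.0, 500.0] (orders m = 4, 5, 6);
    # the operating order chosen sets the spectral sampling, as noted above.
    ```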

  5. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time-stamped, allowing comparison of events between the cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom version 9.1 high-speed camera, and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations; differences between the solutions were not greater than 1.8 km.

  6. JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.

    Science.gov (United States)

    Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun

    2017-03-01

    Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.

  7. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is carried out using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network.

  8. Single camera imaging system for color and near-infrared fluorescence image guided surgery.

    Science.gov (United States)

    Chen, Zhenyue; Zhu, Nan; Pacheco, Shaun; Wang, Xia; Liang, Rongguang

    2014-08-01

    Near-infrared (NIR) fluorescence imaging systems have been developed for image guided surgery in recent years. However, current systems are typically bulky and work only when the surgical light in the operating room (OR) is off. We propose a single camera imaging system that is capable of capturing NIR fluorescence and color images under normal surgical lighting illumination. Using a new RGB-NIR sensor and synchronized NIR excitation illumination, we have demonstrated that the system can acquire both color information and fluorescence signal with high sensitivity under normal surgical lighting illumination. The experimental results show that an ICG sample with a concentration of 0.13 μM can be detected when the excitation irradiance is 3.92 mW/cm² at an exposure time of 10 ms.

  9. Development of a Diver-Operated Single Camera Volumetric Velocimetry System

    Science.gov (United States)

    Troutman, Valerie; Dabiri, John

    2016-11-01

    The capabilities of a single-camera volumetric velocimetry system for in situ measurement in marine environments are demonstrated by imaging a well-characterized flow in a laboratory environment. This work represents the first stage in the design of a SCUBA-diver-operated system to study organisms and biological processes under natural light in the water column. The system is primarily composed of a volumetric particle tracking diagnostic to investigate fluid-animal interactions. A target domain the size of a cube with 20 cm sides is sought as a key design feature, enabling capture of the flow around a variety of benthic and freely swimming organisms. The integration of the particle tracking system with additional diagnostics will be discussed.

  10. Design of EMCCD Driving System for Underwater Low-Light Camera

    Directory of Open Access Journals (Sweden)

    Hou Yuchen

    2016-01-01

    Full Text Available For the EMCCD at the core of an underwater low-light camera, a complete driving-system solution is presented in this paper. The timing signals which meet the driving requirements of the EMCCD are produced by an FPGA. High-speed integrated driver chips convert the timing signals into general power driving signals with amplitudes under 12 V, while a Class A push-pull amplifier circuit built from discrete components transforms the timing signals into high-voltage multiplying signals. In addition, impedance matching optimizes the driving system. The experimental results indicate that the driving system generates the high-voltage multiplying signals with their high level adjustable from 30 V to 48 V and frequency up to 10 MHz. The whole driving system satisfies the requirements of the EMCCD and ensures its normal operation.

  11. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    Science.gov (United States)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before operating a stereo vision system, one needs to calibrate the intrinsic parameters of each camera and the external parameters of the system. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressures in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated via a number of randomly matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the distribution of features in actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation
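
    Steps (i)-(iii) map directly onto standard epipolar-geometry routines. A minimal OpenCV sketch, using plain RANSAC in place of the authors' regionally weighted normalization (which is the paper's actual contribution and is not reproduced here):

    ```python
    import cv2

    def relative_pose(pts1, pts2, K1, K2):
        """(i) fundamental matrix from matched points; (ii) essential matrix
        E = K2^T F K1; (iii) rotation R and translation t (up to scale) by
        decomposing E. pts1/pts2 are Nx2 float arrays of correspondences."""
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        E = K2.T @ F @ K1
        # recoverPose takes one intrinsic matrix; adequate when K1 ~ K2.
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K1)
        return R, t
    ```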

  12. Dual-camera system for high-speed imaging in particle image velocimetry

    CERN Document Server

    Hashimoto, K; Hara, T; Onogi, S; Mouri, H

    2012-01-01

    Particle image velocimetry is an important technique in experimental fluid mechanics, for which it has been essential to use a specialized high-speed camera. However, the high speed is at the expense of other performances of the camera, i.e., sensitivity and image resolution. Here, we demonstrate that the high-speed imaging is also possible with a pair of still cameras.

  13. Motion Detection Notification System by Short Messaging Service Using Network Camera and Global System for Mobile Modem

    CERN Document Server

    Mohd, Mohd Norzali Haji; Ariffin, Siti Khairulnisa

    2010-01-01

    As technology rapidly advances, the trend is clear that the use of mobile devices is gaining attention, so designing a system that integrates a notification feature is becoming an important aspect of tracking and monitoring systems. Conventional security surveillance systems require constant attention from the user to monitor the location. Leveraging low-cost computing power, advances in mobile phone technology, and the widespread acceptance of the Internet as a viable communication medium, this paper aims to design a low-cost web-based system as a platform for viewing captured images. When the network camera detects any movement from intruders, it automatically captures the image and sends it over the network to the web system's database via the File Transfer Protocol (FTP). The camera is attached through an Ethernet connection and a power source. Therefore, the camera can be viewed from either a standard Web browser or a cell phone. Nowadays, w...
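
    As a loose illustration of the upload path described above, a minimal Python sketch using only the standard ftplib module; the host, credentials and filename handling are placeholders, not details from the paper.

        from ftplib import FTP
        import os

        def upload_capture(image_path, host="ftp.example.com",
                           user="camera", password="secret"):
            """Send a motion-triggered snapshot to the web server via FTP."""
            with FTP(host) as ftp:
                ftp.login(user, password)
                with open(image_path, "rb") as f:
                    # Store under the bare filename in the server's current directory.
                    ftp.storbinary("STOR " + os.path.basename(image_path), f)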

  14. Evaluation of range parameters of the cameras for security system protecting the selected critical infrastructure of seaport

    Science.gov (United States)

    Kastek, Mariusz; Barela, Jaroslaw; Zyczkowski, Marek; Dulski, Rafal; Trzaskawka, Piotr; Firmanty, Krzysztof; Kucharz, Juliusz

    2012-10-01

    There are many separate infrastructural objects within a harbor area that may be considered "critical", such as gas and oil terminals or anchored naval vessels. Those objects require special protection, including security systems capable of monitoring both surface and underwater areas, because an intrusion into the protected area may be attempted using small surface vehicles (boats, kayaks, rafts, floating devices with weapons and explosives) as well as underwater ones (manned or unmanned submarines, scuba divers). The cameras used in security systems operate in several different spectral ranges in order to improve the probability of detecting incoming objects (potential threats). The cameras should therefore have adequate range parameters for detection, recognition, and identification, and those parameters, both measured and obtained through numerical simulations, are presented in the paper. The range parameters of thermal cameras were calculated using the NVTherm software package. Parameters of four observation thermal cameras were also measured on a specialized test stand at the Institute of Optoelectronics, MUT, which also makes it possible to test visual cameras. The parameters of five observation cameras working in the visual range were measured, and on the basis of those data the detection, recognition, and identification ranges were determined. The measurement results and simulation data are compared. The evaluation of the range parameters obtained for the tested camera types will define their usability in a real security system for the protection of selected critical infrastructure of a seaport. The detection of small surface objects (such as RIB boats) by the camera system and real test results in various weather conditions will also be presented.

  15. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    Science.gov (United States)

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view during the second pass through the four maneuvers were coded, along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which

  16. Design study for a 16x zoom lens system for visible surveillance camera

    Science.gov (United States)

    Vella, Anthony; Li, Heng; Zhao, Yang; Trumper, Isaac; Gandara-Montano, Gustavo A.; Xu, Di; Nikolov, Daniel K.; Chen, Changchen; Brown, Nicolas S.; Guevara-Torres, Andres; Jung, Hae Won; Reimers, Jacob; Bentley, Julie

    2015-09-01

    High zoom ratio zoom lenses have extensive applications in broadcasting, cinema, and surveillance. Here, we present a design study on a 16x zoom lens with 4 groups (including two internal moving groups), designed for, but not limited to, a visible spectrum surveillance camera. Fifteen different solutions were discovered with nearly diffraction limited performance, using PNPX or PNNP design forms with the stop located in either the third or fourth group. Some interesting patterns and trends in the summarized results include the following: (a) in designs with such a large zoom ratio, the potential of locating the aperture stop in the front half of the system is limited, with ray height variations through zoom necessitating a very large lens diameter; (b) in many cases, the lens zoom motion has significant freedom to vary due to near zero total power in the middle two groups; and (c) we discuss the trade-offs between zoom configuration, stop location, packaging factors, and zoom group aberration sensitivity.

  17. First results from the permanent SO2 Camera system at Stromboli

    Science.gov (United States)

    Salerno, Giuseppe G.; Burton, Mike; Caltabiano, Tommaso; D'Auria, Luca; Maugeri, Roberto; Mure, Filippo

    2015-04-01

    Since the 1980s, volcano monitoring has undergone profound changes, evolving from sparse, descriptive observations to a systematic, quantitative approach of science and technology. Surveillance of gas composition and emission rates is a vital part of observatory efforts to interpret volcanic activity, since changes in degassing are closely linked with swings in seismicity and deformation. Within this rapid technological progression, volcanic gas sensing has likewise undergone a revolution, for example by increasing the observation frequency of SO2 flux from a few samples per day to the Hz range. In May 2013, a permanent robotic SO2 dual-camera system was installed by the Istituto Nazionale di Geofisica e Vulcanologia at Stromboli as part of the FLAME ultraviolet scanning spectrometer network, with the intent of underpinning geochemical surveillance and shedding light on degassing and volcanic processes. Here, we present the first results of SO2 flux observed by the permanent SO2 camera system in the period between May 2013 and April 2015. Results are corroborated by the well-established FLAME ultraviolet scanning network and also compared with VLP signals from the seismic network.

  18. A new track inspection car based on a laser camera system

    Institute of Scientific and Technical Information of China (English)

    Shengwei Ren; Shiping Gu; Guiyang Xu; Zhan Gao; Qibo Feng

    2011-01-01

    We develop and build a new type of inspection car. A beam that is not rigidly connected to the train axle boxes and can absorb the vibration and impact caused by the high-speed train is used, and a laser-camera measurement system based on the machine vision method is adopted. This method projects structured light onto the track and measures gauge and longitudinal irregularity. The measurement principle and model are discussed. Through numerous practical experiments, the rebuilt car is found to largely eliminate the measurement errors caused by vibration and impact, thereby increasing measurement stability at high speeds. This new kind of inspection car has been used by several Chinese railway administration bureaus.

  19. Airborne Camera System for Real-Time Applications - Support of a National Civil Protection Exercise

    Science.gov (United States)

    Gstaiger, V.; Romer, H.; Rosenbaum, D.; Henkel, F.

    2015-04-01

    In the VABENE++ project of the German Aerospace Center (DLR), powerful tools are being developed to aid public authorities and organizations with security responsibilities, as well as traffic authorities, when dealing with disasters and large public events. One focus lies on the acquisition of high-resolution aerial imagery, its fully automatic processing and analysis, and its near real-time provision to decision makers in emergency situations. For this purpose a camera system was developed to be operated from a helicopter, with light-weight processing units and a microwave link for fast data transfer. In order to meet end-users' requirements, DLR works closely with the German Federal Office of Civil Protection and Disaster Assistance (BBK) within this project. One task of BBK is to establish, maintain and train the German Medical Task Force (MTF), which is deployed nationwide in case of large-scale disasters. In October 2014, several units of the MTF were deployed for the first time in the framework of a national civil protection exercise in Brandenburg. The VABENE++ team joined the exercise and provided near real-time aerial imagery, videos and derived traffic information to support the direction of the MTF and to identify needs for further improvements and developments. In this contribution the authors introduce the new airborne camera system together with its near real-time processing components and share experiences gained during the national civil protection exercise.

  20. Parkinson's disease assessment based on gait analysis using an innovative RGB-D camera system.

    Science.gov (United States)

    Rocha, Ana Patrícia; Choupina, Hugo; Fernandes, José Maria; Rosas, Maria José; Vaz, Rui; Silva Cunha, João Paulo

    2014-01-01

    Movement-related diseases, such as Parkinson's disease (PD), progressively affect motor function, often leading to severe motor impairment and a dramatic loss in the patients' quality of life. Human motion analysis techniques can be very useful for supporting the clinical assessment of this type of disease. In this contribution, we present an RGB-D camera (Microsoft Kinect) system and its evaluation for PD assessment. Based on skeleton data extracted from the gait of three PD patients treated with deep brain stimulation and three control subjects, several gait parameters were computed and analyzed, with the aim of discriminating between non-PD and PD subjects, as well as between two PD states (stimulator ON and OFF). We verified that, among the several quantitative gait parameters, the variance of the center shoulder velocity presented the highest discriminative power to distinguish between non-PD, PD ON, and PD OFF states (p = 0.004). Furthermore, we have shown that our low-cost portable system can be easily mounted in any hospital environment for evaluating patients' gait. These results demonstrate the potential of using an RGB-D camera as a PD assessment tool.
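
    As a hedged sketch of the highlighted gait feature, the variance of the center-shoulder speed could be computed from Kinect skeleton samples roughly as follows; positions is an assumed (N, 3) array of that joint's coordinates in meters, and the authors' exact preprocessing is not described.

        import numpy as np

        def center_shoulder_velocity_variance(positions, fps=30.0):
            """Variance of the center-shoulder joint speed over a gait recording."""
            velocities = np.diff(positions, axis=0) * fps   # m/s between frames
            speeds = np.linalg.norm(velocities, axis=1)     # scalar speed per step
            return np.var(speeds)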

  1. Handbook of camera monitor systems the automotive mirror-replacement technology based on ISO 16505

    CERN Document Server

    2016-01-01

    This handbook offers a comprehensive overview of Camera Monitor Systems (CMS), ranging from the ISO 16505-based development aspects to practical realization concepts. It offers readers a wide-ranging discussion of the science and technology of CMS as well as the human-interface factors of such systems. In addition, it serves as a single reference source with contributions from leading international CMS professionals and academic researchers. In combination with the latest version of UN Regulation No. 46, the normative framework of ISO 16505 permits CMS to replace mandatory rearview mirrors in series production vehicles. The handbook includes scientific and technical background information to further readers’ understanding of both of these regulatory and normative texts. It is a key reference in the field of automotive CMS for system designers, members of standardization and regulation committees, engineers, students and researchers.

  2. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    Science.gov (United States)

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-08-25

    Manual inspection for surface defect detection and dimension measurement of automotive bevel gears is costly, inefficient, slow, and inaccurate. In order to solve these problems, an integrated bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms, named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM), and Fast Rotation-Position (FRP), are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity, or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.

  3. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    Science.gov (United States)

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-01-01

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is complex, especially for micro and small aerial vehicles, owing to their Size, Weight and Power (SWaP) constraints. Lightweight sensors (e.g., a digital camera) can therefore be the best choice compared with other sensors such as laser or radar. For real-time applications, different works rely on stereo cameras in order to obtain a 3D model of the obstacles or to estimate their depth. Instead, in this paper, a method is proposed that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that are likely to approach the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the obtained results show its accuracy compared with other related works. PMID:28481277

  5. Measurement of Separated Flow Structures Using a Multiple-Camera DPIV System. [conducted in the Langley Subsonic Basic Research Tunnel

    Science.gov (United States)

    Humphreys, William M., Jr.; Bartram, Scott M.

    2001-01-01

    A novel multiple-camera system for the recording of digital particle image velocimetry (DPIV) images acquired in a two-dimensional separating/reattaching flow is described. The measurements were performed in the NASA Langley Subsonic Basic Research Tunnel as part of an overall series of experiments involving the simultaneous acquisition of dynamic surface pressures and off-body velocities. The DPIV system utilized two frequency-doubled Nd:YAG lasers to generate two coplanar, orthogonally polarized light sheets directed upstream along the horizontal centerline of the test model. A recording system containing two pairs of matched high resolution, 8-bit cameras was used to separate and capture images of illuminated tracer particles embedded in the flow field. Background image subtraction was used to reduce undesirable flare light emanating from the surface of the model, and custom pixel alignment algorithms were employed to provide accurate registration among the various cameras. Spatial cross correlation analysis with median filter validation was used to determine the instantaneous velocity structure in the separating/reattaching flow region illuminated by the laser light sheets. In operation the DPIV system exhibited a good ability to resolve large-scale separated flow structures with acceptable accuracy over the extended field of view of the cameras. The recording system design provided enhanced performance versus traditional DPIV systems by allowing a variety of standard and non-standard cameras to be easily incorporated into the system.
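
    A minimal sketch of the spatial cross-correlation step for one pair of interrogation windows (NumPy/SciPy assumed; the median-filter validation stage is omitted):

        import numpy as np
        from scipy.signal import fftconvolve

        def piv_displacement(a, b):
            """Displacement of window b relative to window a (same-shape windows)."""
            a = a - a.mean()
            b = b - b.mean()
            # Cross-correlation via FFT convolution with a flipped kernel.
            corr = fftconvolve(b, a[::-1, ::-1], mode="full")
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Zero displacement corresponds to the peak at (H - 1, W - 1).
            dy = peak[0] - (a.shape[0] - 1)
            dx = peak[1] - (a.shape[1] - 1)
            return dx, dy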

  6. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. This kind of instrument should also be automated and robust, since it may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as at most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover, and atmospheric visibility that ensure the safety of pilots and planes. Although instruments to measure those parameters are available on the market, their relatively high cost makes them unavailable to many local aerodromes. In this work we present a new prototype, recently developed and deployed at a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height, and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
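
    For an idealized geometry with parallel, upward-pointing optical axes, the parallax principle behind the height measurement reduces to a single relation; the sketch and numbers below are illustrative, not the prototype's actual parameters.

        def cloud_base_height(baseline_m, focal_px, disparity_px):
            """Height of a cloud feature from its pixel parallax between two images."""
            return baseline_m * focal_px / disparity_px

        # e.g. a 50 m baseline, 1500 px focal length and a 75 px disparity
        # give a cloud base height of 50 * 1500 / 75 = 1000 m.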

  7. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    Energy Technology Data Exchange (ETDEWEB)

    Saotome, Naoya, E-mail: naosao@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji [Department of Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan)

    2016-04-15

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed in a black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference of Gaussian (DOG) method and the 80% distal dose point of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and a DOG method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of the same energy measurement is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.
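
    A hedged sketch of the DOG edge-detection idea applied to a depth-light profile (SciPy assumed; the smoothing widths below are illustrative, not the authors' values):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def detect_range(profile, sigma_narrow=2.0, sigma_wide=4.0, mm_per_px=0.2):
            """Locate the distal edge of a 1-D scintillator light profile in mm."""
            dog = (gaussian_filter1d(profile, sigma_narrow)
                   - gaussian_filter1d(profile, sigma_wide))
            # The sharp distal falloff produces the strongest DOG response.
            edge_px = int(np.argmax(np.abs(dog)))
            return edge_px * mm_per_px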

  8. Development of a high-speed CT imaging system using EMCCD camera

    Science.gov (United States)

    Thacker, Samta C.; Yang, Kai; Packard, Nathan; Gaysinskiy, Valeriy; Burkett, George; Miller, Stuart; Boone, John M.; Nagarkar, Vivek

    2009-02-01

    The limitations of current CCD-based microCT X-ray imaging systems arise from two important factors. First, readout speeds are curtailed in order to minimize system read noise, which increases significantly with increasing readout rates. Second, the afterglow associated with commercial scintillator films can introduce image lag, leading to substantial artifacts in reconstructed images, especially when the detector is operated at several hundred frames/second (fps). For high speed imaging systems, high-speed readout electronics and fast scintillator films are required. This paper presents an approach to developing a high-speed CT detector based on a novel, back-thinned electron-multiplying CCD (EMCCD) coupled to various bright, high resolution, low afterglow films. The EMCCD camera, when operated in its binned mode, is capable of acquiring data at up to 300 fps with reduced imaging area. CsI:Tl,Eu and ZnSe:Te films, recently fabricated at RMD, apart from being bright, showed very good afterglow properties, favorable for high-speed imaging. Since ZnSe:Te films were brighter than CsI:Tl,Eu films, for preliminary experiments a ZnSe:Te film was coupled to an EMCCD camera at UC Davis Medical Center. A high-throughput tungsten anode X-ray generator was used, as the X-ray fluence from a mini- or micro-focus source would be insufficient to achieve high-speed imaging. A euthanized mouse held in a glass tube was rotated 360 degrees in less than 3 seconds, while radiographic images were recorded at various readout rates (up to 300 fps); images were reconstructed using a conventional Feldkamp cone-beam reconstruction algorithm. We have found that this system allows volumetric CT imaging of small animals in approximately two seconds at ~110 to 190 μm resolution, compared to several minutes at 160 μm resolution needed for the best current systems.

  9. On the Reported Death of the MACHO Era

    CERN Document Server

    Quinn, D P; Irwin, M J; Marshall, J; Koch, A; Belokurov, V

    2009-01-01

    We present radial velocity measurements of four wide halo binary candidates from the sample in Chaname & Gould (2004; CG04) which, to date, is the only sample containing a large number of such candidates. The four candidates that we have observed have projected separations >0.1 pc, and include the two widest binaries from the sample, with separations of 0.45 and 1.1 pc. We confirm that three of the four CG04 candidates are genuine, including the one with the largest separation. The fourth candidate, however, is spurious at the 5-sigma level. In the light of these measurements we re-examine the implications for MACHO models of the Galactic halo. Our analysis casts doubt on what MACHO constraints can be drawn from the existing sample of wide halo binaries.

  10. A novel virtual four-ocular stereo vision system based on single camera for measuring insect motion parameters

    Institute of Scientific and Technical Information of China (English)

    Ying Wang; Guangjun Zhang; Dazhi Chen

    2005-01-01

    A novel virtual four-ocular stereo measurement system based on a single high-speed camera is proposed for measuring the double beating wings of a high-speed flapping insect. The principle of the virtual monocular system, consisting of a few planar mirrors and a single high-speed camera, is introduced. The stereo vision measurement principle based on optical triangulation is explained. The wing kinematics parameters are measured. Results show that this virtual stereo system not only dramatically reduces system cost but is also effective for insect motion measurement.

  11. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Hyungjin Kim

    2015-08-01

    Full Text Available Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
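
    A minimal sketch of the initial-pose step with OpenCV's conventional PnP solver (the map points, image correspondences and intrinsics K are assumed inputs; the Mahalanobis-distance refinement on the probabilistic map is not reproduced):

        import cv2
        import numpy as np

        def initial_pose(map_points_3d, image_points_2d, K):
            """Estimate the camera pose from 3D-to-2D matches against the map."""
            ok, rvec, tvec = cv2.solvePnP(
                map_points_3d.astype(np.float64),
                image_points_2d.astype(np.float64),
                K, distCoeffs=None)
            R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
            return ok, R, tvec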

  12. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera

    Science.gov (United States)

    Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.

    2011-12-01

    This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot mounted low cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low cost range camera, which was originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information as provided from the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector-conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the used low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.

  13. IVOA Compliant Services for the MACHO Data Archive

    CERN Document Server

    Smillie, Jonathan G

    2009-01-01

    The MACHO Project generated two-colour photometric lightcurves for 73 million stars in the LMC, SMC, and the Galactic bulge during its 8 years of observing. This photometry, along with all images from the over 100 thousand observations from which it was derived, and an associated catalogue of 21 thousand LMC variable stars, is now available via web-services which comply with standards defined by the International Virtual Observatory Alliance (IVOA).

  14. Not enough stellar Mass Machos in the Galactic Halo

    CERN Document Server

    Lasserre, T; Albert, J N; Andersen, J; Ansari, R; Aubourg, E; Bareyre, P; Bauer, F; Beaulieu, J P; Blanc, G; Bouquet, A; Char, S; Charlot, X; Couchot, F; Coutures, C; Derue, F; Ferlet, R; Glicenstein, J F; Goldman, B; Gould, A; Graff, D; Gros, M H; Haïssinski, J; Hamilton, J C; Hardin, D; De Kat, J; Kim, A; Lesquoy, E; Loup, C; Magneville, C; Mansoux, B; Marquette, J B; Maurice, E; Milshtein, A I; Moniez, M; Palanque-Delabrouille, Nathalie; Perdereau, O; Prévôt, L; Regnault, N; Rich, J; Spiro, Michel; Vidal-Madjar, A; Vigroux, L; Zylberajch, S

    2000-01-01

    We combine new results from the search for microlensing towards the Large Magellanic Cloud (LMC) by EROS2 (Experience de Recherche d'Objets Sombres) with limits previously reported by EROS1 and EROS2 towards both Magellanic Clouds. The derived upper limit on the abundance of stellar mass MACHOs rules out such objects as an important component of the Galactic halo if their mass is smaller than 1 solar mass.

  15. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.

    Science.gov (United States)

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung

    2016-10-25

    Wilderness search and rescue entails performing a wide range of work in complex environments over large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

  16. Automated Degradation Diagnosis in Character Recognition System Subject to Camera Vibration

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2014-01-01

    Full Text Available Degradation diagnosis plays an important role in degraded character processing, as it indicates the recognition difficulty of a given degraded character. In this paper, we present a framework for an automated degraded character recognition system based on a statistical syntactic approach using 3D primitive symbols, integrated with degradation diagnosis to provide accurate and reliable recognition results. Our contribution is to design the framework so as to build character recognition submodels corresponding to degradation caused by camera vibration or defocus. In each character recognition submodel, the statistical syntactic approach using 3D primitive symbols is proposed to improve degraded character recognition performance. Experiments on the degraded character dataset show promising results, highlighting the system's efficiency and the recognition performance of the statistical syntactic approach using 3D primitive symbols.

  17. Development of engineering model of medium-sized aperture camera system

    Science.gov (United States)

    Kim, Ee-Eul; Choi, Young-Wan; Soon Yang, Ho; Kang, Myung-Seok; Jeong, Seong-Keun; Yang, Seung-Uk; Kim, Jong-Un; Rasheed, Ad. Aziz Ad.; Nasir, Hafizah Md.; Rosdi, Md. Rushdan Md.; Hai, Asma Hani Ad.; Ismail, Ismahadi; Sabirin Arshad, Ahmad

    2005-01-01

    SaTReC i and ATSB are developing a medium-sized aperture camera (MAC) system for earth observation. Following the first model, development of the engineering model (EM) has been completed. The optical subsystem follows the conventional approach of using low-expansion optical and structural materials. It is a 300-mm on-axis system with two aspheric mirrors and two field correction lenses. It has five linear detectors aligned on its focal plane together with proximity electronics. The electronics subsystem consists of five modules: two for management and control in cold redundancy, two for image data storage, and one for power supply. The EM was developed with a storage capacity of 16 Gbits, which can easily be increased to 32 Gbits by adding memory packs in following models. The EM weighs about 41.9 kg and consumes about 45.4 W of peak power.

  18. Design of motion adjusting system for space camera based on ultrasonic motor

    Science.gov (United States)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse intersection angle of the image-motion vector of a space camera. Adjusting this angle reduces its influence on image quality. An ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, and it has many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of the drift adjusting mechanism are presented. The drift adjusting system is built around the T-60 ultrasonic motor and comprises the drift adjusting mechanical frame, the ultrasonic motor and its driver, a photoelectric encoder, and the drift adjusting controller. A TMS320F28335 DSP serves as the computation and control processor, the photoelectric encoder acts as the sensor of the closed-loop position system, and a voltage driving circuit generates the ultrasonic waves. A mathematical model of the drive circuit of the T-60 ultrasonic motor was built using Matlab modules. To verify the validity of the drift adjusting system, a disturbance source was introduced and a simulation analysis performed. The motor-drive control system for the drift adjusting system was designed with improved PID control. The drift angle adjusting system offers advantages including compact size, simple configuration, high position-control precision, fine repeatability, a self-locking property, and low power consumption. Results showed that the system accomplishes the drift angle adjusting mission well.
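
    As a generic illustration of the improved PID position control mentioned above, a discrete PID loop might look as follows; the gains and sample time are placeholders, not values from the paper.

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measured):
                """One control step: encoder angle in, motor voltage command out."""
                error = setpoint - measured
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)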

  19. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    Science.gov (United States)

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling memories related to important locations, called spots, that they have visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale-invariant feature transform (SIFT). The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems other than smartphones, and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated in two experiments: image matching tests and a user study. The experimental results suggest the effectiveness of the system in helping visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403
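
    A minimal sketch of SIFT-based spot matching (OpenCV 4.4 or later assumed for SIFT_create; the fixed match-count threshold is an illustrative stand-in for the paper's actual decision rule):

        import cv2

        def is_same_spot(img_query, img_stored, min_matches=25):
            """Decide whether two grayscale images show the same spot."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img_query, None)
            kp2, des2 = sift.detectAndCompute(img_stored, None)
            matcher = cv2.BFMatcher()
            matches = matcher.knnMatch(des1, des2, k=2)
            # Lowe's ratio test keeps only distinctive matches.
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]
            return len(good) >= min_matches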

  20. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics, such as flippers and crawlers. A CCD camera with wide field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indications of the radiation dosimeter and the instrument. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the unit 2 reactor building refueling floor. The camera image with gamma-ray dose-rate information is transmitted to the remote control site via a VDSL communication line, where the radiation situation on the refueling floor can be perceived by monitoring the camera image. To build the radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to numerical values. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.
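
    The numeric-extraction step could be sketched with Tesseract via pytesseract as below; the paper does not specify its OCR implementation, so both the library and the whitelist configuration are assumptions, and roi is a hypothetical cropped image of the dosimeter indicator.

        import pytesseract

        def read_dose_rate(roi):
            """Return the displayed dose-rate reading as a float, if recognizable."""
            text = pytesseract.image_to_string(
                roi, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
            try:
                return float(text.strip())
            except ValueError:
                return None   # indicator not readable in this frame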

  1. Underwater camera with depth measurement

    Science.gov (United States)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined from variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results for the structured-light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete redesign of the light source component. The ToF camera system, instead, allows arbitrary placement of the light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
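
    For reference, the basic ToF phase-to-distance relation, adjusted for propagation in water with an assumed refractive index, can be written as a one-liner:

        import math

        def tof_distance(phase_rad, mod_freq_hz, n=1.33):
            """Distance from the measured phase shift of the modulated light."""
            c_water = 299_792_458.0 / n   # propagation speed in water (assumed n)
            return c_water * phase_rad / (4 * math.pi * mod_freq_hz)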

  2. Laboratory characterization of a CCD camera system for retrieval of bi-directional reflectance distribution function

    Science.gov (United States)

    Nandy, Prabal; Thome, Kurtis J.; Biggar, Stuart F.

    1999-12-01

    The Remote Sensing Group of the Optical Science Center at the University of Arizona has developed a four-band, multi-spectral, wide-angle, imaging radiometer for the retrieval of the bi-directional reflectance distribution function (BRDF) for vicarious calibration applications. The system consists of a fisheye lens with four interference filters centered at 470 nm, 575 nm, 660 nm, and 835 nm for spectral selection and an astronomical grade 1024 X 1024-pixel, silicon CCD array. Data taken by the system fit in the array as a nominally 0.2 degree per pixel image. This imaging radiometer system has been used in support of the calibration of Landsat-5 and SPOT-satellite sensors. This paper presents the results of laboratory characterization of the system to determine linearity of the detector, point spread function (PSF) and polarization effects. The linearity study was done on the detector array without the lens, using a spherical-integrating source with a 1.5-mm aperture. This aperture simulates a point source for distances larger than 60 cm. Data were collected as a function of both exposure time and distance from the source. The results of these measurements indicate that each detector of the array is linear to better than 0.5%. Assuming a quadratic response improves this fit to better than 0.1% over 88% of the upper end of the detector's dynamic range. The point spread function (PSF) of the lens system was measured using the sphere source and aperture with the full camera system operated at a distance of 700 mm from the source; thus the aperture subtends less than the field of view of one pixel. The PSF was measured for several field angles and the signal level was found to fall to less than 1% of the peak signal within 1.5-degrees (10 pixels) for the on-axis case. The effect of this PSF on the retrieval of modeled BRDFs is shown to be less than 0.2% out to view angles of 70 degrees. The final test presented is one to assess the polarization effects of the lens

  3. Navigation system for a small size lunar exploration rover with a monocular omnidirectional camera

    Science.gov (United States)

    Laîné, Mickaël.; Cruciani, Silvia; Palazzolo, Emanuele; Britton, Nathan J.; Cavarelli, Xavier; Yoshida, Kazuya

    2016-07-01

    A lunar rover requires an accurate localisation system in order to operate in an uninhabited environment. However, every additional piece of equipment mounted on it drastically increases the overall cost of the mission. This paper reports a possible solution for a micro-rover using a single monocular omnidirectional camera. Our approach relies on a combination of feature tracking and template matching for Visual Odometry. The results are afterwards refined using a Graph-Based SLAM algorithm, which also provides a sparse reconstruction of the terrain. We tested the algorithm on a lunar rover prototype in a lunar analogue environment, and the experiments show that the estimated trajectory is accurate and that the combination with the template matching algorithm enables detection of spot turns, which would otherwise be poorly detected.

  4. A correction method of the spatial distortion in planar images from γ-Camera systems

    Science.gov (United States)

    Thanasas, D.; Georgiou, E.; Giokaris, N.; Karabarbounis, A.; Maintas, D.; Papanicolas, C. N.; Polychronopoulou, A.; Stiliaris, E.

    2009-06-01

    A methodology for correcting spatial distortions in planar images from small Field Of View (FOV) γ-Camera systems based on Position Sensitive Photomultiplier Tubes (PSPMT) and pixelated scintillation crystals is described. The process utilizes a correction matrix whose elements are derived from a prototype planar image obtained by irradiating the scintillation crystal with a 60Co point source without a collimator. The method was applied to several planar images of a SPECT experiment with a simple phantom construction at different detection angles. The tomographic images are obtained using the Maximum-Likelihood Expectation-Maximization (MLEM) reconstruction technique. Corrected and uncorrected images are compared and the applied correction methodology is discussed.
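
    As a generic sketch of one MLEM update in matrix form (A is a hypothetical system matrix mapping image voxels to detector bins and y the measured projection counts; the spatial-distortion correction is applied to the planar images beforehand and is not shown):

        import numpy as np

        def mlem_step(x, A, y, eps=1e-12):
            """One multiplicative MLEM update of the image estimate x."""
            forward = A @ x                          # expected counts per bin
            ratio = y / np.maximum(forward, eps)     # measured / expected
            sensitivity = A.T @ np.ones_like(y)      # per-voxel normalization
            return x * (A.T @ ratio) / np.maximum(sensitivity, eps)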

  5. Selecting among competing models of electro-optic, infrared camera system range performance

    Science.gov (United States)

    Nichols, Jonathan M.; Hines, James E.; Nichols, James D.

    2013-01-01

    Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike's Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set for which experimental trials were conducted.
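
    For concreteness, AIC and the derived Akaike weights reduce to a few lines of NumPy (the log-likelihoods and parameter counts are assumed to come from the fitted candidate models):

        import numpy as np

        def aic(log_likelihood, k):
            """Akaike's Information Criterion: lower values indicate better support."""
            return 2 * k - 2 * log_likelihood

        def akaike_weights(aics):
            """Relative likelihoods of the candidate models, summing to 1."""
            d = np.asarray(aics) - np.min(aics)
            w = np.exp(-0.5 * d)
            return w / w.sum()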

  6. A smart camera based traffic enforcement system: experiences from the field

    Science.gov (United States)

    Sidla, Oliver; Loibner, Gernot

    2013-03-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Embedded vision systems can count vehicles and estimate the state of traffic along the road; they can supplement or replace loop sensors, with their limited local scope, and radar, which measures the speed, presence, and number of vehicles. This work presents a vision system built to detect and report traffic rule violations at unsecured railway crossings, which pose a threat to drivers day and night. Our system is designed to detect and record vehicles passing over the railway crossing after the red light has been activated. Sparse optical flow in conjunction with motion clustering is used for real-time motion detection in order to capture these safety-critical events. The cameras are activated by an electrical signal from the railway when the red light turns on. If they detect a vehicle moving over the stopping line, and it is well over this limit, an image sequence is recorded and stored onboard for later evaluation. The system has been designed to be operational in all weather conditions, delivering human-readable license plate images even under the worst illumination conditions, such as direct incident sunlight or a direct view into vehicle headlights. After several months of operation in the field, we report on the performance of the system, its hardware implementation, and the implementation of algorithms which have proven usable in this real-world application.
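
    A minimal sketch of the motion-detection core, with Shi-Tomasi corners tracked by pyramidal Lucas-Kanade optical flow in OpenCV; the thresholds are illustrative, not the deployed system's values, and the clustering of moving points is omitted.

        import cv2
        import numpy as np

        def moving_points(prev_gray, cur_gray, min_motion_px=2.0):
            """Return image points that moved noticeably between two frames."""
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=7)
            if pts is None:
                return np.empty((0, 2))
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
            ok = status.ravel() == 1
            flow = (nxt[ok] - pts[ok]).reshape(-1, 2)
            moved = np.linalg.norm(flow, axis=1) > min_motion_px
            return nxt[ok].reshape(-1, 2)[moved]   # candidate vehicle points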

  7. System Configuration and Operation Plan of Hayabusa2 DCAM3-D Camera System for Scientific Observation During SCI Impact Experiment

    Science.gov (United States)

    Ogawa, Kazunori; Shirai, Kei; Sawada, Hirotaka; Arakawa, Masahiko; Honda, Rie; Wada, Koji; Ishibashi, Ko; Iijima, Yu-ichi; Sakatani, Naoya; Nakazawa, Satoru; Hayakawa, Hajime

    2017-07-01

    An artificial impact experiment is scheduled for 2018-2019 in which an impactor will collide with asteroid 162137 Ryugu (1999 JU3) during the asteroid rendezvous phase of the Hayabusa2 spacecraft. The small carry-on impactor (SCI) will shoot a 2-kg projectile at 2 km/s to create a crater 1-10 m in diameter with an expected subsequent ejecta curtain of a 100-m scale on an ideal sandy surface. A miniaturized deployable camera (DCAM3) unit will separate from the spacecraft at about 1 km from impact, and simultaneously conduct optical observations of the experiment. We designed and developed a camera system (DCAM3-D) in the DCAM3, specialized for scientific observations of impact phenomena, in order to clarify the subsurface structure, construct theories of impact applicable in a microgravity environment, and identify the impact point on the asteroid. The DCAM3-D system consists of a miniaturized camera with a wide-angle and high-focusing performance, high-speed radio communication devices, and control units with large data storage on both the DCAM3 unit and the spacecraft. These components were successfully developed under severe constraints of size, mass and power, and the whole DCAM3-D system has passed all tests verifying functions, performance, and environmental tolerance. Results indicated sufficient potential to conduct the scientific observations during the SCI impact experiment. An operation plan was carefully considered along with the configuration and a time schedule of the impact experiment, and pre-programmed into the control unit before the launch. In this paper, we describe details of the system design concept, specifications, and the operating plan of the DCAM3-D system, focusing on the feasibility of scientific observations.

  9. Cooling the dark energy camera CCD array using a closed-loop two-phase liquid nitrogen system

    Science.gov (United States)

    Cease, H.; DePoy, D.; Derylo, G.; Diehl, H. T.; Estrada, J.; Flaugher, B.; Kuk, K.; Kuhlmann, S.; Lathrop, A.; Schultz, K.; Reinert, R. J.; Schmitt, R. L.; Stefanik, A.; Zhao, A.

    2010-07-01

    The Dark Energy Camera (DECam) is the new wide field prime-focus imager for the Blanco 4m telescope at CTIO. This instrument is a 3 sq. deg. camera with a 45 cm diameter focal plane consisting of 62 2k × 4k CCDs and 12 2k × 2k CCDs and was developed for the Dark Energy Survey that will start operations at CTIO in 2011. The DECam CCD array is inside the imager vessel. The focal plate is cooled using a closed loop liquid nitrogen system. As part of the development of the mechanical and cooling design, a full scale prototype imager vessel has been constructed and is now being used for Multi-CCD readout tests. The cryogenic cooling system and thermal controls are described along with cooling results from the prototype camera. The cooling system layout on the Blanco telescope in Chile is described.

  10. CCD digital camera system for measuring curvature and ovalization of each cross-section of circular tube under cyclic bending

    National Research Council Canada - National Science Library

    Lee, Kuo-Long; Hung, Chao-Yu; Pan, Wen-Fung

    2011-01-01

    .... To test the capability of this newly designed measurement system, a tube-bending machine was employed to test, experimentally, a 7005-T53 aluminum alloy tube under cyclic bending, and the CCD digital camera system was utilized to measure the curvature and each cross-sectional ovalization of the tube.

  11. A robust photometric calibration framework for projector-camera display system

    Institute of Scientific and Technical Information of China (English)

    Wenhai Zou; Haisong Xu

    2009-01-01

    A novel photometric calibration framework is presented for a projector-camera (ProCam) display system, which is currently under rapid development. Firstly, a piecewise bilinear model and five 5-ary color coding images are used to construct the homography between the image planes of a projector and a camera. Secondly, a photometric model is proposed to describe the data flow of the ProCam display system for displaying color images on a colored surface in a general way. An efficient self-calibration algorithm is correspondingly put forward to recover the model parameters. To adapt this algorithm robustly to different types of ProCam display systems, a 3×7 masking coupling matrix and a patch image with 1024 color samples are adopted to fit the complex channel interference function of the display system. Finally, the experimental results demonstrate the validity and superiority of this calibration algorithm for the ProCam display system.
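
    As a minimal illustration of the geometric half of such a calibration, the sketch below estimates a single projector-to-camera homography from point correspondences; the paper's piecewise bilinear model and 5-ary color-coded feature detection are not reproduced, and the point lists are made-up values.

        # Sketch: projector-to-camera homography from detected correspondences.
        # Names and coordinates are illustrative assumptions.
        import numpy as np
        import cv2

        # Hypothetical correspondences: coded feature centers in projector
        # coordinates and their detected positions in the camera image.
        proj_pts = np.array([[100, 100], [500, 100], [500, 400], [100, 400],
                             [300, 250]], dtype=np.float32)
        cam_pts = np.array([[132, 118], [548, 95], [561, 430], [120, 415],
                            [332, 262]], dtype=np.float32)

        # RANSAC makes the estimate robust to a few misdetected features.
        H, inliers = cv2.findHomography(proj_pts, cam_pts, cv2.RANSAC, 3.0)

        # Map an arbitrary projector pixel into the camera image.
        p = np.array([[[250.0, 200.0]]], dtype=np.float32)
        print(cv2.perspectiveTransform(p, H))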

  12. Camera system considerations for geomorphic applications of SfM photogrammetry

    Science.gov (United States)

    Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John

    2017-01-01

    The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating accuracies of four SfM datasets collected over multiple years of a gravel bed river floodplain using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%.
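
    The volumetric change analysis mentioned above can be sketched as a DEM-of-difference computation; the arrays, grid size and uncertainty threshold below are illustrative assumptions, not values from the study.

        # Sketch: morphological change detection between two gridded DTMs,
        # in the spirit of the SfM-vs-lidar sediment budgets discussed above.
        import numpy as np

        cell = 0.5                      # grid resolution, m (assumed)
        sigma_dod = 0.25                # propagated DoD uncertainty, m (assumed)

        rng = np.random.default_rng(0)
        dtm_t0 = rng.normal(100.0, 2.0, (400, 400))   # stand-in for a real DTM
        dtm_t1 = dtm_t0 + rng.normal(0.0, 0.3, dtm_t0.shape)

        dod = dtm_t1 - dtm_t0                         # DEM of difference
        significant = np.abs(dod) > 1.96 * sigma_dod  # 95% level of detection

        erosion = dod[significant & (dod < 0)].sum() * cell**2
        deposition = dod[significant & (dod > 0)].sum() * cell**2
        print(f"erosion {erosion:.1f} m^3, deposition {deposition:.1f} m^3")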

  13. Light field sensor and real-time panorama imaging multi-camera system and the design of data acquisition

    Science.gov (United States)

    Lu, Yu; Tao, Jiayuan; Wang, Keyi

    2014-09-01

    Advanced image sensors and powerful parallel data acquisition chips can be used to collect more detailed and comprehensive light field information. By using multiple single-aperture, high-resolution sensors to record light field data and processing that data in real time, we can obtain wide field-of-view (FOV), high-resolution images. Wide-FOV, high-resolution imaging has promising applications in areas of navigation, surveillance and robotics. Quality-enhanced 3D rendering, very high resolution depth map estimation, high dynamic range and other applications can be obtained by post-processing these large light field data. FOV and resolution are contradictory in a traditional single-aperture optical imaging system and cannot be reconciled very well. We have designed a multi-camera light field data acquisition system and optimized each sensor's spatial location and relations. It can be used for wide-FOV, high-resolution real-time imaging. The system uses 5-megapixel CMOS sensors and a field programmable gate array (FPGA) to acquire light field data, process it in parallel and transmit it to a PC. A common clock signal is distributed to all of the cameras, and a synchronization precision of 40 ns was achieved for each camera. Using 9 CMOS sensors, we built an initial system and obtained a high-resolution image with a 360°×60° FOV. The system is intended to be flexible, modular and scalable, with much visibility and control over the cameras. The high-speed dedicated camera interface Camera Link is used for system data transfer. The details of the hardware architecture, its internal blocks, the algorithms, and the device calibration procedure are presented, along with imaging results.

  14. Monitoring system for isolated limb perfusion based on a portable gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Orero, A.; Muxi, A.; Rubi, S.; Duch, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Vidal-Sicart, S.; Pons, F. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d' Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); Red Tematica de Investigacion Cooperativa en Cancer (RTICC), Barcelona (Spain); Roe, N. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); Rull, R. [Servei de Cirurgia, Hospital Clinic, Barcelona (Spain); Pavon, N. [Inst. de Fisica Corpuscular, CSIC - UV, Valencia (Spain); Pavia, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d' Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain)

    2009-07-01

    Background: The treatment of malignant melanoma or sarcomas on a limb using extremity perfusion with tumour necrosis factor (TNF-α) and melphalan can result in a high degree of systemic toxicity if there is any leakage from the isolated blood territory of the limb into the systemic vascular territory. Leakage is currently controlled by using radiotracers and heavy external probes in a procedure that requires continuous manual calculations. The aim of this work was to develop a light, easily transportable system to monitor limb perfusion leakage by controlling systemic blood pool radioactivity with a portable gamma camera adapted for intraoperative use as an external probe, and to initiate its application in the treatment of MM patients. Methods: A special collimator was built for maximal sensitivity. Software for acquisition and data processing in real time was developed. After testing the adequacy of the system, it was used to monitor limb perfusion leakage in 16 patients with malignant melanoma to be treated with perfusion of TNF-α and melphalan. Results: The field of view of the detector system was 13.8 cm, which is appropriate for the monitoring, since the area to be controlled was the precordial zone. The sensitivity of the system was 257 cps/MBq. When the percentage of leakage reaches 10% the associated absolute error is ±1%. After a mean follow-up period of 12 months, no patients have shown any significant or lasting side-effects. Partial or complete remission of lesions was seen in 9 out of 16 patients (56%) after HILP with TNF-α and melphalan. Conclusion: The detector system together with specially developed software provides a suitable automatic continuous monitoring system of any leakage that may occur during limb perfusion. This technique has been successfully implemented in patients for whom perfusion with TNF-α and melphalan has been indicated. (orig.)
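
    A simplified sketch of the continuous leakage estimate follows; it assumes a pre-surgery calibration of the systemic count rate corresponding to a 100% leak, and all numbers are invented for illustration (the paper does not publish its exact computation).

        # Sketch of a leakage-percentage monitor: the systemic (precordial)
        # count rate is compared with the rate a full leak would produce.
        # Calibration constant and rates below are assumptions.
        def leakage_percent(rate_now, rate_baseline, rate_full_leak):
            """Percentage of perfusate activity reaching the systemic pool.

            rate_full_leak is the pre-calibrated systemic count rate that a
            100% leak of the injected tracer would produce.
            """
            return 100.0 * (rate_now - rate_baseline) / (rate_full_leak - rate_baseline)

        # The 257 cps/MBq sensitivity quoted above would let count rates be
        # converted to activity; here we work directly in cps.
        print(leakage_percent(rate_now=1450.0, rate_baseline=1200.0,
                              rate_full_leak=3700.0))   # -> 10.0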

  15. Mechanics and cooling system for the camera of the Large Size Telescopes of the Cherenkov Telescope Array (CTA)

    CERN Document Server

    Delgado, Carlos; Diaz, Carlos; Hamer, Noemi; Hideyuki, Ohoka; Mirzoyan, Razmik; Teshima, Masahiro; Wetteskind, Holger

    2013-01-01

    The mechanics of the camera for the Large Size Telescopes of CTA must protect and provide a stable environment for its instrumentation. This is achieved by a stiff support structure enclosed in an air- and water-tight volume. The structure is specially devised to facilitate extracting the power dissipated by the focal plane electronics while keeping its weight small enough to guarantee an optimum load on the telescope structure. A heat extraction system is designed to keep the electronics temperature within its optimal operating range, stable in time and homogeneous across the camera volume, while remaining decoupled from the temperature of the telescope environment. In this contribution, we present the details of this system as well as its verification based on finite element analysis computations and tested prototypes. Finally, issues related to the integration of the camera mechanics and electronics are dealt with.

  16. Twente Optical Perfusion Camera: system overview and performance for video rate laser Doppler perfusion imaging

    NARCIS (Netherlands)

    M. Draijer; E. Hondebrink; T. van Leeuwen; W. Steenbergen

    2009-01-01

    We present the Twente Optical Perfusion Camera (TOPCam), a novel laser Doppler perfusion imager based on CMOS technology. The tissue under investigation is illuminated and the resulting dynamic speckle pattern is recorded with a high-speed CMOS camera. Based on an overall analysis of the signal-to-noise ...

  17. Digital Oblique Aerial Cameras (1): A Survey of Features and Systems

    NARCIS (Netherlands)

    Lemmens, M.J.P.M.

    2014-01-01

    It has become customary for me to provide a survey article on digital aerial cameras in the April issue of GIM International every three years. The previous survey (April 2011; vol. 25:4) addressed small, medium and large-format cameras, while in April 2008 (vol. 22:4) the focus was on sensor architectures ...

  18. Twente Optical Perfusion Camera: system overview and performance for video rate laser Doppler perfusion imaging

    NARCIS (Netherlands)

    Draijer, M.; Hondebrink, E.; van Leeuwen, T.; Steenbergen, W.

    2009-01-01

    We present the Twente Optical Perfusion Camera (TOPCam), a novel laser Doppler Perfusion Imager based on CMOS technology. The tissue under investigation is illuminated and the resulting dynamic speckle pattern is recorded with a high speed CMOS camera. Based on an overall analysis of the

  19. Secondary caries detection with a novel fluorescence-based camera system in vitro

    Science.gov (United States)

    Brede, Olivier; Wilde, Claudia; Krause, Felix; Frentzen, Matthias; Braun, Andreas

    2010-02-01

    The aim of the study was to assess the ability of a fluorescence-based optical system to detect secondary caries. The optical detection system (VistaProof) illuminates the tooth surfaces with blue light emitted by high-power GaN LEDs at 405 nm. Employing this almost monochromatic excitation, fluorescence is analyzed using an RGB camera chip and encoded in color graduations (blue - red - orange - yellow) by software (DBSWIN), indicating the degree of carious destruction. 31 freshly extracted teeth with existing fillings and secondary caries were cleaned, excavated and refilled with the same kind of restorative material. 19 of them were refilled with amalgam, 12 were refilled with a composite resin. Each step was analyzed with the respective software and analyzed statistically. Differences were considered statistically significant at p < 0.05. There was a significant difference between baseline measurements of the teeth primarily filled with composite resins and the refilled situation (p=0.014). There was also a significant difference between the non-excavated and the excavated group (composite p=0.006, amalgam p=0.018). The in vitro study showed that the fluorescence-based system allows detecting secondary caries next to composite resin fillings but not next to amalgam restorations. Cleaning of the teeth is not necessary if there is no visible plaque. Further studies have to show whether the system yields the same promising results in vivo.

  20. An Incremental Target-Adapted Strategy for Active Geometric Calibration of Projector-Camera Systems

    Directory of Open Access Journals (Sweden)

    Hsiang-Jen Chien

    2013-02-01

    The calibration of a projector-camera system is an essential step toward accurate 3-D measurement and environment-aware data projection applications, such as augmented reality. In this paper we present a two-stage, easy-to-deploy strategy for robust calibration of both intrinsic and extrinsic parameters of a projector. Two key components of the system are the automatic generation of projected light patterns and the incremental calibration process. Based on the incremental strategy, the calibration process first establishes a set of initial parameters, and then it upgrades these parameters incrementally using the projection and captured images of dynamically-generated calibration patterns. The scene-driven light patterns allow the system to adapt itself to the pose of the calibration target, such that the difficulty in feature detection is greatly lowered. The strategy forms a closed-loop system that performs self-correction as more and more observations become available. Compared to the conventional method, which requires a time-consuming process for the acquisition of dense pixel correspondences, the proposed method deploys a homography-based coordinate computation, allowing the calibration time to be dramatically reduced. The experimental results indicate that an improvement of 70% in reprojection errors is achievable and 95% of the calibration time can be saved.

  1. A Foot-Arch Parameter Measurement System Using a RGB-D Camera

    Directory of Open Access Journals (Sweden)

    Sungkuk Chun

    2017-08-01

    The conventional method of measuring foot-arch parameters is highly dependent on the measurer's skill level, so accurate measurements are difficult to obtain. To solve this problem, we propose an autonomous geometric foot-arch analysis platform that is capable of capturing the sole of the foot and yields three foot-arch parameters: arch index (AI), arch width (AW) and arch height (AH). The proposed system captures 3D geometric and color data on the plantar surface of the foot in a static standing pose using a commercial RGB-D camera. It detects the region of the foot surface in contact with the footplate by applying clustering and Markov random field (MRF)-based image segmentation methods. The system computes the foot-arch parameters by analyzing the 2D/3D shape of the contact region. Validation experiments were carried out to assess the accuracy and repeatability of the system. The average errors for AI, AW, and AH estimation on 99 measurements collected from 11 subjects over 3 days were −0.17%, 0.95 mm, and 0.52 mm, respectively. Reliability and statistical analyses of the estimated foot-arch parameters, the robustness to the change of weights used in the MRF, and the processing time were also evaluated to show the feasibility of the system.
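
    As an illustration of the arch index step, the sketch below computes the conventional Cavanagh-Rodgers AI (middle-third contact area over total footprint area, toes excluded) from a binary contact mask; the mask here is synthetic, whereas the system derives it from RGB-D data via MRF segmentation.

        # Sketch: arch index from a binary contact mask of the sole.
        import numpy as np

        def arch_index(mask):
            rows = np.flatnonzero(mask.any(axis=1))   # rows containing the foot
            top, bottom = rows[0], rows[-1] + 1
            third = (bottom - top) // 3
            total = mask[top:bottom].sum()
            middle = mask[top + third: top + 2 * third].sum()
            return middle / total

        mask = np.zeros((90, 40), dtype=bool)
        mask[5:85, 10:30] = True      # crude rectangular "footprint"
        mask[30:60, 10:22] = False    # carve out a medial arch
        print(f"AI = {arch_index(mask):.3f}")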

  2. A 90GHz Bolometer Camera Detector System for the Green Bank Telescope

    Science.gov (United States)

    Benford, Dominic J.; Allen, Christine A.; Buchanan, Ernest D.; Chen, Tina C.; Chervenak, James A.; Devlin, Mark J.; Dicker, Simon R.; Forgione, Joshua B.

    2004-01-01

    We describe a close-packed, two-dimensional imaging detector system for operation at 90 GHz (3.3 mm) for the 100 m Green Bank Telescope (GBT). This system will provide high-sensitivity (<1 mJy in 1 s), rapid imaging (15'×15' to 250 μJy in 1 hr) at the world's largest steerable aperture. The heart of this camera is an 8×8 close-packed, Nyquist-sampled array of superconducting transition edge sensor (TES) bolometers. We have designed and are producing a functional superconducting bolometer array system using a monolithic planar architecture and high-speed multiplexed readout electronics. With an NEP of approx. 2×10^-17 W/√Hz, the TES bolometers will provide fast, linear, sensitive response for high performance imaging. The detectors are read out by an 8×8 time domain SQUID multiplexer. A digital/analog electronics system has been designed to enable readout by SQUID multiplexers. First light for this instrument on the GBT is expected within a year.

  3. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  4. Ringfield lithographic camera

    Science.gov (United States)

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  5. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    Science.gov (United States)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods developed initially for photogrammetry for the calibration of the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time consuming and involve the measurement of calibrated patterns on planes before the actual object can continue to be measured after a motion of a camera or projector has been introduced in the setup, and hence do not facilitate fast 3D measurement of objects when frequent experimental setup changes are necessary. By employing and combining a priori information via inverse rendering, on-board sensors and deep learning, and leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimizing the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
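
    A toy version of the render-and-compare pose optimisation reads as follows; the real pipeline renders a full 3D scene model on the GPU, while this sketch only optimises a 2-parameter image shift to show the structure of the loop.

        # Sketch: minimise a photometric cost between a "rendered" view and
        # the camera view. The synthetic shift stands in for a full 6-DOF pose.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.ndimage import shift as nd_shift

        rng = np.random.default_rng(1)
        template = rng.random((64, 64))
        camera_view = nd_shift(template, (3.2, -1.7))   # unknown "pose"

        def photometric_cost(pose):
            rendered = nd_shift(template, pose)
            return np.mean((rendered - camera_view) ** 2)

        res = minimize(photometric_cost, x0=[0.0, 0.0], method="Nelder-Mead")
        print(res.x)   # converges close to (3.2, -1.7)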

  6. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    Science.gov (United States)

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-01-01

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and tactile distance feedback is perfectly identifiable to the blind. PMID:24932864
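
    For a rectified stereo pair, the distance estimation at the core of the first stage reduces to the standard pinhole relation Z = fB/d; the focal length and baseline below are assumed example values, not the VIDA system's parameters.

        # Sketch: depth from stereo disparity for the pointed-at object.
        def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
            """Depth in metres for a rectified stereo pair (assumed intrinsics)."""
            if disparity_px <= 0:
                raise ValueError("object must have positive disparity")
            return focal_px * baseline_m / disparity_px

        for d in (40.0, 20.0, 10.0):
            print(f"disparity {d:5.1f} px -> {depth_from_disparity(d):.2f} m")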

  7. Distributed System for 3D Remote Monitoring Using KINECT Depth Cameras

    Directory of Open Access Journals (Sweden)

    M. Martinez-Zarzuela

    2014-01-01

    This article describes the design and development of a system for remote indoor 3D monitoring using an undetermined number of Microsoft® Kinect sensors. In the proposed client-server system, the Kinect cameras can be connected to different computers, addressing in this way the hardware limitation of one sensor per USB controller. The reason behind this limitation is the high bandwidth needed by the sensor, which also becomes an issue for the distributed system's TCP/IP communications. Since the traffic volume is too high, 3D data have to be compressed before they can be sent over the network. The solution consists in self-coding the Kinect data into RGB images and then using a standard multimedia codec to compress the color maps. Information from different sources is collected on a central client computer, where point clouds are transformed to reconstruct the scene in 3D. An algorithm is proposed to conveniently merge the skeletons detected locally by each Kinect, so that monitoring of people is robust to self- and inter-user occlusions. Final skeletons are labeled and trajectories of every joint can be saved for event reconstruction or further analysis.
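
    One plausible form of the self-coding step is to split each 16-bit depth value across two 8-bit channels of an RGB frame; the article does not specify its exact packing, so the scheme below is an assumption (a lossy codec would additionally require a packing robust to quantization).

        # Sketch: pack a 16-bit Kinect depth map into RGB channels and back.
        import numpy as np

        def depth_to_rgb(depth16):
            rgb = np.zeros(depth16.shape + (3,), dtype=np.uint8)
            rgb[..., 0] = (depth16 >> 8).astype(np.uint8)    # high byte
            rgb[..., 1] = (depth16 & 0xFF).astype(np.uint8)  # low byte
            return rgb

        def rgb_to_depth(rgb):
            return (rgb[..., 0].astype(np.uint16) << 8) | rgb[..., 1]

        depth = np.random.default_rng(2).integers(0, 2**13, (480, 640),
                                                  dtype=np.uint16)
        assert np.array_equal(depth, rgb_to_depth(depth_to_rgb(depth)))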

  8. Lane Departure System Design using with IR Camera for Night-time Road Conditions

    Directory of Open Access Journals (Sweden)

    Osman Onur Akırmak

    2015-02-01

    Today, one of the largest areas of research and development in the automobile industry is road safety. Many deaths and injuries occur every year on public roads from accidents caused by sleepy drivers, accidents that technology could have been used to prevent. Lane detection at night-time is an important issue in driving assistance systems. This paper deals with vision-based lane detection and tracking at night-time. The project consists of the research and development of an algorithm for automotive systems to detect the departure of a vehicle from its lane. Once the situation is detected, a warning is issued to the driver with a sound and a visual message through a head-up display (HUD) system. The lane departure is detected through the images obtained from a single IR camera, which identifies the departure with satisfactory accuracy via an improved-quality video stream. Our experimental results and accuracy evaluation show that our algorithm has good precision and our detection method is suitable for night-time road conditions.
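
    A classic baseline for the lane detection stage, of the kind such systems build on, is Canny edge detection followed by a probabilistic Hough transform on the lower half of the frame; the thresholds and input file below are illustrative, not the paper's values.

        # Sketch: lane-marking segments from a grayscale IR frame with OpenCV.
        import cv2
        import numpy as np

        def detect_lane_segments(gray_frame):
            h, w = gray_frame.shape
            roi = gray_frame[h // 2:, :]            # road is in the lower half
            edges = cv2.Canny(roi, 50, 150)
            segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                       threshold=40, minLineLength=40,
                                       maxLineGap=20)
            return [] if segments is None else segments[:, 0, :]  # x1,y1,x2,y2

        frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        if frame is not None:
            for x1, y1, x2, y2 in detect_lane_segments(frame):
                print(f"segment ({x1},{y1}) -> ({x2},{y2})")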

  9. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    Directory of Open Access Journals (Sweden)

    M. Simi

    2012-01-01

    A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The head module incorporates two motorized donut-shaped magnets and a miniaturized vision system at the tip. The compliant MLS can exploit the static external magnetic field to induce a smooth bending of the robotic head (0–80°), guaranteeing a wide-span tilt motion of the point of view. A nonlinear mathematical model of the compliant beam was developed and solved analytically in order to describe and predict the trajectory behaviour of the system for different structural parameters. The entire device is 95 mm long and 12.7 mm in diameter. Use of such a robot in single-port or standard multiport laparoscopy could enable a reduction of the number or size of ancillary trocars, or increase the number of working devices that can be deployed, thus paving the way for multiple-viewpoint laparoscopy.

  10. An upgraded camera-based imaging system for mapping venous blood oxygenation in human skin tissue

    Science.gov (United States)

    Li, Jun; Zhang, Xiao; Qiu, Lina; Leotta, Daniel F.

    2016-07-01

    A camera-based imaging system was previously developed for mapping venous blood oxygenation in human skin. However, several limitations were realized in later applications, which could lead to either significant bias in the estimated oxygen saturation value or poor spatial resolution in the map of the oxygen saturation. To overcome these issues, an upgraded system was developed using improved modeling and image processing algorithms. In the modeling, Monte Carlo (MC) simulation was used to verify the effectiveness of the ratio-to-ratio method for semi-infinite and two-layer skin models, and then the relationship between the venous oxygen saturation and the ratio-to-ratio was determined. The improved image processing algorithms included surface curvature correction and motion compensation. The curvature correction is necessary when the imaged skin surface is uneven. The motion compensation is critical for the imaging system because surface motion is inevitable when the venous volume alteration is induced by cuff inflation. In addition to the modeling and image processing algorithms in the upgraded system, a ring light guide was used to achieve perpendicular and uniform incidence of light. Cross-polarization detection was also adopted to suppress surface specular reflection. The upgraded system was applied to mapping of venous oxygen saturation in the palm, opisthenar and forearm of human subjects. The spatial resolution of the oxygenation map achieved is much better than that of the original system. In addition, the mean values of the venous oxygen saturation for the three locations were verified with a commercial near-infrared spectroscopy system and were consistent with previously published data.

  11. A Novel System for Non-Invasive Method of Animal Tracking and Classification in Designated Area Using Intelligent Camera System

    Directory of Open Access Journals (Sweden)

    S. Matuska

    2016-04-01

    This paper proposes a novel system for a non-invasive method of animal tracking and classification in a designated area. The system is based on intelligent devices with cameras, which are situated in a designated area, and a main computing unit (MCU) acting as the system master. The intelligent devices track animals and then send data to the MCU for evaluation. The main purpose of this system is the detection and classification of moving animals in a designated area and the subsequent creation of migration corridors of wild animals. In the intelligent devices, a background subtraction method and the CAMShift algorithm are used to detect and track animals in the scene. Then, visual descriptors are used to create representations of unknown objects. In order to achieve the best accuracy in classification, a key frame extraction method is used to filter objects coming from the detection module. Afterwards, a Support Vector Machine is used to classify unknown moving animals.
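
    The detect-then-track pipeline named above can be sketched with OpenCV's MOG2 background subtractor and CamShift tracker; the video path, contour-area cut-off and histogram settings are assumptions for illustration.

        # Sketch: background subtraction proposes a moving blob, CamShift
        # then follows its hue distribution frame to frame.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("trail_camera.avi")       # hypothetical footage
        bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        kernel = np.ones((5, 5), np.uint8)
        track_win, roi_hist = None, None

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = cv2.morphologyEx(bg.apply(frame), cv2.MORPH_OPEN, kernel)
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            if track_win is None:
                # Detection: largest moving blob becomes the track window.
                cnts, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
                big = [c for c in cnts if cv2.contourArea(c) > 500]
                if big:
                    x, y, w, h = cv2.boundingRect(max(big, key=cv2.contourArea))
                    track_win = (x, y, w, h)
                    roi_hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None,
                                            [16], [0, 180])
                    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
            else:
                # Tracking: CamShift follows the blob's hue back-projection.
                back = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
                _, track_win = cv2.CamShift(back, track_win, term)
                print("animal window:", track_win)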

  12. "Machos and Witches in Patagonia": Work, Masculinity and the Space of Reproduction

    Directory of Open Access Journals (Sweden)

    Hernán M. Palermo

    2016-05-01

    The title "Machos and Witches in Patagonia" illustrates the core debate of this article, which attempts to problematize processes marked by social class and gender relations. "Machos" refers to the social construction of the oil worker as imposed by the production system. Meanwhile, "witches" refers to the witch-hunts of the sixteenth, seventeenth and eighteenth centuries, and the practice by which women were confined to the domestic sphere and their role in the reproduction of the workforce. Therefore, we analyze the link between the production of the oil workforce inside the production system and in work experiences outside it, in Comodoro Rivadavia, Argentinian Patagonia. Given that masculinity and femininity are relational gender positions, we discuss the role of men and women in the organization of work as a whole.

  13. Diabetic foot ulcer mobile detection system using smart phone thermal camera: a feasibility study.

    Science.gov (United States)

    Fraiwan, Luay; AlKhodari, Mohanad; Ninan, Jolu; Mustafa, Basil; Saleh, Adel; Ghazal, Mohammed

    2017-10-03

    Nowadays, the whole world is concerned with a major health problem: diabetes. A very common symptom of diabetes is the diabetic foot ulcer (DFU). The early detection of such foot complications can protect diabetic patients from the dangerous stages that develop later and may require foot amputation. This work aims at building a mobile thermal imaging system that can be used as an indicator of possibly developing ulcers. The proposed system consists of a thermal camera connected to a Samsung smart phone, which is used to acquire thermal images. This thermal imaging system has a simulated temperature gradient of more than 2.2 °C, which represents the temperature difference (in the literature) that can indicate a possible development of ulcers. The acquired images are processed and segmented using basic image processing techniques. The analysis and interpretation are conducted using two techniques: the Otsu thresholding technique and a point-to-point mean difference technique. The proposed system was implemented on the MATLAB Mobile platform, and thermal images were analyzed and interpreted. Four test images (feet images) were used to test this procedure: one image without any temperature variation on the feet, and three images with skin temperature increased by more than 2.2 °C at different locations. With the two techniques applied during the analysis and interpretation stage, the system was successful in identifying the location of the temperature increase. This work successfully implemented a mobile thermal imaging system that includes an automated method to identify possible ulcers in diabetic patients. This may give diabetic patients the ability to perform frequent self-checks for possible ulcers. Although this work was implemented in simulated conditions, it demonstrates the feasibility needed for further development and testing in a clinical environment.
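
    The two interpretation techniques can be sketched on a pair of registered thermal images as follows; the synthetic arrays and the simulated 2.5 °C hot spot are stand-ins for real acquisitions.

        # Sketch: Otsu thresholding plus point-to-point mean difference
        # against the 2.2 deg C criterion mentioned above.
        import numpy as np
        import cv2

        rng = np.random.default_rng(3)
        ref = rng.normal(30.0, 0.3, (240, 320)).astype(np.float32)   # deg C
        test = ref.copy()
        test[100:140, 150:200] += 2.5        # simulated hot spot > 2.2 deg C

        # 1) Otsu thresholding on the normalized temperature-difference image.
        diff8 = cv2.normalize(test - ref, None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)
        _, hot_mask = cv2.threshold(diff8, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # 2) Point-to-point mean difference and the 2.2 deg C criterion.
        mean_diff = float(np.mean(test - ref))
        flagged = (test - ref) > 2.2
        print(f"mean difference {mean_diff:.2f} C, "
              f"{flagged.sum()} pixels exceed the 2.2 C criterion")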

  14. Calibration of line structured light vision system based on camera's projective center

    Institute of Scientific and Technical Information of China (English)

    ZHU Ji-gui; LI Yan-jun; YE Sheng-hua

    2005-01-01

    Based on the characteristics of a line structured light sensor, a speedy calibration method was established. With a coplanar reference target, the spatial pose between the camera and the optical plane can be calibrated by using the camera's projective center and the projected light's information on the camera's image plane. This calibration method can be implemented without restrictions on the movement of the coplanar reference target and without auxiliary adjustment equipment. The method has been used in practice; it decreased the cost of calibration equipment, simplified the calibration procedure and improved calibration efficiency. Experiments show that the sensor can attain a relative accuracy of about 0.5%, which indicates the rationality and effectiveness of this method.

  15. A double photomultiplier Compton camera and its readout system for mice imaging

    Science.gov (United States)

    Fontana, Cristiano Lino; Atroshchenko, Kostiantyn; Baldazzi, Giuseppe; Bello, Michele; Uzunov, Nikolay; Di Domenico, Giovanni Di

    2013-04-01

    We have designed a Compton Camera (CC) to image the bio-distribution of gamma-emitting radiopharmaceuticals in mice. A CC employs "electronic collimation", i.e. a technique that traces the gamma-rays instead of selecting them with physical lead or tungsten collimators. To perform such a task, a CC measures the parameters of the Compton interaction that occurs in the device itself. At least two detectors are required: one (the tracker), where the primary gamma undergoes a Compton interaction, and a second one (the calorimeter), in which the scattered gamma is completely absorbed. Eventually the polar angle, and hence a "cone" of possible incident directions, is obtained (an event with "incomplete geometry"). Different solutions for the two detectors are proposed in the literature: our design foresees two similar position-sensitive photomultipliers (PMT, Hamamatsu H8500). Each PMT has 64 output channels that are reduced to 4 using a charge-multiplexed readout system, i.e. a series charge multiplexing network of resistors. Triggering of the system is provided by the coincidence of fast signals extracted at the last dynode of the PMTs. Assets are the low cost and the simplicity of design and operation, having just one type of device; among the drawbacks is a lower resolution with respect to more sophisticated trackers and a full 64-channel readout. This paper compares our two-Hamamatsu CC design to other solutions and shows that the spatial and energy accuracy is suitable for the inspection of radioactivity in mice.
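
    The position reconstruction enabled by the charge multiplexing can be illustrated with Anger-style centroid ratios over four corner charges; the exact weighting of the paper's resistor network may differ from this sketch.

        # Sketch: interaction centroid from four multiplexed corner charges.
        def anger_position(a, b, c, d):
            """Normalized (x, y) in [-1, 1] from corner charges A..D
            (A = top-left, B = top-right, C = bottom-left, D = bottom-right;
            the corner assignment is an assumption)."""
            total = a + b + c + d
            x = ((b + d) - (a + c)) / total   # right minus left
            y = ((a + b) - (c + d)) / total   # top minus bottom
            return x, y

        print(anger_position(a=120.0, b=180.0, c=80.0, d=120.0))  # (0.2, 0.2)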

  16. Development of X-ray CCD camera based X-ray micro-CT system.

    Science.gov (United States)

    Sarkar, Partha S; Ray, N K; Pal, Manoj K; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y; Sinha, A; Gadkari, S C

    2017-02-01

    Availability of microfocus X-ray sources and high-resolution X-ray area detectors has made it possible for high-resolution microtomography studies to be performed outside the purview of synchrotron facilities. In this paper, we present work towards the use of an external shutter on a high-resolution microtomography system using an X-ray CCD camera as the detector. During micro computed tomography experiments, the X-ray source is continuously ON, and owing to the readout mechanism of the CCD detector electronics, the detector registers photons reaching it during the read-out period too. This introduces a shadow-like pattern in the image known as smear, whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been fitted with a synchronized shutter just in front of the X-ray source. This is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality, and the same is reflected in the reconstructed images.

  17. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.

    Science.gov (United States)

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-08-31

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.
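
    The acceptance decision behind the color-correspondence check can be sketched as a CIELAB color difference against a tolerance; the Lab values and the 1.5 ΔE tolerance below are assumptions, not the paper's figures.

        # Sketch: CIE76 Delta E*ab between the running tow and the reference.
        import math

        def delta_e_ab(lab1, lab2):
            return math.dist(lab1, lab2)      # Euclidean distance in Lab space

        tow_lab = (52.1, 8.4, -13.0)          # measured (illustrative)
        reference_lab = (51.4, 9.0, -12.2)

        de = delta_e_ab(tow_lab, reference_lab)
        print(f"dE*ab = {de:.2f} ->", "accept" if de < 1.5 else "correct recipe")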

  18. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-08-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.

  19. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method then enforces the rightful ownership of the watermarked image, since there is no other version of the image than the watermarked one. We also take into consideration the Human Visual System (HVS) so that the proposed technique provides the desired characteristics of a visible watermarking scheme, such that the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only binary watermark patterns are supported, the proposed watermarking algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.
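
    A minimal sketch of embedding a visible mark directly in the Bayer mosaic follows; the uniform blend factor and rectangular stencil are assumptions, and the paper's HVS-adaptive weighting is not reproduced.

        # Sketch: visible watermark embedded in a raw Bayer mosaic by scaling
        # photosite values inside the watermark region.
        import numpy as np

        def embed_visible(bayer, mark_mask, strength=0.35):
            """bayer: 2D uint16 raw mosaic; mark_mask: 2D bool stencil."""
            out = bayer.astype(np.float32)
            # Darken marked photosites uniformly; every CFA channel is scaled
            # the same way, so demosaicing keeps the mark roughly neutral in hue.
            out[mark_mask] *= (1.0 - strength)
            return np.clip(out, 0, 65535).astype(np.uint16)

        raw = np.random.default_rng(5).integers(0, 4096, (600, 800),
                                                dtype=np.uint16)
        mask = np.zeros_like(raw, dtype=bool)
        mask[50:150, 50:350] = True       # rectangular stencil as a stand-in
        stamped = embed_visible(raw, mask)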

  20. The use of detective quantum efficiency (DQE) in evaluating the performance of gamma camera systems

    Energy Technology Data Exchange (ETDEWEB)

    Starck, Sven-Ake [Department of Hospital Physics, County Hospital Ryhov, Joenkoeping (Sweden); Department of Radiation Physics, Goeteborg University, Goeteborg (Sweden); Baath, Magnus [Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Goeteborg (Sweden); Department of Radiation Physics, Goeteborg University, Goeteborg (Sweden); Carlsson, Sten [Department of Radiology, Uddevalla Hospital, Uddevalla (Sweden); Department of Radiation Physics, Goeteborg University, Goeteborg (Sweden)

    2005-04-07

    The imaging properties of an imaging system can be described by its detective quantum efficiency (DQE). Using the modulation transfer function calculated from measured line spread functions and the normalized noise power spectrum calculated from uniformity images, DQE was calculated with the number of photons emitted from a plane source as a measure for the incoming SNR². Measurements were made with 99mTc, using three different pulse height windows at 2 cm and 12 cm depths in water, with high resolution and all purpose collimators and with two different crystal thicknesses. The results indicated that at greater depths a 15% window is the best choice. The choice of collimator depends on the details in the organ being investigated. There is a break point at 0.5 cycles cm⁻¹ and 1.2 cycles cm⁻¹ at 12 cm and 2 cm depths, respectively. A difference was found in DQE between the two crystal thicknesses, with a slightly better result for the thick crystal for measurements at 12 cm depth. At 2 cm depth, the thinner crystal was slightly better for frequencies over 0.5 cycles cm⁻¹. The determination of DQE could be a method to optimize the parameters for different nuclear medicine investigations. The DQE could also be used in comparing different gamma camera systems with different collimators to obtain a figure of merit.
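
    Written out, the computation is DQE(f) = MTF(f)² / (q · NNPS(f)), with the photon fluence q standing in for the incoming SNR²; the sketch below uses synthetic MTF and NNPS curves in place of the measured ones.

        # Sketch: frequency-dependent DQE from MTF and normalized NPS.
        import numpy as np

        f = np.linspace(0.05, 2.0, 40)     # spatial frequency, cycles/cm
        mtf = np.exp(-1.8 * f)             # toy MTF from an LSF measurement
        nnps = 4e-6 * (1.0 + 0.2 * f)      # toy normalized noise power
        q = 2.5e5                          # photons emitted per unit area (assumed)

        dqe = mtf**2 / (q * nnps)
        print(f"DQE at {f[0]:.2f} cycles/cm: {dqe[0]:.3f}")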

  1. Development of X-ray CCD camera based X-ray micro-CT system

    Science.gov (United States)

    Sarkar, Partha S.; Ray, N. K.; Pal, Manoj K.; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y.; Sinha, A.; Gadkari, S. C.

    2017-02-01

    Availability of microfocus X-ray sources and high-resolution X-ray area detectors has made it possible for high-resolution microtomography studies to be performed outside the purview of synchrotron facilities. In this paper, we present work towards the use of an external shutter on a high-resolution microtomography system using an X-ray CCD camera as the detector. During micro computed tomography experiments, the X-ray source is continuously ON, and owing to the readout mechanism of the CCD detector electronics, the detector registers photons reaching it during the read-out period too. This introduces a shadow-like pattern in the image known as smear, whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been fitted with a synchronized shutter just in front of the X-ray source. This is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality, and the same is reflected in the reconstructed images.

  2. Real-time people counting system using a single video camera

    Science.gov (United States)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers: several persons may be considered a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and static object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.

  3. The globular cluster system in NGC5866: optical observations from HST Advanced Camera for Surveys

    CERN Document Server

    Cantiello, Michele; Raimondo, Gabriella

    2007-01-01

    We perform a detailed study of the Globular Cluster (GC) system in the galaxy NGC5866 based on F435W, F555W, and F625W (~ B, V, and R) HST Advanced Camera for Surveys images. Adopting color, size and shape selection criteria, the final list of GC candidates comprises 109 objects, with small estimated contamination from background galaxies and foreground stars. The color distribution of the final GC sample has a bimodal form. Adopting color-to-metallicity transformations derived from the Teramo-SPoT simple stellar population model, we estimate a metallicity [Fe/H] ~ -1.5 and -0.6 dex for the blue and red peaks, respectively. A similar result is found if the empirical color-metallicity relations derived from Galactic GC data are used. The two subpopulations show some of the features commonly observed in the GC systems of other galaxies, like a "blue tilt", higher central concentration of the red subsystem, and larger half-light radii at larger galactocentric distances. However, we do not find evidence of ...

  4. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Science.gov (United States)

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-01-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time. PMID:27589765

  5. Automatic cloud top height determination using a cost-effective time-lapse camera system

    Directory of Open Access Journals (Sweden)

    H. M. Schulz

    2014-03-01

    A new method for the determination of cloud top heights from the footage of a time-lapse camera is presented. Contact points between cloud tops and the underlying terrain are automatically detected in the camera image based on differences in the brightness, texture and movement of cloudy and non-cloudy areas. The height of the detected cloud top positions is determined by comparison with a digital elevation model projected to the view of the camera. The technique has been validated using data on the cloud immersion of a second camera as well as via visual assessment. The validation shows a high detection quality, especially regarding the requirements for the validation of satellite cloud top retrieval algorithms.

  6. Non-Metric CCD Camera Calibration Algorithm in a Digital Photogrammetry System

    Institute of Scientific and Technical Information of China (English)

    YANG Hua-chao; DENG Ka-zhong; ZHANG Shu-bi; GUO Guang-li; ZHOU Ming

    2006-01-01

    Camera calibration is a critical process in photogrammetry and a necessary step for acquiring 3D information from a 2D image. In this paper, a flexible approach for CCD camera calibration using the 2D direct linear transformation (DLT) and bundle adjustment is proposed. The proposed approach assumes that the camera interior orientation elements are known, and derives a new closed-form solution in planar object space based on homogeneous coordinate representation and matrix factorization. Homogeneous coordinate representation offers a direct matrix correspondence between the parameters of the 2D DLT and the collinearity equation. The matrix factorization starts by recovering the elements of the rotation matrix and then solving for the camera position with the collinearity equation. Camera calibration with high precision is achieved by bundle adjustment using the initial values of the camera orientation elements. The results show that the calibration precision of the principal point and focal length is about 0.2 and 0.3 pixels, respectively, which can meet the requirements of close-range photogrammetry with high accuracy.

  7. Scintillator-CCD camera system light output response to dosimetry parameters for proton beam range measurement

    Energy Technology Data Exchange (ETDEWEB)

    Daftari, Inder K., E-mail: idaftari@radonc.ucsf.edu [Department of Radiation Oncology, 1600 Divisadero Street, Suite H1031, University of California-San Francisco, San Francisco, CA 94143 (United States); Castaneda, Carlos M.; Essert, Timothy [Crocker Nuclear Laboratory,1 Shields Avenue, University of California-Davis, Davis, CA 95616 (United States); Phillips, Theodore L.; Mishra, Kavita K. [Department of Radiation Oncology, 1600 Divisadero Street, Suite H1031, University of California-San Francisco, San Francisco, CA 94143 (United States)

    2012-09-11

    The purpose of this study is to investigate the luminescence light output response of a plastic scintillator irradiated by a 67.5 MeV proton beam using various dosimetry parameters. The relationship of the visible scintillator light with the beam current or dose rate, aperture size and the thickness of water in the water column was studied. The images captured on a CCD camera system were used to determine optimal dosimetry parameters for measuring the range of a clinical proton beam. The method was developed as a simple quality assurance tool to measure the range of the proton beam and compare it to (a) measurements using two segmented ionization chambers with a water column between them, and (b) ionization chamber (IC-18) measurements in water. We used a block of plastic scintillator measuring 5 × 5 × 5 cm³ to record visible light generated by a 67.5 MeV proton beam. A high-definition digital video camera (Moticam 2300) connected to a PC via a USB 2.0 communication channel was used to record images of the scintillation luminescence. The brightness of the visible light was measured while changing the beam current and aperture size. The results were analyzed to obtain the range and were compared with Bragg peak measurements from an ionization chamber. The luminescence light from the scintillator increased linearly with the proton beam current. The light output also increased linearly with aperture size. The relationship between the proton range in the scintillator and the thickness of the water column showed good linearity, with a precision of 0.33 mm (SD) in proton range measurement. For the 67.5 MeV proton beam utilized, the optimal parameters for scintillator light output response were found to be 15 nA (16 Gy/min) and an aperture size of 15 mm with an image integration time of 100 ms. The Bragg peak depth-brightness distribution was compared with the depth-dose distribution from ionization chamber measurements.

  8. Intraoperative implant rod three-dimensional geometry measured by dual camera system during scoliosis surgery.

    Science.gov (United States)

    Salmingo, Remel Alingalan; Tadano, Shigeru; Abe, Yuichiro; Ito, Manabu

    2016-05-12

    Treatment of severe scoliosis is usually attained when the scoliotic spine is deformed and fixed by implant rods. Investigation of the intraoperative changes of implant rod shape in three dimensions is necessary to understand the biomechanics of scoliosis correction, establish consensus on the treatment, and achieve the optimal outcome. The objective of this study was to measure the intraoperative three-dimensional geometry and deformation of implant rods during scoliosis corrective surgery. A pair of images was obtained intraoperatively by the dual camera system before and after rotation of the rods during scoliosis surgery. The three-dimensional implant rod geometry before implantation was measured directly by the surgeon, and after surgery using a CT scanner. The images of the rods were reconstructed in three dimensions using quintic polynomial functions. The implant rod deformation was evaluated using the angle between the two three-dimensional tangent vectors measured at the ends of the implant rod. The implant rods at the concave side were significantly deformed during surgery. The highest rod deformation was found after the rotation of the rods. The implant rods regained curvature after the surgical treatment. Careful intraoperative rod maneuvering is important to achieve a safe clinical outcome because the intraoperative forces could be higher than the postoperative forces. Continuous scoliosis correction was observed, as indicated by the regaining of implant rod curvature after surgery.
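
    The geometry analysis can be sketched as a quintic fit to reconstructed rod points followed by the angle between the end tangents; the sample points below are synthetic stand-ins for the stereo-reconstructed rod coordinates.

        # Sketch: fit quintic polynomials x(t), y(t), z(t) to rod points and
        # measure the deformation angle between the tangents at the two ends.
        import numpy as np

        t = np.linspace(0.0, 1.0, 25)                  # arc parameter
        pts = np.stack([np.sin(0.9 * t), 0.1 * t**2, t], axis=1)  # toy rod

        coeffs = [np.polyfit(t, pts[:, k], deg=5) for k in range(3)]
        deriv = [np.polyder(c) for c in coeffs]

        def tangent(tt):
            v = np.array([np.polyval(d, tt) for d in deriv])
            return v / np.linalg.norm(v)

        v0, v1 = tangent(t[0]), tangent(t[-1])
        angle = np.degrees(np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0)))
        print(f"angle between end tangents: {angle:.1f} deg")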

  9. Integration of a multi-camera vision system and strapdown inertial navigation system (SDINS) with a modified Kalman filter.

    Science.gov (United States)

    Parnian, Neda; Golnaraghi, Farid

    2010-01-01

    This paper describes the development of a modified Kalman filter to integrate a multi-camera vision system and a strapdown inertial navigation system (SDINS) for tracking a hand-held moving device in slow or nearly static applications over extended periods of time. In this algorithm, the magnitudes of the changes in position and velocity are estimated and then added to the previous estimates of the position and velocity, respectively. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. The proposed Kalman filter removes the effect of the gravitational force in the state-space model. As a result, the accumulated error is eliminated and the resulting position estimate is smoother and ripple-free.

  10. A positioning model of a two CCD camera coordinate system with an alternate-four-matrix look-up table algorithm

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Wei, Tzu-Chi; Chen, Wei-Lung; Chang, Chia-Chang

    2010-12-01

    This study proposes a novel positioning model of a two CCD camera coordinate system with an alternate-four-matrix (AFM) look-up table (LUT) algorithm. Two CCD cameras are set on both sides of a large scale screen and used to aid the position measures of targets. The coordinate position of the object in a specified space can be obtained by using different viewing angles of the two cameras and the AFMLUT method. The right camera is in charge of detecting the intermediate block near the right side and the dead zone of the left camera, using the first and the second matrix LUTs. The left camera is in charge of detecting the other parts, using the third and the fourth matrix LUTs. The results indicate that this rapid mapping and four matrix memory allocation method has good accuracy (positioning error <2%) and stability when operating a human-machine interface system.
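    A hypothetical sketch of the alternate-four-matrix lookup is given below. The abstract does not specify the data layout or the region test, so the partitioning logic, array shapes and names here are illustrative only.

```python
import numpy as np

W, H = 640, 480
# lut[k][v, u] -> (x, y) screen coordinate; the four matrices would be
# filled during calibration of the two-camera rig.
lut = np.zeros((4, H, W, 2), dtype=np.float32)

def locate(camera, u, v, in_other_dead_zone):
    """Select one of the four LUT matrices and return the screen position.

    The right camera covers the intermediate block near the right side and
    the left camera's dead zone (matrices 1 and 2); the left camera covers
    the remaining regions (matrices 3 and 4).
    """
    if camera == "right":
        k = 1 if in_other_dead_zone else 0
    else:
        k = 3 if in_other_dead_zone else 2
    return lut[k, v, u]

print(locate("right", 320, 240, in_other_dead_zone=True))
```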

  11. Design of Video Interface Conversion System from SDI to Camera Link Based on FPGA

    Institute of Scientific and Technical Information of China (English)

    朱超; 刘艳滢; 董月芳

    2011-01-01

    Aimed at cameras with SDI interface output, a video interface conversion system that transforms SDI input into Camera Link output is designed and implemented, using Xilinx's Spartan-3E XC3S250E as the main control chip. The cable equalization, data retiming and video decoding circuits for the SDI signal, and the data stream de-interleaving, storage, color space conversion and Camera Link timing generator modules of the FPGA are introduced in detail. In practical applications, the SDI video signal output from a camera can be converted by this system and fed into a frame grabber with a Camera Link interface, which makes video display and processing more convenient.

  12. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-10-01

    Full Text Available Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages, such as high recognition performance and a lower likelihood of theft and of inaccuracies caused by skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, termed presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors) based on the observations of the researchers about the differences between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and has delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition systems using a convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method, using the principal component analysis (PCA) method for dimensionality reduction of the feature space and a support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared
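    The post-processing stage described above (PCA for dimensionality reduction, SVM for classification) can be sketched with standard tooling as follows; the feature vectors and labels below are random placeholders standing in for CNN features of live and presentation-attack images.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4096))   # hypothetical CNN feature vectors
labels = rng.integers(0, 2, size=200)     # 0 = attack, 1 = live (fake data)

# Reduce the feature space with PCA, then classify with an RBF-kernel SVM.
clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
clf.fit(features, labels)
print(clf.predict(features[:5]))
```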

  13. Development of a high-speed H-alpha camera system for the observation of rapid fluctuations in solar flares

    Science.gov (United States)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.; Chen, P. C.

    1988-01-01

    A solid-state digital camera was developed for obtaining H alpha images of solar flares with 0.1 s time resolution. Beginning in the summer of 1988, this system will be operated in conjunction with SMM's hard X-ray burst spectrometer (HXRBS). Electron time-of-flight effects that are crucial for determining the flare energy release processes should be detectable with these combined H alpha and hard X-ray observations. Charge-injection device (CID) cameras provide 128 x 128 pixel images simultaneously in the H alpha blue wing, line center, and red wing, or another wavelength of interest. The data recording system employs a microprocessor-controlled electronic interface between each camera and a digital processor board that encodes the data into a serial bitstream for continuous recording by a standard video cassette recorder. Only a small fraction of the data will be permanently archived, through utilization of a direct memory access interface onto a VAX-750 computer. In addition to correlations with hard X-ray data, observations from the high-speed H alpha camera will also be correlated with optical and microwave data and with data from future MAX 1991 campaigns. Whether the recorded optical flashes are simultaneous with X-ray peaks to within 0.1 s, are delayed by tenths of seconds, or are even undetectable, the results will have implications for the validity of both thermal and nonthermal models of hard X-ray production.

  14. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm{sup 3}) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for correction of image distortions arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. The master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  15. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  16. A double photomultiplier Compton camera and its readout system for mice imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, Cristiano Lino [Physics Department Galileo Galilei, University of Padua, Via Marzolo 8, Padova 35131 (Italy) and INFN Padova, Via Marzolo 8, Padova 35131 (Italy); Atroshchenko, Kostiantyn [Physics Department Galileo Galilei, University of Padua, Via Marzolo 8, Padova 35131 (Italy) and INFN Legnaro, Viale dell' Universita 2, Legnaro PD 35020 (Italy); Baldazzi, Giuseppe [Physics Department, University of Bologna, Viale Berti Pichat 6/2, Bologna 40127, Italy and INFN Bologna, Viale Berti Pichat 6/2, Bologna 40127 (Italy); Bello, Michele [INFN Legnaro, Viale dell' Universita 2, Legnaro PD 35020 (Italy); Uzunov, Nikolay [Department of Natural Sciences, Shumen University, 115 Universitetska str., Shumen 9712, Bulgaria and INFN Legnaro, Viale dell' Universita 2, Legnaro PD 35020 (Italy); Di Domenico, Giovanni [Physics Department, University of Ferrara, Via Saragat 1, Ferrara 44122 (Italy) and INFN Ferrara, Via Saragat 1, Ferrara 44122 (Italy)

    2013-04-19

    We have designed a Compton Camera (CC) to image the bio-distribution of gamma-emitting radiopharmaceuticals in mice. A CC employs 'electronic collimation', i.e. a technique that traces the gamma-rays instead of selecting them with physical lead or tungsten collimators. To perform such a task, a CC measures the parameters of the Compton interaction that occurs in the device itself. At least two detectors are required: one (the tracker), where the primary gamma undergoes a Compton interaction, and a second one (the calorimeter), in which the scattered gamma is completely absorbed. From these measurements the polar angle, and hence a 'cone' of possible incident directions, is obtained (an event with 'incomplete geometry'). Different solutions for the two detectors are proposed in the literature: our design foresees two similar position-sensitive photomultipliers (PMT, Hamamatsu H8500). Each PMT has 64 output channels that are reduced to 4 using a charge-multiplexed readout system, i.e. a series charge multiplexing network of resistors. Triggering of the system is provided by the coincidence of fast signals extracted at the last dynode of the PMTs. Assets are the low cost and the simplicity of design and operation, with just one type of device; drawbacks include a lower resolution with respect to more sophisticated trackers and a full 64-channel readout. This paper compares our two-Hamamatsu CC design to other solutions and shows that its spatial and energy accuracy is suitable for the inspection of radioactivity in mice.
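    For readers unfamiliar with electronic collimation, the cone's half-angle follows from standard Compton kinematics applied to the two energy deposits; a minimal sketch, with illustrative energies not taken from the paper, is given below.

```python
import math

ME_C2 = 511.0  # electron rest energy in keV

def compton_cone_angle(e_tracker_kev, e_calorimeter_kev):
    """Polar scattering angle (radians) defining the Compton cone.

    e_tracker_kev: energy deposited by the Compton electron in the tracker.
    e_calorimeter_kev: energy of the scattered photon fully absorbed in the
    calorimeter. The incident photon energy is their sum.
    """
    e0 = e_tracker_kev + e_calorimeter_kev
    cos_theta = 1.0 - ME_C2 * (1.0 / e_calorimeter_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically inconsistent energies")
    return math.acos(cos_theta)

# e.g. a 140 keV (99mTc) photon depositing 30 keV in the tracker:
print(math.degrees(compton_cone_angle(30.0, 110.0)))
```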

  17. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    Directory of Open Access Journals (Sweden)

    Nicholas Schwabe

    2017-07-01

    Full Text Available The underground mining industry, and some above-ground operations, rely on the use of heavy equipment that articulates to navigate corners in the tight confines of the tunnels. Poor line of sight (LOS) has been identified as a problem for safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360 degree view around the machine, have been implemented to improve the available LOS for the operator. A four-camera system was modeled in a computer environment to assess LOS on a 3D CAD model of a typical articulated machine. When positioned without any articulation, the system is excellent at removing blind spots for a machine driving straight forward or backward in a straight tunnel. Further analysis reveals that when the machine articulates in a simulated corner section, some camera locations are no longer useful for improving LOS into the corner. In some cases, the operator has a superior view into the corner when compared to the best available view from the camera. The work points to the need to integrate proximity detection systems at the design, build, and manufacture stage, and to consider proper policy and procedures that would address the gains and limits of the systems prior to implementation.

  18. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  19. Calibration of gamma camera systems for a multicentre European {sup 123}I-FP-CIT SPECT normal database

    Energy Technology Data Exchange (ETDEWEB)

    Tossici-Bolt, Livia [Southampton Univ. Hospitals NHS Trust, Dept. of Medical Physics and Bioengineering, Southampton (United Kingdom); Dickson, John C. [UCLH NHS Foundation Trust and Univ. College London, Institute of Nuclear Medicine, London (United Kingdom); Sera, Terez [Univ. of Szeged, Dept. of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Nijs, Robin de [Rigshospitalet and Univ. of Copenhagen, Neurobiology Research Unit, Copenhagen (Denmark); Bagnara, Maria Claudia [Az. Ospedaliera Universitaria S. Martino, Medical Physics Unit, Genoa (Italy); Jonsson, Cathrine [Karolinska Univ. Hospital, Dept. of Nuclear Medicine, Medical Physics, Stockholm (Sweden); Scheepers, Egon [Univ. of Amsterdam, Dept. of Nuclear Medicine, Academic Medical Centre, Amsterdam (Netherlands); Zito, Felicia [Fondazione IRCCS Granda, Ospedale Maggiore Policlinico, Dept. of Nuclear Medicine, Milan (Italy); Seese, Anita [Univ. of Leipzig, Dept. of Nuclear Medicine, Leipzig (Germany); Koulibaly, Pierre Malick [Univ. of Nice-Sophia Antipolis, Nuclear Medicine Dept., Centre Antoine Lacassagne, Nice (France); Kapucu, Ozlem L. [Gazi Univ., Faculty of Medicine, Dept. of Nuclear Medicine, Ankara (Turkey); Koole, Michel [Univ. Hospital and K.U. Leuven, Nuclear Medicine, Leuven (Belgium); Raith, Maria [Medical Univ. of Vienna, Dept. of Nuclear Medicine, Vienna (Austria); George, Jean [Univ. Catholique Louvain, Nuclear Medicine Division, Mont-Godinne Medical Center, Mont-Godinne (Belgium); Lonsdale, Markus Nowak [Bispebjerg Univ. Hospital, Dept. of Clinical Physiology and Nuclear Medicine, Copenhagen (Denmark); Muenzing, Wolfgang [Univ. of Munich, Dept. of Nuclear Medicine, Munich (Germany); Tatsch, Klaus [Univ. of Munich, Dept. of Nuclear Medicine, Munich (Germany); Municipal Hospital of Karlsruhe Inc., Dept. of Nuclear Medicine, Karlsruhe (Germany); Varrone, Andrea [Center for Psychiatric Research, Karolinska Inst., Dept. of Clinical Neuroscience, Stockholm (Sweden)

    2011-08-15

    A joint initiative of the European Association of Nuclear Medicine (EANM) Neuroimaging Committee and EANM Research Ltd. aimed to generate a European database of [{sup 123}I]FP-CIT single photon emission computed tomography (SPECT) scans of healthy controls. This study describes the characterization and harmonization of the imaging equipment of the institutions involved. {sup 123}I SPECT images of a striatal phantom filled to striatal-to-background ratios between 10:1 and 1:1 were acquired on all the gamma cameras, with the absolute ratios measured from aliquots. The images were reconstructed by a core lab using ordered subset expectation maximization (OSEM) without corrections (NC), with attenuation correction only (AC) and with additional scatter and septal penetration correction (ACSC) using the triple energy window method. A quantitative parameter, the simulated specific binding ratio (sSBR), was measured using the 'Southampton' methodology, which accounts for the partial volume effect, and compared against the actual values obtained from the aliquots. Camera-specific recovery coefficients were derived from linear regression, and the error of the measurements was evaluated using the coefficient of variation (COV). The relationship between measured and actual sSBRs was linear across all systems. Variability was observed between different manufacturers and, to a lesser extent, between cameras of the same type. The NC and AC measurements were found to systematically underestimate the actual sSBRs, while the ACSC measurements resulted in recovery coefficients close to 100% for all cameras (AC range 69-89%, ACSC range 87-116%). The COV improved from 46% (NC) to 32% (AC) and to 14% (ACSC) (p < 0.001). A satisfactory linear response was observed across all cameras. Quantitative measurements depend upon the characteristics of the SPECT systems, and their calibration is a necessary prerequisite for data pooling. Together with accounting for partial volume, the
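    The recovery coefficients quoted above come from a linear regression of measured against actual ratios; a minimal sketch of that step, with made-up numbers standing in for the phantom data, might look like this.

```python
import numpy as np

actual = np.array([1.0, 2.5, 4.0, 6.0, 8.0, 10.0])      # aliquot ratios
measured = np.array([0.9, 2.2, 3.5, 5.3, 7.1, 8.9])     # ACSC-corrected

# Fit measured = slope * actual + intercept; the slope (as a percentage)
# plays the role of the camera-specific recovery coefficient.
slope, intercept = np.polyfit(actual, measured, 1)
recovery_coefficient = slope * 100.0
print(f"recovery = {recovery_coefficient:.0f}%, intercept = {intercept:.2f}")
```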

  20. Design, Development and Testing of the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) Guidance, Navigation and Control System

    Science.gov (United States)

    Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.

    2003-01-01

    Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical-shaped and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.

  1. A Camera and Multi-Sensor Automated Station Design for Polar Physical and Biological Systems Monitoring: AMIGOS

    Science.gov (United States)

    Bohlander, J. A.; Ross, R.; Scambos, T.; Haran, T. M.; Bauer, R. J.

    2012-12-01

    The Automated Meteorology - Ice/Indigenous species - Geophysics Observation System (AMIGOS) consists of a set of measurement instruments and camera(s) controlled by a single-board computer with a simplified Linux operating system and an Iridium satellite modem supporting two-way communication. Primary features of the system relevant to polar operations are low power requirements, daily data uploading, reprogramming, tolerance for low temperatures, and various approaches for automatic resets and recovery from low-power or cold shut-down. Instruments include a compact weather station, C/A or dual-frequency GPS, solar flux and reflectivity sensors, sonic snow gages, a simplified radio-echo-sounder, and a resistance thermometer string in the firn column. In the current state of development, there are two basic designs. One is intended for in situ observations of glacier conditions. The other design supports a high-resolution camera for monitoring biological or geophysical systems from short distances (100 m to 20 km). The stations have been successfully used in several locations for operational support, monitoring rapid ice changes in response to climate change or iceberg drift, and monitoring penguin colony activity. As of June 2012, there are 9 AMIGOS systems installed, all on the Antarctic continent. The stations are a working prototype for a planned series of upgraded stations, currently termed 'Sentinels'. These stations would carry further instrumentation, communications, and processing capability to investigate ice-ocean interaction from ice tongue, ice shelf, or fjord coastline areas.

  2. Tower Camera

    Data.gov (United States)

    Oak Ridge National Laboratory — The tower camera in Barrow provides hourly images of ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for...

  3. TOUCHSCREEN USING WEB CAMERA

    Directory of Open Access Journals (Sweden)

    Kuntal B. Adak

    2015-10-01

    Full Text Available In this paper we present a web camera based touchscreen system which uses a simple technique to detect and locate a finger. We use a camera and a regular screen to achieve our goal. By capturing video and calculating the position of the finger on the screen, we can determine the touch position and perform a function at that location. Our method is simple and easy to implement, and our system requirements are less expensive compared to other techniques.
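    The paper does not give implementation details, but a webcam-based finger locator along these lines could be sketched with OpenCV as below; the background-subtraction approach, thresholds and the topmost-point heuristic are assumptions for illustration only.

```python
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask; threshold above shadow level (127) to keep moving skin.
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        finger = max(contours, key=cv2.contourArea)   # largest moving blob
        x, y = min(finger[:, 0, :], key=lambda p: p[1])  # topmost point
        # A real system would map (x, y) from camera pixels to screen
        # coordinates via a calibration step before triggering an action.
        cv2.circle(frame, (int(x), int(y)), 8, (0, 0, 255), 2)
    cv2.imshow("touch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```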

  4. Cardiac cameras.

    Science.gov (United States)

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner that slowly transmitted, in a row-by-row fashion, cardiac count distributions onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and the development of powerful computers to analyze, display, and quantify data have been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid-state cameras, fundamentally different from the Anger camera, show promise to provide higher counting efficiency and resolution, leading to better image quality, more patient comfort and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, i.e., hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  5. Geometric calibration of multi-sensor image fusion system with thermal infrared and low-light camera

    Science.gov (United States)

    Peric, Dragana; Lukic, Vojislav; Spanovic, Milana; Sekulic, Radmila; Kocic, Jelena

    2014-10-01

    A calibration platform for geometric calibration of a multi-sensor image fusion system is presented in this paper. Accurate geometric calibration of the extrinsic parameters of the cameras using a planar calibration pattern is applied, and specific software was developed for the calibration procedure. Patterns used in the geometric calibration were prepared with the aim of obtaining maximum contrast in both the visible and infrared spectral ranges: chessboards whose fields are made of materials with different emissivities. Experiments were performed in both indoor and outdoor scenarios. Important results of the geometric calibration of the multi-sensor image fusion system are the extrinsic parameters, in the form of homography matrices used for the homography transformation of the object plane to the image plane. For each camera a corresponding homography matrix is calculated; these matrices can be used for registration of images from the thermal and low-light cameras. We implemented such an image registration algorithm to confirm the accuracy of the geometric calibration procedure in the multi-sensor image fusion system. Results are given for selected patterns - chessboards with fields made of different emissivity materials. For the final image registration algorithm in the surveillance system for object tracking we chose a multi-resolution image registration algorithm, which combines naturally with a pyramidal fusion scheme. The image pyramids generated at each time step of the registration algorithm may be reused at the fusion stage, so that the overall number of calculations that must be performed is greatly reduced.
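    The homography-based registration described above can be reproduced with standard tools; in the sketch below the file names, pattern size and corner detector are placeholders, since the authors' own software is not public.

```python
import cv2

PATTERN = (7, 6)  # inner corners of the hypothetical chessboard

thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
lowlight = cv2.imread("lowlight.png", cv2.IMREAD_GRAYSCALE)
if thermal is None or lowlight is None:
    raise SystemExit("placeholder images not found")

# Detect the same physical corners in both modalities, then estimate the
# homography mapping thermal pixels onto the low-light image plane.
ok1, pts_t = cv2.findChessboardCorners(thermal, PATTERN)
ok2, pts_l = cv2.findChessboardCorners(lowlight, PATTERN)
if ok1 and ok2:
    H, _ = cv2.findHomography(pts_t, pts_l, cv2.RANSAC)
    registered = cv2.warpPerspective(
        thermal, H, (lowlight.shape[1], lowlight.shape[0]))
```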

  6. TEMPERATURE AND HEAT FLOW WHEN TAPPING HARDENED STEEL USING DIFFERENT COOLING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Lincoln Cardoso Brandão

    2009-08-01

    Full Text Available Machining hardened steels has always been a great challenge in metal cutting, particularly for tapping operations. In the present paper, temperature was assessed when tapping hardened AISI H13. Dry machining and two cooling/lubrication systems were used: flooded and minimum quantity of fluid (MQF) at 20 ml/h, both using mineral oil. The tapping operation was performed on 100 x 40 mm, 14 mm thick workpieces with a hardness of 55 HRc. An implanted thermocouple technique was used for temperature measurement at distances very close to the largest thread diameter (at 0.1, 2.5 and 5.0 mm). Three thermocouples were used for each distance along the workpiece diameter, at 3.0, 7.0 and 11.0 mm from the tap entrance. Measurements were replicated twice for each condition tested. An analytical heat conduction model was used to evaluate the temperature at the tool-workpiece interface and to determine the heat flow and convection coefficient. The smallest temperature increase and heat flow were observed when using the flooded system, followed by the MQF system, compared to the dry condition. The effect was directly proportional to the amount of lubricant applied, the MQF system also improving on dry cutting.

  7. Tests of a new CCD-camera based neutron radiography detector system at the reactor stations in Munich and Vienna

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, E.; Pleinert, H. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Schillinger, B. [Technische Univ. Muenchen (Germany); Koerner, S. [Atominstitut der Oesterreichischen Universitaeten, Vienna (Austria)

    1997-09-01

    The performance of the new neutron radiography detector designed at PSI, with a cooled, highly sensitive CCD camera, was investigated under real neutronic conditions at three beam ports of two reactor stations. Different converter screens were applied, for which the sensitivity and the modulation transfer function (MTF) could be obtained. The results are very encouraging concerning the utilization of this detector system as a standard tool at the radiography stations at the spallation source SINQ. (author) 3 figs., 5 refs.

  8. High-resolution imaging of the Pluto-Charon system with the Faint Object Camera of the Hubble Space Telescope

    Science.gov (United States)

    Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.

    1994-01-01

    Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction-limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.

  9. Engineering study for pallet adapting the Apollo laser altimeter and photographic camera system for the Lidar Test Experiment on orbital flight tests 2 and 4

    Science.gov (United States)

    Kuebert, E. J.

    1977-01-01

    A Laser Altimeter and Mapping Camera System was included in the Apollo Lunar Orbital Experiment Missions. The backup system, never used in the Apollo Program, is available for use in the Lidar Test Experiments on the STS Orbital Flight Tests 2 and 4. Studies were performed to assess the problems associated with installation and operation of the Mapping Camera System in the STS. They covered the photographic capabilities of the Mapping Camera System, its mechanical and electrical interface with the STS, documentation, operation and survivability in the expected environments, ground support equipment, and test and field support.

  10. JANUS: the visible camera onboard the ESA JUICE mission to the Jovian system

    Science.gov (United States)

    Palumbo, Pasquale; Jaumann, Ralf; Cremonese, Gabriele; Hoffmann, Harald; Debei, Stefano; Della Corte, Vincenzo; Holland, Andrew; Lara, Luisa Maria

    2014-05-01

    The JUICE (JUpiter ICy moons Explorer) mission [1] was selected in May 2012 as the first Large mission in the frame of the ESA Cosmic Vision 2015-2025 program. JUICE is now in phase A-B1 and its final adoption is planned by late 2014. The mission is aimed at an in-depth characterization of the Jovian system, with an operational phase of about 3.5 years. Main targets for this mission will be Jupiter, its satellites and rings, and the complex relations within the system. The main focus will be on the detailed investigation of three of Jupiter's Galilean satellites (Ganymede, Europa, and Callisto), thanks to several fly-bys and 9 months in orbit around Ganymede. JANUS (Jovis, Amorum ac Natorum Undique Scrutator) is the camera system selected by ESA to fulfill the optical imaging scientific requirements of JUICE. It is being developed by a consortium involving institutes in Italy, Germany, Spain and the UK, supported by their respective space agencies, with the support of co-investigators also from the USA, France, Japan and Israel. The Galilean satellites Io, Europa, Ganymede and Callisto show an increase in geologic activity with decreasing distance to Jupiter [e.g., 2]. The three icy Galilean satellites Callisto, Ganymede and Europa show a tremendous diversity of surface features and differ significantly in their specific evolutionary paths. Each of these moons exhibits its own fascinating geologic history, shaped by the competition and combination of external and internal processes. Their origins and evolutions are influenced by factors such as density, temperature, composition (volatile compounds), stage of differentiation, volcanism, tectonism, the rheological reaction of ice and salts to stress, tidal effects, and interactions with the Jovian magnetosphere and space. These interactions are still recorded in the present surface geology. The record of geological processes spans from possible cryovolcanism through widespread tectonism to surface degradation and impact cratering

  11. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    Science.gov (United States)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites, and user interface technology with simple on-screen displays has been implemented more and more. User demands are increasing and the systems have more applicable fields due to the high penetration rate of the Internet; the demand for embedded systems therefore tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory at the Linux web server, and each frame from the web camera is compared with the previous one to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. The operating system was ported to an embedded Linux kernel and a root file system was mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux, for which a program based on the TCP/IP protocol was developed.
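    A toy version of the block matching step that produces the displacement vector is sketched below; the block size, search range and frame contents are placeholders, and a real implementation would match many blocks rather than a single central one.

```python
import numpy as np

def displacement_vector(prev, curr, block=16, search=8):
    """Exhaustive block matching: displacement of the central block."""
    h, w = prev.shape
    y0, x0 = (h - block) // 2, (w - block) // 2
    ref = prev[y0:y0 + block, x0:x0 + block].astype(np.int32)
    best_sad, best_dxy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y0 + dy:y0 + dy + block,
                        x0 + dx:x0 + dx + block].astype(np.int32)
            sad = np.abs(cand - ref).sum()    # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_dxy = sad, (dx, dy)
    return best_dxy                            # drives the pan/tilt motors

rng = np.random.default_rng(1)
f0 = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
f1 = np.roll(f0, shift=(2, 3), axis=(0, 1))    # simulate dx=3, dy=2 motion
print(displacement_vector(f0, f1))             # -> (3, 2)
```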

  12. The Hubble Wide Field Camera 3 Test of Surfaces in the Outer Solar System: Spectral Variation on Kuiper Belt Objects

    CERN Document Server

    Fraser, Wesley C; Glass, Florian

    2015-01-01

    Here we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the Wide Field Camera 3 in optical and NIR wavebands designed to complement those used during the first visit. Additionally, all observations originally presented by Fraser and Brown (2012) were reanalyzed through the same updated photometry pipeline. A reanalysis of the optical and NIR colour distribution reveals a bifurcated optical colour distribution and only two identifiable spectral classes, each of which occupies a broad range of colours and has correlated optical and NIR colours, in agreement with our previous findings. We report the detection of significant spectral variations on 5 targets which cannot be attributed to photometry errors, cosmic rays, point spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have ...

  13. Calibration of robot tool centre point using camera-based system

    Directory of Open Access Journals (Sweden)

    Gordić Zaviša

    2016-01-01

    Full Text Available The robot Tool Centre Point (TCP) calibration problem is of great importance for a number of industrial applications, and it is well known both in theory and in practice. Although various techniques have been proposed for solving this problem, they mostly require tool jogging or long processing times, both of which affect process performance by extending the cycle time. This paper presents an innovative way of TCP calibration using a set of two cameras. The robot tool is placed in an area where images in two orthogonal planes are acquired using the cameras. Using robust pattern recognition, even a deformed tool can be identified in the images, and information about its current position and orientation is forwarded to the control unit for calibration. Compared to other techniques, test results show a significant reduction in procedure complexity and calibration time. These improvements enable more frequent TCP checking and recalibration during production, thus improving product quality.

  14. A Multi-Camera System for Bioluminescence Tomography in Preclinical Oncology Research

    Directory of Open Access Journals (Sweden)

    Ralph P. Mason

    2013-07-01

    Full Text Available Bioluminescent imaging (BLI of cells expressing luciferase is a valuable noninvasive technique for investigating molecular events and tumor dynamics in the living animal. Current usage is often limited to planar imaging, but tomographic imaging can enhance the usefulness of this technique in quantitative biomedical studies by allowing accurate determination of tumor size and attribution of the emitted light to a specific organ or tissue. Bioluminescence tomography based on a single camera with source rotation or mirrors to provide additional views has previously been reported. We report here in vivo studies using a novel approach with multiple rotating cameras that, when combined with image reconstruction software, provides the desired representation of point source metastases and other small lesions. Comparison with MRI validated the ability to detect lung tumor colonization in mouse lung.

  15. AUTHENTIC: a very low-cost infrared detector and camera system

    Science.gov (United States)

    Mansi, Mike V.; Brookfield, Martin; Porter, Stephen G.; Edwards, Ivan; Bold, Brendon; Shannon, John; Lambkin, Paul; Mathewson, Alan

    2003-01-01

    An oxide-over-titanium metal resistance bolometer technology, developed by NMRC (Ireland), has been transferred to the X-FAB UK CMOS foundry at Plymouth, UK. Prototypes of the bolometers have been manufactured in the X-FAB production facility, and tests show performance comparable with the NMRC prototypes. The bolometer design has been integrated with a CMOS read-out chip, and the first wafers are currently being packaged for evaluation. The development of a low-cost thermal imaging camera using the detector is under way. We present an overview of the detector and camera design, together with preliminary results from the detector test programme. The work is partly funded by the European Union IST programme.

  16. Opto-mechanical design of the G-CLEF flexure control camera system

    Science.gov (United States)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first-light instrument of the Giant Magellan Telescope (GMT). The G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera, which monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator, including triple lenses, for producing a pupil; neutral density filters allowing the use of a much brighter star as a target or a guide; a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror; a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane; and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which were modified after the PDR in April 2015.

  17. The M31 pixel lensing plan campaign: MACHO lensing and self-lensing signals

    Energy Technology Data Exchange (ETDEWEB)

    Calchi Novati, S.; Scarpetta, G. [Istituto Internazionale per gli Alti Studi Scientifici (IIASS), Via Pellegrino 19, I-84019 Vietri Sul Mare (Italy); Bozza, V. [Dipartimento di Fisica E. R. Caianiello, Università di Salerno, Via Giovanni Paolo II 132, I-84084 Fisciano (Italy); Bruni, I.; Gualandi, R. [INAF, Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Dall' Ora, M. [INAF, Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, I-80131 Napoli (Italy); De Paolis, F.; Ingrosso, G.; Nucita, A.; Strafella, F. [Dipartimento di Matematica e Fisica E. De Giorgi, Università del Salento, CP 193, I-73100 Lecce (Italy); Dominik, M. [SUPA, University of St Andrews, School of Physics and Astronomy, North Haugh, St Andrews, KY16 9SS (United Kingdom); Jetzer, Ph. [Institute for Theoretical Physics, University of Zürich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Mancini, L. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Safonova, M.; Subramaniam, A. [Indian Institute of Astrophysics, Bangalore 560 034 (India); Sereno, M. [Dipartimento di Scienza Applicata e Tecnologia, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino (Italy); Gould, A. [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Collaboration: PLAN Collaboration

    2014-03-10

    We present the final analysis of the observational campaign carried out by the PLAN (Pixel Lensing Andromeda) collaboration to detect a dark matter signal in the form of MACHOs through the microlensing effect. The campaign consists of about 1 month per year of observations carried out over 4 years (2007-2010) at the 1.5 m Cassini telescope in Loiano (Astronomical Observatory of Bologna, OAB), plus 10 days of data taken in 2010 at the 2 m Himalayan Chandra Telescope, monitoring the central part of M31 (two fields of about 13' × 12.'6). We established a fully automated pipeline for the search and the characterization of microlensing flux variations. As a result, we detect three microlensing candidates. We evaluate the expected signal through a full Monte Carlo simulation of the experiment, completed by an analysis of the detection efficiency of our pipeline. We consider both 'self-lensing' and 'MACHO lensing' lens populations, given by M31 stars and dark matter halo MACHOs, in M31 and the Milky Way, respectively. The total number of events is consistent with the expected self-lensing rate. Specifically, we evaluate an expected signal of about two self-lensing events. As for MACHO lensing, for full 0.5 (10{sup –2}) M {sub ☉} MACHO halos, our prediction is for about four (seven) events. The comparatively small number of expected MACHO versus self-lensing events, together with the small number statistics at our disposal, does not enable us to put strong constraints on that population. Rather, the hypothesis, suggested by a previous analysis, of the MACHO nature of OAB-07-N2, one of the microlensing candidates, translates into a sizeable lower limit for the halo mass fraction in the form of the would-be MACHO population, f, of about 15% for 0.5 M {sub ☉} MACHOs.
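    For context, the flux variations searched for here follow the standard point-lens (Paczynski) magnification curve; the sketch below evaluates it for arbitrary illustrative parameters and is not the collaboration's pipeline.

```python
import numpy as np

def magnification(t, t0, tE, u0):
    """Point-source point-lens amplification A(u) = (u^2+2)/(u*sqrt(u^2+4)).

    u is the lens-source separation in Einstein radii; t0 is the time of
    closest approach, tE the Einstein crossing time, u0 the impact parameter.
    """
    u = np.hypot(u0, (t - t0) / tE)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

t = np.linspace(-40, 40, 9)   # days; parameters below are made up
print(magnification(t, t0=0.0, tE=15.0, u0=0.3).round(2))
```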

  18. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  19. Stereoscopic camera and viewing systems with undistorted depth presentation and reduced or eliminated erroneous acceleration and deceleration perceptions, or with perceptions produced or enhanced for special effects

    Science.gov (United States)

    Diner, Daniel B. (Inventor)

    1991-01-01

    Methods for providing stereoscopic image presentation and stereoscopic configurations using stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as an image magnification factor, q, and intercamera distance, 2w. For converged cameras, q is selected so that Ve - qwl = 0 (i.e., q = Ve/(wl)), where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point; for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of the left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations that produce a nonlinear relation between perceived and real depth, and erroneously produce or enhance perceived accelerations and decelerations, in order to provide special effects for entertainment, training, or educational purposes.
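    Numerically, the two selection rules reduce to a one-line computation each; the sketch below uses arbitrary example values for the symbols defined in the abstract.

```python
# Example values (meters) for the symbols defined above; not from the patent.
e = 0.032   # half the observer's interocular distance
w = 0.050   # half the intercamera distance
V = 1.2     # camera distance
l = 0.8     # distance from first nodal point to convergence point

q_converged = V * e / (w * l)   # from V*e - q*w*l = 0
q_parallel = e / w              # parallel-camera condition
print(q_converged, q_parallel)
```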

  20. Galactic Bulge Microlensing Events from the MACHO Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, C L; Griest, K; Popowski, P; Cook, K H; Drake, A J; Minniti, D; Myer, D G; Alcock, C; Allsman, R A; Alves, D R; Axelrod, T S; Becker, A C; Bennett, D P; Freeman, K C; Geha, M; Lehner, M J; Marshall, S L; Nelson, C A; Peterson, B A; Quinn, P J; Stubbs, C W; Sutherland, W; Vandehei, T; Welch, D L

    2005-06-16

    The authors present a catalog of 450 relatively high signal-to-noise microlensing events observed by the MACHO collaboration between 1993 and 1999. The events are distributed throughout the fields and, as expected, show clear concentration toward the Galactic center. No optical depth is given for this sample, since no blending efficiency calculation has been performed and the authors find evidence for substantial blending. In a companion paper they give optical depths for the sub-sample of events on clump giant source stars, where blending is a less significant effect. Several events with sources that may belong to the Sagittarius dwarf galaxy are identified; for these, even relatively low dispersion spectra could suffice to classify them as either consistent with Sagittarius membership or as non-Sagittarius sources. Several unusual events, such as microlensing of periodic variable source stars, binary lens events, and an event showing extended source effects, are identified. They also identify a number of contaminating background events as cataclysmic variable stars.

  1. Reconfigurable ASIC for a Low Level Trigger System in Cherenkov Telescope Cameras

    CERN Document Server

    Gascon, David; Blanch, Oscar; Boix, Joan; Delagnes, Eric; Delgado, Carlos; Freixas, Lluís; Guilloux, Fabrice; López-Coto, Rubén; Griffiths, Scott; Martínez, Gustavo; Martínez, Oscar; Sanuy, Andreu; Tejedor, Luis Ángel

    2016-01-01

    A versatile and reconfigurable ASIC is presented, which implements two different concepts of a low level trigger (L0) for Cherenkov telescopes: the majority trigger (sum of discriminated inputs) and the sum trigger concept (analogue clipped sum of inputs). Up to 7 input signals can be processed following one or both of these trigger concepts. Each differential pair output of the discriminator is also available as an LVDS output. Differential circuitry using local feedback allows the ASIC to achieve high speed (500 MHz) while maintaining good linearity in a 1 Vpp range. Experimental results are presented. A number of prototype camera designs of the Cherenkov Telescope Array (CTA) project will use this ASIC.
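    The two L0 concepts can be illustrated on sampled waveforms as below; the waveforms, thresholds and clip level are invented for the demonstration and do not correspond to the ASIC's actual operating points.

```python
import numpy as np

rng = np.random.default_rng(2)
signals = rng.normal(0.0, 0.1, size=(7, 100))    # 7 pixel inputs, a.u.
signals[2:4, 40:45] += 1.0                       # inject a pulse on 2 pixels

DISC_THR, MULTIPLICITY = 0.5, 2   # discriminator level, required pixel count
CLIP, SUM_THR = 0.8, 1.5          # per-pixel clip level, analogue sum level

# Majority trigger: count discriminated inputs per time sample.
majority = (signals > DISC_THR).sum(axis=0) >= MULTIPLICITY
# Sum trigger: clip each input, sum the analogue amplitudes, discriminate.
sum_trig = np.clip(signals, None, CLIP).sum(axis=0) > SUM_THR

print(majority.any(), sum_trig.any())   # both concepts fire on this pulse
```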

  2. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

    Full Text Available Summary. The most important operation in granular mixed fodder production is the molding process, during which the properties of the granular mixed fodder are defined; these determine the production process and final product quality. The possibility of using a digital video camera as an intelligent sensor in a production process control system is analyzed in the article. A parametric model of the process of molding bundles from granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed. A mathematical model of the automatic control system (ACS), with an etalon video frame used as the set point, was built in the MATLAB software environment. As a parameter of the bundle molding process, it is proposed to use the value of the specific area determined in the mathematical treatment of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the feed mass from the video frame images. Digital video of various modes of the molding machine was recorded, and after mathematical processing of the video the transfer functions for changes of the specific area, used as the adjustable parameter, were determined. Structural and functional diagrams of the system regulating the fodder bundle molding process with the use of digital video cameras were built and analyzed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained; in addition to viscosity, the creep property characteristic of the feed mass was considered. The mathematical model of the ACS for the bundle molding process, allowing investigation of the transient processes that occur in a control system using a digital video camera as the smart sensor, was developed in Simulink.
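    The abstract does not define the specific-area parameter precisely; one plausible image-processing reading, with a placeholder file name, is the occupied-pixel fraction of a thresholded frame, as sketched below.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
if frame is None:
    raise SystemExit("placeholder frame not found")

# Otsu threshold separates fodder bundles from background; the "specific
# area" is then taken here as the fraction of occupied pixels.
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
specific_area = np.count_nonzero(binary) / binary.size
print(f"specific area = {specific_area:.3f}")
```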

  3. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; et al.

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as the calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints on the camera parameters is derived in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located more easily and more accurately than points, the use of lines as the calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.
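    The line-based derivation itself is beyond a short sketch, but the point-mapping computation the abstract refers to (the same 3×4 matrix, computed linearly) is the classical DLT; a self-contained version with synthetic data follows.

```python
import numpy as np

def dlt(points_3d, points_2d):
    """Linear (DLT) estimate of the 3x4 perspective matrix from points."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)          # right null vector, up to scale

# Synthetic check: project known 3D points with a known matrix, recover it.
P_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
pts3 = np.random.default_rng(3).uniform(-1, 1, (8, 3)) + [0.0, 0.0, 4.0]
proj = (P_true @ np.hstack([pts3, np.ones((8, 1))]).T).T
pts2 = proj[:, :2] / proj[:, 2:]
P_est = dlt(pts3, pts2)
print(np.allclose(P_est / P_est[2, 3] * P_true[2, 3], P_true, atol=1e-6))
```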

  4. Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam

    Science.gov (United States)

    Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A. E.; Engelhardt, M.

    2005-04-01

    When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The principal set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10{sup 7} cm{sup -2} s{sup -1}, which enables the observation of sequences in a reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the effective range must be less due to the inherent noise level of the intensifier. The obtained results should be seen as the starting point for meeting the different requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operation principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves and camshaft with a micro-channel plate intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points.

  5. Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras

    Directory of Open Access Journals (Sweden)

    Xiaoqin Wang

    2014-12-01

    Full Text Available We present a new vision-based cooperative pose estimation scheme for systems of mobile robots equipped with RGB-D cameras. We first model a multi-robot system as an edge-weighted graph. Then, based on this model and using the real-time color and depth data, robots with shared fields of view estimate their relative poses pairwise. The system does not need a single common view shared by all robots, and it works in 3D scenes without any specific calibration pattern or landmark. The proposed scheme distributes the working load evenly in the system, hence it is scalable, and the computing power of the participating robots is efficiently used. The performance and robustness were analyzed on both synthetic and experimental data in different environments over a range of system configurations with varying numbers of robots and poses.
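    One common way to realize such a pairwise step, given matched 3D points from the shared field of view, is the Kabsch/Procrustes alignment sketched below; the matching itself is assumed done, and the synthetic data stand in for real RGB-D measurements, as the paper's exact estimator is not reproduced here.

```python
import numpy as np

def relative_pose(p, q):
    """R, t minimizing ||R @ p_i + t - q_i|| over matched 3D points."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    U, _, Vt = np.linalg.svd((p - pc).T @ (q - qc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ pc

rng = np.random.default_rng(4)
p = rng.uniform(-1.0, 1.0, size=(30, 3))      # points in robot A's frame
a = np.deg2rad(30.0)                          # true relative yaw
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -0.2, 0.1])
q = p @ R_true.T + t_true                     # same points in robot B's frame
R, t = relative_pose(p, q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```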

  6. Strategy for the development of a smart NDVI camera system for outdoor plant detection and agricultural embedded systems.

    Science.gov (United States)

    Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe

    2013-01-24

    The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices are reflected in low adoption across the various fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI), computed for each image pixel, is an applicable algorithm for discriminating plant information from the soil background, enabled by the large difference in reflectance between the near-infrared (NIR) and red optical bands. Two aligned charge coupled device (CCD) chips for the red and NIR channels are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed.
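    The per-pixel NDVI itself is a one-liner; the sketch below applies it to stand-in arrays for the registered red and NIR channels, with a threshold value that is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
red = rng.uniform(0.05, 0.4, size=(480, 640)).astype(np.float32)
nir = rng.uniform(0.3, 0.9, size=(480, 640)).astype(np.float32)

# NDVI = (NIR - red) / (NIR + red), computed per pixel.
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)   # avoid divide-by-zero
plant_mask = ndvi > 0.4      # illustrative threshold separating plant pixels
print(float(ndvi.mean()), int(plant_mask.sum()))
```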

  7. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-06-01

    Full Text Available For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and the number of reconstructed fine-scale 3D surface patches of leaf and stem was the largest. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency.

  8. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System.

    Science.gov (United States)

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-06-14

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and the number of reconstructed fine-scale 3D surface patches of leaf and stem was the largest. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency.
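
    The accuracy figures quoted in both records are the standard root-mean-square error and coefficient of determination; for reference, a small sketch of how such values are computed (variable names invented here):

        # RMSE and R^2 between estimated and measured plant parameters.
        import numpy as np

        def rmse(est: np.ndarray, meas: np.ndarray) -> float:
            return float(np.sqrt(np.mean((est - meas) ** 2)))

        def r_squared(est: np.ndarray, meas: np.ndarray) -> float:
            ss_res = np.sum((meas - est) ** 2)
            ss_tot = np.sum((meas - np.mean(meas)) ** 2)
            return float(1.0 - ss_res / ss_tot)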

  9. Image Analysis of OSIRIS-REx Touch-And-Go Camera System (TAGCAMS) Thermal Vacuum Test Images

    Science.gov (United States)

    Everett Gordon, Kenneth; Bos, Brent J.

    2017-01-01

    The objective of NASA’s OSIRIS-REx Asteroid Sample Return Mission, which launched in September 2016, is to travel to the near-Earth asteroid 101955 Bennu, survey and map the asteroid, and return a scientifically interesting sample to Earth in 2023. As part of its suite of integrated sensors, the OSIRIS-REx spacecraft includes a Touch-And-Go Camera System (TAGCAMS). The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, acquisition of the asteroid sample, and confirmation of the asteroid sample stowage in the spacecraft’s Sample Return Capsule (SRC). After being calibrated at the instrument level at Malin Space Science Systems (MSSS), the TAGCAMS were transferred to Lockheed Martin (LM), where they were put through a progressive series of spacecraft-level environmental tests. These tests culminated in a several-week-long, spacecraft-level thermal vacuum (TVAC) test during which hundreds of images were recorded. To analyze the images, custom code was developed in MATLAB R2016a. For the analyses of the TAGCAMS dark images, the code tracked the dark current level in each image as a function of the camera-head temperature. Results confirm that the detector dark current noise has not increased and follows trends similar to the results measured at the instrument level by MSSS. This indicates that the electrical performance of the camera system is stable, even after integration with the spacecraft, and will provide imagery with the required signal-to-noise ratio during spaceflight operations. During the TVAC testing, the TAGCAMS were positioned to view optical dot targets suspended in the chamber. Results for the TAGCAMS light images, using a centroid analysis on the positions of the optical target holes, indicate that the boresight pointing of the two navigation cameras depends on spacecraft temperature, but will not change by more than ten pixels (approximately 2

  10. ORIS: the Oak Ridge Imaging System program listings. [Nuclear medicine imaging with rectilinear scanner and gamma camera]

    Energy Technology Data Exchange (ETDEWEB)

    Bell, P. R.; Dougherty, J. M.

    1978-04-01

    The Oak Ridge Imaging System (ORIS) is a general-purpose access, storage, processing and display system for nuclear medicine imaging with a rectilinear scanner and gamma camera. This volume contains listings of the PDP-8/E version of ORIS Version 2. The system is designed to run under the Digital Equipment Corporation's OS/8 monitor in 16K or more words of core. System and image file mass storage is on RK8E disk; longer-term image file storage is provided on DECtape. Another version of this program exists for use with the RF08 disk, and a more limited version is for DECtape only. This latter version is intended for non-medical imaging.

  11. Optimal camera exposure for video surveillance systems by predictive control of shutter speed, aperture, and gain

    Science.gov (United States)

    Torres, Juan; Menéndez, José Manuel

    2015-02-01

    This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing neither under- nor overexposed images. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators based on the image histogram that define its shape and position. Furthermore, the location of the objects to be inspected is usually unknown in surveillance applications; thus, the whole image is monitored in this approach. To control the camera settings, we defined an exposure-parameters function (Ef) that depends linearly on the shutter speed and the electronic gain, and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already in use, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step, while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested in a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees which produce moving shadows on the ground. During the daytime of seven days, the algorithm was running alternatively together
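
    A schematic reading of the exposure function described above (the exact form and constants are assumptions inferred from the abstract): Ef grows with shutter time and electronic gain and falls with the square of the aperture diameter, so the controller can trade the three settings against one another at constant exposure.

        # Hypothetical sketch of the exposure-parameters function Ef.
        def exposure_function(shutter_s: float, gain: float, aperture_d: float) -> float:
            """Ef ~ (shutter * gain) / D^2; proportionality constants omitted."""
            return shutter_s * gain / (aperture_d ** 2)

        # Histogram-driven decision, schematically:
        #   not overexposed -> raise Ef until the histogram nears saturation
        #   overexposed     -> lower Ef until saturation clears, staying close
        #   both            -> fall back to a default non-overexposing Ef
        print(exposure_function(0.01, 2.0, 4.0))  # arbitrary example values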

  12. Adaptive strategies of remote systems operators exposed to perturbed camera-viewing conditions

    Science.gov (United States)

    Stuart, Mark A.; Manahan, Meera K.; Bierschwale, John M.; Sampaio, Carlos E.; Legendre, A. J.

    1991-01-01

    This report describes a preliminary investigation of the use of perturbed visual feedback during the performance of simulated space-based remote manipulation tasks. The primary objective of this NASA evaluation was to determine to what extent operators exhibit adaptive strategies which allow them to perform these specific types of remote manipulation tasks more efficiently while exposed to perturbed visual feedback. A secondary objective of this evaluation was to establish a set of preliminary guidelines for enhancing remote manipulation performance and reducing the adverse effects of perturbed visual feedback. These objectives were accomplished by studying the remote manipulator performance of test subjects exposed to various perturbed camera-viewing conditions while performing a simulated space-based remote manipulation task. Statistical analysis of performance and subjective data revealed that remote manipulation performance was adversely affected by perturbed visual feedback, and that performance tended to improve with successive trials in most perturbed viewing conditions.

  13. FPGA-Based HD Camera System for the Micropositioning of Biomedical Micro-Objects Using a Contactless Micro-Conveyor

    Directory of Open Access Journals (Sweden)

    Elmar Yusifli

    2017-03-01

    Full Text Available With recent advancements, micro-object contactless conveyors are becoming an essential part of the biomedical sector. They help avoid the infection and damage that can occur due to external contact. In this context, a smart micro-conveyor is devised. It is a Field Programmable Gate Array (FPGA)-based system that employs a smart surface for conveyance along with an OmniVision complementary metal-oxide-semiconductor (CMOS) HD camera for micro-object position detection and tracking. A specific FPGA-based hardware design and Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) implementation are realized, without employing any Nios processor or System on a Programmable Chip (SOPC) builder-based Central Processing Unit (CPU) core. This keeps the system efficient in terms of resource utilization and power consumption. The micro-object positioning status is captured with an embedded FPGA-based camera driver and communicated to the Image Processing, Decision Making and Command (IPDC) module. The IPDC is programmed in C++ and can run on a Personal Computer (PC) or on any appropriate embedded system. The IPDC decisions are sent back to the FPGA, which pilots the smart surface accordingly. In this way, an automated closed-loop system is employed to convey the micro-object towards a desired location. The devised system architecture and implementation principle are described, and their functionality is verified. Results confirm that the developed system functions correctly and outperforms comparable solutions.

  14. Invention and validation of an automated camera system that uses optical character recognition to identify patient name mislabeled samples.

    Science.gov (United States)

    Hawker, Charles D; McCarthy, William; Cleveland, David; Messinger, Bonnie L

    2014-03-01

    Mislabeled samples are a serious problem in most clinical laboratories. Published error rates range from 0.39/1000 to as high as 1.12%. Standardization of bar codes and label formats has not yet achieved the needed improvement. The mislabel rate in our laboratory, although low compared with published rates, prompted us to seek a solution to achieve zero errors. To reduce or eliminate our mislabeled samples, we invented an automated device using 4 cameras to photograph the outside of a sample tube. The system uses optical character recognition (OCR) to look for discrepancies between the patient name in our laboratory information system (LIS) vs the patient name on the customer label. All discrepancies detected by the system's software then require human inspection. The system was installed on our automated track and validated with production samples. We obtained 1 009 830 images during the validation period, and every image was reviewed. OCR passed approximately 75% of the samples, and no mislabeled samples were passed. The 25% failed by the system included 121 samples actually mislabeled by patient name and 148 samples with spelling discrepancies between the patient name on the customer label and the patient name in our LIS. Only 71 of the 121 mislabeled samples detected by OCR were found through our normal quality assurance process. We have invented an automated camera system that uses OCR technology to identify potential mislabeled samples. We have validated this system using samples transported on our automated track. Full implementation of this technology offers the possibility of zero mislabeled samples in the preanalytic stage.
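
    A toy sketch of the comparison step described above (hypothetical helper names; a production system must cope with OCR noise, formatting and partial reads far more carefully):

        # Toy sketch: flag a tube when the OCR-read label name disagrees
        # with the laboratory information system (LIS) name.
        def normalize(name: str) -> str:
            return " ".join(name.upper().replace(",", " ").split())

        def flag_for_human_review(lis_name: str, ocr_name: str) -> bool:
            """True -> route the tube image to a human inspector."""
            return normalize(lis_name) != normalize(ocr_name)

        print(flag_for_human_review("Doe, Jane", "DOE  JANE"))  # False: same tokens
        print(flag_for_human_review("Doe, Jane", "JANE DOE"))   # True: order differs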

  15. Measurement of liquid film flow on nuclear rod bundle in micro-scale by using very high speed camera system

    Science.gov (United States)

    Pham, Son; Kawara, Zensaku; Yokomine, Takehiko; Kunugi, Tomoaki

    2012-11-01

    Playing important roles in mass and heat transfer as well as in the safety of boiling water reactors, the liquid film flow on nuclear fuel rods has been studied with different measurement techniques such as ultrasonic transmission and conductivity probes. The experimental data obtained for this annular two-phase flow, however, are still not sufficient to construct a physical model for critical heat flux analysis, especially at the micro-scale. The remaining problems are mainly caused by the complicated geometry of fuel rod bundles and the high velocity and very unstable interfacial behavior of the liquid and gas flows. To overcome these difficulties, a new approach using a very high speed digital camera system has been introduced in this work. The test section, simulating a 3×3 rectangular rod bundle, was made of acrylic to allow full optical access for the camera. Image data were taken through a Cassegrain optical system, maintaining a spatio-temporal resolution of up to 7 μm and 20 μs. The results included not only real-time visual information on flow patterns, but also quantitative data such as the liquid film thickness, the droplets' size and speed distributions, and the tilt angle of wavy surfaces. These databases could contribute to the development of a new model for annular two-phase flow. Partly supported by the Global Center of Excellence (G-COE) program (J-051) of MEXT, Japan.

  16. A performance study of an electron-tracking Compton camera with a compact system for environmental gamma-ray observation

    CERN Document Server

    Mizumoto, Tetsuya; Takada, Atsushi; Tanimori, Toru; Komura, Shotaro; Kubo, Hidetoshi; Matsuoka, Yoshihiro; Mizumura, Yoshitaka; Nakamura, Kiseki; Nakamura, Shogo; Oda, Makoto; Parker, Joseph D; Sawano, Tatsuya; Bando, Naoto; Nabetani, Akira

    2015-01-01

    An electron-tracking Compton camera (ETCC) is a detector that can determine the arrival direction and energy of incident sub-MeV/MeV gamma rays on an event-by-event basis. It is a hybrid detector consisting of a gaseous time projection chamber (TPC), which serves as the Compton-scattering target and the tracker of recoil electrons, and a position-sensitive scintillation camera that absorbs the scattered gamma rays. To measure environmental gamma rays from soil contaminated with radioactive cesium (Cs), we developed a portable battery-powered ETCC system with a compact readout circuit and data-acquisition system for the SMILE-II experiment. We checked the gamma-ray imaging ability and ETCC performance in the laboratory using several gamma-ray point sources. The performance test indicates that the field of view (FoV) of the detector is about 1 sr and that the detection efficiency and angular resolution for 662 keV gamma rays from the ...

  17. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera.

    Science.gov (United States)

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-04-14

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration method for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimate of the unknown geometric transformation is obtained by registration of the two 3D point clouds, but it ends up being strongly affected by noise and data dispersion. A robust and optimal estimate is obtained by statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.

  18. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    Directory of Open Access Journals (Sweden)

    Antonio Lagudi

    2016-04-01

    Full Text Available The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration method for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimate of the unknown geometric transformation is obtained by registration of the two 3D point clouds, but it ends up being strongly affected by noise and data dispersion. A robust and optimal estimate is obtained by statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.
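
    The registration step named in both records, estimating a rigid transformation between two corresponded 3D point clouds, is classically solved in closed form via SVD (the Kabsch/Umeyama method). A minimal sketch of that standard technique, not the authors' code:

        # Closed-form rigid registration of corresponded point clouds (Kabsch).
        import numpy as np

        def rigid_transform(src: np.ndarray, dst: np.ndarray):
            """Return R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i||^2."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T  # reflection-safe rotation
            t = c_dst - R @ c_src
            return R, t

        # As in the paper, robustness can come from estimating (R, t) for each
        # rig pose and statistically aggregating the per-pose transformations.
        pts = np.random.rand(100, 3)
        R, t = rigid_transform(pts, pts + 0.5)
        print(np.round(t, 3))  # -> [0.5 0.5 0.5]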

  19. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades

    Science.gov (United States)

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long-wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal-barrier-coated turbine blades. Long-wavelength thermal images of first-stage blades were captured. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady-state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.

  20. Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  1. Uncooled radiometric camera performance

    Science.gov (United States)

    Meyer, Bill; Hoelter, T.

    1998-07-01

    Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions to applications currently using traditional, photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high resolution video imaging creates some unique challenges when using uncooled detectors. A temperature-controlled, field-of-view-limiting aperture (cold shield) is not typically included in the small-volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.

  2. Automated Placement of Multiple Stereo Cameras

    OpenAIRE

    Malik, Rahul; Bajcsy, Peter

    2008-01-01

    This paper presents a simulation framework for multiple stereo camera placement. Multiple stereo camera systems are becoming increasingly popular. Applications of multiple stereo camera systems, such as tele-immersive systems, enable cloning of dynamic scenes in real time and delivery of 3D information from multiple geographic locations for viewing in virtual (immersive) 3D spaces. In order to make such multi-stereo-camera systems ubiquitous, sol...

  3. Camera Link to HD-SDI interface converting system using FPGA

    Institute of Scientific and Technical Information of China (English)

    陈东成; 朱明; 郝志成; 刘剑

    2014-01-01

    Aiming at the shortcomings that the Camera Link interface is complex and can only transmit over short distances, an interface conversion system from Camera Link to HD-SDI is designed. A high-performance Altera FPGA (EP2S60F1020) is used to acquire the image data and encode them according to the SMPTE 274M standard. To solve the problem that the horizontal and vertical timings of Camera Link and HD-SDI differ, three SDRAM chips are used as a frame buffer and the HD-SDI output is delayed by one frame. The encoded data are fed to a serializer (LMH0030), which produces the HD-SDI output. Because the frame rates of the Camera Link input and the HD-SDI output are not exactly the same, one frame must be dropped after every 708 frames, resulting in a fixed frame drop at the output; the images processed by the FPGA itself, however, are not dropped. The results show that the system can convert images from a Camera Link camera to HD-SDI, and that the output data can be captured by an SDI acquisition card.
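
    The fixed frame drop mentioned above follows from a small rate mismatch between the input and output frame clocks. Schematically (the rates below are assumptions chosen only to reproduce the quoted 708-frame interval; they are not given in the record):

        # If the source delivers frames slightly faster than the sink emits
        # them, one frame must be dropped every N frames, with
        #   N = f_out / (f_in - f_out)    for f_in > f_out.
        def drop_interval(f_in_hz: float, f_out_hz: float) -> float:
            return f_out_hz / (f_in_hz - f_out_hz)

        # An input just 1/708 faster than a 60 Hz output drops one frame
        # roughly every 708 frames (illustrative values only):
        print(round(drop_interval(60.0 * 709 / 708, 60.0)))  # -> 708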

  4. The Dark Energy Camera

    CERN Document Server

    Flaugher, B; Honscheid, K; Abbott, T M C; Alvarez, O; Angstadt, R; Annis, J T; Antonik, M; Ballester, O; Beaufore, L; Bernstein, G M; Bernstein, R A; Bigelow, B; Bonati, M; Boprie, D; Brooks, D; Buckley-Geer, E J; Campa, J; Cardiel-Sas, L; Castander, F J; Castilla, J; Cease, H; Cela-Ruiz, J M; Chappa, S; Chi, E; Cooper, C; da Costa, L N; Dede, E; Derylo, G; DePoy, D L; de Vicente, J; Doel, P; Drlica-Wagner, A; Eiting, J; Elliott, A E; Emes, J; Estrada, J; Neto, A Fausti; Finley, D A; Flores, R; Frieman, J; Gerdes, D; Gladders, M D; Gregory, B; Gutierrez, G R; Hao, J; Holland, S E; Holm, S; Huffman, D; Jackson, C; James, D J; Jonas, M; Karcher, A; Karliner, I; Kent, S; Kessler, R; Kozlovsky, M; Kron, R G; Kubik, D; Kuehn, K; Kuhlmann, S; Kuk, K; Lahav, O; Lathrop, A; Lee, J; Levi, M E; Lewis, P; Li, T S; Mandrichenko, I; Marshall, J L; Martinez, G; Merritt, K W; Miquel, R; Munoz, F; Neilsen, E H; Nichol, R C; Nord, B; Ogando, R; Olsen, J; Palio, N; Patton, K; Peoples, J; Plazas, A A; Rauch, J; Reil, K; Rheault, J -P; Roe, N A; Rogers, H; Roodman, A; Sanchez, E; Scarpine, V; Schindler, R H; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Schurter, P; Scott, L; Serrano, S; Shaw, T M; Smith, R C; Soares-Santos, M; Stefanik, A; Stuermer, W; Suchyta, E; Sypniewski, A; Tarle, G; Thaler, J; Tighe, R; Tran, C; Tucker, D; Walker, A R; Wang, G; Watson, M; Weaverdyck, C; Wester, W; Woods, R; Yanny, B

    2015-01-01

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15 micron x 15 micron pixels with a plate scale of 0.263 arc sec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construct...

  5. CAOS-CMOS camera.

    Science.gov (United States)

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightnesses, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.

  6. The Dark Energy Camera

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; et al.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15 μm x 15 μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  7. Development of an Automated System to Test and Select CCDs for the Dark Energy Survey Camera (DECam)

    Science.gov (United States)

    Kubik, Donna; Dark Energy Survey Collaboration

    2009-01-01

    The Dark Energy Survey (DES) is a next generation sky survey aimed directly at understanding why the universe is expanding at an accelerating rate. The survey will use the Dark Energy Camera (DECam), a 3 square degree, 500 Megapixel mosaic camera mounted at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory, to observe 5000 square-degrees of sky through 5 filters (g, r, i, z, Y). DECam will be comprised of 74 CCDs: 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The goal of the DES is to provide a factor of 3-5 improvement in the Dark Energy Task Force Figure of Merit using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon acoustic oscillations, and Type IA supernovae. This goal sets stringent technical requirements for the CCDs. Testing a large number of CCDs to determine which best meet the DES requirements would be a very time-consuming manual task. We have developed a system to automatically collect and analyze CCD test data. The test results are entered into an online SQL database which facilitates selection of those CCDs that best meet the technical specifications for charge transfer efficiency, linearity, full well, quantum efficiency, noise, dark current, cross talk, diffusion, and cosmetics.
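
    A schematic of the select-by-specification workflow described above (the table and column names are invented for illustration; the collaboration's actual schema is not given in the record):

        # Hypothetical sketch: store CCD test metrics and query devices in spec.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE ccd_tests (
            serial TEXT, cte REAL, read_noise_e REAL, dark_current REAL)""")
        conn.execute("INSERT INTO ccd_tests VALUES ('CCD-001', 0.999999, 7.5, 0.01)")

        # Example cuts (thresholds are placeholders, not DES specifications):
        rows = conn.execute("""SELECT serial FROM ccd_tests
                               WHERE cte > 0.999995 AND read_noise_e < 10""").fetchall()
        print(rows)  # -> [('CCD-001',)]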

  8. Stellar Occultations in the Coma of Comet 67P/Churyumov-Gerasimenko Observed by the OSIRIS Camera System

    Science.gov (United States)

    Moissl, Richard; Kueppers, Michael

    2016-10-01

    In this paper we present the results of an analysis of a large part of the existing image data from the OSIRIS camera system onboard the Rosetta spacecraft, in which stars of sufficient brightness (down to a limiting magnitude of 6) have been observed through the coma of Comet 67P/Churyumov-Gerasimenko ("C-G"). Over the course of the Rosetta main mission, the coma of the comet underwent large changes in density and structure, owing to the changing insolation along the orbit of C-G. We report on the changes of the stellar signals in the wavelength ranges covered by the filters of the OSIRIS Narrow-Angle (NAC) and Wide-Angle (WAC) cameras. Acknowledgements: OSIRIS was built by a consortium led by the Max-Planck-Institut für Sonnensystemforschung, Göttingen, Germany, in collaboration with CISAS, University of Padova, Italy, the Laboratoire d'Astrophysique de Marseille, France, the Instituto de Astrofísica de Andalucía, CSIC, Granada, Spain, the Scientific Support Office of the European Space Agency, Noordwijk, The Netherlands, the Instituto Nacional de Técnica Aeroespacial, Madrid, Spain, the Universidad Politécnica de Madrid, Spain, the Department of Physics and Astronomy of Uppsala University, Sweden, and the Institut für Datentechnik und Kommunikationsnetze der Technischen Universität Braunschweig, Germany.

  9. Development of the data acquisition system for the x-ray CCD camera (SXI) onboard ASTRO-H

    Science.gov (United States)

    Fujinaga, Takahisa; Anabuki, Naohisa; Aoyama, Shoichi; Kawano, Hidenori; Ikeda, Shoma; Iwai, Masachika; Ozaki, Masanobu; Dotani, Tadayasu; Natsukari, Chikara; Matsuta, Keiko; Shimizu, Kazuma; Nakajima, Hiroshi; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Ueda, Shutaro; Komatsu, Shoji; Murayoshi, Taku; Mori, Koji; Watanabe, Tatsuo; Uchida, Hiroyuki; Ohnishi, Takao; Hiraga, Junko S.

    2011-09-01

    We present the development of the data acquisition system for the X-ray CCD camera (SXI: Soft X-ray Imager) onboard the ASTRO-H satellite. Two types of breadboard models (BBMs) of the SXI electronics have been produced to verify the functions of each circuit board and to establish the data acquisition chain from the CCD to the SpaceWire (SpW) I/F. Using BBM0, we verified the basic design of the CCD driver, the function of the ΔΣ-ADC, the acquisition of frame images, and the stability of the SpW communication. We demonstrated an energy resolution of 164 eV (FWHM) at 5.9 keV. Using BBM1, we verified the acquisition of housekeeping information and frame images.

  10. Multispectral fluorescence guided surgery; a feasibility study in a phantom using a clinical-grade laparoscopic camera system.

    Science.gov (United States)

    van Willigen, Danny M; van den Berg, Nynke S; Buckle, Tessa; KleinJan, Gijs H; Hardwick, James C; van der Poel, Henk G; van Leeuwen, Fijs Wb

    2017-01-01

    Although the possibilities in image guided surgery are advancing rapidly, complex surgical procedures such as nerve sparing prostatectomy still lack precision regarding differentiation between diseased and delicate anatomical structures. Here, the use of complementary fluorescent tracers in combination with a dedicated multispectral fluorescence camera system could support differentiation between healthy and diseased tissue. In this study, we provide proof of concept data indicating how a modified clinical-grade fluorescence laparoscope can be used to sensitively detect white light and three fluorescent dyes (fluorescein, Cy5, and ICG) in a sequential fashion. Following detailed analysis of the system properties and detection capabilities, the potential of laparoscopic three-color multispectral imaging in combination with white light imaging is demonstrated in a phantom set-up for prostate cancer.

  11. A fast framing camera system for observation of acceleration and ablation of cryogenic hydrogen pellet in ASDEX Upgrade plasmas

    Science.gov (United States)

    Kocsis, G.; Kálvin, S.; Veres, G.; Cierpka, P.; Lang, P. T.; Neuhauser, J.; Wittman, C.; ASDEX Upgrade Team

    2004-11-01

    An observation system using fast digital cameras was developed to measure a cryogenic hydrogen pellet's cloud structure, trajectory, and velocity changes during its ablation in ASDEX Upgrade plasmas. In this article the system, the applied numerical methods, and the results are presented. The three-dimensional pellet trajectory and velocity components were reconstructed from images of observations from two different directions. Pellet acceleration both in the radial and toroidal directions was detected. The pellet cloud distribution was measured with high spatio-temporal resolution. The cloud surrounding the pellet was found to be elongated along the magnetic field lines. Its typical size is 5-7 cm along the field lines and 2 cm in the perpendicular directions. A cloud extension in the poloidal direction was also observed which may be related to the drift of the detached part of the cloud.

  12. Auto-measuring system of aero-camera lens focus using linear CCD

    Science.gov (United States)

    Zhang, Yu-ye; Zhao, Yu-liang; Wang, Shu-juan

    2014-09-01

    The automatic and accurate measurement of the focal length of aviation camera lenses is of great significance and practical value. The traditional measurement method depends on the human eye reading the scribed lines on the focal plane of a parallel light tube (collimator) by means of a reading microscope. The method is inefficient, and the results are easily influenced by human factors. Our method uses a linear-array solid-state image sensor instead of a reading microscope to convert the image size of a specific object into an electrical pulse width, and uses a computer to measure the focal length automatically. During measurement, the lens under test is placed in front of the objective of the parallel light tube. A pair of scribed lines on the focal plane of the parallel light tube is imaged onto the focal plane of the lens under test. With the linear CCD and its drive circuit placed at the image plane, the one-dimensional light intensity distribution is converted into a time series of electrical signals. One signal path is fed directly to a video monitor through an image acquisition card for optical-path adjustment and focusing; the other is processed by an electrical circuit to obtain the pulse width corresponding to the scribed lines. The computer processes the pulse width and outputs the focal-length measurement. Practical measurement results showed that the relative error was about 0.10%, in good agreement with theory.
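
    The underlying optics is the standard collimator relation: the lens under test images the collimator reticle with magnification f_test / f_collimator, so the unknown focal length follows from the measured image size. A sketch under that assumption (symbols and values invented here):

        # Collimator method, schematically: y_image / y_reticle = f_test / f_coll,
        # with y_image derived from the linear-CCD pulse width.
        def focal_length_mm(pulse_width_px: float, pixel_pitch_mm: float,
                            reticle_separation_mm: float,
                            f_collimator_mm: float) -> float:
            y_image = pulse_width_px * pixel_pitch_mm
            return f_collimator_mm * y_image / reticle_separation_mm

        # e.g. 1500 px at 7 um pitch, 10 mm reticle spacing, 1 m collimator:
        print(focal_length_mm(1500, 0.007, 10.0, 1000.0))  # -> 1050.0 mm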

  13. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby possible.

  14. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    Science.gov (United States)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera
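
    Step (1) above, synthetic refocusing, is commonly implemented as shift-and-sum over the sub-aperture views extracted from a light field image; a minimal sketch of that generic technique (not the Raytrix pipeline):

        # Shift-and-sum refocusing over sub-aperture views.
        import numpy as np

        def refocus(views: np.ndarray, alpha: float) -> np.ndarray:
            """views: (U, V, H, W) sub-aperture images; alpha selects the depth.
            Each view is shifted in proportion to its (u, v) offset, then averaged."""
            U, V, H, W = views.shape
            cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
            out = np.zeros((H, W))
            for u in range(U):
                for v in range(V):
                    du = int(round(alpha * (u - cu)))
                    dv = int(round(alpha * (v - cv)))
                    out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
            return out / (U * V)

        # Sweeping alpha yields a stack of refocused depths, as in step (1).
        views = np.zeros((3, 3, 8, 8))
        print(refocus(views, 0.5).shape)  # -> (8, 8)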

  15. Effect of the opponent's color on the triggering of aggression in Betta splendens males

    OpenAIRE

    Remón Ugarte, Estíbaliz

    2012-01-01

    Betta splendens males show greater aggressive responses when facing other conspecific males of a color similar to their own than when facing males of a different shade.

  16. Constraints on MACHO Dark Matter from the Star Cluster in the Dwarf Galaxy Eridanus II

    CERN Document Server

    Brandt, Timothy D

    2016-01-01

    I show that a recently discovered star cluster near the center of the ultra-faint dwarf galaxy Eridanus II provides strong constraints on massive compact halo objects (MACHOs) of >~5 M_sun as the main component of dark matter. MACHO dark matter will dynamically heat the cluster, driving it to larger sizes and higher velocity dispersions until it dissolves into its host galaxy. The star cluster has a luminosity of just ~2000 L_sun and is relatively puffy, with a half-light radius of 13 pc, making it much more fragile than other known clusters in dwarf galaxies. For a wide range of plausible dark matter halo properties, Eri II's star cluster combines with existing constraints from microlensing, wide binaries, and disk kinematics to rule out dark matter composed entirely of MACHOs from ~10^-7 M_sun up to arbitrarily high masses. The cluster in Eri II closes the ~20-100 M_sun window of allowed MACHO dark matter and provides much stronger constraints than wide Galactic binaries for MACHOs of up to thousands o...

  17. An improved quasar detection method in EROS-2 and MACHO LMC datasets

    CERN Document Server

    Pichara, Karim; Kim, Dae-Won; Marquette, Jean-Baptiste; Tisserand, Patrick; 10.1111/j.1365-2966.2012.22061.x

    2013-01-01

    We present a new classification method for quasar identification in the EROS-2 and MACHO datasets based on a boosted version of the Random Forest classifier. We use a set of variability features including the parameters of a continuous autoregressive model, and show that the continuous autoregressive parameters are very important discriminators in the classification process. We create two training sets (one for EROS-2 and one for MACHO) using known quasars found in the LMC. Our model's accuracy on both the EROS-2 and MACHO training sets is about 90% precision and 86% recall, improving on the accuracy of state-of-the-art models for quasar detection. We apply the model to the complete EROS-2 and MACHO LMC datasets, comprising 28 million objects, finding 1160 and 2551 candidates, respectively. To further validate our list of candidates, we cross-matched it with a previous list of 663 known strong candidates, obtaining 74% matches for MACHO and 40% for EROS-2. The main difference in matching level arises because EROS-2 is a slightly...
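
    A schematic of the kind of pipeline the record describes, variability features feeding a boosted Random Forest, using scikit-learn as a stand-in (the feature values below are placeholders; the authors' actual features include continuous autoregressive model parameters):

        # Hypothetical sketch: boosted Random Forest on light-curve features.
        # Requires scikit-learn >= 1.2 for the `estimator` keyword.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 8))      # placeholder variability features
        y = rng.integers(0, 2, size=1000)   # 1 = known quasar, 0 = other

        clf = AdaBoostClassifier(
            estimator=RandomForestClassifier(n_estimators=50),
            n_estimators=10,
        )
        clf.fit(X, y)
        scores = clf.predict_proba(X)[:, 1]  # rank objects as quasar candidates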

  18. SO2 flux monitoring at Stromboli with the new permanent INGV SO2 camera system: A comparison with the FLAME network and seismological data

    Science.gov (United States)

    Burton, M. R.; Salerno, G. G.; D'Auria, L.; Caltabiano, T.; Murè, F.; Maugeri, R.

    2015-07-01

    We installed a permanent SO2 camera system on Stromboli, Italy, in May 2013, in order to improve our capacity to monitor the SO2 emissions from this volcano. The camera collects images of SO2 concentrations with a period of 10 s, allowing quantification of short-term processes, such as the gas released during the frequent explosions which are synonymous with Stromboli. It also allows quantification of the quiescent gas flux, and therefore comparison with the FLAME network of scanning ultraviolet spectrometers previously installed on the island. Analysis of results from the SO2 camera demonstrated good agreement with the FLAME network when the plume was blown fully into the field of view of the camera. Permanent volcano monitoring with SO2 cameras is still very much in its infancy, and this finding is therefore a significant step in the use of such cameras for monitoring, whilst also highlighting the requirement of a favourable wind direction and strength. We found that the explosion gas emissions are correlated with seismic events which have a very long period component. There is a variable time lag between event onset time and the increase in gas flux observed by the camera as the explosion gas advects into the field of view of the camera. This variable lag is related to the plume direction, as shown by comparison with the plume location detected with the FLAME network. The correlation between explosion gas emissions and seismic signal amplitude is consistent with a gas-slug-driven mechanism for seismic event production. Comparison of the SO2 camera measurements of the quiescent gas flux shows fair quantitative agreement with the SO2 flux measured with the FLAME network. Overall, the SO2 camera complements the FLAME network well, as it allows frequent quantification of the explosion gas flux produced by Stromboli, whose signal is in general too brief to be measured with the FLAME network. Further work is required, however, to fully automate the

  19. EDUCATING THE PEOPLE AS A DIGITAL PHOTOGRAPHER AND CAMERA OPERATOR VIA OPEN EDUCATION SYSTEM STUDIES FROM TURKEY: Anadolu University Open Education Faculty Case

    Directory of Open Access Journals (Sweden)

    Huseyin ERYILMAZ

    2010-04-01

    Full Text Available Today, photography and the visual arts are very important in modern life, especially for mass communication. In modern societies, people need knowledge of visual media such as photographs, cartoons, drawings and typography; in short, people need education in visual literacy. Today most people own a digital camera for photography or video, but it is not possible to provide visual literacy education to everyone through the classic school system. Camera users nevertheless need a teaching medium for using their cameras effectively, so they turn to the internet, using websites and pages as an information source; however, as is well known, not all websites provide correct learning or know-how, and there is much false information. For these reasons, the Anadolu University Open Education Faculty started a new education program in 2009 to train people as digital photographers and camera operators, and this program has considerable importance as a case study. The language of photography and digital technology is English, and of course not all camera users understand English. Thanks to this program, camera users, and especially people working as studio operators, will learn a great deal about photography, digital technology and camera systems, as well as about composition, the history of the visual image, and related topics. For these reasons the program is particularly important for developing countries. This paper discusses this subject.

  20. Multi-camera systems for rehabilitation therapies: a study of the precision of Microsoft Kinect sensors

    Institute of Scientific and Technical Information of China (English)

    Miguel OLIVER; Francisco MONTERO; José Pascual MOLINA; Pascual GONZÁLEZ; Antonio FERNÁNDEZ-CABALLERO

    2016-01-01

    This paper seeks to determine how the overlap of several infrared beams affects the tracked position of the user, depending on the angle of incidence of light, the distance to the target, the distance between sensors, and the number of capture devices used. We also attempt to show that, under ideal conditions, using several Kinect sensors increases the precision of the data collected. The results obtained can be used in the design of telerehabilitation environments in which several RGB-D cameras are needed to improve precision or increase the tracking range. A numerical analysis of the results is included, and comparisons are made with the results of other studies. Finally, we describe a system that implements intelligent methods for the rehabilitation of patients based on the results of the tests carried out.

  1. On Planetary Companions to the MACHO-98-BLG-35 Microlens Star

    CERN Document Server

    Rhie, S H; Becker, A C; Peterson, B A; Fragile, P C; Johnson, B R; Quinn, J L; Crouch, A; Gray, J; King, L; Messenger, B B; Thomson, S; Bond, I A; Abe, F; Carter, B S; Dodd, R J; Hearnshaw, J B; Honda, M; Juga Ku Jun; Kabe, S; Kilmartin, P M; Koribalski, B S; Masuda, K; Matsubara, Y; Muraki, Y; Nakamura, T; Nankivell, G R; Noda, S; Rattenbury, N J; Reid, M; Rumsey, N J; Saitô, T; Sato, H; Sato, S; Sekiguchi, M; Sullivan, D J; Sumi, T; Watase, Y; Yanagisawa, T; Yock, P C M; Yoshizawa, M; Saito, To.

    1999-01-01

    We present observations of the microlensing event MACHO-98-BLG-35, which reached a peak magnification factor of almost 80. These observations by the Microlensing Planet Search (MPS) and MOA collaborations place strong constraints on the possible planetary system of the lens star and show intriguing evidence of a low-mass planet with a mass fraction 4×10^-5 < ε < 2×10^-4. A giant planet with ε = 10^-3 is excluded from 95% of the region between 0.4 and 2.5 R_E from the lens star, where R_E is the Einstein ring radius of the lens. This exclusion region is more extensive than the generic "lensing zone" of 0.6 - 1.6 R_E. For smaller mass planets, we can exclude 57% of the "lensing zone" for ε = 10^-4 and 14% of the lensing zone for ε = 10^-5. The mass fraction ε = 10^-5 corresponds to an Earth-mass planet for a lensing star of mass ~0.3 M_sun. A number of similar events will provide statistically significant constraints on the prevalence of Earth mass pla...
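
    For context, R_E in this record is the Einstein ring radius of the lens, given in standard notation (a textbook relation, not specific to this paper) by

        R_E = \sqrt{ \frac{4 G M}{c^2} \, \frac{D_L (D_S - D_L)}{D_S} }

    where M is the lens mass and D_L and D_S are the observer-lens and observer-source distances.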

  2. An intelligent automated door control system based on a smart camera

    National Research Council Canada - National Science Library

    Yang, Jie-Ci; Lai, Chin-Lun; Sheu, Hsin-Teng; Chen, Jiann-Jone

    2013-01-01

    This paper presents an innovative access control system, based on human detection and path analysis, to reduce false automatic door system actions while increasing the added value for security applications...

  3. Design and test of optoelectronic system of alignment control based on CCD camera

    Science.gov (United States)

    Anisimov, A. G.; Gorbachyov, A. A.; Krasnyashchikh, A. V.; Pantushin, A. N.; Timofeev, A. N.

    2008-10-01

    In this work, the design, implementation and testing of a system intended for high-precision positioning of turbine unit elements relative to the shaft line are discussed. A procedure has been devised for converting coordinates from the instrument system into the system tied to the actual position of the turbine axis. It is shown that optoelectronic alignment systems built on an autoreflection scheme can be used for high-precision measurements.

  4. Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras

    Science.gov (United States)

    Xu, Yiliang

    2011-01-01

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows a master-slave control architecture, and the requests for controlling robot actuation are generated entirely by human operators. …

  5. Finite source sizes and the information content of macho-type lens search light curves

    Science.gov (United States)

    Nemiroff, Robert J.; Wickramasinghe, W. A. D. T.

    1994-01-01

    If the dark halo matter is primarily composed of Massive Compact Halo Objects (MACHOs) toward the lower end of the possible detection range (less than 10^-3 solar mass), a fraction of the lens detection events should involve the lens crossing directly in front of the disk of the background star. Previously, Nemiroff has shown that each crossing would create an inflection point in the light curve of the MACHO event. Such inflection points would allow a measure of the time it took for the lens to cross the stellar disk. Given an independent estimate of the stellar radius by other methods, one could then obtain a more accurate estimate of the velocity of the lens. This velocity could then, in turn, be used to obtain a more accurate estimate of the mass range for the MACHO or disk star doing the lensing.
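
    The velocity estimate sketched above follows from the disk-crossing geometry: the two inflection points bracket the time the lens needs to traverse the stellar disk, so, in standard notation (stated here as an illustration rather than quoted from the paper),

        v_\perp \approx \frac{2 R_* (D_L / D_S)}{t_{\mathrm{cross}}}

    where R_* is the source star's radius, D_L and D_S are the lens and source distances (the factor D_L/D_S projects the stellar radius into the lens plane), and t_cross is the interval between the two light-curve inflection points.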

  6. The MACHO Project HST Follow-Up: The Large Magellanic Cloud Microlensing Source Stars

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, C.A.; /LLNL, Livermore /UC, Berkeley; Drake, A.J.; /Caltech; Cook, K.H.; /LLNL, Livermore /UC, Berkeley; Bennett, D.P.; /Caltech /Notre Dame U.; Popowski, P.; /Garching, Max Planck Inst.; Dalal, N.; /Toronto U.; Nikolaev, S.; /LLNL, Livermore; Alcock, C.; /Caltech /Harvard-Smithsonian Ctr. Astrophys.; Axelrod, T.S.; /Arizona U.; Becker, A.C. /Washington U., Seattle; Freeman, K.C.; /Res. Sch. Astron. Astrophys., Weston Creek; Geha, M.; /Yale U.; Griest, K.; /UC, San Diego; Keller, S.C.; /LLNL, Livermore; Lehner, M.J.; /Harvard-Smithsonian Ctr. Astrophys. /Taipei, Inst. Astron. Astrophys.; Marshall, S.L.; /SLAC; Minniti, D.; /Rio de Janeiro, Pont. U. Catol. /Vatican Astron. Observ.; Pratt, M.R.; /Aradigm, Hayward; Quinn, P.J.; /Western Australia U.; Stubbs, C.W.; /UC, Berkeley /Harvard U.; Sutherland, W.; /Oxford U. /Oran, Sci. Tech. U. /Garching, Max Planck Inst. /McMaster U.

    2009-06-25

    We present Hubble Space Telescope (HST) WFPC2 photometry of 13 microlensed source stars from the 5.7 year Large Magellanic Cloud (LMC) survey conducted by the MACHO Project. The microlensing source stars are identified by deriving accurate centroids in the ground-based MACHO images using difference image analysis (DIA) and then transforming the DIA coordinates to the HST frame. None of these sources is coincident with a background galaxy, which rules out the possibility that the MACHO LMC microlensing sample is contaminated with misidentified supernovae or AGN in galaxies behind the LMC. This supports the conclusion that the MACHO LMC microlensing sample has only a small amount of contamination due to non-microlensing forms of variability. We compare the WFPC2 source star magnitudes with the lensed flux predictions derived from microlensing fits to the light curve data. In most cases the source star brightness is accurately predicted. Finally, we develop a statistic which constrains the location of the LMC microlensing source stars with respect to the distributions of stars and dust in the LMC, and compare this to the predictions of various models of LMC microlensing. This test excludes, at the >~90% confidence level, models where more than 80% of the source stars lie behind the LMC. Exotic models that attempt to explain the excess LMC microlensing optical depth seen by MACHO with a population of background sources are disfavored or excluded by this test. Models in which most of the lenses reside in a halo or spheroid distribution associated with either the Milky Way or the LMC are consistent with these data, but LMC halo or spheroid models are favored by the combined MACHO and EROS microlensing results.

  7. Planetcam: A Visible And Near Infrared Lucky-imaging Camera To Study Planetary Atmospheres And Solar System Objects

    Science.gov (United States)

    Sanchez-Lavega, Agustin; Rojas, J.; Hueso, R.; Perez-Hoyos, S.; de Bilbao, L.; Murga, G.; Ariño, J.; Mendikoa, I.

    2012-10-01

    PlanetCam is a two-channel fast-acquisition and low-noise camera designed for multispectral studies of the atmospheres of the planets (Venus, Mars, Jupiter, Saturn, Uranus and Neptune) and the satellite Titan at high temporal and spatial resolution, simultaneously in visible (0.4-1 μm) and NIR (1-2.5 μm) channels. This is accomplished by means of a dichroic beam splitter that separates the two beams, directing them onto two different detectors. Each detector has a filter wheel with filters corresponding to the characteristic absorption bands of each planetary atmosphere. Images are acquired and processed using the “lucky imaging” technique, in which several thousand images of the same object are obtained in a short time interval, co-registered, and ranked by image quality to reconstruct a high-resolution, ideally diffraction-limited image of the object. These images will also be calibrated in terms of intensity and absolute reflectivity. The camera will be tested at the 50.2 cm telescope of the Aula EspaZio Gela (Bilbao) and then commissioned at the 1.05 m telescope at Pic-du-Midi Observatory (France) and at the 1.23 m telescope at Calar Alto Observatory in Spain. Among the initially planned research targets are: (1) the vertical structure of the clouds and hazes in the planets and their scales of variability; (2) the meteorology, dynamics, and global winds and their scales of variability in the planets. PlanetCam is also expected to perform studies of other Solar System and astrophysical objects. Acknowledgments: This work was supported by the Spanish MICIIN project AYA2009-10701 with FEDER funds, by Grupos Gobierno Vasco IT-464-07 and by Universidad País Vasco UPV/EHU through program UFI11/55.
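    Lucky imaging as described here reduces to three steps: score each short exposure for sharpness, keep only the best fraction, then co-register and stack the survivors. The following is a minimal sketch of that selection-and-stacking logic; the quality metric, keep fraction, and integer-pixel registration are generic simplifying assumptions, not PlanetCam's actual pipeline.

```python
import numpy as np

def sharpness(frame):
    """Variance of the gradient magnitude as a simple quality metric."""
    gy, gx = np.gradient(frame.astype(float))
    return np.var(np.hypot(gx, gy))

def register_shift(ref, frame):
    """Integer-pixel shift maximizing the FFT cross-correlation with ref."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap shifts into the range [-N/2, N/2)
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx

def lucky_stack(frames, keep_fraction=0.01):
    """Keep the sharpest fraction of frames, align them, and average."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = sorted(range(len(frames)), key=lambda i: -scores[i])[:n_keep]
    ref = frames[best[0]].astype(float)     # sharpest frame as reference
    stack = np.zeros_like(ref)
    for i in best:
        dy, dx = register_shift(ref, frames[i])
        stack += np.roll(np.roll(frames[i].astype(float), dy, 0), dx, 1)
    return stack / n_keep
```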

  8. Effect of caprine arthritis encephalitis virus on the reproductive tract of male goats

    OpenAIRE

    Humberto Alejandro Martínez Rodríguez; Hugo Ramírez Álvarez; Jorge Tórtora Pérez; Álvaro Aguilar Setién; Germán Isauro Garrido Fariña; Juan Antonio Montaraz Crespo

    2005-01-01

    The effect of caprine arthritis encephalitis (CAE) virus on the reproductive tract of male goats was evaluated. Fourteen males were divided into 4 study groups: I) uninfected controls (n=3), II) experimentally infected with the FES-C.UNAM strain (n=3), III) naturally infected (n=5), and IV) experimentally infected with a reference strain from the American Type Culture Collection (ATCC) (n=3). Every 30 days throughout the experiment (10 months), blood and s...

  9. Aggressive behavior of the male volcano mouse, Neotomodon alstoni (Rodentia: Cricetidae)

    OpenAIRE

    Granados, Humberto; Luis, Juana; Agustín CARMONA; Espinosa, Guillermo; Arenas, Teresa

    2015-01-01

    The aggressive behavior of males of the volcano mouse, Neotomodon alstoni, was studied in 50 pairs of mice classified as probable dominants (O) and subordinates (S) by the Melzack-Thompson method. Aggression was recorded in the combinations O vs. O and S vs. S. Two groups were formed: Group I with 12 pairs of O males and 13 of S, and Group II with 11 O and 14 S. In Group I the level of aggression was quantified after one week of mating and after...

  10. A low-cost single-camera imaging system for aerial applicators

    Science.gov (United States)

    Agricultural aircraft provide a readily available and versatile platform for airborne remote sensing. Although various airborne imaging systems are available, most of these systems are either too expensive or too complex to be of practical use for aerial applicators. The objective of this study was ...

  11. A Mobile Mapping System for Road Data Capture via A Single Camera

    OpenAIRE

    Gontran, Hervé; Skaloud, Jan; Gilliéron, Pierre-Yves

    2003-01-01

    The development of road telematics requires the management of continuously growing road databases. Mobile mapping systems can acquire this information while offering unbeatable productivity by combining navigation and videogrammetry tools.

  12. A dual-beam dual-camera method for a battery-powered underwater miniature PIV (UWMPIV) system

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Binbin; Liao, Qian [University of Wisconsin-Milwaukee, Department of Civil Engineering and Mechanics, Milwaukee, WI (United States); Bootsma, Harvey A. [University of Wisconsin-Milwaukee, School of Freshwater Sciences, Milwaukee, WI (United States); Wang, Pei-Fang [Space and Naval Warfare Systems Center, Advanced Systems and Applied Sciences, Envrionmental Sciences, San Diego, CA (United States)

    2012-06-15

    A battery-powered in situ Underwater Miniature PIV (UWMPIV) has been developed and deployed for field studies. Instead of generating high-energy laser pulses as in a conventional PIV system, the UWMPIV employs a low-power Continuous Wave (CW) laser (class IIIb) and an oscillating mirror (galvanometer) to generate laser sheets. In a previous version of the UWMPIV, the time between exposures of a pair of particle images, Δt, could not be reduced without loss of illumination strength. This limitation makes it unsuitable for high-speed flows. In this paper, we present a technique to solve this problem by adopting two CW lasers with different wavelengths and two CCD cameras in a second-generation UWMPIV system. Several issues are discussed, including optical alignment and the non-uniform distribution of Δt due to the varying speed of the scanning beam and local flow velocities. The timing issue is solved through a simple calibration procedure that involves the reconstruction of maps of laser beam arrival time. A comparison of the performance of the new method and a conventional PIV system is presented. Measurements were performed in a laboratory open-channel flume. Excellent agreement was found between the new method and the standard PIV measurement in terms of the extracted vertical profiles of mean velocity, RMS fluctuation, Reynolds stress and dissipation rate of turbulent kinetic energy. (orig.)
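    The essential point of the timing correction is that, with a scanning beam, the inter-exposure time varies across the image, so each displacement vector must be divided by the local Δt taken from the calibrated beam arrival-time maps rather than by a single global value. A minimal sketch of that step (array shapes and names are assumptions, not the paper's code):

```python
import numpy as np

def velocity_field(displacement_px, arrival_time_1, arrival_time_2, px_size_m):
    """displacement_px: PIV displacements, shape (m, n, 2), in pixels.
    arrival_time_i: calibrated beam arrival-time maps, shape (m, n), seconds.
    Returns velocities in m/s using the per-pixel Δt map."""
    dt_local = arrival_time_2 - arrival_time_1      # spatially varying Δt
    return displacement_px * px_size_m / dt_local[..., None]
```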

  13. Real-time camera-based face detection using a modified LAMSTAR neural network system

    Science.gov (United States)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640x480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted for auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.
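    For readers unfamiliar with the SOM layer each sub-window segment feeds into, a minimal Kohonen self-organizing map looks like the following sketch; the training schedule and grid size here are generic assumptions, not the paper's configuration.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    """data: (n_samples, n_features). Returns the trained weight grid."""
    rng = np.random.default_rng(0)
    w = rng.random((grid[0], grid[1], data.shape[1]))   # weight vectors
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                     # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5         # shrinking neighborhood
        for x in data:
            d = np.linalg.norm(w - x, axis=2)           # distances to all units
            by, bx = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)            # pull neighborhood toward x
    return w

def best_matching_unit(w, x):
    """Grid coordinates of the unit closest to input x."""
    d = np.linalg.norm(w - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```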

  14. Development of a real-time data processing system for a prototype of the Tomo-e Gozen wide field CMOS camera

    Science.gov (United States)

    Ohsawa, Ryou; Sako, Shigeyuki; Takahashi, Hidenori; Kikuchi, Yuki; Doi, Mamoru; Kobayashi, Naoto; Aoki, Tsutomu; Arimatsu, Ko; Ichiki, Makoto; Ikeda, Shiro; Ita, Yoshifusa; Kasuga, Toshihiro; Kawakita, Hideo; Kokubo, Mitsuru; Maehara, Hiroyuki; Matsunaga, Noriyuki; Mito, Hiroyuki; Mitsuda, Kazuma; Miyata, Takashi; Mori, Kiyoshi; Mori, Yuki; Morii, Mikio; Morokuma, Tomoki; Motohara, Kentaro; Nakada, Yoshikazu; Okumura, Shin-ichiro; Onozato, Hiroki; Osawa, Kentaro; Sarugaku, Yuki; Sato, Mikiya; Shigeyama, Toshikazu; Soyano, Takao; Tanaka, Masaomi; Taniguchi, Yuki; Tanikawa, Ataru; Tarusawa, Ken'ichi; Tominaga, Nozomu; Totani, Tomonori; Urakawa, Seitaro; Usui, Fumihiko; Watanabe, Junichi; Yamaguchi, Jumpei; Yoshikawa, Makoto

    2016-08-01

    The Tomo-e Gozen camera is a next-generation, extremely wide field optical camera equipped with 84 CMOS sensors. The camera records about a 20 square degree area at 2 Hz, providing "astronomical movie data". We have developed a prototype of the Tomo-e Gozen camera (hereafter, Tomo-e PM) to evaluate the basic design of the Tomo-e Gozen camera. Tomo-e PM, equipped with 8 CMOS sensors, can capture a 2 square degree area at up to 2 Hz. Each CMOS sensor has about 2.6 M pixels. The data rate of Tomo-e PM is about 80 MB/s, corresponding to about 280 GB/hour. We have developed an operating system and reduction software to handle such a large amount of data. Tomo-e PM was mounted on the 1.0-m Schmidt Telescope of Kiso Observatory at the University of Tokyo. Experimental observations were carried out in the winter of 2015 and the spring of 2016. The observations and software implementation were completed successfully; the data reduction is now under way.
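    The quoted figures are mutually consistent; a quick back-of-the-envelope check (16-bit pixels are an assumption, as the abstract does not state the pixel depth):

```python
# Sensor count, pixel count, and frame rate taken from the abstract above.
sensors = 8
pixels_per_sensor = 2.6e6
bytes_per_pixel = 2            # assumed 16-bit readout
frame_rate_hz = 2

rate_mb_s = sensors * pixels_per_sensor * bytes_per_pixel * frame_rate_hz / 1e6
print(rate_mb_s)               # ~83 MB/s, consistent with "about 80 MB/s"
print(rate_mb_s * 3600 / 1e3)  # ~300 GB/hour, of order the quoted 280 GB/hour
```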

  15. 76 FR 15306 - Macho Springs Power I, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Science.gov (United States)

    2011-03-21

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Macho Springs Power I, LLC; Supplemental Notice That Initial Market-Based... above-referenced proceeding of Macho Springs Power I, LLC's application for market-based rate...

  16. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky, determining positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 CCDs of 2k x 4k are needed. The pixels are square, 15 μm on a side. The optical characteristics of the prime focus corrector deliver a field-of-view in which eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arcminutes. The remaining CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, each hosting at most 16 filters. These are located inside the cryostat, a few millimeters in front of the CCDs when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  17. Data Acquisition and Image Reconstruction Systems from the miniPET Scanners to the CARDIOTOM Camera

    Science.gov (United States)

    Valastván, I.; Imrek, J.; Hegyesi, G.; Molnár, J.; Novák, D.; Bone, D.; Kerek, A.

    2007-11-01

    Nuclear imaging devices play an important role in medical diagnosis as well as in drug research. The first- and second-generation data acquisition systems and the image reconstruction library that were developed provide a unified hardware and software platform for the miniPET-I and miniPET-II small animal PET scanners and for the CARDIOTOM™.

  18. A computerized recognition system for the home-based physiotherapy exercises using an RGBD camera.

    Science.gov (United States)

    Ar, Ilktan; Akgul, Yusuf Sinan

    2014-11-01

    Computerized recognition of home-based physiotherapy exercises has many benefits and has attracted considerable interest in the computer vision community. However, most methods in the literature view this task as a special case of motion recognition. In contrast, we propose to employ the three main components of a physiotherapy exercise (the motion patterns, the stance knowledge, and the exercise object) as different recognition tasks and embed them separately into the recognition system. The low-level information about each component is gathered using machine learning methods. Then, we use a generative Bayesian network to recognize the exercise types by combining the information from these sources at an abstract level, which takes advantage of domain knowledge for a more robust system. Finally, a novel postprocessing step is employed to estimate the exercise repetition counts. The performance evaluation of the system is conducted with a new dataset which contains RGB (red, green, and blue) and depth videos of home-based exercise sessions for commonly applied shoulder and knee exercises. The proposed system works without any body-part segmentation, body-part tracking, joint detection, or temporal segmentation methods. In the end, favorable exercise recognition rates and encouraging results on the estimation of repetition counts are obtained.
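    Combining the three evidence sources at an abstract level can be sketched as follows; this assumes conditional independence of the components given the exercise type, which is an illustrative simplification, not necessarily the paper's exact factorization.

```python
import numpy as np

def exercise_posterior(p_motion, p_stance, p_object, prior):
    """Each p_* is a per-exercise likelihood vector P(evidence | exercise).
    Under conditional independence given the exercise type,
    P(exercise | evidence) ∝ P(motion|e) P(stance|e) P(object|e) P(e)."""
    joint = p_motion * p_stance * p_object * prior
    return joint / joint.sum()

# Three candidate exercises with a uniform prior (made-up numbers):
post = exercise_posterior(np.array([0.7, 0.2, 0.1]),
                          np.array([0.5, 0.3, 0.2]),
                          np.array([0.6, 0.3, 0.1]),
                          np.array([1/3, 1/3, 1/3]))
print(post.argmax(), post)   # index and posterior of the recognized exercise
```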

  19. Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy

    Science.gov (United States)

    Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, legal representative, Alicia (Inventor); Gursel, Yekta (Inventor)

    2012-01-01

    An optical system comprising two lens cells, each lens cell comprising multiple lens elements, to provide imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field-of-view of the instrument.

  20. The detective quantum efficiency (DQE) for evaluating the performance of a small gamma camera system with a uniformly redundant array (URA) collimator

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Hosang [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Cho, Gyuseong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of)], E-mail: gscho@kaist.ac.kr

    2008-06-11

    Presently, the gamma scintillation camera is widely used in various industrial, environmental, and medical diagnostic fields. Hence, objective and quantitative evaluation of its imaging performance is required for image quality assurance and the prevention of excessive exposures. In this study, the detective quantum efficiency (DQE) of a small gamma camera with three pinhole collimators (1, 2, and 4 mm in hole diameter) and one coded-aperture collimator (a uniformly redundant array (URA) with 286 holes of 2 mm diameter) was determined using the modulation transfer function (MTF), the normalized noise power spectrum (NNPS), and the incoming signal-to-noise ratio (SNR). We found that the resolution of the URA was slightly lower than that of the pinhole with 2 mm hole diameter, while the noise of the URA was markedly lower than that of all pinholes. Thus, the DQE of the URA was higher than that of the pinholes. We conclude that determining the DQE can be a quantitative and effective method to evaluate the performance of gamma camera systems.
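    For reference, a commonly used frequency-dependent formulation of the DQE in terms of the three quantities named above (a standard definition; the paper's exact normalization may differ):

```latex
% For a Poisson-limited input, SNR_in^2 equals the incident photon fluence.
\[
  \mathrm{DQE}(f)
  \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}}
  \;=\; \frac{\mathrm{MTF}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}\,\mathrm{NNPS}(f)}
\]
```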

  1. Comparison of a three-dimensional and two-dimensional camera system for automated measurement of back posture in dairy cows

    NARCIS (Netherlands)

    Viazzi, S.; Bahr, C.; Hertem, van T.; Schlageter-Tello, A.; Romanini, C.E.B.; Halachmi, I.; Lokhorst, C.; Berckmans, D.

    2014-01-01

    In this study, two different computer vision techniques to automatically measure the back posture in dairy cows were tested and evaluated. A two-dimensional and a three-dimensional camera system were used to extract the back posture from walking cows, which is one measurement used by experts to

  2. Low-cost vehicle-mounted enhanced vision system comprised of a laser illuminator and range-gated camera

    Science.gov (United States)

    Pencikowski, Paul S.

    1996-05-01

    Considerable research has been done regarding the use of enhanced vision as a means to enable a vehicle operator to `see' through bad weather or obscuration such as smoke and dust. This research has generally emphasized forward-looking infrared (FLIR) and millimeter-wave (radar) technologies. FLIR is an acceptable approach if modest performance is all that is required. Millimeter-wave radar has distinct advantages over FLIR in certain cases, but generally requires operator training to interpret various display-screen presentations. The Northrop Grumman Corporation has begun a major sensor-development program to develop a prototype (eye-safe) laser-illuminator/range-gated camera system. The near-term goal is to field a system that would deliver a minimum of 3000-foot penetration of worst-case fog/obscurant. This image would appear on a display as a high-resolution monochromatic image. This paper will explore the concept, the proposed automotive application, and the projected cost.

  3. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building fine 3D models from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. A complete 3D model should contain detailed descriptions of both the appearance and the internal structure of a building, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, providing a professional solution for fast 3D data acquisition, processing, integration, reconstruction, and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes, and shapes, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be constructed automatically. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones: the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area, and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction, and cultural tourism.

  4. Key Technology of Camera System for Ultra-small UAV

    Institute of Scientific and Technical Information of China (English)

    刘仲宇; 张涛; 李嘉全; 李明; 丁策

    2013-01-01

    Because the payload options for Ultra-small Unmanned Aerial Vehicles (USUAV) are limited, a camera system was designed by integrating optical, mechanical, and electrical hardware to meet the requirements of performance and practicability. The performance of the system is analyzed, and the key technologies for applying a commercial digital camera to airborne equipment are solved. With the system's vibration damping, no image blurring occurs; image motion of the camera system during the exposure time does not exceed 0.5 pixels, so no smearing occurs. The thermal design of the system solves the problem of the camera failing to start in low-temperature environments. The camera system was flight-tested on an ultra-small UAV and acquired clear images, demonstrating that the design is sound.

  5. The mass function of primordial rogue planet MACHOs in quasar nano-lensing

    NARCIS (Netherlands)

    Schild, R.E; Nieuwenhuizen, T.M.; Gibson, C.H.

    2012-01-01

    The recent Sumi et al (2010 Astrophys. J. 710 1641; 2011 Nature 473 349) detection of free-roaming planet-mass MACHOs in cosmologically significant numbers recalls their original detection in quasar microlensing studies (Colley and Schild 2003 Astrophys. J. 594 97; Schild R E 1996 Astrophys. J. 464

  6. The MACHO project 2nd year LMC microlensing results and dark matter implications

    CERN Document Server

    Pratt, M R; Allsman, R A; Alves, D R; Axelrod, T S; Becker, A C; Bennett, D P; Cook, K H; Freeman, K C; Griest, K; Guern, J A; Lehner, M J; Marshall, S L; Peterson, B A; Quinn, P J; Rodgers, A W; Stubbs, C W; Sutherland, W; Welch, D L

    1996-01-01

    The MACHO Project is searching for galactic dark matter in the form of massive compact halo objects (Machos). Millions of stars in the Large Magellanic Cloud (LMC), Small Magellanic Cloud (SMC), and Galactic bulge are photometrically monitored in an attempt to detect rare gravitational microlensing events caused by otherwise invisible Machos. Analysis of two years of photometry on 8.5 million stars in the LMC reveals 8 candidate microlensing events, far more than the ~1 event expected from lensing by low-mass stars in known galactic populations. From these eight events we estimate the optical depth towards the LMC from events with 2 < t̂ < 200 days to be τ ≈ 2.9 (+1.4/−0.9) × 10^-7. This exceeds the optical depth of 0.5 × 10^-7 expected from known stars and is to be compared with an optical depth of 4.7 × 10^-7 predicted for a "standard" halo composed entirely of Machos. The total mass in this lensing population is ≈ 2 (+1.2/−0.7) × 10^11 M_sun (within 50 kpc from t...
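    For context, optical depths of this kind are commonly computed from the detected events with an efficiency-weighted estimator of the following form (standard in the microlensing literature; here N_* is the number of monitored stars, T_obs the survey duration, t̂_i the Einstein-diameter crossing time of event i, and ε the detection efficiency as a function of event duration):

```latex
\[
  \tau \;=\; \frac{\pi}{4\,N_{*}\,T_{\mathrm{obs}}}\,
  \sum_{i}\frac{\hat{t}_{i}}{\varepsilon(\hat{t}_{i})}
\]
```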

  7. The Star Formation Camera

    CERN Document Server

    Scowen, Paul A; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah; Rhoads, James; Roberge, Aki; Siegmund, Oswald; Shaklan, Stuart; Smith, Nathan; Stern, Daniel; Tumlinson, Jason; Windhorst, Rogier; Woodruff, Robert

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15' x 19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517 nm) and a red (517-1075 nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant for the births and life cycles of stars and their planetary systems, and to investigate and understand the range of environments, feedback mechanisms, and other factors that most affect the outcome of the star and planet formation process. This program addresses the origins and evolution of stars, galaxies, and cosmic structure and has direct relevance for the formation and survival of planetary systems like our Solar System and planets like Earth. We present the design and performance specifications resulting from the implementation study of the camera, conducted ...

  8. A Kinect(™) camera based navigation system for percutaneous abdominal puncture.

    Science.gov (United States)

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-07

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of the Kinect™ was released recently; we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and to compare its performance on needle insertion guidance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were punctured under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is
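    A minimal rigid ICP loop of the kind used for the surface matching step looks like the sketch below; it is a generic formulation of the algorithm, and the paper's 2D shape-based initialization and surface extraction are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch/SVD solution for paired point sets of shape (n, 3))."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30):
    """Iteratively match source points to their nearest target points
    and refine the rigid transform."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)          # closest target point per source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t               # apply the refined transform
    return src
```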

  9. 3D monitoring and quality control using intraoral optical camera systems.

    Science.gov (United States)

    Mehl, A; Koch, R; Zaruba, M; Ender, A

    2013-01-01

    The quality of intraoral scanning systems is steadily improving, and they are becoming easier and more reliable to operate. This opens up possibilities for routine clinical applications. A special aspect is that overlaying (superimposing) situations recorded at different times facilitates an accurate three-dimensional difference analysis. Such difference analyses can also be used to advantage in other areas of dentistry where target/actual comparisons are required. This article presents potential indications using a newly developed software, explaining the functionality of the evaluation process and the prerequisites and limitations of 3D monitoring.

  10. Design of an Active Multispectral SWIR Camera System for Skin Detection and Face Verification

    Directory of Open Access Journals (Sweden)

    Holger Steiner

    2016-01-01

    Full Text Available Biometric face recognition is becoming more frequently used in different application scenarios. However, spoofing attacks with facial disguises are still a serious problem for state-of-the-art face recognition algorithms. This work proposes an approach to face verification based on spectral signatures of material surfaces in the short-wave infrared (SWIR) range. They allow distinguishing authentic human skin reliably from other materials, independent of the skin type. We present the design of an active SWIR imaging system that acquires four-band multispectral image stacks in real time. The system uses pulsed small-band illumination, which allows for fast image acquisition and high spectral resolution and renders it widely independent of ambient light. After extracting the spectral signatures from the acquired images, detected faces can be verified or rejected by classifying the material as “skin” or “no-skin.” The approach is extensively evaluated with respect to both acquisition and classification performance. In addition, we present a database containing RGB and multispectral SWIR face images, as well as spectrometer measurements of a variety of subjects, which is used to evaluate our approach and will be made available to the research community by the time this work is published.
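    Classifying a four-band spectral signature as skin or not can be sketched as a per-pixel comparison against a reference signature; the similarity measure, threshold, and reference values below are illustrative assumptions, not the published system's classifier.

```python
import numpy as np

def normalize(sig):
    """Scale spectra to unit length so only their shape is compared."""
    return sig / np.linalg.norm(sig, axis=-1, keepdims=True)

def classify_skin(stack, skin_reference, threshold=0.98):
    """stack: (h, w, 4) multispectral image; skin_reference: (4,) signature.
    A pixel counts as 'skin' if its normalized spectrum is close to the
    reference in cosine similarity."""
    cos_sim = normalize(stack) @ normalize(skin_reference)
    return cos_sim > threshold          # boolean skin mask, shape (h, w)
```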

  11. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    Science.gov (United States)

    Majewski, Stanislaw; Umeno, Marc M.

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel by corresponding pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies, blood-pool studies including planar, gated and non-gated EKG studies, planar EKG perfusion studies, and planar hot-spot imaging.

  12. [Cinematography of ocular fundus with a jointed optical system and tv or cine-camera (author's transl)].

    Science.gov (United States)

    Kampik, A; Rapp, J

    1979-02-01

    A method of cinematography of the ocular fundus is introduced which, by connecting a camera with an indirect ophthalmoscope, allows recording the monocular picture of the fundus as produced by the ophthalmic lens.

  13. A Proposal and Implement of Detection and Reconstruction Method of Contact Shape with Horizon View Camera for Calligraphy Education Support System

    Science.gov (United States)

    Tobitani, Kensuke; Yamamoto, Kazuhiko; Kato, Kunihito

    In this study, we are concerned with a calligraphy education support system. In current calligraphy education in Japan, teachers evaluate characters written by students and teach the correct writing process based on that evaluation. Professionals in calligraphy can estimate the writing process and the balance of a character, which are important points for its evaluation, by estimating the movement of the contact shape (the contact face between paper and brush). But with this form of education it takes students a long time to learn how to write characters correctly. If teachers and students could see the movement of the contact shape, calligraphy education would be more efficient. However, it is difficult to detect the contact shape from images captured by cameras in a general arrangement, because the brush and the ink are both black and the contact shape is hidden under the brush. In this paper, we propose a new camera system consisting of four Horizon View Cameras (HVC), a special camera arrangement for detecting and reconstructing the contact shape; we run experiments with this system and compare the movement of the contact shape of professionals and amateurs.

  14. Area X-ray or UV camera system for high-intensity beams

    Science.gov (United States)

    Chapman, Henry N.; Bajt, Sasa; Spiller, Eberhard A.; Hau-Riege, Stefan , Marchesini, Stefano

    2010-03-02

    A system in one embodiment includes a source for directing a beam of radiation at a sample; a multilayer mirror having a face oriented at an angle of less than 90 degrees from an axis of the beam from the source, the mirror reflecting at least a portion of the radiation after the beam encounters a sample; and a pixellated detector for detecting radiation reflected by the mirror. A method in a further embodiment includes directing a beam of radiation at a sample; reflecting at least some of the radiation diffracted by the sample; not reflecting at least a majority of the radiation that is not diffracted by the sample; and detecting at least some of the reflected radiation. A method in yet another embodiment includes directing a beam of radiation at a sample; reflecting at least some of the radiation diffracted by the sample using a multilayer mirror; and detecting at least some of the reflected radiation.

  15. 128 x 128 MWIR InSb focal plane and camera system

    Science.gov (United States)

    Parrish, William J.; Blackwell, John D.; Paulson, Robert C.; Arnold, Harold

    1991-09-01

    The need for increased resolution and sensitivity in IR systems applications has provided the impetus for the development of high-performance second-generation staring focal plane array technology. Previously, the availability of these focal plane array components has been limited and the costs associated with delivery of useful hardware have been high. Utilizing proven InSb detector technology and foundry silicon CMOS processes, a high-performance, affordable hybrid focal plane array and support electronics system has been developed. The 128 X 128 array of photovoltaic InSb detectors on 50 micrometer centers is interfaced with the silicon readout by aligning and cold welding indium bumps on each detector with the corresponding indium bump on the silicon readout. The detector is then thinned so that it can be illuminated through the backside. The 128 X 128 channel signal processing integrated circuit performs the functions of interfacing with the detectors, integrating the detector current, and multiplexing the signals. It is fabricated using a standard double-poly, single-metal, p-well CMOS process. The detector elements achieve a high quantum efficiency response from less than 1 micrometer to greater than 5 micrometers with an optical fill factor of 90%. The hybrid focal plane array can operate to a maximum frame rate of 1,000 Hz. D* values at 1.7 × 10^14 photons/cm^2/sec illumination conditions approach the BLIP value of 9.4 × 10^11 cm·Hz^(1/2)/W with a capacity of 4 × 10^7 carriers and a dynamic range of greater than 60,000. An NEΔT value of 0.018 C and an MRT value of 0.020 C have been measured. The devices operate with only 3 biases and 3 clocks.

  16. Long-Term Tracking of a Specific Vehicle Using Airborne Optical Camera Systems

    Science.gov (United States)

    Kurz, F.; Rosenbaum, D.; Runge, H.; Cerra, D.; Mattyus, G.; Reinartz, P.

    2016-06-01

    In this paper we present two low-cost airborne sensor systems capable of long-term vehicle tracking. Based on the properties of the sensors, a method for automatic, real-time, long-term tracking of individual vehicles is presented. This combines the detection and tracking of the vehicle in low frame rate image sequences and applies the lagged Cell Transmission Model (CTM) to handle longer tracking outages occurring in complex traffic situations, e.g. tunnels. The CTM uses the traffic conditions in the proximity of the target vehicle and estimates its motion to predict the position where it reappears. The method is validated on an airborne image sequence acquired from a helicopter. Several reference vehicles are tracked within a range of 500 m in a complex urban traffic situation. An artificial tracking outage of 240 m is simulated, which is handled by the CTM. For this, all the vehicles in close proximity are automatically detected and tracked to estimate the basic density-flow relations of the CTM. Finally, the real and simulated trajectories of the reference vehicles in the outage are compared, showing good correspondence also in congested traffic situations.
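    A single update step of a basic cell transmission model illustrates the density-flow machinery involved; this is the generic CTM with a triangular fundamental diagram and made-up parameters, not the paper's lagged variant or its calibrated values.

```python
import numpy as np

def ctm_step(density, v=90.0, w=20.0, k_jam=150.0, q_max=1800.0,
             cell_len=0.1, dt=1 / 900.0):
    """density: vehicles/km per cell along the road; returns the densities
    after one time step dt (hours). Units: v, w in km/h, q_max in veh/h,
    cell_len in km. Flow = min(v*k, q_max, w*(k_jam - k))."""
    demand = np.minimum(v * density, q_max)            # what each cell can send
    supply = np.minimum(w * (k_jam - density), q_max)  # what each cell can take
    flow = np.minimum(demand[:-1], supply[1:])         # veh/h across boundaries
    new = density.copy()
    new[:-1] -= flow * dt / cell_len                   # vehicles leaving cells
    new[1:] += flow * dt / cell_len                    # vehicles entering cells
    return new
```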

  17. Charon's light curves, as observed by New Horizons' Ralph color camera (MVIC) on approach to the Pluto system

    Science.gov (United States)

    Howett, C. J. A.; Ennico, K.; Olkin, C. B.; Buie, M. W.; Verbiscer, A. J.; Zangari, A. M.; Parker, A. H.; Reuter, D. C.; Grundy, W. M.; Weaver, H. A.; Young, L. A.; Stern, S. A.

    2017-05-01

    Light curves produced from color observations taken during New Horizons' approach to the Pluto system by its Multi-spectral Visible Imaging Camera (MVIC, part of the Ralph instrument) are analyzed. Fifty-seven observations were analyzed; they were obtained between 9 April and 3 July 2015, at a phase angle of 14.5° to 15.1°, a sub-observer latitude of 51.2°N to 51.5°N, and a sub-solar latitude of 41.2°N. MVIC has four color channels; all are discussed for completeness, but only two were found to produce reliable light curves: Blue (400-550 nm) and Red (540-700 nm). The other two channels, Near Infrared (780-975 nm) and Methane-Band (860-910 nm), were found to be potentially erroneous and too noisy, respectively. The Blue and Red light curves show that Charon's surface is neutral in color, but slightly brighter on its Pluto-facing hemisphere. This is consistent with previous studies made with the Johnson B and V bands, which are at shorter wavelengths than the MVIC Blue and Red channels, respectively.

  18. Task analysis of laparoscopic camera control schemes.

    Science.gov (United States)

    Ellis, R Darin; Munaco, Anthony J; Reisner, Luke A; Klein, Michael D; Composto, Anthony M; Pandya, Abhilash K; King, Brady W

    2016-12-01

    Minimally invasive surgeries rely on laparoscopic camera views to guide the procedure. Traditionally, an expert surgical assistant operates the camera. In some cases, a robotic system is used to help position the camera, but the surgeon is required to direct all movements of the system. Some prior research has focused on developing automated robotic camera control systems, but that work has been limited to rudimentary control schemes due to a lack of understanding of how the camera should be moved for different surgical tasks. This research used task analysis with a sample of eight expert surgeons to discover and document several salient methods of camera control and their related task contexts. Desired camera placements and behaviours were established for two common surgical subtasks (suturing and knot tying). The results can be used to develop better robotic control algorithms that will be more responsive to surgeons' needs. Copyright © 2015 John Wiley & Sons, Ltd.

  19. The Design of Camera Roaming Demonstration System

    Institute of Scientific and Technical Information of China (English)

    赵利; 王文博

    2013-01-01

    OpenGL (Open Graphics Library) is a specification defining a cross-language, cross-platform programming interface for rendering three-dimensional graphics. It is a professional, powerful graphics programming interface and a low-level graphics library that is convenient to use. The system was implemented with OpenGL on the Visual Studio 2008 development platform. Graphics techniques such as three-dimensional modeling, viewpoint transformation, texture mapping, and realistic rendering are applied to achieve a virtual reality effect, and mouse and keyboard interaction implements the switching of the camera lens, giving users an immersive experience.
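    The viewpoint transformation at the heart of camera roaming boils down to rebuilding a look-at view matrix each frame from the current eye position and look direction. The following is a generic OpenGL-style sketch of that math (not the cited system's code); the positions fed to it are placeholders.

```python
import numpy as np

def look_at(eye, center, up):
    """Build a 4x4 view matrix that re-expresses the world in camera
    coordinates, equivalent in spirit to gluLookAt."""
    f = center - eye
    f = f / np.linalg.norm(f)                        # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)                        # right direction
    u = np.cross(s, f)                               # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f          # rotate world into view space
    m[:3, 3] = -m[:3, :3] @ eye                      # then translate by -eye
    return m

# Roaming = recomputing the matrix each frame as keyboard input moves
# the eye and mouse input turns the look direction (values are made up):
view = look_at(np.array([0.0, 1.6, 5.0]),            # camera position
               np.array([0.0, 1.6, 0.0]),            # target point
               np.array([0.0, 1.0, 0.0]))            # world up
```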

  20. TransCAIP: A Live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters.

    Science.gov (United States)

    Taguchi, Yuichi; Koike, Takafumi; Takahashi, Keita; Naemura, Takeshi

    2009-01-01

    The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.

  1. Design and Calibration of Double Lens 3D Camera System

    Institute of Scientific and Technical Information of China (English)

    梁发云; 何小明; 尤鹏飞; 王婧; 陈志文; 帖志成

    2013-01-01

    Stereo depth and parallax are closely related to the intrinsic parameters and relative positions of the two lenses in a 3D camera system. In this paper, a combined 3D camera system is designed on the basis of the mathematical model of a double lens 3D camera system, and a new method of stereo calibration based on a displayed chessboard is proposed: an LCD monitor is used directly to generate a dynamic calibration target. Intrinsic and extrinsic parameters are calculated with a calibration toolbox, and the parameters are used to verify the soundness of the combined system's structural design. The experimental results show that this calibration method not only achieves high precision but is also convenient and fast, and is suitable for calibrating 3D camera systems. The combined 3D camera system after calibration is consistent with human stereo vision.
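    A chessboard-based stereo calibration of this kind can be sketched with OpenCV as below; the paper uses a calibration toolbox rather than this exact code, and the pattern size and image handling here are illustrative assumptions.

```python
import cv2
import numpy as np

PATTERN = (9, 6)                           # inner chessboard corners (assumed)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

def collect_corners(image_pairs):
    """Detect chessboard corners in each grayscale left/right image pair."""
    obj_pts, left_pts, right_pts = [], [], []
    for left, right in image_pairs:
        ok_l, c_l = cv2.findChessboardCorners(left, PATTERN)
        ok_r, c_r = cv2.findChessboardCorners(right, PATTERN)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(c_l)
            right_pts.append(c_r)
    return obj_pts, left_pts, right_pts

def stereo_calibrate(image_pairs, size):
    """Intrinsics per lens first, then the relative pose (R, T) between them."""
    obj_pts, lp, rp = collect_corners(image_pairs)
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, lp, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, rp, size, None, None)
    _, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, lp, rp, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```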

  2. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    Science.gov (United States)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, and technology aspects and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  3. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E., E-mail: eduardo.barrera@upm.es [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid (UPM) (Spain); Ruiz, M.; Sanz, D. [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid (UPM) (Spain); Vega, J.; Castro, R. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Juárez, E.; Salvador, R. [Centro de Investigación en Tecnologías Software y Sistemas Multimedia para la Sostenibilidad, Universidad Politécnica de Madrid (UPM) (Spain)

    2014-05-15

    Highlights: • The test bed allows for the validation of real-time image processing techniques. • Offers FPGA (FlexRIO) image processing that does not require CPU intervention. • Is fully compatible with the architecture of the ITER Fast Controllers. • Provides flexibility and easy integration in distributed experiments based on EPICS. - Abstract: Image diagnostics are becoming standard diagnostics in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, for hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame grabber, and a PXIe chassis, and offers a software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced by diagnostic cameras. The Camera Link frame grabber (FlexRIO solution from National Instruments) includes a field programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with system architecture similar to that of the ITER Fast Controllers, and it provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers

  4. Mammary carcinoma in a male dog: clinical and immunohistochemical characterization

    OpenAIRE

    JI Arias; Paredes, E.; CG Torres

    2015-01-01

    Mammary neoplasms in male dogs are uncommon, accounting for no more than 2% of all mammary tumor cases in males and females. They have mostly proven to be of low malignancy and positive for the estradiol receptor α. This report presents a case of a mammary tumor in a male mixed-breed dog, which was evaluated clinically, resolved surgically, and studied cytologically, histologically and immunohistochemically through the study of proteins such as recep...

  5. Wireless Image Acquisition System of WIFI Lens Camera

    Institute of Scientific and Technical Information of China (English)

    龚正; 沈建新

    2016-01-01

    Wired cameras suffer from poor mobility and limited flexibility, so this paper presents a WIFI wireless lens camera that combines a camera module, a WIFI module, and an STM32 chip. Based on a study of the principles and hardware of the camera module and the WIFI module, the method transmits image data wirelessly to a display device's processor using WIFI wireless data transmission, the LWIP protocol stack, and Socket network programming. When a display device with a WIFI wireless network card connects to the camera's internal WIFI hotspot, it can conveniently complete wireless image transmission, display, and storage through a series of operations such as connecting to the server, turning on the camera, and saving images.
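    On the display side, the socket workflow described here amounts to connecting to the camera's hotspot and pulling framed image data. The sketch below uses a simple length-prefixed framing; the address, port, and framing format are assumptions for illustration, not the device's actual protocol (which runs on lwIP on the camera side).

```python
import socket
import struct

def send_frame(sock, jpeg_bytes):
    """Send one frame as a 4-byte big-endian length followed by the payload."""
    sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)

def recv_frame(sock):
    """Read the length prefix, then the full payload."""
    header = sock.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack("!I", header)
    return sock.recv(length, socket.MSG_WAITALL)

if __name__ == "__main__":
    # Assumed hotspot address and port of the camera's server.
    with socket.create_connection(("192.168.4.1", 5000)) as s:
        frame = recv_frame(s)
        with open("frame.jpg", "wb") as f:
            f.write(frame)
```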

  6. Status of the FACT camera

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Quirin [ETH Zurich, Institute for Particle Physics, 8093 Zurich (Switzerland); Collaboration: FACT-Collaboration

    2011-07-01

    The First G-APD Cherenkov Telescope (FACT) project develops a novel camera type for very high energy gamma-ray astronomy. A total of 1440 Geiger-mode avalanche photodiodes (G-APD) are used for light detection, each accompanied by a solid light concentrator. All electronics for analog signal processing, digitization and triggering are fully integrated into the camera body. The event data are sent via Ethernet to the counting house. In order to compensate for gain variations of the G-APDs, an online feedback system analyzing calibration light pulses is employed. Once the construction and commissioning of the camera are finished, it will be transported to La Palma, Canary Islands, and mounted on the refurbished HEGRA CT3 telescope structure. In this talk the architecture and status of the FACT camera are presented.

  7. MACS-Himalaya: A photogrammetric aerial oblique camera system designed for highly accurate 3D-reconstruction and monitoring in steep terrain and under extreme illumination conditions

    Science.gov (United States)

    Brauchle, Joerg; Berger, Ralf; Hein, Daniel; Bucher, Tilman

    2017-04-01

    The DLR Institute of Optical Sensor Systems has developed the MACS-Himalaya, a custom-built Modular Aerial Camera System specifically designed for the extreme geometric (steep slopes) and radiometric (high contrast) conditions of high mountain areas. It has an overall field of view of 116° across-track, consisting of a nadir and two oblique-looking RGB camera heads and a fourth nadir-looking near-infrared camera. This design provides the capability to fly along narrow valleys and simultaneously cover ground and steep valley-flank topography with similar ground resolution. To compensate for extreme contrasts between fresh snow and dark shadows at high altitudes, a High Dynamic Range (HDR) mode was implemented, which typically takes a sequence of 3 images with graded integration times, each covering 12-bit radiometric depth, resulting in a total dynamic range of 15-16 bits. This enables dense image matching and interpretation for sunlit snow and glaciers as well as for dark shaded rock faces in the same scene. Small and lightweight industrial-grade camera heads are used and operated at a rate of 3.3 frames per second with 3-step HDR, which is sufficient to achieve a longitudinal overlap of approximately 90% per exposure time at 1,000 m above ground at a velocity of 180 km/h. Direct georeferencing and multitemporal monitoring without the need for ground control points are possible due to the use of a high-end GPS/INS system, a stable calibrated inner geometry of the camera heads, and a fully photogrammetric workflow at DLR. In 2014 a survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried in a wingpod by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at altitudes up to 9,200 m. Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced in regions and outcrops normally inaccessible to
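    Merging a 3-step graded-exposure sequence into one high-dynamic-range radiance map can be sketched as a weighted average of per-frame radiance estimates, as below; this is a generic HDR merge under assumed saturation limits, not the MACS-Himalaya pipeline's actual algorithm.

```python
import numpy as np

def merge_hdr(frames, exposures, saturation=4000, floor=16):
    """frames: list of 12-bit images with the same scene content;
    exposures: matching integration times in seconds.
    Each frame contributes radiance ~ counts/exposure where its pixels
    are neither saturated nor buried in the noise floor."""
    num = np.zeros(frames[0].shape, float)
    den = np.zeros(frames[0].shape, float)
    for img, t in zip(frames, exposures):
        w = ((img > floor) & (img < saturation)).astype(float)  # trust mid-range
        num += w * img / t              # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-9) # ~15-16 bit effective dynamic range
```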

  8. Design of a control system for ultrafast x-ray camera working in a single photon counting mode

    Science.gov (United States)

    Zoladz, Miroslaw; Rauza, Jacek; Kasinski, Krzysztof; Maj, Piotr; Grybos, Pawel

    2015-09-01

    A prototype of an ultra-fast X-ray camera controller working in single photon counting mode and based on an ASIC is presented in this paper. The ASIC architecture is discussed, with special attention to the digital part. We present the Custom Soft Processor as an ASIC control-sequence generator. The processor allows for dynamic program downloading and generates control sequences at up to an 80 MHz clock rate (preliminary results). An assembler with a very simple syntax has been defined to speed up the development of processor programs. Discriminator threshold dispersion correction has been performed to confirm proper camera controller operation.

  9. Ice crystal characterization in cirrus clouds: a sun-tracking camera system and automated detection algorithm for halo displays

    Science.gov (United States)

    Forster, Linda; Seefeldner, Meinhard; Wiegner, Matthias; Mayer, Bernhard

    2017-07-01

    Halo displays in the sky contain valuable information about ice crystal shape and orientation: e.g., the 22° halo is produced by randomly oriented hexagonal prisms while parhelia (sundogs) indicate oriented plates. HaloCam, a novel sun-tracking camera system for the automated observation of halo displays is presented. An initial visual evaluation of the frequency of halo displays for the ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) field campaign from October to mid-November 2014 showed that sundogs were observed more often than 22° halos. Thus, the majority of halo displays was produced by oriented ice crystals. During the campaign about 27 % of the cirrus clouds produced 22° halos, sundogs or upper tangent arcs. To evaluate the HaloCam observations collected from regular measurements in Munich between January 2014 and June 2016, an automated detection algorithm for 22° halos was developed, which can be extended to other halo types as well. This algorithm detected 22° halos about 2 % of the time for this dataset. The frequency of cirrus clouds during this time period was estimated by co-located ceilometer measurements using temperature thresholds of the cloud base. About 25 % of the detected cirrus clouds occurred together with a 22° halo, which implies that these clouds contained a certain fraction of smooth, hexagonal ice crystals. HaloCam observations complemented by radiative transfer simulations and measurements of aerosol and cirrus cloud optical thickness (AOT and COT) provide a possibility to retrieve more detailed information about ice crystal roughness. This paper demonstrates the feasibility of a completely automated method to collect and evaluate a long-term database of halo observations and shows the potential to characterize ice crystal properties.

  10. Ice crystal characterization in cirrus clouds: a sun-tracking camera system and automated detection algorithm for halo displays

    Directory of Open Access Journals (Sweden)

    L. Forster

    2017-07-01

    Full Text Available Halo displays in the sky contain valuable information about ice crystal shape and orientation: e.g., the 22° halo is produced by randomly oriented hexagonal prisms while parhelia (sundogs) indicate oriented plates. HaloCam, a novel sun-tracking camera system for the automated observation of halo displays is presented. An initial visual evaluation of the frequency of halo displays for the ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) field campaign from October to mid-November 2014 showed that sundogs were observed more often than 22° halos. Thus, the majority of halo displays was produced by oriented ice crystals. During the campaign about 27 % of the cirrus clouds produced 22° halos, sundogs or upper tangent arcs. To evaluate the HaloCam observations collected from regular measurements in Munich between January 2014 and June 2016, an automated detection algorithm for 22° halos was developed, which can be extended to other halo types as well. This algorithm detected 22° halos about 2 % of the time for this dataset. The frequency of cirrus clouds during this time period was estimated by co-located ceilometer measurements using temperature thresholds of the cloud base. About 25 % of the detected cirrus clouds occurred together with a 22° halo, which implies that these clouds contained a certain fraction of smooth, hexagonal ice crystals. HaloCam observations complemented by radiative transfer simulations and measurements of aerosol and cirrus cloud optical thickness (AOT and COT) provide a possibility to retrieve more detailed information about ice crystal roughness. This paper demonstrates the feasibility of a completely automated method to collect and evaluate a long-term database of halo observations and shows the potential to characterize ice crystal properties.

  11. Gigavision - A weatherproof, multibillion pixel resolution time-lapse camera system for recording and tracking phenology in every plant in a landscape

    Science.gov (United States)

    Brown, T.; Borevitz, J. O.; Zimmermann, C.

    2010-12-01

    We have developed a camera system that can record hourly, gigapixel (multi-billion pixel) scale images of an ecosystem in a 360x90 degree panorama. The “Gigavision” camera system is solar-powered and can wirelessly stream data to a server. Quantitative data collection from multiyear timelapse gigapixel images is facilitated through an innovative web-based toolkit for recording time-series data on developmental stages (phenology) from any plant in the camera’s field of view. Gigapixel images enable time-series recording of entire landscapes with a resolution sufficient to record phenology from a majority of individuals in entire populations of plants. When coupled with next generation sequencing, quantitative population genomics can be performed in a landscape context linking ecology and evolution in situ and in real time. The Gigavision camera system achieves gigapixel image resolution by recording rows and columns of overlapping megapixel images. These images are stitched together into a single gigapixel resolution image using commercially available panorama software. Hardware consists of a 5-18 megapixel resolution DSLR or Network IP camera mounted on a pair of heavy-duty servo motors that provide pan-tilt capabilities. The servos and camera are controlled with a low-power Windows PC. Servo movement, power switching, and system status monitoring are enabled with Phidgets-brand sensor boards. System temperature, humidity, power usage, and battery voltage are all monitored at 5 minute intervals. All sensor data is uploaded via cellular or 802.11 wireless to an interactive online interface for easy remote monitoring of system status. Systems with direct internet connections upload the full sized images directly to our automated stitching server where they are stitched and available online for viewing within an hour of capture. Systems with cellular wireless upload an 80 megapixel “thumbnail” of each larger panorama and full-sized images are manually
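
    As a sketch of how an overlapping capture grid for such a panorama might be planned (the per-frame field-of-view values, overlap fraction, and function name are illustrative assumptions, not the Gigavision system's actual parameters):

    ```python
    import numpy as np

    def capture_grid(h_fov=6.2, v_fov=4.1, overlap=0.3):
        """Pan/tilt angles (degrees) for rows and columns of overlapping
        frames covering a 360 x 90 degree panorama; the stitching server
        later merges the overlap regions into one gigapixel image."""
        pan_step = h_fov * (1.0 - overlap)
        tilt_step = v_fov * (1.0 - overlap)
        pans = np.arange(0.0, 360.0, pan_step)
        tilts = np.arange(0.0, 90.0, tilt_step)
        return [(p, t) for t in tilts for p in pans]

    # e.g. len(capture_grid()) frames are captured each hour and stitched offline
    ```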

  12. Discovery of Five New R. Coronae Borealis Stars in the MACHO Galactic Bulge Database

    Energy Technology Data Exchange (ETDEWEB)

    Zaniewski, A; Clayton, G C; Welch, D; Gordon, K D; Minniti, D; Cook, K

    2005-06-16

    We have identified five new R Coronae Borealis (RCB) stars in the Galactic bulge using the MACHO Project photometry database, raising the total number of known Galactic RCB stars to about 40. We have obtained spectra to confirm the identifications. The fact that four out of the five newly identified RCB stars are "cool" (T_eff < 6000 K) rather than "warm" (T_eff > 6000 K) suggests that the preponderance of warm RCB stars among the existing sample is a selection bias. These cool RCB stars are redder and fainter than their warm counterparts and may have been missed in surveys done with blue plates. Based on the number of new RCB stars discovered in the MACHO bulge fields, there may be ~250 RCB stars in the reddened "exclusion" zone toward the bulge.

  13. Performance evaluation of a small CZT pixelated semiconductor gamma camera system with a newly designed stack-up parallel-hole collimator

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youngjin [Department of Radiological Science, College of Health Science, Eulji University, 553 Sanseong-daero, Sujeong-gu, Seongnam-si, Gyeonggi-do 461-713 (Korea, Republic of); Kim, Hee-Joung, E-mail: hjk1@yonsei.ac.kr [Department of Radiological Science, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon-do 220-710 (Korea, Republic of)

    2015-09-11

    Gamma-ray imaging techniques that use cadmium zinc telluride (CZT) or cadmium telluride (CdTe) pixelated semiconductor detectors have rapidly gained popularity as a key tool for nuclear medicine research. By using a pinhole collimator with a pixelated semiconductor gamma camera system, better spatial resolution can be achieved. However, this improvement in spatial resolution comes at the cost of sensitivity due to the small collimator hole diameter. Furthermore, few studies have been conducted on novel parallel-hole collimator geometric designs for pixelated semiconductor gamma camera systems. A gamma camera system which combines a CZT pixelated semiconductor detector with a newly designed stack-up parallel-hole collimator was developed and evaluated. The eValuator-2500 CZT pixelated semiconductor detector (eV product, Saxonburg, PA) was selected for the gamma camera system. This detector consisted of a row of four CZT crystals of 12.8 mm in length and 3 mm in thickness. The proposed parallel-hole collimator consists of two layers. The upper layer results in a fourfold increase in hole size compared to a matched square-hole parallel-hole collimator with an equal hole and pixel size, while the lower layer also has a fourfold hole size and effectively acts as a matched square-hole parallel-hole collimator. The overlap ratios of these collimators were 1:1, 1:2, 2:1, 1:5, and 5:1. These collimators were mounted on the eValuator-2500 CZT pixelated semiconductor detector. The basic performance of the imaging system was measured for a (57)Co gamma source (122 keV). The measured averages of sensitivity and spatial resolution varied depending on the overlap ratios of the proposed parallel-hole collimator and source-to-collimator distances. One advantage of our system is the use of stacked collimators that can select the best combination of system sensitivity and spatial resolution. With low counts, we can select a high-sensitivity collimator

  14. Construction of a Medium-Sized Schwarzschild-Couder Telescope for the Cherenkov Telescope Array: Implementation of the Cherenkov-Camera Data Acquisition System

    CERN Document Server

    Santander, M; Humensky, B; Mukherjee, R

    2015-01-01

    A medium-sized Schwarzschild-Couder Telescope (SCT) is being developed as a possible extension for the Cherenkov Telescope Array (CTA). The Cherenkov camera of the telescope is designed to have 11328 silicon photomultiplier pixels capable of capturing high-resolution images of air showers in the atmosphere. The combination of the large number of pixels and the high trigger rate (> 5 kHz) expected for this telescope results in a multi-Gbps data throughput. This sets challenging requirements on the design and performance of a data acquisition system for processing and storing this data. A prototype SCT (pSCT) with a partial camera containing 1600 pixels, covering a field of view of 2.5 x 2.5 square degrees, is being assembled at the F.L. Whipple Observatory. We present the design and current status of the SCT data acquisition system.

  15. Robo-AO Kitt Peak: status of the system and deployment of a sub-electron readnoise IR camera to detect low-mass companions

    Science.gov (United States)

    Salama, Maïssa; Baranec, Christoph; Jensen-Clem, Rebecca; Riddle, Reed; Duev, Dmitry; Kulkarni, Shrinivas; Law, Nicholas M.

    2016-07-01

    We have started an initial three-year deployment of Robo-AO at the 2.1-m telescope at Kitt Peak, Arizona as of November 2015. We report here on the project status and two new developments with the Robo-AO KP system: the commissioning of a sub-electron readnoise SAPHIRA near-infrared camera, which will allow us to widen the scope of possible targets to low-mass stellar and substellar objects; and, performance analysis and tuning of the adaptive optics system, which will improve the sensitivity to these objects. Commissioning of the near-infrared camera and optimizing the AO performance occur in parallel with ongoing visible-light science programs.

  16. Robo-AO Kitt Peak: Status of the system and deployment of a sub-electron readnoise IR camera to detect low-mass companions

    CERN Document Server

    Salama, Maissa; Jensen-Clem, Rebecca; Riddle, Reed; Duev, Dmitry; Kulkarni, Shrinivas; Law, Nicholas M

    2016-01-01

    We have started an initial three-year deployment of Robo-AO at the 2.1-m telescope at Kitt Peak, Arizona as of November 2015. We report here on the project status and two new developments with the Robo-AO KP system: the commissioning of a sub-electron readnoise SAPHIRA near-infrared camera, which will allow us to widen the scope of possible targets to low-mass stellar and substellar objects; and, performance analysis and tuning of the adaptive optics system, which will improve the sensitivity to these objects. Commissioning of the near-infrared camera and optimizing the AO performance occur in parallel with ongoing visible-light science programs.

  17. Portrait. Unamuno's grandson with the bust of his grandfather made by Victorio Macho.

    OpenAIRE

    Cuesta (Licenciado)

    2010-01-01

    1 photograph; paper; image 9 x 14 cm. - Portrait. Unamuno's grandson with the bust of his grandfather made by Victorio Macho. (Die-cut image on matte paper with an ivory-colored base, total size 16.9 x 22.7. Die-cut stamp of the author in the lower right corner: "Cuesta, Avda. Blasco Ibanez, 7".). - Provenance: Miguel de Unamuno collection. - Good state of preservation.

  18. Performance Evaluations and Quality Validation System for Optical Gas Imaging Cameras That Visualize Fugitive Hydrocarbon Gas Emissions

    Science.gov (United States)

    Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...

  19. Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera

    Directory of Open Access Journals (Sweden)

    Reinhard Koch

    2010-09-01

    Full Text Available For broadcasting purposes, mixed reality, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for mixed reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core issue for obtaining realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a possibility to support placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma-keying is replaced by depth-keying, which is efficiently performed on the graphics processing unit (GPU) by the usage of an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is herewith performed, and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real-time, which makes the approach suitable for future 3DTV productions. The presented paper gives an overview of the whole system approach including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning.
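
    As a rough CPU-side sketch of the depth-keying step described above (the paper performs this on the GPU; the array names are hypothetical, and each input is assumed to be resampled onto a common pixel grid):

    ```python
    import numpy as np

    def depth_key(real_rgb, real_depth, virt_rgb, virt_depth):
        """Per-pixel depth keying: whichever modality is closer to the
        camera wins, which also handles mutual occlusion between real
        and virtual content."""
        virtual_in_front = (virt_depth < real_depth)[..., None]
        return np.where(virtual_in_front, virt_rgb, real_rgb)
    ```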

  20. The measurement of in vivo joint angles during a squat using a single camera markerless motion capture system as compared to a marker based system.

    Science.gov (United States)

    Schmitz, Anne; Ye, Mao; Boggess, Grant; Shapiro, Robert; Yang, Ruigang; Noehren, Brian

    2015-02-01

    Markerless motion capture may have the potential to make motion capture technology widely clinically practical. However, the ability of a single markerless camera system to quantify clinically relevant, lower extremity joint angles has not been studied in vivo. Therefore, the goal of this study was to compare in vivo joint angles calculated using a marker-based motion capture system and a Microsoft Kinect during a squat. Fifteen individuals participated in the study: 8 male, 7 female, height 1.702±0.089m, mass 67.9±10.4kg, age 24±4 years, BMI 23.4±2.2kg/m(2). Marker trajectories and Kinect depth map data of the leg were collected while each subject performed a slow squat motion. Custom code was used to export virtual marker trajectories for the Kinect data. Each set of marker trajectories was utilized to calculate Cardan knee and hip angles. The patterns of motion were similar between systems, with average absolute differences of 0.9 for both systems. The peak angles calculated by the marker-based and Kinect systems were largely correlated (r>0.55). These results suggest the data from the Kinect can be post-processed in a way that it may be a feasible markerless motion capture system that can be used in the clinic.
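
    A minimal sketch of a Cardan angle computation from two segment orientation matrices, assuming an x-y-z rotation sequence (a common biomechanics convention; the paper's exact axis definitions and marker sets may differ):

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation

    def cardan_angles(r_proximal, r_distal, seq="xyz"):
        """Joint angles (degrees) of the distal segment relative to the
        proximal one, decomposed as a Cardan/Euler sequence."""
        r_joint = r_proximal.T @ r_distal  # distal expressed in the proximal frame
        return Rotation.from_matrix(r_joint).as_euler(seq, degrees=True)
    ```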

  1. [Evaluation of cross-calibration of (123)I-MIBG H/M ratio, with the IDW scatter correction method, on different gamma camera systems].

    Science.gov (United States)

    Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki

    2011-01-01

    (123)I-MIBG heart-to-mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function; therefore, the ratio is a useful parameter to assess regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether the H/M ratio shows correlation between two different gamma camera systems and sought a calculation formula for the H/M ratio. Moreover, we assessed the feasibility of the (123)I dual window (IDW) method, which is a scatter correction method, and compared H/M ratios with and without the IDW method. The H/M ratio displayed a good correlation between the two gamma camera systems. Additionally, we were able to create a new H/M calculation formula. These results indicated that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.
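
    A rough sketch of a dual-window style scatter subtraction feeding an H/M computation; the scaling factor k and the ROI handling are illustrative placeholders, not the IDW parameters established in the study:

    ```python
    import numpy as np

    def hm_ratio(main_img, sub_img, heart_roi, medi_roi, k=1.0):
        """H/M ratio after subtracting a scatter estimate: downscatter of
        the 529 keV emission into the main (159 keV) window is modeled
        as the sub-window image scaled by k; counts are clipped at zero."""
        corrected = np.clip(main_img - k * sub_img, 0.0, None)
        return corrected[heart_roi].mean() / corrected[medi_roi].mean()
    ```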

  2. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  4. Vacuum Camera Cooler

    Science.gov (United States)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications, the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test within a vacuum environment.

  5. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  6. Discussion of gesture recognition system based on depth camera

    Institute of Scientific and Technical Information of China (English)

    欧阳宁; 刘文波

    2013-01-01

    With the development of human-computer interaction technology, hand gestures have become a natural and intuitive mode of communication, and hand gesture recognition is an indispensable key technology for implementing the new generation of human-computer interaction. The rapid development of new types of depth cameras such as Kinect and ToF devices has also heightened interest in research on, and applications of, gesture recognition systems based on depth cameras. This paper introduces the methods and applications of gesture recognition systems based on depth cameras, discusses the role that skeleton and depth image data play in the development and application of such systems, and finally outlines the development prospects of depth cameras.

  7. A MODIFIED PROJECTIVE TRANSFORMATION SCHEME FOR MOSAICKING MULTI-CAMERA IMAGING SYSTEM EQUIPPED ON A LARGE PAYLOAD FIXED-WING UAS

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2015-03-01

    Full Text Available In recent years, the Unmanned Aerial System (UAS) has been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. It is a platform with higher mobility and lower risk for human operation, but its low payload and short operation time reduce the image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is equipped on a large-payload UAS, which is designed to collect large ground-coverage images in an effective way. The field of view (FOV) is increased to 127 degrees, which is thus suitable for collecting disaster images in mountainous areas. The five synchronously acquired images are registered and mosaicked as a larger-format virtual image for reducing the number of images and the post-processing time, and for easier stereo plotting. Instead of traditional image matching and applying the bundle adjustment method to estimate transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated to derive the coefficients of the modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs owing to different environmental conditions as well as the vibration of the UAS, which will cause a misregistration effect in the initial MPT results. Remaining residuals are analysed through tie-point matching on the overlapping area of the initial MPT results, in which displacement and scale difference are introduced and corrected to modify the ROPs and IOPs for finer registration results. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. Comparison between separate cameras and mosaic images through rigorous aerial triangulation is conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. It proves that the designed imaging system and the

  8. a Modified Projective Transformation Scheme for Mosaicking Multi-Camera Imaging System Equipped on a Large Payload Fixed-Wing Uas

    Science.gov (United States)

    Jhan, J. P.; Li, Y. T.; Rau, J. Y.

    2015-03-01

    In recent years, the Unmanned Aerial System (UAS) has been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. It is a platform with higher mobility and lower risk for human operation, but its low payload and short operation time reduce the image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is equipped on a large-payload UAS, which is designed to collect large ground-coverage images in an effective way. The field of view (FOV) is increased to 127 degrees, which is thus suitable for collecting disaster images in mountainous areas. The five synchronously acquired images are registered and mosaicked as a larger-format virtual image for reducing the number of images and the post-processing time, and for easier stereo plotting. Instead of traditional image matching and applying the bundle adjustment method to estimate transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated to derive the coefficients of the modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs owing to different environmental conditions as well as the vibration of the UAS, which will cause a misregistration effect in the initial MPT results. Remaining residuals are analysed through tie-point matching on the overlapping area of the initial MPT results, in which displacement and scale difference are introduced and corrected to modify the ROPs and IOPs for finer registration results. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. Comparison between separate cameras and mosaic images through rigorous aerial triangulation is conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. It proves that the designed imaging system and the proposed scheme
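
    As a simplified illustration of the mosaicking step (warping one oblique image into the nadir frame with a 3x3 projective transform and compositing), assuming the transform H has already been derived from the calibrated IOPs/ROPs; the MPT coefficient derivation itself is not reproduced here:

    ```python
    import cv2
    import numpy as np

    def mosaic_pair(nadir, oblique, H):
        """Warp the oblique view into the nadir image's frame and paste
        it over the nadir content; a real mosaic would also extend the
        canvas beyond the nadir extent and blend the seams."""
        h, w = nadir.shape[:2]
        warped = cv2.warpPerspective(oblique, H, (w, h))
        out = nadir.copy()
        covered = warped.sum(axis=2) > 0   # pixels hit by the oblique view
        out[covered] = warped[covered]
        return out
    ```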

  9. Defining ray sets for the analysis of lenslet-based optical systems including plenoptic cameras and Shack-Hartmann wavefront sensors

    Science.gov (United States)

    Moore, Lori

    Plenoptic cameras and Shack-Hartmann wavefront sensors are lenslet-based optical systems that do not form a conventional image. The addition of a lens array into these systems allows for the aberrations generated by the combination of the object and the optical components located prior to the lens array to be measured or corrected with post-processing. This dissertation provides a ray-selection method to determine the rays that pass through each lenslet in a lenslet-based system. This first-order ray-trace method is developed for any lenslet-based system with a well-defined fore optic, where in this dissertation the fore optic is all of the optical components located prior to the lens array. For example, in a plenoptic camera the fore optic is a standard camera lens. Because a lens array at any location after the exit pupil of the fore optic is considered in this analysis, it is applicable to both plenoptic cameras and Shack-Hartmann wavefront sensors. Only a generic, unaberrated fore optic is considered, but this dissertation establishes a framework for considering the effect of an aberrated fore optic in lenslet-based systems. The rays from the fore optic that pass through a lenslet placed at any location after the fore optic are determined. This collection of rays is reduced to three rays that describe the entire lenslet ray set. The lenslet ray set is determined at the object, image, and pupil planes of the fore optic. The consideration of the apertures that define the lenslet ray set for an on-axis lenslet leads to three classes of lenslet-based systems. Vignetting of the lenslet rays is considered for off-axis lenslets. Finally, the lenslet ray set is normalized into terms similar to the field and aperture vectors used to describe the aberrated wavefront of the fore optic. The analysis in this dissertation is complementary to other first-order models that have been developed for a specific plenoptic camera layout or Shack-Hartmann wavefront sensor application
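
    The first-order machinery behind such an analysis can be sketched with standard paraxial ray-transfer (ABCD) matrices; the generic example below propagates one ray from the fore optic's exit pupil through a gap to a thin lenslet and is not the dissertation's full ray-selection method:

    ```python
    import numpy as np

    def trace_to_lenslet(y, u, d_gap, f_lenslet):
        """Paraxial trace of a ray with height y and angle u: free-space
        propagation over d_gap followed by refraction at a thin lenslet
        of focal length f_lenslet; returns the new (y, u)."""
        transfer = np.array([[1.0, d_gap], [0.0, 1.0]])          # propagation
        lens = np.array([[1.0, 0.0], [-1.0 / f_lenslet, 1.0]])   # thin lens
        return lens @ transfer @ np.array([y, u])
    ```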

  10. Spectroscopy of MACHO 97-SMC-1: Self-Lensing within the Small Magellanic Cloud

    Science.gov (United States)

    Sahu, Kailash C.; Sahu, M. S.

    1998-12-01

    More than a dozen microlensing events have been detected so far toward the LMC, and two have been detected toward the SMC. If all of the lenses are in the Galactic halo, both the LMC and the SMC events are expected to have similar timescales. However, the first event toward the SMC, MACHO 97-SMC-1, had a timescale of 123 days, which is much larger than the typical timescale for the LMC events. Since the observed timescale of the SMC event would need the mass of the halo lens to be ~3 Msolar, it has been argued earlier that the lens must be within the SMC, which we spectroscopically confirm in this Letter. From optical depth estimates, we first show that the stars within the SMC play a dominant role as gravitational lenses and can fully account for the observed microlensing events, mainly due to its large physical depth. We also show that if the lenses are within the Magellanic Clouds, then the SMC events should be longer in duration than the LMC events, a fact that is consistent with the observations. The timescale of the event implies that the mass of the lens is >~2 Msolar if it is in the Milky Way disk or halo, in which case the lens, if it is a normal star, is expected to be bright and should reveal itself in the spectrum. Here, we present an optical spectrum of MACHO 97-SMC-1 obtained in 1997 May that shows that the source is a main-sequence B star. There is no trace of any contribution from the lens, which suggests that the lens is not in the Milky Way disk or halo but is a low-mass star within the SMC. The other alternative, that the lens could be a black hole in the Galactic halo, cannot be ruled out from the spectrum alone, but this is disfavored by the timescales of the LMC events. It is worth noting here that MACHO SMC-98-1 is the only other observed event toward the SMC. This was a binary lens event for which the caustic crossing timescale as observed by the PLANET, MACHO, EROS, and OGLE collaborations suggests that the lens is within the SMC
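
    The mass argument rests on the standard point-lens event timescale; schematically, with M the lens mass, D_L and D_S the lens and source distances, and v_perp the transverse lens velocity:

    ```latex
    t_{\mathrm{E}} = \frac{r_{\mathrm{E}}}{v_\perp}, \qquad
    r_{\mathrm{E}} = \sqrt{\frac{4GM}{c^{2}}\,
    \frac{D_{\mathrm{L}}\,(D_{\mathrm{S}} - D_{\mathrm{L}})}{D_{\mathrm{S}}}}
    \quad\Longrightarrow\quad
    M \propto t_{\mathrm{E}}^{2}\,
    \frac{v_\perp^{2}\,D_{\mathrm{S}}}{D_{\mathrm{L}}\,(D_{\mathrm{S}} - D_{\mathrm{L}})}
    ```

    For fixed halo-like kinematics the inferred mass grows as the square of the timescale, which is why a 123-day event points to a few-solar-mass halo lens, whereas a slow lens located close to the source inside the SMC needs far less mass.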

  11. Probabilistic models and numerical calculation of system matrix and sensitivity in list-mode MLEM 3D reconstruction of Compton camera images.

    Science.gov (United States)

    Maxim, Voichita; Lojacono, Xavier; Hilaire, Estelle; Krimmer, Jochen; Testa, Etienne; Dauvergne, Denis; Magnin, Isabelle; Prost, Rémy

    2016-01-01

    This paper addresses the problem of evaluating the system matrix and the sensitivity for iterative reconstruction in Compton camera imaging. Proposed models and numerical calculation strategies are compared through the influence they have on the three-dimensional reconstructed images. The study attempts to address four questions. First, it proposes an analytic model for the system matrix. Second, it suggests a method for its numerical validation with Monte Carlo simulated data. Third, it compares analytical models of the sensitivity factors with Monte Carlo simulated values. Finally, it shows how the system matrix and the sensitivity calculation strategies influence the quality of the reconstructed images.
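
    For reference, these quantities enter the standard list-mode MLEM iteration, where t_ij is the system-matrix element linking voxel j to recorded event i and s_j is the sensitivity of voxel j:

    ```latex
    \lambda_{j}^{(n+1)} \;=\; \frac{\lambda_{j}^{(n)}}{s_{j}}
    \sum_{i \in \text{events}} \frac{t_{ij}}{\sum_{k} t_{ik}\,\lambda_{k}^{(n)}}
    ```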

  12. Hand-Camera Coordination Varies over Time in Users of the Argus® II Retinal Prosthesis System

    Science.gov (United States)

    Barry, Michael P.; Dagnelie, Gislin

    2016-01-01

    Introduction: Most visual neuroprostheses use an external camera for image acquisition. This adds two complications to phosphene perception: (1) stimulation locus will not change with eye movements; and (2) external cameras can be aimed in directions different from the user’s intended direction of gaze. Little is known about the stability of where users perceive light sources to be or whether they will adapt to changes in camera orientation. Methods: Three end-stage retinitis pigmentosa patients implanted with the Argus II participated in this study. This prosthesis stimulated the retina based on an 18° × 11° area selected within the camera’s 66° × 49° field of view. The center of the electrode array’s field of view mapped within the camera’s field of view is the camera alignment position (CAP). Proper camera alignments minimize errors in localizing visual percepts in space. Subjects touched single white squares in random locations on a darkened touchscreen 40 or more times. To study adaptation, subjects were given intentional CAP misalignments of 15–40° for 5–6 months. Subjects performed this test with auditory feedback during (bi-)weekly lab sessions. Misaligned CAPs were maintained for another 5–6 months without auditory feedback. Touch alignment was tracked to detect any adaptation. To estimate localization stability, data for when CAPs were set to minimize errors were tracked. The same localization test as above was used. Localization errors were tracked every 1–2 weeks for up to 40 months. Results: Two of three subjects used auditory feedback to improve accuracy with misaligned CAPs at an average rate of 0.02°/day (p < 0.05, bootstrap analysis of linear regression). The rates observed here were ~4000 times slower than those seen in normally-sighted subjects adapting to prism glasses. Removal of auditory feedback precipitated error increases for all subjects. Optimal CAPs varied significantly across test sessions (p < 10−4, bootstrap

  13. Hand-Camera Coordination Varies over Time in Users of the Argus® II Retinal Prosthesis System

    Directory of Open Access Journals (Sweden)

    Michael P Barry

    2016-05-01

    Full Text Available Introduction: Most visual neuroprostheses use an external camera for image acquisition. This adds two complications to phosphene perception: (1) stimulation locus will not change with eye movements; and (2) external cameras can be aimed in directions different from the user’s intended direction of gaze. Little is known about the stability of where users perceive light sources to be or whether they will adapt to changes in camera orientation. Methods: Three end-stage retinitis pigmentosa patients implanted with the Argus II participated in this study. This prosthesis stimulated the retina based on an 18° x 11° area selected within the camera’s 66° x 49° field of view. The center of the electrode array’s field of view mapped within the camera’s field of view is the camera alignment position (CAP). Proper camera alignments minimize errors in localizing visual percepts in space. Subjects touched single white squares in random locations on a darkened touchscreen 40 or more times. To study adaptation, subjects were given intentional CAP misalignments of 15°–40° for 5–6 months. Subjects performed this test with auditory feedback during (bi-)weekly lab sessions. Misaligned CAPs were maintained for another 5–6 months without auditory feedback. Touch alignment was tracked to detect any adaptation. To estimate localization stability, data for when CAPs were set to minimize errors were tracked. The same localization test as above was used. Localization errors were tracked every 1–2 weeks for up to 40 months. Results: Two of three subjects used auditory feedback to improve accuracy with misaligned CAPs at an average rate of 0.02°/day (p < 0.05, bootstrap analysis of linear regression). The rates observed here were ~4000 times slower than those seen in normally-sighted subjects adapting to prism glasses. Removal of auditory feedback precipitated error increases for all subjects. Optimal CAPs varied significantly across test sessions (p < 10−4

  14. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  15. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  16. Development of a safe ultraviolet camera system to enhance awareness by showing effects of UV radiation and UV protection of the skin (Conference Presentation)

    Science.gov (United States)

    Verdaasdonk, Rudolf M.; Wedzinga, Rosaline; van Montfrans, Bibi; Stok, Mirte; Klaessens, John; van der Veen, Albert

    2016-03-01

    The significant increase of skin cancer occurring in the western world is attributed to longer sun exposure during leisure time. For prevention, people should become aware of the risks of UV light exposure by showing skin damage and the protective effect of sunscreen with a UV camera. A UV awareness imaging system optimized for 365 nm (UV-A) was developed using consumer components, being interactive, safe and mobile. A Sony NEX5t camera was adapted to the full spectral range. In addition, UV-transparent lenses and filters were selected based on measured spectral characteristics (Schott S8612 and Hoya U-340 filters) to obtain the highest contrast for e.g. melanin spots and wrinkles on the skin. For uniform UV illumination, 2 facial tanner units were adapted with UV 365 nm black-light fluorescent tubes. Safety of the UV illumination was determined relative to the sun and with absolute irradiance measurements at the working distance. A maximum exposure time of over 15 minutes was calculated according to the international safety standards. The UV camera was successfully demonstrated during the Dutch National Skin Cancer day and was well received by dermatologists and the participating public. Especially, the 'black paint' effect of putting sunscreen on the face was dramatic and contributed to the awareness of regions of the face that are likely to be missed when applying sunscreen. The UV imaging system shows to be promising for diagnostics and clinical studies in dermatology and potentially in other areas (dentistry and ophthalmology).

  17. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  18. HONEY -- The Honeywell Camera

    Science.gov (United States)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  19. Post-trial anatomical frame alignment procedure for comparison of 3D joint angle measurement from magnetic/inertial measurement units and camera-based systems.

    Science.gov (United States)

    Li, Qingguo; Zhang, Jun-Tian

    2014-11-01

    Magnetic and inertial measurement units (MIMUs) have been widely used as an alternative to traditional camera-based motion capture systems for 3D joint kinematics measurement. Since these sensors do not directly measure position, a pre-trial anatomical calibration, either with the assistance of a special protocol/apparatus or with another motion capture system is required to establish the transformation matrices between the local sensor frame and the anatomical frame (AF) of each body segment on which the sensors are attached. Because the axes of AFs are often used as the rotational axes in the joint angle calculation, any difference in the AF determination will cause discrepancies in the calculated joint angles. Therefore, a direct comparison of joint angles between MIMU systems and camera-based systems is less meaningful because the calculated joint angles contain a systemic error due to the differences in the AF determination. To solve this problem a new post-trial AF alignment procedure is proposed. By correcting the AF misalignments, the joint angle differences caused by the difference in AF determination are eliminated and the remaining discrepancies are mainly from the measurement accuracy of the systems themselves. Lower limb joint angles from 30 walking trials were used to validate the effectiveness of the proposed AF alignment procedure. This technique could serve as a new means for calibrating magnetic/inertial sensor-based motion capture systems and correcting for AF misalignment in scenarios where joint angles are compared directly.
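
    One way to make the post-trial alignment concrete is to estimate a single constant offset rotation between the two systems' anatomical frames over a whole trial and project the average back onto SO(3) (orthogonal Procrustes). A minimal sketch of that idea, not necessarily the paper's exact procedure:

    ```python
    import numpy as np

    def constant_offset(R_mimu, R_camera):
        """Estimate R_off such that R_camera[t] ~ R_mimu[t] @ R_off for
        two lists of 3x3 orientation matrices of the same body segment."""
        M = sum(Rm.T @ Rc for Rm, Rc in zip(R_mimu, R_camera))
        U, _, Vt = np.linalg.svd(M)
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det = +1
        return U @ D @ Vt
    ```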

  20. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion, and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking and we show how these three concepts can be used to analyse other camera tracking applications.

  1. Research of Camera Synchronous Capture System Under External Trigger Mode

    Institute of Scientific and Technical Information of China (English)

    李超; 刘树昌; 刘鹏; 周大勇; 王雪

    2014-01-01

    As the laser pulse frequency is low and the pulse width is only at the nanosecond scale in a pulsed laser detection system, it is difficult to control the precise exposure moment of the digital CCD (Charge Coupled Device) camera in long-distance tests. In order to overcome the problem that the trigger signal for the digital CCD camera and the laser pulse signal are not synchronous, a method combining "optical trigger, electrical synchronization" with advance prediction is put forward. A BD2 (BeiDou-2 Navigation Satellite System)/GPS (Global Positioning System) module provides the precise pulse-per-second (PPS) signal as the clock reference and, combined with the high-speed characteristics of an FPGA (Field Programmable Gate Array), enables the digital CCD camera to capture the laser spot image accurately and controls the optimal integration time of the digital CCD camera. Experimental results show that this method improves both the capture precision of the digital CCD camera and the SNR of the laser spot image.

  2. Monitoring system for phreatic eruptions and thermal behavior on Poás volcano hyperacidic lake, with permanent IR and HD cameras

    Science.gov (United States)

    Ramirez, C. J.; Mora-Amador, R. A., Sr.; Alpizar Segura, Y.; González, G.

    2015-12-01

    Monitoring volcanoes has been an expanding field over the past decades. One of the emerging techniques involving new technology is digital video surveillance and the automated software that comes with it. Given the budget and some facilities on site, it is now possible to set up a real-time network of high-definition video cameras, some of them with special features such as infrared, thermal, or ultraviolet sensitivity. These can make the analysis of volcanic phenomena such as lava eruptions, phreatic eruptions, plume speed, lava flows, and closed/open vents easier or harder, to mention just some of the many applications of these cameras. We present the methodology of the installation at Poás volcano of a real-time system for processing and storing HD and thermal images and video, as well as the process of acquiring and installing the HD and IR cameras, towers, solar panels, and radios to transmit the data on a volcano located in the tropics, along with which volcanic areas are our targets and why. We also describe the hardware and software we consider necessary to carry out our project. Finally, we show some early data examples of upwelling areas on the Poás volcano hyperacidic lake and their relation to lake phreatic eruptions, some data on increasing temperature on an old dome wall and the sudden wall explosions, and the use of IR video for measuring plume speed and contour for use in combination with DOAS or FTIR measurements.

  3. X-ray Streak Camera Cathode Development and Timing Accuracy of the 4ω UV Fiducial System at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Opachich, Y P; Palmer, N; Homoelle, D; Hatch, B W; Bell, P; Bradley, D; Kalantar, D; Browning, D; Landen, O

    2012-05-02

    The convergent ablator experiments at the National Ignition Facility (NIF) are designed to measure the peak velocity and remaining ablator mass of an indirectly driven imploding capsule. Such a measurement can be performed using an x-ray source to backlight the capsule and an x-ray streak camera to record the capsule as it implodes. The ultimate goal of this experiment is to achieve an accuracy of 2% in the velocity measurement, which translates to a ±2 ps temporal accuracy over any 300 ps interval for the streak camera. In order to achieve this, a 4ω (263 nm) temporal fiducial system has been implemented for the x-ray streak camera at NIF. Aluminum, titanium, gold, and silver photocathode materials have been tested. Aluminum showed the highest quantum efficiency, with five times more peak signal counts per fiducial pulse when compared to gold. The fiducial pulse data were analyzed to determine the centroiding statistical accuracy for incident laser pulse energies of 1 and 10 nJ, showing an accuracy of ±1.6 ps and ±0.7 ps, respectively.

  4. Rice Crop Field Monitoring System with Radio Controlled Helicopter Based Near Infrared Cameras Through Nitrogen Content Estimation and Its Distribution Monitoring

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-03-01

    Full Text Available A rice crop field monitoring system with radio-controlled-helicopter-based near-infrared cameras is proposed, together with a nitrogen content estimation method for monitoring the nitrogen distribution in the field of concern. Through experiments at the Saga Prefectural Agricultural Research Institute (SPARI), it is found that the proposed system works well for monitoring the nitrogen content in the rice crop, which indicates the quality of the rice crop, and its distribution in the field. Therefore, it becomes possible to maintain rice crop fields with quality control.
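
    A sketch of the kind of vegetation index such NIR-based nitrogen estimation typically starts from; the use of NDVI and a linear calibration here is an illustrative assumption rather than the paper's exact regression:

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized difference vegetation index from NIR and red bands."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-9)

    # hypothetical calibrated linear model mapping the index to nitrogen content:
    # nitrogen = a * ndvi(nir_band, red_band) + b
    ```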

  5. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation required by the method is much easier to perform than some of the strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems.
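
    The decomposition step can be made concrete: with H_inf the infinite homography mapping the reference camera (intrinsics K_ref) to the uncalibrated one, H_inf = K R K_ref^(-1) for some rotation R, so K K^T = H_inf K_ref K_ref^T H_inf^T and K follows from a triangular factorization. A numpy sketch of this standard identity, not the authors' code:

    ```python
    import numpy as np

    def intrinsics_from_h_inf(H_inf, K_ref):
        """Recover the unknown upper-triangular intrinsics K from
        K K^T = H_inf K_ref K_ref^T H_inf^T (the rotation drops out)."""
        A = H_inf @ K_ref @ K_ref.T @ H_inf.T
        A = A / A[2, 2]
        L = np.linalg.cholesky(np.linalg.inv(A))  # A^{-1} = L L^T with L = K^{-T}
        K = np.linalg.inv(L).T                    # upper triangular, positive diagonal
        return K / K[2, 2]
    ```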

  6. Computational cameras: convergence of optics and processing.

    Science.gov (United States)

    Zhou, Changyin; Nayar, Shree K

    2011-12-01

    A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.
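
    In the two-plane light-field representation the survey refers to, a conventional camera is an integral projection of the 4-D light field onto the 2-D sensor, with (u, v) on the aperture plane A and (s, t) on the sensor plane:

    ```latex
    I(s, t) \;=\; \iint_{A} L(u, v, s, t)\,\mathrm{d}u\,\mathrm{d}v
    ```

    Computational designs modulate or sample L(u, v, s, t) before this projection, which is what leaves the captured image decodable in post-processing.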

  7. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  8. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of