WorldWideScience

Sample records for head camera views

  1. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  2. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  3. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  4. Head-positioning scintillation camera and head holder therefor

    International Nuclear Information System (INIS)

    Kay, T.D.

    1976-01-01

    A holder for immobilizing the head of a patient undergoing a vertex brain scan by a Gamma Scintillation Camera is described. The holder has a uniquely designed shape capable of comfortably supporting the head. In addition, this holder can be both adjustably and removably utilized in combination with the scintillation camera so as to enable the brain scan operation to take place while the patient is in the seated position.

  5. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    Science.gov (United States)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet mounted display (HMD), and rotating stereo camera connected and slaved to the head orientation of a free moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  6. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows all computational trouble spots to be identified beforehand and reliable, accurate optimization methods to be designed. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
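
    The rotation invariance that the approach rests on is easy to check numerically. Below is a minimal numpy sketch (our illustration, not code from the study): the visual angle between two projection rays is unchanged when both are rotated, so only translation can alter it.

```python
# Demo: visual angles between projection rays are invariant under rotation.
import numpy as np

def angle(u, v):
    """Angle in radians between two 3-D rays."""
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)   # two arbitrary rays

t = np.deg2rad(30.0)                            # rotate 30 deg about z
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

print(angle(u, v))          # visual angle before rotation
print(angle(R @ u, R @ v))  # identical after rotation
```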

  7. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  8. Multi-view collimators for scintillation cameras

    International Nuclear Information System (INIS)

    Hatton, J.; Grenier, R.P.

    1982-01-01

    This patent specification describes a collimator for obtaining multiple images of a portion of a body with a scintillation camera. The collimator comprises a body of radiation-impervious material defining two or more groups of channels, each group comprising a plurality of parallel channels having axes that intersect the portion of the body being viewed on one side of the collimator and intersect the input surface of the camera on the other side, so as to produce a single view of said body; a number of different such views of said body is provided by each of said groups of channels. Each axis of each channel lies in a plane approximately perpendicular to the plane of the input surface of the camera, and all such planes containing said axes are approximately parallel to each other. (author)

  9. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department, in an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with the GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1-2 h for the GoPro® and 3-5 h for our prototype. The average preparation time to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype of video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  10. Registration of an on-axis see-through head-mounted display and camera system

    Science.gov (United States)

    Luo, Gang; Rensing, Noa M.; Weststrate, Evan; Peli, Eli

    2005-02-01

    An optical see-through head-mounted display (HMD) system integrating a miniature camera that is aligned with the user's pupil is developed and tested. Such an HMD system has a potential value in many augmented reality applications, in which registration of the virtual display to the real scene is one of the critical aspects. The camera alignment to the user's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depth. In reality, a small camera-eye misalignment may still occur in such a system due to the inevitable variations of HMD wearing position with respect to the eye. The effects of such errors are measured. Calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances, except for a shift. The impact of prismatic effect of the display lens on registration is also discussed.

  11. A direct-view customer-oriented digital holographic camera

    Science.gov (United States)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  12. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  13. Immersive viewing engine

    Science.gov (United States)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera equipped teleoperated vehicle. The conventional approach where imagery from a narrow field camera onboard the vehicle is presented to the user on a small rectangular screen is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation and presented via wide field eyewear display, approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion free viewing of the region appropriate to the user's current head pose is presented and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo from motion algorithms.
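
    The resampling step described above can be sketched compactly: for each pixel of the desired view, form its ray for the current head pose and look up the matching longitude and latitude in the spherical panorama. The following numpy sketch is a simplified nearest-neighbour illustration under assumed conventions (y-down image axes, yaw about the vertical axis), not the paper's model.

```python
# Hedged sketch: render a head-pose-dependent view from an equirectangular
# panorama by nearest-neighbour lookup.
import numpy as np

def view_from_equirect(pano, yaw, pitch, fov_deg=90.0, out_hw=(240, 320)):
    H, W = pano.shape[:2]
    h, w = out_hw
    f = (w / 2) / np.tan(np.deg2rad(fov_deg) / 2)   # pinhole focal length, px
    X, Y = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    d = np.stack([X, Y, np.full_like(X, f, dtype=float)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)  # per-pixel view rays
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw rotation
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch rotation
    d = d @ (Ry @ Rx).T                             # rotate rays by head pose
    lon = np.arctan2(d[..., 0], d[..., 2])          # longitude, -pi..pi
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))      # latitude, -pi/2..pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[v, u]

pano = np.random.default_rng(0).integers(0, 255, (256, 512))  # fake panorama
print(view_from_equirect(pano, yaw=np.deg2rad(30), pitch=np.deg2rad(-10)).shape)
```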

  14. Photomultiplier tube artifacts on 67Ga-citrate imaging caused by loss of correction floods due to an off-peak status of one head of a dual-head γ-camera.

    Science.gov (United States)

    Glaser, Joseph E; Song, Na; Jaini, Sridivya; Lorenzo, Ruth; Love, Charito

    2012-12-01

    γ-cameras use flood-field corrections to ensure image uniformity during clinical imaging. A loss or corruption of the correction data of one head of a dual-head camera can result in an off-peak artifactual appearance. We present our experience with the occurrence of such an incident on a (67)Ga scan. A patient was referred for a whole-body (67)Ga scan to evaluate for causes of neutropenic fever. Whole-body planar and static images of the head, chest, abdomen, pelvis, and lower extremities in multiple projections were obtained. Whole-body images showed decreased image quality on the anterior view obtained with detector 1 and an unremarkable posterior image obtained with detector 2. A problem with detector 2 was suspected, and additional static images were obtained after rotation of the detector heads. The posterior images taken with detector 1 showed photomultiplier tube outlines. The anterior images taken with detector 2 showed improved count and image quality. It was later found that the uniformity map for detector 2 had been lost and that this software malfunction led to the resulting imaging problem. When artifacts with an off-peak appearance are seen on scintigraphic images, evaluation of possible causes should include not only isotope window settings but also an incorrect or corrupted uniformity map.

  15. Monte Carlo simulation for dual head gamma camera

    International Nuclear Information System (INIS)

    Osman, Yousif Bashir Soliman

    2015-12-01

    The Monte Carlo (MC) simulation technique is widely used in medical physics applications. In nuclear medicine, MC has been used to design new medical imaging devices such as positron emission tomography (PET), gamma cameras and single photon emission computed tomography (SPECT); it can also be used to study the factors affecting image quality and internal dosimetry. GATE is one of the Monte Carlo codes that has a number of advantages for the simulation of SPECT and PET. Access to the machines used in clinics is limited because of their workload, which makes it hard to evaluate some factors affecting machine performance that must be evaluated routinely, and also hampers scientific research and the training of students, so an MC model can be an optimal solution to this problem. The aim of this study was to use the GATE Monte Carlo code to model the Nucline Spirit (Mediso) dual-head gamma camera hosted in the Radiation and Isotopes Centre of Khartoum, which is equipped with low energy general purpose (LEGP) collimators. The model was used to evaluate spatial resolution and sensitivity, which are important factors affecting image quality, and to demonstrate the validity of GATE by comparing experimental results with simulation results on spatial resolution. The GATE model of the Nucline Spirit (Mediso) dual-head gamma camera was developed by applying manufacturer specifications, and the simulation was then run. In the evaluation of spatial resolution, the FWHM was calculated from the image profile of a line source of Tc-99m (a gamma emitter of energy 140 keV) at distances of 5, 10, 15, 20, 22, 27, 32 and 37 cm from the modelled camera head; for these distances the spatial resolution was found to be 5.76, 7.73, 10.7, 13.8, 14.01, 16.91, 19.75 and 21.9 mm, respectively. These results show a linear decrease of spatial resolution with increasing distance between the object (line source) and the collimator. The FWHM calculated at 10 cm was compared with experimental results. The
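
    The FWHM evaluation mentioned above can be reproduced with a few lines of numpy: find the half-maximum level of the line-source profile and interpolate the two crossings. This is a generic hedged sketch with synthetic Gaussian data, not the author's GATE analysis.

```python
# Estimate FWHM of a 1-D line-source profile by interpolating the two
# half-maximum crossings.
import numpy as np

def fwhm(x, y):
    half = y.max() / 2.0
    above = np.where(y >= half)[0]          # indices above half maximum
    i, j = above[0], above[-1]
    left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

x = np.linspace(-30, 30, 121)               # detector coordinate, mm
y = np.exp(-x**2 / (2 * 3.0**2))            # Gaussian profile, sigma = 3 mm
print(fwhm(x, y))                           # ~7.06 mm = 2.355 * sigma
```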

  16. Determining the Position of Head and Shoulders in Neurological Practice with the use of Cameras

    Directory of Open Access Journals (Sweden)

    P. Kutílek

    2011-01-01

    Full Text Available The posture of the head and shoulders can be influenced negatively by many diseases of the nervous system, visual and vestibular systems. We have designed a system and a set of procedures for evaluating the inclination (roll), flexion (pitch) and rotation (yaw) of the head and the inclination (roll) and rotation (yaw) of the shoulders. A new computational algorithm allows non-invasive and non-contact head and shoulder position measurement using two cameras mounted opposite each other, and the displacement of the optical axis of the cameras is also corrected.

  17. A camera-based calculation of 99mTc-MAG3 clearance using the conjugate views method

    International Nuclear Information System (INIS)

    Hojabr, M.; Rajabi, H.; Eftekhari, M.

    2004-01-01

    Background: Measurement of absolute or differential renal function using radiotracers plays an important role in the clinical management of various renal diseases. Quantitative gamma camera approximations of renal clearance may potentially be as accurate as plasma clearance methods; however, some critical factors such as kidney depth and background counts are still troublesome in the use of this technique. In this study the conjugate-view method, along with a background correction technique, was used for the measurement of renal activity in 99mTc-MAG3 renography. Transmission data were used for attenuation correction and the source volume was considered for accurate background subtraction. Materials and methods: The study was performed in 35 adult patients referred to our department for conventional renography and ERPF calculation. Depending on the patient's weight, approximately 10-15 mCi of 99mTc-MAG3 was injected in the form of a sharp bolus, and 60 frames of 1 second followed by 174 frames of 10 seconds were acquired for each patient. Imaging was performed on a dual-head gamma camera (SOLUS; SunSpark10, ADAC Laboratories, Milpitas, CA); anterior and posterior views were acquired simultaneously. A LEHR collimator was used, and scatter was corrected for the emission and transmission images. The Buijs factor was applied to the background counts before background correction (Rutland-Patlak equation). Gamma camera clearance was calculated using renal uptake at 1-2, 1.5-2.5 and 2-3 min. The same procedure was repeated for the renograms obtained from the posterior projection and from the conjugate views. The plasma clearance was also calculated directly from three blood samples obtained at 40, 80 and 120 min after injection. Results: 99mTc-MAG3 clearance values from the direct sampling method were used as reference and compared to the results obtained from the renograms. The maximum correlation was found for the conjugate-view clearance at 2-3 min (R = 0.99, R² = 0.98, SE = 15). Conventional
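
    The conjugate-view principle used above has a compact closed form: the geometric mean of the attenuation-affected anterior and posterior counts cancels the unknown kidney depth once the total-body transmission is known. The sketch below illustrates this relation with made-up numbers; the ROI handling, Buijs factor and Rutland-Patlak steps in the paper are not reproduced.

```python
# Conjugate-view estimate: if I_ant = A*exp(-mu*d) and I_post = A*exp(-mu*(L-d)),
# then sqrt(I_ant * I_post) = A * sqrt(transmission), independent of depth d.
import numpy as np

def conjugate_view_counts(ant, post, bkg_ant, bkg_post, transmission):
    """Depth-independent organ counts from opposed (conjugate) views.

    ant, post    : organ ROI counts in the anterior/posterior views
    bkg_*        : background counts scaled to the ROI area
    transmission : I/I0 through the whole body at the organ location
    """
    a, p = ant - bkg_ant, post - bkg_post
    return np.sqrt(a * p) / np.sqrt(transmission)

print(conjugate_view_counts(ant=5200.0, post=3900.0,
                            bkg_ant=800.0, bkg_post=700.0,
                            transmission=0.18))
```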

  18. STS-37 Breakfast / Ingress / Launch & ISO Camera Views

    Science.gov (United States)

    1991-01-01

    The primary objective of the STS-37 mission was to deploy the Gamma Ray Observatory. The mission was launched at 9:22:44 am on April 5, 1991, onboard the space shuttle Atlantis. The mission was led by Commander Steven Nagel. The crew was Pilot Kenneth Cameron and Mission Specialists Jerry Ross, Jay Apt, and Linda Godwin. This videotape shows the crew having breakfast on the launch day, with the narrator introducing them. It then shows the crew's final preparations and the entry into the shuttle, while the narrator gives information about each of the crew members. The countdown and launch is shown including the shuttle separation from the solid rocket boosters. The launch is reshown from 17 different camera views. Some of the other camera views were in black and white.

  19. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, different companies worldwide produce medical equipment for partial or total gamma camera modernization. The present work demonstrates the possibility of substituting almost the entire signal-processing electronics inside a gamma camera detector head with a digitizer PCI card. This card includes four 12-bit analog-to-digital converters with a speed of 50 MHz. It was installed in a PC and controlled through software developed in LabVIEW. In addition, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). A new electronic design was also added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for the individual photomultiplier tubes. Images obtained by measurement of a 99mTc point radioactive source using the modernized camera head demonstrate its overall performance. The system was developed and tested on an old ORBITER II SIEMENS GAMMASONIC gamma camera at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project, supported by the National Program PNOULU and the IAEA. (Author)

  20. Single camera multi-view anthropometric measurement of human height and mid-upper arm circumference using linear regression.

    Science.gov (United States)

    Liu, Yingying; Sowmya, Arcot; Khamis, Heba

    2018-01-01

    Manually measured anthropometric quantities are used in many applications including human malnutrition assessment. Training is required to collect anthropometric measurements manually, which is not ideal in resource-constrained environments. Photogrammetric methods have been gaining attention in recent years, due to the availability and affordability of digital cameras. The primary goal is to demonstrate that height and mid-upper arm circumference (MUAC), both indicators of malnutrition, can be accurately estimated by applying linear regression to distance measurements from photographs of participants taken from five views, and to determine the optimal view combinations. A secondary goal is to observe the effect on estimate error of two approaches which reduce the complexity of the setup, the computational requirements and the expertise required of the observer. Thirty-one participants (11 female, 20 male; 18-37 years) were photographed from five views. Distances were computed using both camera calibration and reference object techniques from manually annotated photos. To estimate height, linear regression was applied to the distance between the top of the participant's head and the floor, as well as to the height of a bounding box enclosing the participant's silhouette, which eliminates the need to identify the floor. To estimate MUAC, linear regression was applied to the mid-upper arm width. Estimates were computed for all view combinations and performance was compared to other photogrammetric methods from the literature: the linear distance method for height, and shape models for MUAC. The mean absolute difference (MAD) between the linear regression estimates and manual measurements was smaller compared to other methods. For the optimal view combinations (smallest MAD), the technical error of measurement and coefficient of reliability also indicate that the linear regression methods are more reliable. The optimal view combination was the front and side views. When estimating height by linear
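
    The regression step is an ordinary least-squares fit of one photo-derived distance to the manual measurement. Below is a minimal numpy sketch with synthetic data; the paper's annotation and calibration steps are assumed already done.

```python
# Fit height = a * photo_distance + b by least squares, then report the mean
# absolute difference (MAD) against ground truth, as in the evaluation above.
import numpy as np

rng = np.random.default_rng(1)
photo_dist = rng.uniform(1.4, 2.0, size=31)       # photo-derived distance, m
height = 0.97 * photo_dist + 0.05 + rng.normal(0, 0.01, size=31)

A = np.column_stack([photo_dist, np.ones_like(photo_dist)])
(a, b), *_ = np.linalg.lstsq(A, height, rcond=None)

pred = a * photo_dist + b
print("MAD: %.4f m" % np.mean(np.abs(pred - height)))
```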

  1. JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.

    Science.gov (United States)

    Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun

    2017-03-01

    Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.

  2. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  3. Driver head pose tracking with thermal camera

    Science.gov (United States)

    Bole, S.; Fournier, C.; Lavergne, C.; Druart, G.; Lépine, T.

    2016-09-01

    Head pose can be seen as a coarse estimation of gaze direction. In the automotive industry, knowledge about gaze direction could optimize Human-Machine Interfaces (HMI) and Advanced Driver Assistance Systems (ADAS). Pose estimation systems are often camera-based when applications have to be contactless. In this paper, we explore uncooled thermal imagery (8-14 μm) for its intrinsic night vision capabilities and for its invariance to lighting variations. Two methods are implemented and compared, both aided by a 3D model of the head. The 3D model, mapped with thermal texture, allows a base of 2D projected models to be synthesized, differently oriented and labeled in yaw and pitch. The first method is based on keypoints. Keypoints of the models are matched with those of the query image; these sets of matchings, aided by the 3D shape of the model, allow the 3D pose to be estimated. The second method is a global appearance approach: among all 2D models of the base, the algorithm searches for the one closest to the query image using a weighted least-squares difference.
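
    The second (global appearance) method reduces to a nearest-template search. Here is a hedged sketch with placeholder data; the paper's weighting scheme and template base are assumptions for illustration.

```python
# Pick the pose label of the synthesized 2-D model closest to the query image
# under a weighted least-squares difference.
import numpy as np

def closest_pose(query, models, poses, weights):
    """models: (N, H, W) pose-labelled templates; poses: (N, 2) yaw/pitch."""
    errs = [np.sum(weights * (query - m) ** 2) for m in models]
    return poses[int(np.argmin(errs))]

rng = np.random.default_rng(5)
models = rng.random((50, 32, 32))                  # synthetic template base
poses = rng.uniform(-60, 60, size=(50, 2))         # yaw/pitch labels, deg
weights = np.ones((32, 32))                        # uniform weighting
print(closest_pose(models[17], models, poses, weights))  # recovers poses[17]
```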

  4. A procedure for generating quantitative 3-D camera views of tokamak divertors

    International Nuclear Information System (INIS)

    Edmonds, P.H.; Medley, S.S.

    1996-05-01

    A procedure is described for precision modeling of the views for imaging diagnostics monitoring tokamak internal components, particularly high heat flux divertor components. These models are required to enable predictions of resolution and viewing angle for the available viewing locations. Because of the oblique views expected for slot divertors, fully 3-D perspective imaging is required. A suite of matched 3-D CAD, graphics and animation applications is used to provide a fast and flexible technique for reproducing these views. An analytic calculation of the resolution and viewing incidence angle is developed to validate the results of the modeling procedures. The calculation is applicable to any viewed surface describable with a coordinate array. The Tokamak Physics Experiment (TPX) diagnostics for infrared viewing are used as an example to demonstrate the implementation of the tools. For the TPX experiment, the available locations are severely constrained by access limitations, and the resulting images are marginal in both resolution and viewing incidence angle. Full coverage of the divertor is possible if an array of cameras is installed at 45 degree toroidal intervals. Two poloidal locations are required in order to view both the upper and lower divertors. The procedures described here provide a complete design tool for in-vessel viewing, both for camera location and for identification of viewed surfaces. Additionally, these same tools can be used for the interpretation of the actual images obtained by the actual diagnostic.

  5. A multi-camera system for real-time pose estimation

    Science.gov (United States)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates upon the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.

  6. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
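
    The mirror angle agrees with the standard two-mirror (kaleidoscope) relation; the derivation below is our inference for illustration, not stated in the abstract:

```latex
% Two planar mirrors inclined at angle \theta partition the scene into
% n = 360^\circ/\theta sectors, so seven simultaneous views require:
n\,\theta = 360^\circ
\quad\Longrightarrow\quad
\theta = \frac{360^\circ}{7} \approx 51.4^\circ
```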

  7. Determination of kidney function with 99mTc-DTPA renography using a dual-head camera

    DEFF Research Database (Denmark)

    Madsen, Claus J; Møller, Michael L; Zerahn, Bo

    2013-01-01

    Single-head gamma camera renography has been used for decades to estimate kidney function. An estimate of the glomerular filtration rate (GFR) can be obtained using Tc-diethylenetriaminepentaacetic acid (Tc-DTPA). However, because of differing attenuation, an error is introduced when the kidney...

  8. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    Science.gov (United States)

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304

  9. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng-Fei Wu

    2017-06-01

    Full Text Available Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.

  10. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that the panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitching methods that make images look realistic only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
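
    The vignetting compensation described is, in essence, a flat-field correction: divide each raw frame by a normalized flood image recorded under uniform illumination. Below is a hedged numpy sketch with synthetic data, not the authors' calibration pipeline.

```python
# Flat-field (vignetting) correction: gain map from a flood exposure,
# dark-current subtraction, then per-pixel normalization.
import numpy as np

def flat_field_correct(raw, flood, dark):
    flood_net = flood - dark                   # remove dark current
    gain = flood_net / flood_net.mean()        # normalized per-pixel gain
    return (raw - dark) / np.clip(gain, 1e-6, None)

h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h * w)
vignette = 1.0 - 0.4 * r2                      # synthetic radial falloff
dark = np.full((h, w), 2.0)
raw = 100.0 * vignette + dark                  # flat scene seen through lens
flood = 200.0 * vignette + dark                # integrating-sphere exposure

print(flat_field_correct(raw, flood, dark).std())   # ~0: flat after correction
```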

  11. Shared Gaussian Process Latent Variable Model for Multi-view Facial Expression Recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    Facial-expression data often appear in multiple views either due to head-movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single

  12. What about getting physiological information into dynamic gamma camera studies

    International Nuclear Information System (INIS)

    Kiuru, A.; Nickles, R. J.; Holden, J. E.; Polcyn, R. E.

    1976-01-01

    A general technique has been developed for the multiplexing of time-dependent analog signals into the individual frames of a gamma camera dynamic function study. A pulse train, frequency-modulated by the physiological signal, is capacitively coupled to the preamplifier servicing any one of the outer phototubes of the camera head. These negative tail pulses imitate photoevents occurring at a point outside of the camera field of view, chosen to occupy a data cell in an unused corner of the computer-stored square image. By defining a region of interest around this cell, the resulting time-activity curve displays the physiological variable in temporal synchrony with the radiotracer distribution. (author)
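
    The multiplexing idea is easy to simulate: the frequency of the injected pulses tracks the physiological signal, so the counts accumulated in the reserved corner cell of each frame recover the signal as an ordinary time-activity curve. The following numpy sketch is our illustration of the principle, not the original electronics.

```python
# Encode a slow physiological signal as a frame-by-frame count rate and
# recover it from the reserved "corner pixel" counts.
import numpy as np

frame_s = 0.5                                   # frame duration, s
t = np.arange(0, 60, frame_s)
signal = 1.0 + 0.5 * np.sin(2 * np.pi * 0.25 * t)   # e.g. respiration, a.u.

base_rate = 200.0                               # pulses/s at signal = 1
rng = np.random.default_rng(2)
counts = rng.poisson(base_rate * signal * frame_s)  # corner-cell counts/frame

recovered = counts / (base_rate * frame_s)      # ROI "time-activity curve"
print(np.corrcoef(signal, recovered)[0, 1])     # close to 1
```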

  13. Development of a High Sensitivity Digital Cerenkov Viewing Device. Prototype Digital Cerenkov Viewing Device. Field test in Sweden

    International Nuclear Information System (INIS)

    Chen, J.D.; Gerwing, A.F.; Lewis, P.D.; Larsson, M.; Jansson, K.; Lindberg, B.; Sundkvist, E.; Ohlsson, M.

    2002-05-01

    The Swedish and Canadian Safeguards Support Programs have developed a prototype Digital Cerenkov Viewing Device (DCVD) to verify long-cooled spent fuel. The instrument consists of a camera system and a custom portable computer equipped with a liquid crystal display and a wearable heads-up display. The camera was coupled to a hardware user interface (HUI) and was operated with a computer program designed to image spent fuel and store the images. Measurements were taken at the CLAB facility on pressurized-water reactor fuel and non-fuel assemblies, a number of boiling-water reactor fuel assemblies, and long-cooled Aagesta fuel assemblies. The camera head attached to the HUI, a battery-operated computer carried in a backpack and the heads-up display were field tested for portability. The ergonomics of this system are presented in the report. For the examination of long-cooled spent fuel, the camera head was mounted on a bracket that rested on the railing of a moving bridge. The DCVD instrument is approximately 100 times more sensitive than the Mark IVe CVD. The oldest fuel with the lowest burnup at the CLAB facility was positively verified. The measurement capability of this instrument greatly exceeds the verification criteria of 10,000 MWd/t U and 40 years of cooling.

  14. Value of coincidence gamma camera PET for diagnosing head and neck tumors: functional imaging and image coregistration

    International Nuclear Information System (INIS)

    Dresel, S.; Brinkbaeumer, K.; Schmid, R.; Hahn, K.

    2001-01-01

    54 patients suffering from head and neck tumors (30 m, 24 f, age: 32-67 years) were examined using dedicated PET and coincidence gamma camera PET after injection of 185-350 MBq [18F]FDG. Examinations were carried out on the dedicated PET first (Siemens ECAT Exact HR+), followed by a scan on the coincidence gamma camera PET (Picker Prism 2000 XP-PCD, Marconi Axis g-PET 2 AZ). Dedicated PET was acquired in 3D mode; coincidence gamma camera PET was performed in list mode using an axial filter. Reconstruction of data was performed iteratively for both dedicated PET and coincidence gamma camera PET. All patients received a CT scan in multislice technique (Siemens Somatom Plus 4, Marconi MX 8000). Image coregistration was performed on an Odyssey workstation (Marconi). All findings were verified by the gold standard, histology, or, in the case of negative histology, by follow-up. Results: Using dedicated PET the primary or recurrent lesion was correctly diagnosed in 47/48 patients, using coincidence gamma camera PET in 46/48 patients and using CT in 25/48 patients. Metastatic disease in cervical lymph nodes was diagnosed in 17/18 patients with dedicated PET, in 16/18 patients with coincidence gamma camera PET and in 15/18 with CT. False-positive results with regard to lymph node metastasis were seen in one patient each for dedicated PET and hybrid PET, and in 18 patients for CT. In a total of 11 patients, previously unknown metastatic lesions were seen elsewhere in the body with dedicated PET and with coincidence gamma camera PET (lung: n = 7, bone: n = 3, liver: n = 1). Additional malignant disease other than the head and neck tumor was found in 4 patients. (orig.) [de

  15. The Theatricality of the Punctum: Re-Viewing Camera Lucida

    Directory of Open Access Journals (Sweden)

    Harry Robert Wilson

    2017-06-01

    Full Text Available I first encountered Roland Barthes's Camera Lucida (1980) in 2012 when I was developing a performance on falling and photography. Since then I have re-encountered Barthes's book annually as part of my practice-as-research PhD project on the relationships between performance and photography. This research project seeks to make performance work in response to Barthes's book – to practice with Barthes in an exploration of theatricality, materiality and affect. This photo-essay weaves critical discourse with performance documentation to explore my relationship to Barthes's book. Responding to Michael Fried's claim that Barthes's Camera Lucida is an exercise in "antitheatrical critical thought" (Fried 2008, 98), the essay seeks to re-view debates on theatricality and anti-theatricality in and around Camera Lucida. Specifically, by exploring Barthes's conceptualisation of the pose I discuss how performance practice might re-theatricalise the punctum and challenge a supposed antitheatricalism in Barthes's text. Additionally, I argue for Barthes's book as an example of philosophy as performance and for my own work as an instance of performance philosophy.

  16. A data acquisition system for coincidence imaging using a conventional dual head gamma camera

    Science.gov (United States)

    Lewellen, T. K.; Miyaoka, R. S.; Jansen, F.; Kaplan, M. S.

    1997-06-01

    A low cost data acquisition system (DAS) was developed to acquire coincidence data from an unmodified General Electric Maxxus dual head scintillation camera. A high impedance pick-off circuit provides position and energy signals to the DAS without interfering with normal camera operation. The signals are pulse-clipped to reduce pileup effects. Coincidence is determined with fast timing signals derived from constant fraction discriminators. A charge-integrating FERA 16 channel ADC feeds position and energy data to two CAMAC FERA memories operated as ping-pong buffers. A Macintosh PowerPC running LabVIEW controls the system and reads the CAMAC memories. A CAMAC 12-channel scaler records singles and coincidence rate data. The system dead-time is approximately 10% at a coincidence rate of 4.0 kHz.

  17. A data acquisition system for coincidence imaging using a conventional dual head gamma camera

    International Nuclear Information System (INIS)

    Lewellen, T.K.; Miyaoka, R.S.; Kaplan, M.S.

    1996-01-01

    A low cost data acquisition system (DAS) was developed to acquire coincidence data from an unmodified General Electric Maxxus dual head scintillation camera. A high impedance pick-off circuit provides position and energy signals to the DAS without interfering with normal camera operation. The signals are pulse-clipped to reduce pileup effects. Coincidence is determined with fast timing signals derived from constant fraction discriminators. A charge-integrating FERA 16 channel ADC feeds position and energy data to two CAMAC FERA memories operated as ping-pong buffers. A Macintosh PowerPC running LabVIEW controls the system and reads the CAMAC memories. A CAMAC 12-channel scaler records singles and coincidence rate data. The system dead-time is approximately 10% at a coincidence rate of 4.0 kHz.

  18. Rapid Pedestrian Detection with Vertical-view Camera

    Institute of Scientific and Technical Information of China (English)

    Tang Chunhui (唐春晖); Chen Qijun (陈启军)

    2012-01-01

    A novel and rapid method for pedestrian detection in monocular overhead (vertical-view) passenger-flow images is proposed. The method detects pedestrians from features of the head, or head and shoulders, as seen from above. Exploiting the spatial distribution characteristics of a pedestrian's head in the overhead view (left-right symmetry, a left-middle-right structure, and largely identical upper and lower halves), these features are detected by using integral images to compute and compare the summed gray values of rectangular sub-blocks within the target region. In addition, the uniform color of the head region is fused in as a further cue to decide whether the target is a pedestrian. Experiments show that the detection method is fast and effective.
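
    The integral-image primitive that makes this detection fast is worth spelling out: once the summed-area table is built, any rectangular gray-level sum costs four lookups, so the left/right and upper/lower sub-block comparisons are O(1) per candidate window. A hedged numpy sketch follows; the window coordinates are arbitrary examples.

```python
# Summed-area table and O(1) rectangle sums, the building block of the
# overhead head-symmetry test described above.
import numpy as np

def integral_image(img):
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)        # zero-padded cumulative sums
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image in O(1)."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.random.default_rng(3).integers(0, 256, (240, 320))
ii = integral_image(img)

r0, c0, r1, c1 = 100, 140, 160, 200             # candidate head window
mid = (c0 + c1) // 2
left = rect_sum(ii, r0, c0, r1, mid)            # left half gray-level sum
right = rect_sum(ii, r0, mid, r1, c1)           # right half gray-level sum
print(abs(left - right) / max(left, right))     # small for symmetric heads
```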

  19. Electron-tracking Compton gamma-ray camera for small animal and phantom imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kabuki, Shigeto, E-mail: kabuki@cr.scphys.kyoto-u.ac.j [Department of Physics, Graduate School of Science, Kyoto University, Kyoto 606-8502 (Japan); Kimura, Hiroyuki; Amano, Hiroo [Department of Patho-functional Bioanalysis, Graduate School of Pharmaceutical Sciences, Kyoto University, Kyoto 606-8501 (Japan); Nakamoto, Yuji [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Kyoto 606-8507 (Japan); Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki [Department of Physics, Graduate School of Science, Kyoto University, Kyoto 606-8502 (Japan); Kawashima, Hidekazu [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Kyoto 606-8507 (Japan); Ueda, Masashi [Radioisotopes Research Laboratory, Kyoto University Hospital, Kyoto 606-8507 (Japan); Okada, Tomohisa [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Kyoto 606-8507 (Japan); Kubo, Atsushi; Kunieda, Etsuo; Nakahara, Tadaki [Department of Radiology, Keio University School of Medicine, Tokyo 160-8582 (Japan); Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji [Application Development Office, Hitachi Medical Corporation, Chiba 277-0804 (Japan); Ogawa, Koichi [Department of Electronic Informatics, Faculty of Engineering, Hosei University, Tokyo 184-8584 (Japan)

    2010-11-01

    We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered with F-18-FDG. We also imaged the head and thyroid gland of mice using double tracers of F-18-FDG and I-131 ions.

  20. Rotation and direction judgment from visual images head-slaved in two and three degrees-of-freedom.

    Science.gov (United States)

    Adelstein, B D; Ellis, S R

    2000-03-01

    The contribution to spatial awareness of adding a roll degree-of-freedom (DOF) to telepresence camera platform yaw and pitch was examined in an experiment in which subjects judged the direction and rotation of stationary target markers in a remote scene. Subjects viewed the scene via head-slaved camera images in a head-mounted display. Elimination of the roll DOF affected rotation judgment, but only at extreme yaw and pitch combinations, and did not affect azimuth and elevation judgment. Systematic azimuth overshoot occurred regardless of roll condition. Observed rotation misjudgments are explained by kinematic models for eye-head direction of gaze.

  1. PET with a dual-head coincidence gamma camera in head and neck cancer: A comparison with computed tomography and dedicated PET

    International Nuclear Information System (INIS)

    Zimny, M.

    2001-01-01

    Positron emission tomography with 18F-fluorodeoxyglucose (FDG PET) is a promising imaging tool for detecting and staging primary or recurrent head and neck cancer. The aim of this study was to evaluate a dual-head gamma camera modified for coincidence detection (KGK-PET) in comparison to computed tomography (CT) and dedicated PET (dPET). 50 patients with known or suspected primary or recurrent head and neck cancer were enrolled. 32 patients underwent KGK-PET and dPET using a one-day protocol. The sensitivity for the detection of primary/recurrent head and neck cancer was 80% for KGK-PET and 54% for CT; specificity was 73% and 82%, respectively. The sensitivity and specificity for the detection of lymph node metastases based on neck sides with KGK-PET were 71% (CT: 65%) and 88% (CT: 89%), respectively. In comparison to dPET, KGK-PET revealed concordant results in 32/32 patients with respect to primary tumor/recurrent disease and in 55/60 evaluated neck sides. All involved neck sides that were missed by KGK-PET were also negative with dPET. These results indicate that in patients with head and neck cancer, KGK-PET reveals information that is similar to dPET and complementary to CT. (orig.) [de

  2. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    Science.gov (United States)

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple, fast algorithm known as the convolution technique is implemented: first the projection data are Fourier transformed; then an original filter designed to optimize resolution and noise suppression is applied; finally, the inverse transform of the latter operation is back-projected. This program, which can also take into account attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
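
    The convolution technique named above is what is now usually called filtered back-projection. Below is a compact numpy sketch of the pipeline under simplifying assumptions (parallel-beam geometry, plain ramp filter, no attenuation term); the paper's custom filter is not reproduced.

```python
# Filtered back-projection: ramp-filter each projection in the frequency
# domain, then smear (back-project) it across the image for its view angle.
import numpy as np

def fbp(sinogram, angles_deg):
    """sinogram: (n_angles, n_det) parallel projections -> (n_det, n_det)."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))                       # |f| filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                   axis=1))
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        s = X * np.cos(ang) + Y * np.sin(ang) + n_det / 2      # detector coord
        image += np.interp(s, np.arange(n_det), proj, left=0.0, right=0.0)
    return image * np.pi / len(angles_deg)

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = np.zeros((60, 128)); sino[:, 60:68] = 1.0   # crude centred phantom
print(fbp(sino, angles).max())
```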

  3. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    Science.gov (United States)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 to 10 m, depending on field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
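
    The CloudCompare step in this workflow, deviation of each photogrammetric point from the TLS reference, amounts to a nearest-neighbour distance query. Here is a hedged scipy sketch with random stand-in clouds.

```python
# Cloud-to-cloud deviation: distance from each photogrammetric point to its
# nearest neighbour in the TLS reference cloud.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
tls_cloud = rng.uniform(0, 1, size=(20000, 3))          # reference (TLS)
photo_cloud = tls_cloud[:5000] + rng.normal(0, 0.004, size=(5000, 3))

dists, _ = cKDTree(tls_cloud).query(photo_cloud)        # per-point deviation
print("mean %.4f m, 95th percentile %.4f m"
      % (dists.mean(), np.percentile(dists, 95)))
```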

  4. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    Directory of Open Access Journals (Sweden)

    K. Thoeni

    2014-06-01

    Full Text Available This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 to 10 m, depending on field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.

  5. Improved iris localization by using wide and narrow field of view cameras for iris recognition

    Science.gov (United States)

    Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung

    2013-10-01

    Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between a user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which can inevitably decrease both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is novel compared with previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data of the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple matrices of the transformation according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
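
    The Z-distance step above follows a simple pinhole relation: the iris's apparent size shrinks linearly with distance. A minimal sketch under assumed optics parameters (the lens, pixel pitch, and example values are illustrative, not the paper's device specs):

      def estimate_eye_distance(iris_diameter_px, focal_length_mm, pixel_pitch_mm,
                                real_iris_diameter_mm=11.7):
          """Pinhole-model estimate of the eye-to-camera Z distance.
          real_iris_diameter_mm: anthropometric human iris diameter (commonly
          quoted near 11-12 mm; treated here as a fixed assumption)."""
          iris_on_sensor_mm = iris_diameter_px * pixel_pitch_mm
          return focal_length_mm * real_iris_diameter_mm / iris_on_sensor_mm

      # A 60 px iris seen through a 4 mm lens with 2 um pixels -> ~390 mm
      print(estimate_eye_distance(60, 4.0, 0.002))

    The estimated Z would then select which precomputed WFOV-to-NFOV transformation matrix to apply.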

  6. Markerless Augmented Reality via Stereo Video See-Through Head-Mounted Display Device

    Directory of Open Access Journals (Sweden)

    Chung-Hung Hsieh

    2015-01-01

    Full Text Available Conventionally, camera localization for augmented reality (AR) relies on detecting a known pattern within the captured images. In this study, a markerless AR scheme has been designed based on a Stereo Video See-Through Head-Mounted Display (HMD) device. The proposed markerless AR scheme can be utilized for medical applications such as training, telementoring, or preoperative explanation. Firstly, a virtual model for AR visualization is aligned to the target in physical space by an improved Iterative Closest Point (ICP)-based surface registration algorithm, with the target surface structure reconstructed by a stereo camera pair; then, a markerless AR camera localization method is designed based on the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm and the Random Sample Consensus (RANSAC) correction algorithm. Our AR camera localization method is shown to perform better than traditional marker-based and sensor-based AR approaches. The demonstration system was evaluated with a plastic dummy head and the display result is satisfactory for multiple-view observation.
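
    A minimal sketch of the KLT-plus-RANSAC localization idea using OpenCV; a homography stands in here for the full AR pose update, and the frame variables are placeholders:

      import cv2
      import numpy as np

      def track_and_filter(prev_gray, curr_gray):
          """Track corners between frames, then keep only RANSAC-consistent tracks."""
          pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                         qualityLevel=0.01, minDistance=7)
          pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
          good0 = pts0[status.ravel() == 1].reshape(-1, 2)
          good1 = pts1[status.ravel() == 1].reshape(-1, 2)
          # RANSAC rejects tracks inconsistent with the dominant motion model
          H, inliers = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
          return H, good1[inliers.ravel() == 1]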

  7. A SPECT demonstrator—revival of a gamma camera

    Science.gov (United States)

    Valastyán, I.; Kerek, A.; Molnár, J.; Novák, D.; Végh, J.; Emri, M.; Trón, L.

    2006-07-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only parts of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied.
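
    The 2D-projections-to-3D step that the demonstrator teaches is tomographic reconstruction; a minimal simulation of that chain using scikit-image's Radon tools as a stand-in for the camera acquisition (not the demonstrator's actual software):

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon

      phantom = shepp_logan_phantom()
      angles = np.linspace(0.0, 180.0, 60, endpoint=False)
      sinogram = radon(phantom, theta=angles)          # the 2D projections
      reconstruction = iradon(sinogram, theta=angles)  # filtered back-projection
      print(reconstruction.shape)

    Shifting the simulated rotation axis before reconstruction reproduces the kind of misalignment artefact the demonstrator is used to illustrate.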

  8. A SPECT demonstrator-revival of a gamma camera

    International Nuclear Information System (INIS)

    Valastyan, I.; Kerek, A.; Molnar, J.; Novak, D.; Vegh, J.; Emri, M.; Tron, L.

    2006-01-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only parts of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied.

  9. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping fields of view (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  10. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities, and operates on the four head position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. This system includes analog processing of position signals from the camera, digitization, and subsequent processing of the energy signal in a multichannel analyzer, sending data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and programmable gate array. (Author)
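
    The four head position signals are the classic Anger-camera corner signals; a minimal sketch of the arithmetic that turns them into an energy channel and a position estimate (signal names and scaling are illustrative assumptions, not this analyzer's design):

      def anger_position(xp, xm, yp, ym):
          """Anger logic: four corner signals -> position and energy sum."""
          energy = xp + xm + yp + ym   # total signal feeds the multichannel analyzer
          x = (xp - xm) / energy       # normalised X coordinate
          y = (yp - ym) / energy       # normalised Y coordinate
          return x, y, energy

      print(anger_position(0.30, 0.20, 0.35, 0.15))   # -> (0.1, 0.2, 1.0)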

  11. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    Science.gov (United States)

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

    This study assessed a new camera-based microswitch technology that did not require the use of color marks on the participants' faces. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small, lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a CPU using a 2-GHz clock, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++ language. The new technology was satisfactorily used with both children. Large increases in their responding were observed during the intervention periods (i.e. when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.

  12. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  13. Mars Orbiter Camera Views the 'Face on Mars' - Best View from Viking

    Science.gov (United States)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility at 9:15 AM and the raw image immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS. The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8°N, 9.6°W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. This Viking Orbiter image is one of the best Viking pictures of the area Cydonia where the 'Face' is located. Marked on the image are the 'footprint' of the high resolution (narrow angle) Mars Orbiter Camera image and the area seen in enlarged views (dashed box). See PIA01440-1442 for these images in raw and processed form. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  14. Usefulness of FDG PET for nodal staging using a dual head coincidence camera in patients with lung cancer

    International Nuclear Information System (INIS)

    Yoon, Seok Nam; Park, Chan H.; Lee, Myoung Hoon; Hwang, Kyung Hoon; Hwang, Kyung Hoon

    2001-01-01

    Staging of lung cancer requires an accurate evaluation of the mediastinum. Positron imaging with dual head cameras may not be as sensitive as dedicated PET. Therefore, the purpose of the study was to evaluate the usefulness of F-18 FDG coincidence (CoDe) PET using a dual-head gamma camera in the nodal staging of lung cancer. CoDe-PET studies were performed in 51 patients with histologically proven non-small cell lung cancer. CoDe-PET began 60 minutes after the injection of 111-185 MBq of F-18 FDG. CoDe-PET was performed using a dual-head gamma camera equipped with coincidence detection circuitry (Elscint Varicam, Haifa, Israel). No attenuation correction was made and reconstruction was done using filtered back-projection. Surgery was performed in 49 patients. CoDe-PET studies were evaluated visually, and any focal increased uptake was considered abnormal. The nodal staging of CoDe-PET and of CT was compared with the nodal staging of surgical (49) and mediastinoscopic (2) pathology. All primary lung lesions were hypermetabolic and easily visualized. Compared with surgical nodal staging as a gold standard, false positives occurred in 13 CoDe-PET and 17 CT studies and false negatives occurred in 5 CoDe-PET and 4 CT studies. Assessment of lymph node involvement by CoDe-PET yielded a sensitivity of 67%, specificity of 64% and accuracy of 65%. CT revealed a sensitivity of 73%, specificity of 53% and accuracy of 59% in the assessment of lymph node involvement. Detection of primary lesions was 100% but nodal staging was suboptimal for routine clinical use. This is mainly due to the limited resolution of our system.
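
    The reported rates follow from standard confusion-matrix arithmetic; a minimal sketch, with the TP/TN counts back-solved here as assumptions so the published CoDe-PET figures come out (the abstract states only the false positives and false negatives):

      def diagnostic_metrics(tp, fn, tn, fp):
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + fn + tn + fp)
          return sensitivity, specificity, accuracy

      # FP=13 and FN=5 are from the abstract; TP=10 and TN=23 are inferred so
      # that the 51 studies and the 67%/64%/65% figures are reproduced.
      print(diagnostic_metrics(tp=10, fn=5, tn=23, fp=13))
      # -> (0.667, 0.639, 0.647)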

  15. Taking it all in : special camera films in 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2006-07-15

    Details of a 360-degree digital camera designed by Immersive Media Telemmersion were presented. The camera has been employed extensively in the United States for homeland security and intelligence-gathering purposes. In Canada, the cameras are now being used by the oil and gas industry. The camera has 11 lenses pointing in all directions and generates high resolution movies that can be analyzed frame-by-frame from every angle. Global positioning satellite data can be gathered during filming so that operators can pinpoint any location. The 11 video streams use more than 100 million pixels per second. After filming, the system displays synchronized, high-resolution video streams, capturing a full motion spherical world complete with directional sound. It can be viewed on a computer monitor, video screen, or head-mounted display. Pembina Pipeline Corporation recently used the Telemmersion system to plot a proposed pipeline route between Alberta's Athabasca region and Edmonton. It was estimated that more than $50,000 was saved by using the camera. The resulting video has been viewed by Pembina's engineering, environmental and geotechnical groups who were able to accurately note the route's river crossings. The cameras were also used to estimate timber salvage. Footage was then given to the operations group, to help staff familiarize themselves with the terrain, the proposed route's right-of-way, and the number of water crossings and access points. Oil and gas operators have also used the equipment on a recently acquired block of land to select well sites. 4 figs.

  16. A head-mounted display system for augmented reality: Initial evaluation for interventional MRI

    International Nuclear Information System (INIS)

    Wendt, M.; Wacker, F.K.

    2003-01-01

    Purpose: To discuss the technical details of a head mounted display with an augmented reality (AR) system and to describe a first pre-clinical evaluation in interventional MRI. Method: The AR system consists of a video-see-through head mounted display (HMD), mounted with a mini video camera for tracking and a stereo pair of mini cameras that capture live images of the scene. The live video view of the phantom/patient is augmented with graphical representations of anatomical structures from MRI image data and is displayed on the HMD. The application of the AR system with interventional MRI was tested using an MRI data set of the head and a head phantom. Results: The HMD enables the user to move around and observe the scene dynamically from various viewpoints. Within a short time the natural hand-eye coordination can easily be adapted to the slightly different view. The 3D perception is based on stereo and kinetic depth cues. A circular target with an area of 0.5 square centimeter was hit in 19 of 20 attempts. In a first evaluation the MRI image data augmented reality scene of a head phantom allowed good planning and precise simulation of a puncture. Conclusion: The HMD in combination with AR provides direct, intuitive guidance for interventional MR procedures. (orig.)

  17. Immersive vision assisted remote teleoperation using head mounted displays

    International Nuclear Information System (INIS)

    Vakkapatla, Veerendrababu; Singh, Ashutosh Pratap; Rakesh, V.; Rajagopalan, C.; Murugan, S.; Sai Baba, M.

    2016-01-01

    Handling and inspection of irradiated material is inevitable in the nuclear industry. Hot cells are shielded radiation containment chambers equipped with master slave manipulators that facilitate remote handling. The existing methods, which use viewing windows and cameras to view the contents of the hot cell while manipulating radioactive elements, have problems such as optical distortion, limited teleoperation distance, and a limited field of view, all of which lead to inefficient operation. This paper presents a method of achieving immersive teleoperation to operate the master slave manipulator in hot cells by exploiting the advanced tracking and display capabilities of head mounted display devices. (author)

  18. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  19. Using a smartphone as a tool to measure compensatory and anomalous head positions.

    Science.gov (United States)

    Farah, Michelle de Lima; Santinello, Murillo; Carvalho, Luis Eduardo Morato Rebouças de; Uesugui, Carlos Fumiaki; Barcellos, Ronaldo Boaventura

    2018-01-01

    To describe a new method for measuring anomalous head positions by using a cell phone. The photo rotation feature of the iPhone® PHOTOS application was used. With the patient seated on a chair, a horizontal stripe was fixed on the wall in the background and a sagittal stripe was fixed on the seat. Photographs were obtained in the following views: front view (photographs A and B; with the head tilted over one shoulder) and upper axial view (photographs C and D; viewing the forehead and nose) (A and C are without camera rotation, and B and D are with camera rotation). A blank sheet of paper with two straight lines making a 32-degree angle was also photographed. Thirty examiners were instructed to measure the rotation required to align the reference points with the orthogonal axes. In order to set benchmarks to be compared with the measurements obtained by the examiners, blue lines were digitally added to the front and upper view photographs. In the photograph of the sheet of paper (p=0.380 and α=5%), the observed values did not differ statistically from the known value of 32 degrees. Mean measurements were as follows: front view photograph A, 22.8 ± 2.77; front view B, 21.4 ± 1.61; upper view C, 19.6 ± 2.36; and upper view D, 20.1 ± 2.33 degrees. The mean difference in measurements for the front view photograph A was -1.88 (95% CI -2.88 to -0.88), front view B was -0.37 (95% CI -0.97 to 0.17), upper view C was 1.43 (95% CI 0.55 to 2.24), and upper view D was 1.87 (95% CI 1.02 to 2.77). The method used in this study for measuring anomalous head position is reproducible, with maximum variations for AHPs of 2.88 degrees around the X-axis and 2.77 degrees around the Y-axis.

  20. Using a smartphone as a tool to measure compensatory and anomalous head positions

    Directory of Open Access Journals (Sweden)

    Michelle de Lima Farah

    Full Text Available Purpose: To describe a new method for measuring anomalous head positions by using a cell phone. Methods: The photo rotation feature of the iPhone® PHOTOS application was used. With the patient seated on a chair, a horizontal stripe was fixed on the wall in the background and a sagittal stripe was fixed on the seat. Photographs were obtained in the following views: front view (photographs A and B; with the head tilted over one shoulder) and upper axial view (photographs C and D; viewing the forehead and nose) (A and C are without camera rotation, and B and D are with camera rotation). A blank sheet of paper with two straight lines making a 32-degree angle was also photographed. Thirty examiners were instructed to measure the rotation required to align the reference points with the orthogonal axes. In order to set benchmarks to be compared with the measurements obtained by the examiners, blue lines were digitally added to the front and upper view photographs. Results: In the photograph of the sheet of paper (p=0.380 and α=5%), the observed values did not differ statistically from the known value of 32 degrees. Mean measurements were as follows: front view photograph A, 22.8 ± 2.77; front view B, 21.4 ± 1.61; upper view C, 19.6 ± 2.36; and upper view D, 20.1 ± 2.33 degrees. The mean difference in measurements for the front view photograph A was -1.88 (95% CI -2.88 to -0.88), front view B was -0.37 (95% CI -0.97 to 0.17), upper view C was 1.43 (95% CI 0.55 to 2.24), and upper view D was 1.87 (95% CI 1.02 to 2.77). Conclusion: The method used in this study for measuring anomalous head position is reproducible, with maximum variations for AHPs of 2.88 degrees around the X-axis and 2.77 degrees around the Y-axis.

  1. A specially designed cut-off gamma camera for high resolution SPECT of the brain

    International Nuclear Information System (INIS)

    Larsson, S.A.; Bergstrand, G.; Bergstedt, H.; Berg, J.; Flygare, O.; Schnell, P.O.; Anderson, N.; Lagergren, C.

    1984-01-01

    A modern gamma camera system for Single Photon Emission Computed Tomography (SPECT) has been modified in order to optimize examinations of the head. By cutting off a part of the detector housing at one edge, it has been possible to rotate the camera close to the skull while still covering the entire brain and the skull base. The minimum radius of rotation used was thereby reduced, on average, from 21.2 cm to 13.0 cm in examinations of 18 patients. In combination with an adjustment of the 64 x 64 acquisition matrix to a field of view of 26×26 cm², the spatial resolution improved from 18.6 mm (FWHM) to 12.6 ± 0.3 mm (FWHM) using the conventional LEGP collimator and to 10.4 ± 0.3 mm (FWHM) using the LEHR collimator. No other modification than a slight cut of the light guide was made in the internal construction of the camera. Thus, the physical properties of the detector head are not essentially changed from those of a non-modified unit. The improved spatial resolution of the cut-off camera SPECT system implies certain clinical advantages in studies of the brain, the cerebrospinal fluid (CSF) space and the skull base.
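
    The matrix adjustment above is simple sampling arithmetic; a quick check (the FWHM/3 sampling rule in the comments is a common nuclear-medicine guideline, not a figure from this record):

      # A 26 cm field of view digitised into a 64 x 64 matrix:
      fov_mm = 260.0
      matrix = 64
      pixel_mm = fov_mm / matrix
      print(f"pixel size: {pixel_mm:.2f} mm")   # ~4.06 mm
      # A common rule of thumb keeps the pixel below ~FWHM/3; for the reported
      # 10.4-12.6 mm system resolution that target is ~3.5-4.2 mm, so the
      # adjusted matrix samples the improved resolution reasonably well.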

  2. BrachyView: proof-of-principle of a novel in-body gamma camera for low dose-rate prostate brachytherapy.

    Science.gov (United States)

    Petasecca, M; Loo, K J; Safavi-Naeini, M; Han, Z; Metcalfe, P E; Meikle, S; Pospisil, S; Jakubek, J; Bucci, J A; Zaider, M; Lerch, M L F; Qi, Y; Rosenfeld, A B

    2013-04-01

    The conformity of the achieved dose distribution to the treatment plan strongly correlates with the accuracy of seed implantation in a prostate brachytherapy treatment procedure. Incorrect seed placement leads to both short and long term complications, including urethral and rectal toxicity. The authors present BrachyView, a novel concept of a fast intraoperative treatment planning system, to provide real-time seed placement information based on in-body gamma camera data. BrachyView combines the high spatial resolution of a pixellated silicon detector (Medipix2) with the volumetric information acquired by a transrectal ultrasound (TRUS). The two systems will be embedded in the same probe so as to provide anatomically correct seed positions for intraoperative planning and postimplant dosimetry. Dosimetric calculations are based on the TG-43 method using the real position of the seeds. The purpose of this paper is to demonstrate the feasibility of BrachyView using the Medipix2 pixel detector and a pinhole collimator to reconstruct the real-time 3D position of low dose-rate brachytherapy seeds in a phantom. BrachyView incorporates three Medipix2 detectors coupled to a multipinhole collimator. Three-dimensionally triangulated seed positions from multiple planar images are used to determine the seed placement in a PMMA prostate phantom in real time. MATLAB codes were used to test the reconstruction method and to optimize the device geometry. The results presented in this paper show a 3D seed position reconstruction accuracy in the range of 0.5-3 mm for seed-to-detector distances of 10-60 mm (Z direction), respectively. The BrachyView system also demonstrates a spatial resolution of 0.25 mm in the XY plane for sources at 10 mm distance from the Medipix2 detector plane, comparable to the theoretical value calculated for an equivalent gamma camera arrangement. The authors successfully demonstrated the capability of BrachyView for real-time imaging (using a 3 s
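
    A minimal sketch of the multi-view triangulation step: the least-squares point closest to the rays back-projected through each pinhole (the example geometry is illustrative, not the probe's):

      import numpy as np

      def triangulate_rays(origins, directions):
          """Least-squares 3D point closest to a set of rays.
          origins: (N, 3) pinhole positions; directions: (N, 3) ray vectors."""
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for o, d in zip(origins, directions):
              d = d / np.linalg.norm(d)
              P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
              A += P
              b += P @ o
          return np.linalg.solve(A, b)

      # Two illustrative rays that intersect at (0, 0, 30)
      o = np.array([[-5.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
      d = np.array([[5.0, 0.0, 30.0], [-5.0, 0.0, 30.0]])
      print(triangulate_rays(o, d))   # ~[0, 0, 30]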

  3. BrachyView: Proof-of-principle of a novel in-body gamma camera for low dose-rate prostate brachytherapy

    International Nuclear Information System (INIS)

    Petasecca, M.; Loo, K. J.; Safavi-Naeini, M.; Han, Z.; Metcalfe, P. E.; Lerch, M. L. F.; Qi, Y.; Rosenfeld, A. B.; Meikle, S.; Pospisil, S.; Jakubek, J.; Bucci, J. A.; Zaider, M.

    2013-01-01

    Purpose: The conformity of the achieved dose distribution to the treatment plan strongly correlates with the accuracy of seed implantation in a prostate brachytherapy treatment procedure. Incorrect seed placement leads to both short and long term complications, including urethral and rectal toxicity. The authors present BrachyView, a novel concept of a fast intraoperative treatment planning system, to provide real-time seed placement information based on in-body gamma camera data. BrachyView combines the high spatial resolution of a pixellated silicon detector (Medipix2) with the volumetric information acquired by a transrectal ultrasound (TRUS). The two systems will be embedded in the same probe so as to provide anatomically correct seed positions for intraoperative planning and postimplant dosimetry. Dosimetric calculations are based on the TG-43 method using the real position of the seeds. The purpose of this paper is to demonstrate the feasibility of BrachyView using the Medipix2 pixel detector and a pinhole collimator to reconstruct the real-time 3D position of low dose-rate brachytherapy seeds in a phantom. Methods: BrachyView incorporates three Medipix2 detectors coupled to a multipinhole collimator. Three-dimensionally triangulated seed positions from multiple planar images are used to determine the seed placement in a PMMA prostate phantom in real time. MATLAB codes were used to test the reconstruction method and to optimize the device geometry. Results: The results presented in this paper show a 3D seed position reconstruction accuracy in the range of 0.5–3 mm for seed-to-detector distances of 10–60 mm (Z direction), respectively. The BrachyView system also demonstrates a spatial resolution of 0.25 mm in the XY plane for sources at 10 mm distance from the Medipix2 detector plane, comparable to the theoretical value calculated for an equivalent gamma camera arrangement. The authors successfully demonstrated the capability of BrachyView for

  4. INFLUENCE OF THE VIEWING GEOMETRY WITHIN HYPERSPECTRAL IMAGES RETRIEVED FROM UAV SNAPSHOT CAMERAS

    OpenAIRE

    Aasen, Helge

    2016-01-01

    Hyperspectral data has great potential for vegetation parameter retrieval. However, due to angular effects resulting from different sun-surface-sensor geometries, objects might appear differently depending on the position of an object within the field of view of a sensor. Recently, lightweight snapshot cameras have been introduced, which capture hyperspectral information in two spatial and one spectral dimension and can be mounted on unmanned aerial vehicles. This study investigates th...

  5. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    Science.gov (United States)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or only narrowly overlapping FOVs in many applications, which poses a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
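
    A minimal sketch of the refinement step: a rigid transform between two camera frames found by Levenberg-Marquardt over point residuals (the point data and noise-free setup are illustrative, not the paper's targets):

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def residuals(x, pts_cam1, pts_cam2):
          """Residuals between transformed camera-1 points and camera-2 points.
          x packs a rotation vector (x[:3]) and a translation (x[3:])."""
          R = Rotation.from_rotvec(x[:3]).as_matrix()
          t = x[3:]
          return ((pts_cam1 @ R.T + t) - pts_cam2).ravel()

      rng = np.random.default_rng(1)
      pts1 = rng.uniform(-1, 1, (20, 3))
      true_R = Rotation.from_euler("xyz", [5, -3, 10], degrees=True)
      pts2 = pts1 @ true_R.as_matrix().T + np.array([0.1, -0.2, 0.5])

      sol = least_squares(residuals, x0=np.zeros(6), args=(pts1, pts2), method="lm")
      print(sol.x[3:])   # recovered translation ~ [0.1, -0.2, 0.5]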

  6. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically generate...

  7. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  8. Small Field of View Scintimammography Gamma Camera Integrated to a Stereotactic Core Biopsy Digital X-ray System

    Energy Technology Data Exchange (ETDEWEB)

    Andrew Weisenberger; Fernando Barbosa; T. D. Green; R. Hoefer; Cynthia Keppel; Brian Kross; Stanislaw Majewski; Vladimir Popov; Randolph Wojcik

    2002-10-01

    A small field of view gamma camera has been developed for integration with a commercial stereotactic core biopsy system. The goal is to develop and implement a dual-modality imaging system utilizing scintimammography and digital radiography to evaluate the reliability of scintimammography in predicting the malignancy of suspected breast lesions from conventional X-ray mammography. The scintimammography gamma camera is a custom-built mini gamma camera with an active area of 5.3 cm × 5.3 cm and is based on a 2 × 2 array of Hamamatsu R7600-C8 position-sensitive photomultiplier tubes. The spatial resolution of the gamma camera at the collimator surface is < 4 mm full-width at half-maximum, and its sensitivity is ~4000 Hz/mCi. The system is also capable of acquiring dynamic scintimammographic data to allow for dynamic uptake studies. Sample images of preliminary clinical results are presented to demonstrate the performance of the system.

  9. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang

    2016-11-16

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  10. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest growing consumer markets today is camera phones. During the past few years total volume has been growing fast, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality but in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels is affecting both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper trade-offs related to optics and their effects on image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  11. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    Science.gov (United States)

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
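
    The surface-matching step (rigidly aligning the photo-based head model to the MRI reconstruction) is classically solved with an SVD-based rigid fit; a minimal sketch assuming point correspondences are already established (the toolbox's actual pipeline is more involved):

      import numpy as np

      def kabsch_align(source, target):
          """Rigid (rotation + translation) fit of source points onto target points."""
          src_c = source - source.mean(axis=0)
          tgt_c = target - target.mean(axis=0)
          U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
          R = (U @ D @ Vt).T
          t = target.mean(axis=0) - R @ source.mean(axis=0)
          return R, t

    The RMS of the residuals target - (R @ source + t) plays the role of the electrode coregistration errors quoted above.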

  12. Edge turbulence measurement in Heliotron J using a combination of hybrid probe system and fast cameras

    International Nuclear Information System (INIS)

    Nishino, N.; Zang, L.; Takeuchi, M.; Mizuuchi, T.; Ohshima, S.; Kasajima, K.; Sha, M.; Mukai, K.; Lee, H.Y.; Nagasaki, K.; Okada, H.; Minami, T.; Kobayashi, S.; Yamamoto, S.; Konoshima, S.; Nakamura, Y.; Sano, F.

    2013-01-01

    The hybrid probe system (a combination of Langmuir probes and magnetic probes), fast camera and gas puffing system were installed at the same toroidal section to study edge plasma turbulence/fluctuation in Heliotron J, especially blobs (intermittent filaments). The fast camera views the location of the probe head, so that the probe system yields the time evolution of the turbulence/fluctuation while the camera images the spatial profile. Gas puffing at the same toroidal section was used to control the plasma density and to enable a simultaneous gas puff imaging technique. Using this combined system, a filamentary structure associated with magnetic fluctuation was found in Heliotron J for the first time. Another kind of fluctuation was also observed in a separate experiment. This combined measurement enables us to distinguish between MHD activity and electrostatic activity.

  13. Endoscopic Camera Control by Head Movements for Thoracic Surgery

    NARCIS (Netherlands)

    Reilink, Rob; de Bruin, Gart; Franken, M.C.J.; Mariani, Massimo A.; Misra, Sarthak; Stramigioli, Stefano

    2010-01-01

    In current video-assisted thoracic surgery, the endoscopic camera is operated by an assistant of the surgeon, which has several disadvantages. This paper describes a system which enables the surgeon to control the endoscopic camera without the help of an assistant. The system is controlled using

  14. Adaptive strategies of remote systems operators exposed to perturbed camera-viewing conditions

    Science.gov (United States)

    Stuart, Mark A.; Manahan, Meera K.; Bierschwale, John M.; Sampaio, Carlos E.; Legendre, A. J.

    1991-01-01

    This report describes a preliminary investigation of the use of perturbed visual feedback during the performance of simulated space-based remote manipulation tasks. The primary objective of this NASA evaluation was to determine to what extent operators exhibit adaptive strategies which allow them to perform these specific types of remote manipulation tasks more efficiently while exposed to perturbed visual feedback. A secondary objective of this evaluation was to establish a set of preliminary guidelines for enhancing remote manipulation performance and reducing the adverse effects. These objectives were accomplished by studying the remote manipulator performance of test subjects exposed to various perturbed camera-viewing conditions while performing a simulated space-based remote manipulation task. Statistical analysis of performance and subjective data revealed that remote manipulation performance was adversely affected by the use of perturbed visual feedback and performance tended to improve with successive trials in most perturbed viewing conditions.

  15. Structured Light-Based Motion Tracking in the Limited View of an MR Head Coil

    DEFF Research Database (Denmark)

    Erikshøj, M.; Olesen, Oline Vinter; Conradsen, Knut

    2013-01-01

    A markerless motion tracking (MT) system developed for use in PET brain imaging has been tested in the limited field of view (FOV) of the MR head coil from the Siemens Biograph mMR. The system is a 3D surface scanner that uses structured light (SL) to create point cloud reconstructions of the face...

  16. Photographic measurement of head and cervical posture when viewing mobile phone: a pilot study.

    Science.gov (United States)

    Guan, Xiaofei; Fan, Guoxin; Wu, Xinbo; Zeng, Ying; Su, Hang; Gu, Guangfei; Zhou, Qi; Gu, Xin; Zhang, Hailong; He, Shisheng

    2015-12-01

    With the dramatic growth of mobile phone usage, concerns have been raised with regard to the adverse health effects of mobile phones on spinal posture. The aim of this study was to determine the head and cervical postures by photogrammetry when viewing the mobile phone screen, compared with those in neutral standing posture. A total of 186 subjects (81 females and 105 males) aged from 17 to 31 years old participated in this study. Subjects were instructed to stand neutrally and to use a mobile phone as in daily life. Using a photographic method, the sagittal head and cervical postures were assessed by head tilt angle, neck tilt angle, forward head shift and gaze angle. The photographic method showed a high intra-rater and inter-rater reliability in measuring the sagittal posture of the cervical spine and gaze angle (ICCs ranged from 0.80 to 0.99). When looking at the mobile phone, the head tilt angle significantly increased (from 74.55° to 95.22°, p = 0.000) and the neck angle decreased (from 54.68° to 38.77°, p = 0.000). The forward head posture was also confirmed by the significantly increased head shift (from 10.90 to 13.85 cm, p = 0.000). The posture assumed during mobile phone use was significantly correlated with the neutral posture. Compared to neutral standing, subjects display a more forward head posture when viewing the mobile phone screen, which is correlated with neutral posture, gaze angle and gender. Future studies will be needed to investigate a dose-response relationship between mobile phone use and assumed posture.
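
    Photogrammetric tilt angles of this kind reduce to the angle of a landmark-to-landmark line against the image horizontal; a minimal sketch (the tragus/C7 landmark pair is a common convention in posture studies, assumed here rather than taken from the paper):

      import math

      def segment_angle_deg(p1, p2):
          """Angle of the line p1 -> p2 relative to the image horizontal.
          p1, p2: (x, y) pixel coordinates of two anatomical landmarks."""
          return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

      # e.g. C7 marker at (80, 300), tragus at (100, 200) in image coordinates
      print(abs(segment_angle_deg((80, 300), (100, 200))))   # ~78.7 degrees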

  17. Initial clinical experience with dedicated ultra fast solid state cardiac gamma camera

    International Nuclear Information System (INIS)

    Aland, Nusrat; Lele, V.

    2010-01-01

    detector, reducing camera-related motion artifacts. Diagnostic performance was comparable to that of standard dual detector gamma camera images. The mid septum invariably showed perfusion defects in the QPS protocol; this was probably due to the lack of a normal database for the solid state detector. Lung activity could not be visualized due to the small field of view. Extra cardiac activity could be assessed. CONCLUSION: We preferred the solid state cardiac gamma camera over the conventional dual detector gamma camera for myocardial perfusion imaging. Advantages of the solid state gamma camera over a standard dual head gamma camera: 1. Faster acquisition time; 2. Increased patient comfort; 3. Less radiation dose to the patient; 4. Brighter images; 5. No motion artifact; 6. Better right ventricular imaging. Disadvantages: 1. Extra cardiac activity cannot be assessed; 2. Lung activity not seen due to small field of view; 3. Invariably septal perfusion defects noted.

  18. Gamma cameras - a method of evaluation

    International Nuclear Information System (INIS)

    Oates, L.; Bibbo, G.

    2000-01-01

    Full text: With the sophistication and longevity of the modern gamma camera it is not often that the need arises to evaluate a gamma camera for purchase. We have recently been placed in the position of retiring our two single headed cameras of some vintage and replacing them with a state of the art dual head variable angle gamma camera. The process used for the evaluation consisted of five parts: (1) Evaluation of the technical specification as expressed in the tender document; (2) A questionnaire adapted from the British Society of Nuclear Medicine; (3) Site visits to assess gantry configuration, movement, patient access and occupational health, welfare and safety considerations; (4) Evaluation of the processing systems offered; (5) Whole of life costing based on equally configured systems. The results of each part of the evaluation were expressed using a weighted matrix analysis with each of the criteria assessed being weighted in accordance with their importance to the provision of an effective nuclear medicine service for our centre and the particular importance to paediatric nuclear medicine. This analysis provided an objective assessment of each gamma camera system from which a purchase recommendation was made. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  19. Principal axis-based correspondence between multiple cameras for people tracking.

    Science.gov (United States)

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.

  20. Rapid evaluation of FDG imaging alternatives using head-to-head comparisons of full ring and gamma camera based PET scanners- a systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Haslinghuis-Bajan, L.M.; Lingen, A. van; Mijnhout, G.S.; Teule, G.J.J. [Dept. of Nuclear Medicine, Vrije Univ. Medical Centre, Amsterdam (Netherlands); Hooft, L. [Dept. of Clinical Epidemiology and Biostatistics, Vrije Univ. Medical Centre, Amsterdam (Netherlands); Tulder, M. van [Dept. of Clinical Epidemiology and Biostatistics, Vrije Univ. Medical Centre, Amsterdam (Netherlands); Inst. for Research in Extramural Medicine, Vrije Univ., Medical Centre, Amsterdam (Netherlands); Deville, W. [Inst. for Research in Extramural Medicine, Vrije Univ., Medical Centre, Amsterdam (Netherlands); Hoekstra, O.S. [Dept. of Nuclear Medicine, Vrije Univ. Medical Centre, Amsterdam (Netherlands); Dept. of Clinical Epidemiology and Biostatistics, Vrije Univ. Medical Centre, Amsterdam (Netherlands)

    2002-10-01

    Aim: While FDG full ring PET (FRPET) has been gradually accepted in oncology, the role of the cheaper gamma camera based alternatives (GCPET) is less clear. Since technology is evolving rapidly, "tracker trials" would be most helpful to provide a first approximation of the relative merits of these alternatives. As difference in scanner sensitivity is the key variable, head-to-head comparison with FRPET is an attractive study design. This systematic review summarises such studies. Methods: Nine studies were identified up to July 1, 2000. Two observers assessed the methodological quality (Cochrane criteria) and extracted data. Results: The studies comprised a variety of tumours and indications. The reported GC- and FRPET agreement for detection of malignant lesions ranged from 55 to 100%, but with methodological limitations (blinding, standardisation, limited patient spectrum). Mean lesion diameter was 2.9 cm (SD 1.8), with only about 20% <1.5 cm. The 3 studies with the highest quality reported concordances of 74-79% for the studied lesion spectrum. Contrast at GCPET was lower than that of FRPET, and contrast and detection agreement were positively related. Logistic regression analysis suggested that pre-test indicators might be used to predict FRPET-GCPET concordance. Conclusion: In spite of methodological limitations, "first generation" GCPET devices detected sufficient FRPET positive lesions to allow prospective evaluation in clinical situations where the impact of FRPET is not confined to detection of small lesions (<1.5 cm). The efficiency of head-to-head comparative studies would benefit from application in a clinically relevant patient spectrum, with proper blinding and standardisation of acquisition procedures. (orig.)

  1. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    Directory of Open Access Journals (Sweden)

    M. Simi

    2012-01-01

    Full Text Available A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The head module incorporates two motorized donut-shaped magnets and a miniaturized vision system at the tip. The compliant MLS can exploit the static external magnetic field to induce a smooth bending of the robotic head (0–80°), guaranteeing a wide span tilt motion of the point of view. A nonlinear mathematical model for the compliant beam was developed and solved analytically in order to describe and predict the trajectory behaviour of the system for different structural parameters. The entire device is 95 mm long and 12.7 mm in diameter. Use of such a robot in single port or standard multiport laparoscopy could enable a reduction of the number or size of ancillary trocars, or increase the number of working devices that can be deployed, thus paving the way for multiple view point laparoscopy.

  2. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
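
    Fall speed from successive triggers is a two-plane transit-time measurement; a minimal sketch (the 32 mm plane separation below is an illustrative assumption, not a MASC specification):

      def fall_speed(t_upper_s, t_lower_s, separation_m=0.032):
          """Fall speed from the times a hydrometeor crosses the upper and
          lower IR emitter planes, separated vertically by separation_m."""
          dt = t_lower_s - t_upper_s
          return separation_m / dt

      print(fall_speed(0.000, 0.040))   # 0.8 m/s for a 40 ms transit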

  3. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  4. Person Re-Identification by Camera Correlation Aware Feature Augmentation.

    Science.gov (United States)

    Chen, Ying-Cong; Zhu, Xiatian; Zheng, Wei-Shi; Lai, Jian-Huang

    2018-02-01

    The challenge of person re-identification (re-id) is to match individual images of the same person captured by different non-overlapping camera views against significant and unknown cross-view feature distortion. While a large number of distance metric/subspace learning models have been developed for re-id, the cross-view transformations they learned are view-generic and thus potentially less effective in quantifying the feature distortion inherent to each camera view. Learning view-specific feature transformations for re-id (i.e., view-specific re-id), an under-studied approach, becomes an alternative resort for this problem. In this work, we formulate a novel view-specific person re-identification framework from the feature augmentation point of view, called Camera coRrelation Aware Feature augmenTation (CRAFT). Specifically, CRAFT performs cross-view adaptation by automatically measuring camera correlation from cross-view visual data distribution and adaptively conducting feature augmentation to transform the original features into a new adaptive space. Through our augmentation framework, view-generic learning algorithms can be readily generalized to learn and optimize view-specific sub-models whilst simultaneously modelling view-generic discrimination information. Therefore, our framework not only inherits the strength of view-generic model learning but also provides an effective way to take into account view-specific characteristics. Our CRAFT framework can be extended to jointly learn view-specific feature transformations for person re-id across a large network with more than two cameras, a largely under-investigated but realistic re-id setting. Additionally, we present a domain-generic deep person appearance representation which is designed particularly to be towards view invariant for facilitating cross-view adaptation by CRAFT. We conducted extensive comparative experiments to validate the superiority and advantages of our proposed framework over state

  5. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  6. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfactory number for the camera calibration. In a first

  7. Optimisation of a dual head semiconductor Compton camera using Geant4

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool L697ZE (United Kingdom)], E-mail: ljh@ns.ph.liv.ac.uk; Boston, A.J.; Boston, H.C.; Cooper, R.J.; Cresswell, J.R.; Grint, A.N.; Nolan, P.J.; Oxley, D.C.; Scraggs, D.P. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool L697ZE (United Kingdom); Beveridge, T.; Gillam, J. [School of Physics and Materials Engineering, Monash University, Melbourne (Australia); Lazarus, I. [STFC Daresbury Laboratory, Warrington, Cheshire (United Kingdom)

    2009-06-01

    Conventional medical gamma-ray camera systems utilise mechanical collimation to provide information on the position of an incident gamma-ray photon. Systems that use electronic collimation utilising Compton image reconstruction techniques have the potential to offer huge improvements in sensitivity. Position sensitive high purity germanium (HPGe) detector systems are being evaluated as part of a single photon emission computed tomography (SPECT) Compton camera system. Data have been acquired from the orthogonally segmented planar SmartPET detectors, operated in Compton camera mode. The minimum gamma-ray energy which can be imaged by the current system in Compton camera configuration is 244 keV, due to the large gamma-ray absorption caused by the 20 mm thickness of the first scatter detector. A simulation package for the optimisation of a new semiconductor Compton camera has been developed using the Geant4 toolkit. This paper shows results of preliminary analysis of the validated Geant4 simulation at the typical SPECT gamma-ray energy of 141 keV.
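
    For context, the electronic collimation mentioned above constrains each event to a cone whose opening angle follows from standard Compton kinematics: cos(theta) = 1 - m_e c^2 (1/E_scattered - 1/E_total). A small sketch of that textbook calculation, assuming the scattered photon is fully absorbed in the second detector (this is background, not part of the paper's Geant4 work):

    ```python
    import math

    M_E_C2 = 511.0  # electron rest energy in keV

    def compton_cone_angle(e1_kev, e2_kev):
        """Half-angle (radians) of the Compton cone from the energies
        deposited in the scatterer (e1) and absorber (e2), assuming the
        scattered photon is fully absorbed."""
        e_total = e1_kev + e2_kev        # incident photon energy
        e_scattered = e_total - e1_kev   # photon energy after first scatter
        cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_total)
        if not -1.0 <= cos_theta <= 1.0:
            raise ValueError("kinematically forbidden energy pair")
        return math.acos(cos_theta)

    # A 141 keV (99mTc) photon depositing 30 keV in the scatter detector:
    print(math.degrees(compton_cone_angle(30.0, 111.0)))  # ~88.8 degrees
    ```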

  8. Head stabilization in whooping cranes

    Science.gov (United States)

    Kinloch, M.R.; Cronin, T.W.; Olsen, Glenn H.; Chavez-Ramirez, Felipe

    2005-01-01

    The whooping crane (Grus americana) is the tallest bird in North America, yet not much is known about its visual ecology. How these birds overcome their unusual height to identify, locate, track, and capture prey items is not well understood. There have been many studies on head and eye stabilization in large wading birds (herons and egrets), but the pattern of head movement and stabilization during foraging is unclear. Patterns of head movement and stabilization during walking were examined in whooping cranes at Patuxent Wildlife Research Center, Laurel, Maryland USA. Four whooping cranes (1 male and 3 females) were videotaped for this study. All birds were already acclimated to the presence of people and to food rewards. Whooping cranes were videotaped using both digital and Hi-8 Sony video cameras (Sony Corporation, 7-35 Kitashinagawa, 6-Chome, Shinagawa-ku, Tokyo, Japan), placed on a tripod and set at bird height in the cranes' home pens. The cranes were videotaped repeatedly, at different locations in the pens and while walking (or running) at different speeds. Rewards (meal worms, smelt, crickets and corn) were used to entice the cranes to walk across the camera's view plane. The resulting videotape was analyzed at the University of Maryland at Baltimore County. Briefly, we used a computerized reduced graphic model of a crane superimposed over each frame of analyzed tape segments by means of a custom written program (T. W. Cronin, using C++) with the ability to combine video and computer graphic input. The speed of the birds in analyzed segments ranged from 0.30 m/s to 2.64 m/s, and the proportion of time the head was stabilized ranged from 79% to 0%, respectively. The speed at which the proportion reached 0% was 1.83 m/s. The analyses suggest that the proportion of time the head is stable decreases as speed of the bird increases. In all cases, birds were able to reach their target prey with little difficulty. Thus when cranes are walking searching for food
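
    The core measurement here reduces to scoring, frame by frame, whether the digitized head position stays put. A toy sketch of such a measure, with a hypothetical horizontal head-position track and pixel threshold (not values from the study):

    ```python
    import numpy as np

    def stable_proportion(head_x, thresh_px=2.0):
        """Fraction of inter-frame intervals in which the head is
        'stabilized', i.e. its image position moves less than thresh_px."""
        dx = np.abs(np.diff(np.asarray(head_x, dtype=float)))
        return float((dx < thresh_px).mean())

    # Hypothetical track: the hold-then-thrust pattern of a walking crane.
    track = [100.0] * 10 + list(range(100, 160, 6)) + [160.0] * 10
    print(stable_proportion(track))  # ~0.66: head stable two thirds of the time
    ```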

  9. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming the effects of occlusions that could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
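
    The per-stream detection step, flagging anomalies caused by the objects' movement between frames, can be approximated with plain frame differencing. A minimal sketch under assumed grey-level frames and an arbitrary threshold (the paper's heuristics are not reproduced here):

    ```python
    import numpy as np

    def motion_mask(prev_frame, frame, thresh=25):
        """Binary mask of pixels whose grey level changed between two
        consecutive frames -- the per-camera motion anomaly to be tagged."""
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        return diff > thresh

    def bounding_box(mask):
        """Smallest (row0, col0, row1, col1) box around the moving pixels,
        or None if nothing moved."""
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            return None
        return rows.min(), cols.min(), rows.max(), cols.max()

    # Toy frames: a bright 3x3 'object' shifts two pixels to the right.
    a = np.zeros((32, 32), dtype=np.uint8); a[10:13, 5:8] = 200
    b = np.zeros((32, 32), dtype=np.uint8); b[10:13, 7:10] = 200
    print(bounding_box(motion_mask(a, b)))  # (10, 5, 12, 9)
    ```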

  10. Heart imaging by cadmium telluride gamma camera European Program 'BIOMED' consortium

    CERN Document Server

    Scheiber, C; Chambron, J; Prat, V; Kazandjan, A; Jahnke, A; Matz, R; Thomas, S; Warren, S; Hage-Hali, M; Regal, R; Siffert, P; Karman, M

    1999-01-01

    Cadmium telluride semiconductor detectors (CdTe) operating at room temperature are attractive for medical imaging because of their good energy resolution providing excellent spatial and contrast resolution. The compactness of the detection system allows the building of small light camera heads which can be used for bedside imaging. A mobile pixellated gamma camera based on 2304 CdTe (pixel size: 3 x 3 mm, field of view: 15 cm x 15 cm) has been designed for cardiac imaging. A dedicated 16-channel integrated circuit has also been designed. The acquisition hardware is fully programmable (DSP card, personal computer-based system). Analytical calculations have shown that a commercial parallel hole collimator will fit the efficiency/resolution requirements for cardiac applications. Monte-Carlo simulations predict that the Moiré effect can be reduced by a 15 deg. tilt of the collimator with respect to the detector grid. A 16x16 CdTe module has been built for the preliminary physical tests. The energy resolution was 6.16...

  11. Use of dual-head gamma camera in radionuclide internal contamination monitoring on radiation workers from a nuclear medicine department

    International Nuclear Information System (INIS)

    Rodriguez-Laguna, A.; Brandan, M.E.

    2008-01-01

    As part of an internal dosimetry program performed at the Mexican National Institute of Cancerology's Nuclear Medicine Department, in the present work we suggest a procedure for the routine monitoring of internal contamination in radiation workers and nuclear medicine staff. The procedure is based on the identification and quantification of contaminating radionuclides in the human body using a dual-head whole-body gamma camera. The results have shown that the procedures described in this study can be used to implement a method to quantify minimal accumulated activity in the main human organs to evaluate internal contamination with radionuclides. The high sensitivity of the uncollimated gamma camera is advantageous for the routine detection and identification of small activities of internal contamination. However, the lack of spatial resolution makes it impossible to define contaminated regions of interest. Collimators are therefore necessary for the quantification of incorporated radionuclide activities in the main human organs and for the assessment of internal doses. (author)

  12. Head-mounted eye tracking of a chimpanzee under naturalistic conditions.

    Directory of Open Access Journals (Sweden)

    Fumihiro Kano

    Full Text Available This study offers a new method for examining the bodily, manual, and eye movements of a chimpanzee at the micro-level. A female chimpanzee wore a lightweight head-mounted eye tracker (60 Hz) on her head while engaging in daily interactions with the human experimenter. The eye tracker recorded her eye movements accurately while the chimpanzee freely moved her head, hands, and body. Three video cameras recorded the bodily and manual movements of the chimpanzee from multiple angles. We examined how the chimpanzee viewed the experimenter in this interactive setting and how the eye movements were related to the ongoing interactive contexts and actions. We prepared two experimentally defined contexts in each session: a face-to-face greeting phase upon the appearance of the experimenter in the experimental room, and a subsequent face-to-face task phase that included manual gestures and fruit rewards. Overall, the general viewing pattern of the chimpanzee, measured in terms of duration of individual fixations, length of individual saccades, and total viewing duration of the experimenter's face/body, was very similar to that observed in previous eye-tracking studies that used non-interactive situations, despite the differences in the experimental settings. However, the chimpanzee viewed the experimenter and the scene objects differently depending on the ongoing context and actions. The chimpanzee viewed the experimenter's face and body during the greeting phase, but viewed the experimenter's face and hands as well as the fruit reward during the task phase. These differences can be explained by the differential bodily/manual actions produced by the chimpanzee and the experimenter during each experimental phase (i.e., greeting gestures, task cueing). Additionally, the chimpanzee's viewing pattern varied depending on the identity of the experimenter (i.e., the chimpanzee's prior experience with the experimenter). These methods and results offer new

  13. INFLUENCE OF THE VIEWING GEOMETRY WITHIN HYPERSPECTRAL IMAGES RETRIEVED FROM UAV SNAPSHOT CAMERAS

    Directory of Open Access Journals (Sweden)

    H. Aasen

    2016-06-01

    Full Text Available Hyperspectral data has great potential for vegetation parameter retrieval. However, due to angular effects resulting from different sun-surface-sensor geometries, objects might appear differently depending on their position within the field of view of a sensor. Recently, lightweight snapshot cameras have been introduced, which capture hyperspectral information in two spatial and one spectral dimension and can be mounted on unmanned aerial vehicles. This study investigates the influence of the different viewing geometries within an image on the apparent hyperspectral reflectance retrieved by these sensors. Additionally, it is evaluated how hyperspectral vegetation indices like the NDVI are affected by the angular effects within a single image and whether the viewing geometry influences the apparent heterogeneity within an area of interest. The study is carried out for a barley canopy at booting stage. The results show significant influences of the position of the area of interest within the image. The red region of the spectrum is more influenced by the position than the near infrared. The ability of the NDVI to compensate for these effects was limited to capturing positions close to nadir. The apparent heterogeneity of the area of interest is highest close to nadir.
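
    For reference, the NDVI referred to above is the standard normalised band ratio (NIR - Red)/(NIR + Red); because it divides by the sum of the two bands, it cancels effects that scale both bands equally, which is why it can only partly compensate for angular effects that, as found here, hit the red region harder than the near infrared. A minimal sketch with made-up reflectance values:

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalised Difference Vegetation Index, computed per pixel from
        near-infrared and red reflectances."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + 1e-12)  # epsilon avoids 0/0

    # Same canopy, but the off-nadir view raises apparent red reflectance:
    print(ndvi(0.45, 0.05))  # ~0.80 near nadir
    print(ndvi(0.45, 0.08))  # ~0.70 towards the image edge
    ```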

  14. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Sicart, Sergi; Paredes, Pilar [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain); Institut d' Investigacio Biomedica Agusti Pi Sunyer (IDIBAPS), Barcelona (Spain); Vermeeren, Lenka; Valdes-Olmos, Renato A. [Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital (NKI-AVL), Nuclear Medicine Department, Amsterdam (Netherlands); Sola, Oriol [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain)

    2011-04-15

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller and portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were performed at 15 min, 2 h and 4 h after injection of the radiotracer ({sup 99m}Tc-nanocolloid). At 2 h after injection, anterior and oblique images were also performed with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images with the portable device was compared for the number of nodes depicted, their intensity and the localization of sentinel nodes. The images performed with the conventional gamma camera depicted sentinel nodes in 94%, while the portable gamma camera showed drainage in 73%. There was however no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of the cases, if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility to perform a preoperative image. (orig.)

  15. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    International Nuclear Information System (INIS)

    Vidal-Sicart, Sergi; Paredes, Pilar; Vermeeren, Lenka; Valdes-Olmos, Renato A.; Sola, Oriol

    2011-01-01

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller and portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were performed at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also performed with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images with the portable device was compared for the number of nodes depicted, their intensity and the localization of sentinel nodes. The images performed with the conventional gamma camera depicted sentinel nodes in 94%, while the portable gamma camera showed drainage in 73%. There was however no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of the cases, if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility to perform a preoperative image. (orig.)

  16. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    Science.gov (United States)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  17. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented

  18. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-01-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect the vacuum vessel internal structures in both the visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diam fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35-mm Nikon F3 still camera, or (5) a 16-mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented

  19. CCD camera system for use with a streamer chamber

    International Nuclear Information System (INIS)

    Angius, S.A.; Au, R.; Crawley, G.C.; Djalali, C.; Fox, R.; Maier, M.; Ogilvie, C.A.; Molen, A. van der; Westfall, G.D.; Tickle, R.S.

    1988-01-01

    A system based on three charge-coupled-device (CCD) cameras is described here. It has been used to acquire images from a streamer chamber and consists of three identical subsystems, one for each camera. Each subsystem contains an optical lens, CCD camera head, camera controller, an interface between the CCD and a microprocessor, and a link to a minicomputer for data recording and on-line analysis. Image analysis techniques have been developed to enhance the quality of the particle tracks. Some steps have been made to automatically identify tracks and reconstruct the event. (orig.)

  20. Conceptual design of a neutron camera for MAST Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Weiszflog, M., E-mail: matthias.weiszflog@physics.uu.se; Sangaroon, S.; Cecconello, M.; Conroy, S.; Ericsson, G.; Klimek, I. [Department of Physics and Astronomy, Uppsala University, EURATOM-VR Association, Uppsala (Sweden); Keeling, D.; Martin, R. [CCFE, Culham Science Centre, Abingdon (United Kingdom); Turnyanskiy, M. [ITER Physics Department, EFDA CSU Garching, Boltzmannstraße 2, D-85748 Garching (Germany)

    2014-11-15

    This paper presents two different conceptual designs of neutron cameras for Mega Ampere Spherical Tokamak (MAST) Upgrade. The first one consists of two horizontal cameras, one equatorial and one vertically down-shifted by 65 cm. The second design, viewing the plasma in a poloidal section, also consists of two cameras, one radial and the other one with a diagonal view. Design parameters for the different cameras were selected on the basis of neutron transport calculations and on a set of target measurement requirements taking into account the predicted neutron emissivities in the different MAST Upgrade operating scenarios. Based on a comparison of the cameras’ profile resolving power, the horizontal cameras are suggested as the best option.

  1. Quality control in dual head γ-cameras: comparison between methods and software used for image analysis

    International Nuclear Information System (INIS)

    Nayl E, A.; Fornasier, M. R.; De Denaro, M.; Sulieman, A.; Alkhorayef, M.; Bradley, D.

    2017-10-01

    Patient radiation dose and image quality are the main issues in nuclear medicine (Nm) procedures. Currently, many protocols are used for image acquisition and analysis of quality control (Qc) tests. National Electrical Manufacturers Association (Nema) methods and protocols are widely accepted methods used for providing accurate description, measurement and reporting of γ-camera performance parameters. However, no standard software is available for image analysis. The aim of this study was to compare the vendor Qc software analysis with three software packages from different developers, downloaded free from the internet: NMQC, Nm Tool kit and ImageJ-Nm Tool kit. The three packages are used for image analysis of some Qc tests for γ-cameras based on Nema protocols, including non-uniformity evaluation. Ten non-uniformity Qc images were taken from a dual head γ-camera (Siemens Symbia) installed in Trieste general hospital (Italy), and analyzed. Excel analysis was used as the baseline calculation of the non-uniformity test according to Nema procedures. The results of the non-uniformity analysis showed good agreement between the three independent packages and the Excel calculation (the average differences were 0.3%, 2.9%, 1.3% and 1.6% for UFOV integral, UFOV differential, CFOV integral and CFOV differential, respectively), while a significant difference was detected in the analysis of the company Qc software compared to the Excel analysis (the average differences were 14.6%, 20.7%, 25.7% and 31.9% for UFOV integral, UFOV differential, CFOV integral and CFOV differential, respectively). NMQC software was the best in comparison with the Excel calculations. The variation in the results is due to the different pixel sizes used for analysis in the three packages and the γ-camera Qc software. It is therefore important to perform the tests with the vendor Qc software as well as with independent analysis to understand the differences between the values. Moreover, the medical physicist should know
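
    The Nema non-uniformity figures compared in this study are simple functions of the pixel counts, as sketched below. The sketch deliberately omits parts of the full Nema procedure (nine-point smoothing, UFOV/CFOV masking), so it illustrates the formulas rather than replacing any of the tested packages.

    ```python
    import numpy as np

    def integral_uniformity(counts):
        """Nema integral uniformity (%) over a field of view:
        100 * (max - min) / (max + min) of the pixel counts."""
        c = counts[counts > 0]
        return 100.0 * (c.max() - c.min()) / (c.max() + c.min())

    def differential_uniformity(counts, window=5):
        """Nema differential uniformity (%): worst uniformity found in any
        five-pixel run along the rows and columns."""
        worst = 0.0
        for image in (counts, counts.T):          # rows, then columns
            for line in image:
                for j in range(line.size - window + 1):
                    w = line[j:j + window]
                    if w.min() > 0:
                        worst = max(worst, 100.0 * (w.max() - w.min())
                                    / (w.max() + w.min()))
        return worst

    # A synthetic flood image; Poisson noise alone gives a few percent.
    flood = np.random.poisson(10000, size=(64, 64)).astype(float)
    print(integral_uniformity(flood), differential_uniformity(flood))
    ```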

  2. Quality control in dual head γ-cameras: comparison between methods and software used for image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nayl E, A. [Sudan Atomic Energy Commission, Radiation Safety Institute, Khartoum (Sudan); Fornasier, M. R.; De Denaro, M. [Azienda Sanitaria Universitaria Integrata di Trieste, Medical Physics Department, Via Giovanni Sai 7, 34128 Trieste (Italy); Sulieman, A. [Prince Sattam bin Abdulaziz University, College of Applied Medical Sciences, Radiology and Medical Imaging Department, P. O. Box 422, 11942 Al-Kharj (Saudi Arabia); Alkhorayef, M.; Bradley, D., E-mail: abdwsh10@hotmail.com [University of Surrey, Department of Physics, GU2-7XH Guildford, Surrey (United Kingdom)

    2017-10-15

    Patient radiation dose and image quality are the main issues in nuclear medicine (Nm) procedures. Currently, many protocols are used for image acquisition and analysis of quality control (Qc) tests. National Electrical Manufacturers Association (Nema) methods and protocols are widely accepted methods used for providing accurate description, measurement and reporting of γ-camera performance parameters. However, no standard software is available for image analysis. The aim of this study was to compare the vendor Qc software analysis with three software packages from different developers, downloaded free from the internet: NMQC, Nm Tool kit and ImageJ-Nm Tool kit. The three packages are used for image analysis of some Qc tests for γ-cameras based on Nema protocols, including non-uniformity evaluation. Ten non-uniformity Qc images were taken from a dual head γ-camera (Siemens Symbia) installed in Trieste general hospital (Italy), and analyzed. Excel analysis was used as the baseline calculation of the non-uniformity test according to Nema procedures. The results of the non-uniformity analysis showed good agreement between the three independent packages and the Excel calculation (the average differences were 0.3%, 2.9%, 1.3% and 1.6% for UFOV integral, UFOV differential, CFOV integral and CFOV differential, respectively), while a significant difference was detected in the analysis of the company Qc software compared to the Excel analysis (the average differences were 14.6%, 20.7%, 25.7% and 31.9% for UFOV integral, UFOV differential, CFOV integral and CFOV differential, respectively). NMQC software was the best in comparison with the Excel calculations. The variation in the results is due to the different pixel sizes used for analysis in the three packages and the γ-camera Qc software. It is therefore important to perform the tests with the vendor Qc software as well as with independent analysis to understand the differences between the values. Moreover, the medical physicist should know

  3. Universal crystal cooling device for precession cameras, rotation cameras and diffractometers

    International Nuclear Information System (INIS)

    Hajdu, J.; McLaughlin, P.J.; Helliwell, J.R.; Sheldon, J.; Thompson, A.W.

    1985-01-01

    A versatile crystal cooling device is described for macromolecular crystallographic applications in the 290 to 80 K temperature range. It utilizes a fluctuation-free cold-nitrogen-gas supply, an insulated Mylar crystal cooling chamber and a universal ball joint, which connects the cooling chamber to the goniometer head and the crystal. The ball joint is a novel feature over all previous designs. As a result, the device can be used on various rotation cameras, precession cameras and diffractometers. The lubrication of the interconnecting parts with graphite allows the cooling chamber to remain stationary while the crystal and goniometer rotate. The construction allows for 360° rotation of the crystal around the goniometer axis and permits any settings on the arcs and slides of the goniometer head (even if working at 80 K). There are no blind regions associated with the frame holding the chamber. Alternatively, the interconnecting ball joint can be tightened and fixed. This results in a set up similar to the construction described by Bartunik and Schubert where the cooling chamber rotates with the crystal. The flexibility of the systems allows for the use of the device on most cameras or diffractometers. This device has been installed at the protein crystallographic stations of the Synchrotron Radiation Source at Daresbury Laboratory and in the Laboratory of Molecular Biophysics, Oxford. Several data sets have been collected with processing statistics typical of data collected without a cooling chamber. Tests using the full white beam of the synchrotron also look promising. (orig./BHO)

  4. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
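
    Once a feature is matched between the two cameras of a pair, triangulation for a rectified pair reduces to the textbook depth-from-disparity relation Z = fB/d. The sketch below uses made-up numbers; the handbook's actual reconstruction relies on the delivered stereo calibration parameters rather than this simplified geometry.

    ```python
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Depth Z = f * B / d of a matched point seen by a rectified stereo
        pair: focal length f in pixels, baseline B in metres, disparity d in
        pixels."""
        if disparity_px <= 0:
            raise ValueError("non-positive disparity: point at infinity")
        return focal_px * baseline_m / disparity_px

    # Hypothetical values: a 100 m baseline, a 1500 px focal length and a
    # 25 px disparity place a cloud feature about 6 km away.
    print(depth_from_disparity(1500.0, 100.0, 25.0))  # 6000.0
    ```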

  5. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
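
    The consistent-labelling step can be sketched compactly: warp the feet point of a target from one view into the other through the plane-induced homography and adopt the label of the nearest existing target there. This is a simplified illustration with a placeholder homography and a hypothetical distance gate, not the paper's full algorithm:

    ```python
    import numpy as np

    def map_through_homography(H, point):
        """Project an image point from camera A into camera B through the
        ground-plane homography H (3x3), via homogeneous coordinates."""
        p = np.array([point[0], point[1], 1.0])
        q = H @ p
        return q[:2] / q[2]

    def consistent_label(H, feet_a, feet_b, max_dist=25.0):
        """Give a camera-A target the index of the nearest camera-B target
        once its feet location is warped into camera B's image."""
        mapped = map_through_homography(H, feet_a)
        dists = [np.linalg.norm(mapped - np.asarray(f)) for f in feet_b]
        best = int(np.argmin(dists))
        return best if dists[best] < max_dist else None

    H = np.eye(3)  # placeholder; a real H is estimated from point pairs
    print(consistent_label(H, (320, 440), [(318, 442), (600, 100)]))  # 0
    ```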

  6. F-18 FDG PET with coincidence detection, dual-head gamma camera, initial experience in oncology

    Energy Technology Data Exchange (ETDEWEB)

    Chu, J.M.G.; Pocock, N.; Quach, T.; Camden, B.M.C. [Liverpool Health Services, Liverpool, NSW (Australia). Department of Nuclear Medicine and Clinical Ultrasound

    1998-06-01

    Full text: The development of Co-incidence Detection (CD) in gamma camera technology has allowed the use of positron radiopharmaceuticals in clinical practice without dedicated PET facilities. We report our initial experience of this technology in Oncological applications. All patients were administered 200 MBq of F-18 FDG intravenously in a fasting state, with serum glucose below 8.9 mmol/L, and hydration well maintained. Tomography was performed using an ADAC Solus Molecular Co-incidence Detection (MCD) dual-head gamma camera, 60 minutes after administration and immediately after voiding. Tomography of the torso required up to three collections depending on the length of the patient, with each collection requiring 32 steps of 40 second duration, and a 50% overlap. Tomography of the brain required a single collection with 32 steps of 80 seconds. Patients were scanned in the supine position. An iterative reconstruction algorithm was employed without attenuation correction. All patients had histologically confirmed malignancy. Scan findings were correlated with results of all conventional diagnostic imaging procedures that were pertinent to the evaluation and management of each individual patient's disease. Correlation with tumour type and treatment status was also undertaken. F-18 FDG uptake as demonstrated by CD-PET was increased in tumour bearing sites. The degree of increased uptake varied with tumour type and with treatment status. Our initial experience with CD-PET has been very encouraging, and has led us to undertake prospective short and long term studies to define its role in oncology

  7. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  8. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang; Liu, Yebin; Heidrich, Wolfgang; Dai, Qionghai

    2016-01-01

    camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional

  9. A universal multiprocessor system for the fast acquisition and processing of positron camera data

    International Nuclear Information System (INIS)

    Deluigi, B.

    1982-01-01

    In this study the main components of a suitable detection system were worked out and their properties examined. For measuring the three-dimensional distribution of radiopharmaceuticals labelled with positron emitters in animal experiments, a positron camera was first constructed. The annihilation quanta are detected by two opposing position-sensitive gamma detectors operated in coincidence. Two commercial camera heads working according to the Anger principle were rebuilt for this purpose and coupled into the positron camera by a special interface. With this arrangement, a spatial resolution of 0.8 cm FWHM for a line source in the symmetry plane and a coincidence resolving time 2T of 16 ns FW0.1M were reached. For the three-dimensional image reconstruction from the data of a positron camera, a maximum-likelihood procedure was developed and tested by Monte Carlo simulation. In view of this application, a maximally flexible multi-microprocessor system was developed. A high computing capacity is reached by distributing several partial problems to different processors and processing them in parallel. The architecture was designed so that the system has a high fault tolerance and its computing capacity can be extended without any limit in principle. (orig./HSI) [de]

  10. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

    Full Text Available We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects is of utmost importance in a crowded scene. Our technique assumes a partially overlapped multi-camera setup where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (target) in new frames. A colour pattern update scheme has been considered to further optimize the efficiency of the object tracking when the object pattern changes due to object motion in the fields of view of the cameras. An evaluation of our approach is presented with results on the PETS2007 dataset.
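
    The epipolar constraint that links the camera views can be written in two lines: the fundamental matrix maps a point in one view to a line l' = F x in the other, and the overlapped block search only needs to visit candidate blocks near that line. A minimal sketch; the fundamental matrix here is a random placeholder, whereas a real one comes from the cameras' relative geometry:

    ```python
    import numpy as np

    def epipolar_line(F, point):
        """Epipolar line l' = F @ x in the second view for image point x in
        the first view, returned as (a, b, c) with ax + by + c = 0 and
        normalised so that |(a, b)| = 1."""
        x = np.array([point[0], point[1], 1.0])
        line = F @ x
        return line / np.linalg.norm(line[:2])

    def distance_to_line(line, point):
        """Perpendicular pixel distance of a candidate block centre from the
        epipolar line; blocks beyond a threshold can be skipped."""
        a, b, c = line
        return abs(a * point[0] + b * point[1] + c)

    F = np.random.rand(3, 3)  # placeholder fundamental matrix
    l = epipolar_line(F, (120, 80))
    print(distance_to_line(l, (130, 90)))
    ```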

  11. What are Head Cavities? - A History of Studies on Vertebrate Head Segmentation.

    Science.gov (United States)

    Kuratani, Shigeru; Adachi, Noritaka

    2016-06-01

    Motivated by the discovery of segmental epithelial coeloms, or "head cavities," in elasmobranch embryos toward the end of the 19th century, the debate over the presence of mesodermal segments in the vertebrate head became a central problem in comparative embryology. The classical segmental view assumed only one type of metamerism in the vertebrate head, in which each metamere was thought to contain one head somite and one pharyngeal arch, innervated by a set of cranial nerves serially homologous to dorsal and ventral roots of spinal nerves. The non-segmental view, on the other hand, rejected the somite-like properties of head cavities. A series of small mesodermal cysts in early Torpedo embryos, which were thought to represent true somite homologs, provided a third possible view on the nature of the vertebrate head. Recent molecular developmental data have shed new light on the vertebrate head problem, explaining that head mesoderm evolved, not by the modification of rostral somites of an amphioxus-like ancestor, but through the polarization of unspecified paraxial mesoderm into head mesoderm anteriorly and trunk somites posteriorly.

  12. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm optimization based approach.
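
    The record is cut off before the method details, but the stated setting (fixed positions, adjustable orientations) maps naturally onto a standard particle swarm optimisation over the vector of camera orientations. The sketch below runs a generic PSO on a toy wedge-shaped field-of-view coverage objective; the geometry, parameters and objective are assumptions, not the authors' formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    CAMS = rng.uniform(0, 100, size=(5, 2))       # fixed camera positions
    TARGETS = rng.uniform(0, 100, size=(400, 2))  # points to be covered
    FOV, RANGE = np.radians(60), 40.0             # assumed sensing model

    def coverage(orientations):
        """Fraction of target points inside at least one camera's
        wedge-shaped field of view (half-angle FOV/2, radius RANGE)."""
        covered = np.zeros(len(TARGETS), dtype=bool)
        for (cx, cy), theta in zip(CAMS, orientations):
            d = TARGETS - (cx, cy)
            dist = np.hypot(d[:, 0], d[:, 1])
            ang = np.arctan2(d[:, 1], d[:, 0])
            diff = np.abs((ang - theta + np.pi) % (2 * np.pi) - np.pi)
            covered |= (dist < RANGE) & (diff < FOV / 2)
        return covered.mean()

    # Plain global-best PSO over the orientation vector.
    n, dim, iters = 30, len(CAMS), 60
    x = rng.uniform(-np.pi, np.pi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([coverage(p) for p in x])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        val = np.array([coverage(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmax()].copy()
    print("best coverage:", coverage(gbest))
    ```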

  13. Value Added: the Case for Point-of-View Camera use in Orthopedic Surgical Education.

    Science.gov (United States)

    Karam, Matthew D; Thomas, Geb W; Taylor, Leah; Liu, Xiaoxing; Anthony, Chris A; Anderson, Donald D

    2016-01-01

    Orthopedic surgical education is evolving as educators search for new ways to enhance surgical skills training. Orthopedic educators should seek new methods and technologies to augment and add value to real-time orthopedic surgical experience. This paper describes a protocol whereby we have started to capture and evaluate specific orthopedic milestone procedures with a GoPro® point-of-view video camera and a dedicated video reviewing website as a way of supplementing the current paradigm in surgical skills training. We report our experience regarding the details and feasibility of this protocol. Upon identification of a patient undergoing surgical fixation of a hip or ankle fracture, an orthopedic resident places a GoPro® point-of-view camera on his or her forehead. All fluoroscopic images acquired during the case are saved and later incorporated into a video on the reviewing website. Surgical videos are uploaded to a secure server and are accessible for later review and assessment via a custom-built website. An electronic survey of resident participants was performed utilizing Qualtrics software. Results are reported using descriptive statistics. A total of 51 surgical videos involving 23 different residents have been captured to date. This includes 20 intertrochanteric hip fracture cases and 31 ankle fracture cases. The average duration of each surgical video was 1 hour and 16 minutes (range 40 minutes to 2 hours and 19 minutes). Of 24 orthopedic resident surgeons surveyed, 88% thought capturing a video portfolio of orthopedic milestones would benefit their education. There is a growing demand in orthopedic surgical education to extract more value from each surgical experience. While further work in development and refinement of such assessments is necessary, we feel that intraoperative video, particularly when captured and presented in a non-threatening, user-friendly manner, can add significant value to the present and future paradigm of orthopedic surgical education.

  14. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to ov...

  15. CBF tomographic measurement with the scintillation camera

    International Nuclear Information System (INIS)

    Kayayan, R.; Philippon, B.; Pehlivanian, E.

    1989-01-01

    Single photon emission tomography (SPECT) allows calculation of regional cerebral blood flow (CBF) in multiple cross-sections of the human brain. The method of Kanno and Lassen is utilized, and a study of reproducibility in terms of the number and period of integrations is performed by computer simulation and by an experimental study with a gamma camera. Finally, the possibility of calculating the regional cerebral blood flow with a double-headed rotating gamma camera by inert gas inhalation, such as xenon-133, is discussed [fr]

  16. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.

    Science.gov (United States)

    Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond

    2018-04-01

    We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.

  17. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method offers simple operation and good flexibility, especially for onsite multiple cameras without a common field of view.

  18. Experience with dedicated ultra fast solid state cardiac gamma camera: technologist perspective

    International Nuclear Information System (INIS)

    Parab, Anil; Gaikar, Anil; Patil, Kashinath; Lele, V.

    2010-01-01

    Full text: To describe the technologist's perspective of working with an ultra-fast solid state gamma camera and a comparison with a conventional dual head gamma camera. Material and Methods: 900 myocardial perfusion scans were carried out on a dedicated solid state detector cardiac camera between 1st February 2010 and 29th August 2010. 27 studies were done back to back on a conventional dual head gamma camera. In 2 cases dual isotope imaging was done (thallium + 99mTc-tetrofosmin). A rest-stress protocol was used in 600 patients and a stress-rest protocol in 300. A 1:3 ratio of injected activity was maintained for both protocols (5 mCi for the first study and 15 mCi for the second study). For the rest-stress protocol, 5 mCi of 99mTc-tetrofosmin was injected at rest; 40 minutes later, a 5 min image was acquired on the solid state detector. The patient was then stressed. 15 mCi of 99mTc-tetrofosmin was injected at peak stress. Images were acquired 20 minutes later for 3 minutes (total duration of study 90-100 min). For the stress-rest protocol, 5 mCi of 99mTc-tetrofosmin was injected at peak stress and images were acquired 20 minutes later. The rest injection of 15 mCi was given 1 hour post stress injection. Rest images were acquired 40 minutes after the rest injection (total duration of study 110-120 min). Results: We observed that even with a lower activity and an acquisition time of 5 min per cardiac scan, the system showed high sensitivity, with count rates of 2.2-4.7 kcps (10 times more counts than a standard gamma camera). The system gives better energy resolution (<7%), better image contrast, and makes dual isotope imaging possible, with a spatial resolution of 4.3-4.9 mm. Excellent quality images were obtained using low activities (5 mCi/15 mCi) and one third of the acquisition time of a conventional dual head gamma camera. Even in obese patients, 7 mCi/21 mCi activity yielded excellent images at one third the acquisition time. Quick acquisition resulted in greater patient comfort and no motion artifacts, also due to the non-rotation of

  19. Gamma camera performance: technical assessment protocol

    International Nuclear Information System (INIS)

    Bolster, A.A.; Waddington, W.A.

    1996-01-01

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author)

  20. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested using the traditional infrared camera test system and visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame-grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position with changing environmental temperature, and the image quality of the large-field-of-view collimator and the test accuracy are thereby also improved. Its performance is comparable to that of foreign counterparts at a much lower cost. It will have a good market.

  1. Observations of the Perseids 2012 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.

    2012-09-01

    The Perseids are one of the most prominent annual meteor showers, occurring every summer when the stream of dust particles originating from Halley-type comet 109P/Swift-Tuttle intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August, producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer monitoring the Perseids activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [0]. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract and is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive back-illuminated 1024x1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal). [Figure 1: A meteor captured by the SPOSH cameras simultaneously during the 2011 observing campaign in Greece. The horizon, including surrounding mountains, can be seen in the image corners as a result of the large FOV of the camera.] The observations will be made on the Greek Peloponnese peninsula, monitoring the post-peak activity of the Perseids during a one-week period around the August new Moon (14th to 21st). Two SPOSH cameras will be deployed at two remote sites at high altitudes for the triangulation of meteor trajectories captured at both stations simultaneously. The observations during this time interval will give us the possibility to study the poorly observed post-maximum branch of the Perseid stream and compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories
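
    Triangulating a meteor point from the two stations amounts to intersecting two sight-line rays that, because of measurement error, never quite meet; a common estimate is the midpoint of their closest approach. A generic sketch of that calculation with made-up station geometry; the campaign's dedicated reduction software is of course far more involved:

    ```python
    import numpy as np

    def triangulate_rays(p1, d1, p2, d2):
        """Midpoint of the shortest segment between two sight-line rays
        r_i(t) = p_i + t * d_i (station position p_i, viewing direction d_i)."""
        p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b            # zero only for parallel rays
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

    # Two hypothetical stations ~50 km apart sighting the same meteor point:
    print(triangulate_rays([0, 0, 0], [0.1, 0.1, 1.0],
                           [50000, 0, 0], [-0.4, 0.1, 1.0]))
    # -> [ 10000.  10000. 100000.]: a point about 100 km up
    ```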

  2. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    The main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-light-level, high-resolution viewing inside nuclear reactors. It uses a raster-scanned He-Ne laser beam; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  3. Localization and Mapping Using a Non-Central Catadioptric Camera System

    Science.gov (United States)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in the navigation and mapping of robotic platforms owing to their wide field of view. Having a wider field of view, potentially a full 360°, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The mathematical model for localizing the system was derived using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the location and mapping accuracies achieved by the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points were obtained with decimetre-level accuracies.

  4. Observations of the Perseids 2013 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.

    2013-09-01

    Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle having a mass of approximately 1 g or larger with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to that of their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with those of different comets gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and originating from the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer to observe the activity of the Perseid meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. It is equipped with a highly sensitive back-illuminated CCD chip with a resolution of 1024x1024 pixels. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal), making it possible to monitor nearly the whole night sky (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote high-altitude sites on the Greek Peloponnese peninsula. The baseline of ∼50km

  5. Audiovisual Head Orientation Estimation with Particle Filtering in Multisensor Scenarios

    Directory of Open Access Journals (Sweden)

    Javier Hernando

    2007-07-01

    Full Text Available This article presents a multimodal approach to head pose estimation of individuals in environments equipped with multiple cameras and microphones, such as SmartRooms or automatic video conferencing. Determining an individual's head orientation is the basis for many forms of more sophisticated interaction between humans and technical devices, and can also be used for automatic sensor selection (camera, microphone) in communications or video surveillance systems. The use of particle filters as a unified framework for estimating head orientation in both monomodal and multimodal cases is proposed. In video, we estimate head orientation from color information by exploiting spatial redundancy among cameras. Audio information is processed to estimate the direction of the voice produced by a speaker, making use of the directivity characteristics of the head radiation pattern. Furthermore, two different particle-filter multimodal information fusion schemes for combining the audio and video streams are analyzed in terms of accuracy and robustness. In the first, fusion is performed at the decision level by combining each monomodal head pose estimate, while the second uses a joint estimation system combining information at the data level. Experimental results conducted on the CLEAR 2006 evaluation database are reported, and the comparison of the proposed multimodal head pose estimation algorithms with the reference monomodal approaches proves the effectiveness of the proposed approach.
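
    For readers unfamiliar with the particle filter machinery used in work like this, the toy sketch below tracks a single head pan angle from noisy per-frame estimates. The predict/weight/resample loop is the generic algorithm; the Gaussian noise levels and the fusion-by-likelihood remark are illustrative assumptions rather than the article's exact models.

        import numpy as np

        def track_heading(observations, n_particles=500,
                          process_std=5.0, obs_std=15.0, seed=0):
            """Generic particle filter for a head pan angle in degrees.

            Data-level multimodal fusion would multiply one likelihood
            per modality (video, audio) into the same particle weights.
            """
            rng = np.random.default_rng(seed)
            particles = rng.uniform(-90.0, 90.0, n_particles)
            estimates = []
            for z in observations:
                particles = particles + rng.normal(0.0, process_std, n_particles)  # predict
                weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)          # weight
                weights /= weights.sum()
                particles = particles[rng.choice(n_particles, n_particles, p=weights)]  # resample
                estimates.append(particles.mean())
            return estimates

        print(track_heading([0, 5, 10, 20, 30, 30, 30])[-1])  # converges near 30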

  6. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  7. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  8. A Wireless Camera Node with Passive Self-righting Mechanism for Capturing Surrounding View

    OpenAIRE

    Kawabata, Kuniaki; Sato, Hideo; Suzuki, Tsuyoshi; Tobe, Yoshito

    2010-01-01

    In this report, we propose a sensor node and related wireless network for information gathering in disaster areas. We describe a “camera node” prototype developed on this basis, containing a camera with a fisheye lens, a passive self-righting mechanism to maintain the camera orientation, and the capability to construct an ad hoc wireless network, together with a GPS adaptor and an embedded computer timer to identify its position and imaging time. The camera node...

  9. Depth Perception In Remote Stereoscopic Viewing Systems

    Science.gov (United States)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    This report describes theoretical and experimental studies of depth perception by human operators using stereoscopic video systems. The purpose of such studies is to optimize dual-camera configurations used to view the workspaces of remote manipulators at distances of 1 to 3 m from the cameras. According to the analysis, static stereoscopic depth distortion can be decreased, without decreasing stereoscopic depth resolution, by increasing the camera-to-object and intercamera distances and the camera focal length. The analysis further predicts that dynamic stereoscopic depth distortion is reduced by rotating the cameras around the center of a circle passing through the point of convergence of the viewing axes and the first nodal points of the two camera lenses.
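
    The geometry behind these recommendations can be seen in the textbook parallel-axis stereo relation Z = f·b/d (the report's converged-axis analysis is more involved; this sketch only illustrates why a larger baseline b and focal length f improve depth resolution, and the numbers are invented):

        def stereo_depth(focal_px, baseline_m, disparity_px):
            """Parallel-axis stereo: depth Z = f * b / d."""
            return focal_px * baseline_m / disparity_px

        # One pixel of disparity error shifts the estimate less when f*b is large:
        print(stereo_depth(1500, 0.12, 90))  # 2.0 m
        print(stereo_depth(1500, 0.12, 91))  # ~1.98 m -> ~2 cm depth resolution here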

  10. Design and control of the Twente humanoid head

    NARCIS (Netherlands)

    Visser, L.C.; Carloni, Raffaella; Stramigioli, Stefano

    2009-01-01

    The Twente humanoid head features a four degree of freedom neck and two eyes that are implemented by using cameras. The cameras tilt on a common axis, but can rotate sideways independently, thus implementing another three degrees of freedom. A vision processing algorithm has been developed that

  11. A neural-based remote eye gaze tracker under natural head motion.

    Science.gov (United States)

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human-computer interfaces (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination, and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and to strengthen robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the nonlinearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface with a success rate of 95% and a global accuracy of around 2°, comparable with the vast majority of existing remote gaze trackers.

  12. Development of a camera casing suited for cryogenic and vacuum applications

    Science.gov (United States)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.

  13. Continuous monitoring of Hawaiian volcanoes with thermal cameras

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.

    2014-01-01

    Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.
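
    The automated analyses and alarms are implemented at HVO in Matlab scripts; the Python fragment below is only a schematic stand-in showing one plausible alarm rule, and the threshold temperature and minimum hot area are invented values:

        import numpy as np

        def thermal_alarm(frame_c, hot_threshold_c=300.0, min_pixels=25):
            """Alarm when a temperature-calibrated frame contains a large hot region."""
            return (frame_c > hot_threshold_c).sum() >= min_pixels

        frame = np.full((240, 320), 30.0)   # background scene, deg C
        frame[100:110, 100:110] = 450.0     # simulated lava breakout
        print(thermal_alarm(frame))         # True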

  14. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard Anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have recently been introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of a CZT high-speed gamma camera (Discovery NM 530c) with that of a standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99mTc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on the standard gamma camera, with a 15-min scan time each for stress and rest. All scans were immediately repeated on the CZT camera, with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong, within narrow Bland-Altman limits of agreement. Using list-mode analysis, image quality for stress was rated good or excellent in 97% of the 3-min scans and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated good or excellent in 94% of the 1-min scans and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, equivalent to standard myocardial single-photon emission computed tomography, despite a scan time of less than half the standard. (author)

  15. Real-time tracking for virtual environments using SCAAT Kalman filtering and unsynchronised cameras

    DEFF Research Database (Denmark)

    Rasmussen, Niels Tjørnly; Störring, Morritz; Moeslund, Thomas B.

    2006-01-01

    This paper presents a real-time outside-in camera-based tracking system for wireless 3D pose tracking of a user’s head and hand in a virtual environment. The system uses four unsynchronised cameras as sensors and passive retroreflective markers arranged in rigid bodies as targets. In order to ach...

  16. In-vessel inspection before head removal: TMI II, Phase III (tooling and systems design and verification)

    International Nuclear Information System (INIS)

    Carter, G.S.; Ryan, R.F.; Pieleck, A.W.; Bibb, H.Q.

    1982-09-01

    Under EG&G contract K-9003 with General Public Utilities Corporation, a task order was assigned to Babcock and Wilcox to develop and provide equipment to facilitate early assessment of core damage in the Three Mile Island Unit 2 reactor vessel prior to head removal. Described are the work performed, the equipment developed, and the tests conducted with this equipment on various mockups used to simulate the constraints inside and outside the reactor vessel that affect the performance of the inspection. The tooling developed provides several methods of removing a few control rod drive leadscrews from the reactor, thereby providing paths into which cameras and lights may be inserted to permit video viewing of many potentially damaged areas in the reactor vessel. Tests with the tools, equipment, and cameras demonstrated that these tasks could be accomplished.

  17. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves...

  18. A gamma camera count rate saturation correction method for whole-body planar imaging

    Science.gov (United States)

    Hobbs, Robert F.; Baechler, Sébastien; Senthamizhchelvan, Srinivasan; Prideaux, Andrew R.; Esaias, Caroline E.; Reinhardt, Melvin; Frey, Eric C.; Loeb, David M.; Sgouros, George

    2010-02-01

    Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in Pamphlet No. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. During WB planar (sweep) imaging, however, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count-rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating
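
    As a static-rate illustration of the Newton-style inversion (the paper's algorithm additionally lets the count rate vary in time during the sweep), the classic paralyzable dead-time model m = n·exp(-n·τ) can be inverted for the true rate n as follows; the numerical values are invented:

        import math

        def true_rate_paralyzable(observed_rate, tau, tol=1e-9, max_iter=100):
            """Solve m = n * exp(-n * tau) for n (physical branch n*tau < 1)."""
            n = observed_rate  # good seed when dead-time losses are moderate
            for _ in range(max_iter):
                f = n * math.exp(-n * tau) - observed_rate
                fprime = math.exp(-n * tau) * (1.0 - n * tau)
                step = f / fprime
                n -= step
                if abs(step) < tol:
                    break
            return n

        print(true_rate_paralyzable(observed_rate=9.0e4, tau=2.0e-6))  # ~1.13e5 cps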

  19. Measurement system for high-sensitivity LIBS analysis using ICCD camera in LabVIEW environment

    International Nuclear Information System (INIS)

    Zaytsev, S M; Popov, A M; Zorov, N B; Labutin, T A

    2014-01-01

    A measurement system based on an ultrafast (up to 10 ns time resolution) intensified CCD detector, 'Nanogate-2V' (Nanoscan, Russia), was developed for high-sensitivity analysis by laser-induced breakdown spectrometry (LIBS). The LabVIEW environment provided a high level of compatibility with a variety of electronic instruments and easy development of the user interface, while the Visual Studio environment was used to create a LabVIEW-compatible dll library with the use of the 'Nanogate-2V' SDK. The program for camera management and registration of laser-induced plasma spectra was created using the Call Library Node in LabVIEW. An algorithm for integrating a second device, the 'PCI-9812' ADC (ADLINK), into the measurement system was proposed and successfully implemented. This allowed simultaneous registration of emission and acoustic signals under laser ablation. The measured resolving power of the spectrometer-ICCD system was 12000 at 632 nm. The electron density of the laser plasma was estimated with the use of the H-α Balmer line. Steel spectra obtained at different delays were used to select the optimal conditions for registration of the manganese analytical signal. The capability of accumulating spectra from several laser pulses was demonstrated; the accumulation allowed reliable observation of the silver signal at 328.07 nm in the LIBS spectra of soil (C_Ag = 4.5 ppm). Finally, a correlation between the acoustic and emission signals of the plasma was found. Thus, the technical possibilities of the developed LIBS system were demonstrated both for plasma diagnostics and for analytical measurements.

  20. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating depth from complementary multiviewpoint images captured by the robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  1. TRANSFORMATION ALGORITHM FOR IMAGES OBTAINED BY OMNIDIRECTIONAL CAMERAS

    Directory of Open Access Journals (Sweden)

    V. P. Lazarenko

    2015-01-01

    Full Text Available Omnidirectional optoelectronic systems find their application in areas where a wide viewing angle is critical. However, omnidirectional optoelectronic systems have large distortion, which makes their application more difficult. The paper compares the projection functions of traditional perspective lenses and of omnidirectional wide-angle fish-eye lenses with a viewing angle of not less than 180°. This comparison proves that the distortion models of omnidirectional cameras cannot be described as a deviation from the classic pinhole camera model. To solve this problem, an algorithm for transforming omnidirectional images has been developed. The paper provides a brief comparison of the four calibration methods available in open-source toolkits for omnidirectional optoelectronic systems. The geometrical projection model used for calibration of the omnidirectional optical system is given. The algorithm consists of three basic steps. At the first step, we calculate the field of view of a virtual pinhole PTZ camera; this field of view is characterized by an array of 3D points in the object space. At the second step, the array of pixels corresponding to these three-dimensional points is calculated. We then calculate the projection function that expresses the relation between a given 3D point in the object space and the corresponding pixel. In this paper we use a calibration procedure providing the projection function for the calibrated instance of the camera. At the last step, the final image is formed pixel by pixel from the original omnidirectional image using the calculated array of 3D points and the projection function. The developed algorithm makes it possible to obtain an image for a part of the field of view of an omnidirectional optoelectronic system, with the distortion corrected, from the original omnidirectional image. The algorithm is designed to operate with omnidirectional optoelectronic systems with both catadioptric and fish-eye lenses
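
    The three steps translate directly into code. The sketch below assumes an ideal equidistant fish-eye projection r = f·θ purely for illustration; a real system would substitute the projection function returned by the calibration procedure:

        import numpy as np

        def fisheye_to_pinhole(omni_img, f_virt, out_hw, f_fish, center):
            """Render a virtual pinhole view from an equidistant fisheye image."""
            h, w = out_hw
            u, v = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
            rays = np.stack([u, v, np.full_like(u, f_virt)], axis=-1)   # step 1: 3D rays
            rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
            theta = np.arccos(rays[..., 2])             # angle off the optical axis
            phi = np.arctan2(rays[..., 1], rays[..., 0])
            r = f_fish * theta                           # step 2: projection function
            x = np.clip((center[0] + r * np.cos(phi)).round().astype(int),
                        0, omni_img.shape[1] - 1)
            y = np.clip((center[1] + r * np.sin(phi)).round().astype(int),
                        0, omni_img.shape[0] - 1)
            return omni_img[y, x]                        # step 3: per-pixel sampling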

  2. Video System for Viewing From a Remote or Windowless Cockpit

    Science.gov (United States)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
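
    A minimal version of the cylindrical warping used to merge adjacent views (inverse mapping with nearest-neighbour sampling; f is the focal length in pixels, and the function is a generic sketch rather than the system's actual implementation):

        import numpy as np

        def cylindrical_warp(img, f):
            """Project a perspective image onto a cylinder of radius f pixels."""
            h, w = img.shape[:2]
            yc, xc = h / 2.0, w / 2.0
            ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
            theta = (xs - xc) / f                  # cylinder angle per output column
            x_src = f * np.tan(theta) + xc         # back-project to the flat image
            y_src = (ys - yc) / np.cos(theta) + yc
            ok = (x_src >= 0) & (x_src < w) & (y_src >= 0) & (y_src < h)
            out = np.zeros_like(img)
            out[ok] = img[y_src[ok].astype(int), x_src[ok].astype(int)]
            return out

    After this warp, adjacent camera views ideally differ only by a horizontal shift, which is what makes blending them at their borders into a single mosaic straightforward.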

  3. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  4. Introducing navigation during melanoma-related sentinel lymph node procedures in the head-and-neck region.

    Science.gov (United States)

    KleinJan, Gijs H; Karakullukçu, Baris; Klop, W Martin C; Engelen, Thijs; van den Berg, Nynke S; van Leeuwen, Fijs W B

    2017-08-17

    Intraoperative sentinel node (SN) identification in patients with head-and-neck malignancies can be challenging due to unexpected drainage patterns and anatomical complexity. Here, intraoperative navigation-based guidance technologies may offer a solution. In this study, gamma camera-based freehandSPECT was evaluated in combination with the hybrid tracer ICG-99mTc-nanocolloid. Eight patients with melanoma located in the head-and-neck area were included. Indocyanine green (ICG)-99mTc-nanocolloid was injected preoperatively, after which lymphoscintigraphy and SPECT/CT imaging were performed in order to define the location of the SN(s). FreehandSPECT scans were generated in the operating room using a portable gamma camera. For lesion localization during surgery, freehandSPECT scans were projected in an augmented-reality video view that was used to spatially position a gamma-ray detection probe. Intraoperative fluorescence imaging was used to confirm the accuracy of the navigation-based approach and to identify the exact location of the SNs. Preoperatively, 15 SNs were identified, of which 14 were identified using freehandSPECT. Navigation towards these nodes using the freehandSPECT approach was successful for 13 nodes. Fluorescence imaging provided optical confirmation of the navigation accuracy in all patients. In addition, fluorescence imaging allowed the identification of (clustered) SNs that could not be identified based on navigation alone. The use of gamma camera-based freehandSPECT aids intraoperative lesion identification and, with that, supports the transition from pre- to intraoperative imaging via augmented-reality display and directional guidance.

  5. HST Solar Arrays photographed by Electronic Still Camera

    Science.gov (United States)

    1993-01-01

    This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  6. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  7. Multi-view collimator for scintillation cameras

    International Nuclear Information System (INIS)

    Hatton, J.; Grenier, R.P.

    1979-01-01

    A collimator comprises a block or blocks of radiation-impervious material which define a first plurality of parallel channels, each channel defining a direction of acceptance of radiation from a body. The axes of a second plurality of channels define another direction of acceptance of radiation from the body and intersect the same portion of the body as the axes of the first plurality of channels, thus producing a second view of the body. Where the collimator is built up as a stack of blocks, each pair of adjacent blocks defines a slice of the body which is viewed from the two angles defined by the channels. (UK)

  8. New detection modules for gamma, beta and X-ray cameras

    International Nuclear Information System (INIS)

    Azman, S.; Bolle, E.; Dang, K.Q.; Dang, W.; Dietzel, K.I.; Froberg, T.; Gaarder, P.E.; Gjaerum, J.A.; Haugen, S.H.; Hellum, G.; Henriksen, J.R.; Johanson, T.M.; Kobbevik, A.; Maehlum, G.; Meier, D.; Mikkelsen, S.; Ninive, I.; Oya, P.; Pavlov, N.; Pettersen, D.M.; Sundal, B.M.; Talebi, J.; Yoshioka, K.

    2003-01-01

    Full text: Ideas ASA is developing new detection modules for gamma, beta and X-ray cameras. Recent developments focus on modules using various semiconductor materials (CZT, HgI, Si). The development includes ASIC design, detector module development, and implementation in camera heads. In this presentation we describe the characteristics of the important ASICs and their properties in terms of electronic noise, and the modes for measuring signals (switched current modes, sparsified modes, self-triggered modes). The ASICs are specific to detectors and applications. We describe recent developments using various semiconductor materials, as well as important design aspects for applications in medicine and the life sciences (SPECT, beta, X-ray cameras)

  9. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
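
    The flavour of such a fuzzy controller can be conveyed in a few lines; the membership breakpoints and output rates below are invented, and a real system would add tilt, range, and rate-of-change inputs:

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_pan_rate(err_px):
            """Map horizontal target offset (pixels) to a pan rate (deg/s)."""
            rules = [
                (tri(err_px, -320.0, -160.0, 0.0), -5.0),  # target left  -> pan left
                (tri(err_px, -160.0, 0.0, 160.0),   0.0),  # centred      -> hold
                (tri(err_px, 0.0, 160.0, 320.0),    5.0),  # target right -> pan right
            ]
            num = sum(w * out for w, out in rules)
            den = sum(w for w, _ in rules)
            return num / den if den else 0.0   # weighted-average defuzzification

        print(fuzzy_pan_rate(80.0))  # 2.5 deg/s, between "hold" and "pan right"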

  10. A framework for multi-object tracking over distributed wireless camera networks

    Science.gov (United States)

    Gau, Victor; Hwang, Jenq-Neng

    2010-07-01

    In this paper, we propose a unified framework targeting two important issues in a distributed wireless camera network, i.e., object tracking and network communication, to achieve reliable multi-object tracking over distributed wireless camera networks. In the object tracking part, we propose a fully automated approach for tracking multiple objects across multiple cameras with overlapping and non-overlapping fields of view, without initial training. To effectively exchange tracking information among the distributed cameras, we propose an idle-probability-based broadcasting method, iPro, which adaptively adjusts the broadcast probability to improve broadcast effectiveness in a dense, saturated camera network. Experimental results for multi-object tracking demonstrate the promising performance of our approach on real video sequences for cameras with overlapping and non-overlapping views. The modeling and ns-2 simulation results show that iPro almost reaches the theoretical performance upper bound if cameras are within each other's transmission range. In more general scenarios, e.g., in the presence of hidden node problems, the simulation results show that iPro significantly outperforms standard IEEE 802.11, especially as the number of competing nodes increases.
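
    The abstract does not spell out iPro's update rule, so the toy slotted simulation below only illustrates the general mechanism of steering a broadcast probability from the observed channel-idle fraction; the target idle level, gain, bounds, and initial probability are all invented:

        import random

        def ipro_toy(n_nodes=20, slots=2000, target_idle=0.35, gain=0.05, seed=1):
            """Each node nudges its broadcast probability toward a target idle rate."""
            random.seed(seed)
            p = [0.2] * n_nodes
            successes = 0
            for _ in range(slots):
                senders = [i for i in range(n_nodes) if random.random() < p[i]]
                idle = 1.0 if not senders else 0.0
                successes += len(senders) == 1         # exactly one sender: no collision
                for i in range(n_nodes):               # all nodes hear the same channel
                    p[i] = min(1.0, max(0.01, p[i] + gain * (idle - target_idle)))
            return successes / slots                   # fraction of collision-free slots

        print(ipro_toy())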

  11. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable, and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a position-sensitive photomultiplier tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm intrinsic spatial resolution. Its mobility and light weight allow it to approach the patient from any desired direction. This camera images small organs with high efficiency and thus addresses the demand for devices suited to specific clinical applications. In this paper, we present the camera and briefly describe the procedures that led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  12. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; hide

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  13. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second-order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second-order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved.

  14. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    1975-01-01

    A scintillation camera is described for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area, in which means is provided for second-order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second-order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved.

  15. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera-system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras stand on a hexapod mount that is fully capable of sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts, so the design not only eliminates problems implied by unique elements, but the redundancy of the hexapod also allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself from observed stars, independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed in-house at our institute, Konkoly Observatory. Currently, our instrument is in its testing phase with an operating hexapod and a reduced number of cameras.

  16. New nuclear medicine gamma camera systems

    International Nuclear Information System (INIS)

    Villacorta, Edmundo V.

    1997-01-01

    The acquisition of the Open E.CAM and DIACAM gamma cameras by Makati Medical Center is expected to enhance the capabilities of its nuclear medicine facilities. When used as an aid to diagnosis, nuclear medicine entails the introduction of a minute amount of radioactive material into the patient; thus, no reaction or side-effect is expected. When it reaches the particular target organ, depending on the radiopharmaceutical, a lesion will appear as a decreased (cold) area or an increased (hot) area in the radioactive distribution as recorded by the gamma cameras. Gamma camera images in slices, or SPECT (Single Photon Emission Computed Tomography), increase the sensitivity and accuracy of detecting smaller and deeply seated lesions which might otherwise escape detection in regular single planar images. Due to the 'open' design of the equipment, claustrophobic patients will no longer feel enclosed during the procedure. These new gamma cameras yield improved resolution and superb image quality, and their higher photon sensitivity shortens image acquisition time. The E.CAM, which is the latest-generation gamma camera, features a variable-angle dual-head system, the only one available in the Philippines, and is an excellent choice for myocardial perfusion imaging (MPI). From the usual 45 minutes, the acquisition time for gated SPECT imaging of the heart has now been remarkably reduced to 12 minutes. 'Gated' refers to snapshots of the heart in selected phases of its contraction and relaxation as triggered by the ECG. The DIACAM is installed in a room with access from outside the main entrance of the department, intended especially for bed-borne patients. Both systems are equipped with a network of high-performance Macintosh ICON acquisition and processing computers. Added to the hardware is the ICON processing software, which allows simultaneous acquisition and processing in the same operator's terminal. Video film and color printers are also provided. Together

  17. Motion Sickness When Driving With a Head-Slaved Camera System

    Science.gov (United States)

    2003-02-01


  18. Novel computer-based endoscopic camera

    Science.gov (United States)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization, by reducing overexposed glared areas, brightening dark areas, and accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host medium via a network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  19. Distribution and Parameter's Calculations of Television Cameras Inside a Nuclear Facility

    International Nuclear Information System (INIS)

    El-kafas, A.A.

    2009-01-01

    In this work, the distribution of television cameras and the calculation of their parameters inside and outside a nuclear facility are presented. The exterior and interior camera systems are each described and explained, and the overall closed-circuit television system is shown. Fixed and moving cameras with various lens formats and different angles of view are used. The calculations of the width of the image sensor's sensitive area and of the lens focal length for the cameras are introduced. The work shows the camera locations and distributions inside and outside the nuclear facility. The technical specifications and parameters for camera selection are tabulated.
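
    For the lens-parameter calculations mentioned, the standard relation between sensor width w, focal length f, and horizontal angle of view is f = w / (2·tan(FOV/2)); the sensor size and corridor angle in the example below are illustrative values only:

        import math

        def focal_length_mm(sensor_width_mm, horizontal_fov_deg):
            """Lens focal length needed for a required horizontal angle of view."""
            half = math.radians(horizontal_fov_deg) / 2.0
            return sensor_width_mm / (2.0 * math.tan(half))

        # A 1/3-inch sensor (4.8 mm wide) covering a 40 degree corridor view:
        print(round(focal_length_mm(4.8, 40.0), 2))  # ~6.59 mm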

  20. Multi-person tracking with overlapping cameras in complex, dynamic environments

    NARCIS (Netherlands)

    Liem, M.; Gavrila, D.M.

    2009-01-01

    This paper presents a multi-camera system to track multiple persons in complex, dynamic environments. Position measurements are obtained by carving out the space defined by foreground regions in the overlapping camera views and projecting these onto blobs on the ground plane. Person appearance is

  1. The NIKA2 Large Field-of-View Millimeter Continuum Camera for the 30-M IRAM Telescope

    Science.gov (United States)

    Monfardini, Alessandro

    2018-01-01

    We have constructed and deployed a multi-thousand-pixel dual-band (150 and 260 GHz, respectively 2 mm and 1.15 mm wavelengths) camera imaging an instantaneous field of view of 6.5 arcmin and configurable to map the linear polarization at 260 GHz. We provide a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), focusing in particular on the cryogenics, the optics, the focal-plane arrays based on kinetic inductance detectors (KID), and the readout electronics. We present the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30-meter IRAM (Institute of Millimetric Radio Astronomy) telescope at Pico Veleta, together with preliminary science-grade results.

  2. Video Analysis Verification of Head Impact Events Measured by Wearable Sensors.

    Science.gov (United States)

    Cortes, Nelson; Lincoln, Andrew E; Myer, Gregory D; Hepburn, Lisa; Higgins, Michael; Putukian, Margot; Caswell, Shane V

    2017-08-01

    Wearable sensors are increasingly used to quantify the frequency and magnitude of head impact events in multiple sports, but there is a paucity of evidence verifying the head impact events they record. The purpose of this study was to use video analysis to verify head impact events recorded by wearable sensors and to describe their frequency and magnitude. Cohort study (diagnosis); Level of evidence, 2. Thirty male (mean age, 16.6 ± 1.2 years; mean height, 1.77 ± 0.06 m; mean weight, 73.4 ± 12.2 kg) and 35 female (mean age, 16.2 ± 1.3 years; mean height, 1.66 ± 0.05 m; mean weight, 61.2 ± 6.4 kg) players volunteered to participate in this study during the 2014 and 2015 lacrosse seasons. Participants were instrumented with GForceTracker (GFT; boys) and X-Patch sensors (girls). Simultaneous game video was recorded by a trained videographer using a single camera located at the highest midfield location; one-third of the field was framed and panned to follow the ball during games. Videographic and accelerometer data were time synchronized. Head impact counts were compared with the video recordings and deemed valid if (1) the linear acceleration was ≥20 g, (2) the player was identified on the field, (3) the player was in camera view, and (4) the head impact mechanism could be clearly identified. Descriptive statistics of peak linear acceleration (PLA) and peak rotational velocity (PRV) were calculated for all verified head impacts ≥20 g. For the boys, a total of 1063 impacts (2014: n = 545; 2015: n = 518) were logged by the GFT between game start and end times (mean PLA, 46 ± 31 g; mean PRV, 1093 ± 661 deg/s) during 368 player-games. Of these impacts, 690 were verified via video analysis (65%; mean PLA, 48 ± 34 g; mean PRV, 1242 ± 617 deg/s). The X-Patch sensors, worn by the girls, recorded a total of 180 impacts during the games, of which 58 (2014: n = 33; 2015: n = 25) were verified via video analysis (32%; mean PLA, 39 ± 21 g; mean PRV, 1664
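
    The verification logic reduces to a filter over time-synchronized event streams. The sketch below encodes the study's ≥20 g criterion, while the 1 s matching window and all data values are invented, and the on-field/in-view/mechanism judgments are assumed to happen upstream of this function:

        def verified_impacts(sensor_events, video_events, g_min=20.0, window_s=1.0):
            """Keep sensor impacts exceeding g_min that match a video-confirmed event."""
            return [(t, g) for t, g in sensor_events
                    if g >= g_min and any(abs(t - tv) <= window_s for tv in video_events)]

        sensor = [(12.0, 25.0), (40.5, 18.0), (63.2, 51.0)]  # (time s, peak linear g)
        video = [11.6, 63.0]                                  # video-confirmed impact times
        print(verified_impacts(sensor, video))  # [(12.0, 25.0), (63.2, 51.0)]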

  3. REFLECTANCE CALIBRATION SCHEME FOR AIRBORNE FRAME CAMERA IMAGES

    Directory of Open Access Journals (Sweden)

    U. Beisl

    2012-07-01

    Full Text Available The image quality of photogrammetric images is influenced by various effects from outside the camera. One effect is scattered light from the atmosphere, which lowers contrast in the images and creates a colour shift towards the blue. Another is the changing illumination during the day, which results in changing image brightness within an image block. In addition, the so-called bidirectional reflectance distribution function (BRDF) of the ground gives rise to a view- and sun-angle-dependent brightness gradient within the image itself. To correct for the first two effects, an atmospheric correction with reflectance calibration is chosen. These effects have been corrected successfully for ADS line-scan sensor data by using a parametrization of the atmospheric quantities. Following Kaufman et al., the actual atmospheric condition is estimated from the brightness of a dark pixel taken from the image. The BRDF effects are corrected using semi-empirical modelling of the brightness gradient. Both methods are now extended to frame cameras. Line-scan sensors have a viewing geometry that depends only on the cross-track view zenith angle; the difference for frame cameras is the need to include the extra dimension of the view azimuth in the modelling. Since both the atmospheric correction and the BRDF correction require a model inversion with the help of image data, a different image sampling strategy is necessary, one which includes the azimuth-angle dependence. For the atmospheric correction, a sixth variable is added to the existing five (visibility, view zenith angle, sun zenith angle, ground altitude, and flight altitude), thus multiplying the number of modelling input combinations for the offline inversion. The parametrization has to reflect the view azimuth-angle dependence. The BRDF model already contains the view azimuth dependence and is combined with a new sampling strategy.

  4. Fall detection in the elderly by head-tracking

    OpenAIRE

    Yu, Miao; Naqvi, Syed Mohsen; Chambers, Jonathan

    2009-01-01

    In the paper, we propose a fall detection method based on head tracking within a smart home environment equipped with video cameras. A motion history image and codebook background subtraction are combined to determine whether large movement occurs within the scene. Based on the magnitude of the movement information, particle filters with different state models are used to track the head. The head tracking procedure is performed in two video streams taken by two separate cameras and three-dimension...

  5. View of the moving head of the gantry machine and the working area containing the supply and assembly platforms (trays in green).

    CERN Multimedia

    Alan Honma

    1999-01-01

    The robotic assembly machine consists of the gantry positioning system outfitted with pickup tooling heads and a camera+microscope for accurate position measurements. The procedure is to place the components on the working platforms; the machine applies glue, then picks and places the silicon sensors and front-end hybrids onto the frames. The components are held in place by vacuum to prevent movement until the glue has cured. Up to four modules can be assembled at one time. The platforms are removable, allowing assembly to continue on a new set of modules.

  6. Breast-specific gamma-imaging: molecular imaging of the breast using 99mTc-sestamibi and a small-field-of-view gamma-camera.

    Science.gov (United States)

    Jones, Elizabeth A; Phan, Trinh D; Blanchard, Deborah A; Miley, Abbe

    2009-12-01

    Breast-specific gamma-imaging (BSGI), also known as molecular breast imaging, is breast scintigraphy using a small-field-of-view gamma-camera and (99m)Tc-sestamibi. There are many different types of breast cancer, and many have characteristics making them challenging to detect by mammography and ultrasound. BSGI is a cost-effective, highly sensitive and specific technique that complements other imaging modalities currently being used to identify malignant lesions in the breast. Using the current Society of Nuclear Medicine guidelines for breast scintigraphy, Legacy Good Samaritan Hospital began conducting BSGI, breast scintigraphy with a breast-optimized gamma-camera. In our experience, optimal imaging has been conducted in the Breast Center by a nuclear medicine technologist. In addition, the breast radiologists read the BSGI images in correlation with the mammograms, ultrasounds, and other imaging studies performed. By modifying the current Society of Nuclear Medicine protocol to adapt it to the practice of breast scintigraphy with these new systems and by providing image interpretation in conjunction with the other breast imaging studies, our center has found BSGI to be a valuable adjunctive procedure in the diagnosis of breast cancer. The development of a small-field-of-view gamma-camera, designed to optimize breast imaging, has resulted in improved detection capabilities, particularly for lesions less than 1 cm. Our experience with this procedure has proven to aid in the clinical work-up of many of our breast patients. After reading this article, the reader should understand the history of breast scintigraphy, the pharmaceutical used, patient preparation and positioning, imaging protocol guidelines, clinical indications, and the role of breast scintigraphy in breast cancer diagnosis.

  7. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    Directory of Open Access Journals (Sweden)

    Neil A Switz

    Full Text Available The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  8. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    Science.gov (United States)

    Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.
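
    The reversed-lens design exploits a simple relay-optics fact: a lens reversed in front of an identical camera lens forms a relay with magnification of roughly f_camera/f_reversed, so two identical phone lenses image at about 1:1 and the object-plane field of view approaches the full sensor size. A minimal Python sketch of that arithmetic follows; the focal lengths and sensor dimensions are illustrative assumptions, not values from the paper.

        # Field-of-view estimate for a reversed-lens phone microscope.
        # Assumes thin-lens behaviour; magnification of the two-lens relay
        # is approximately f_camera / f_reversed.

        def reversed_lens_fov(sensor_w_mm, sensor_h_mm, f_camera_mm, f_reversed_mm):
            """Return the object-plane field of view (mm) of the relay."""
            m = f_camera_mm / f_reversed_mm
            return sensor_w_mm / m, sensor_h_mm / m

        # Two identical ~4 mm phone lenses give ~1:1 imaging, so the usable
        # object field is the whole sensor rather than a cropped centre.
        print(reversed_lens_fov(4.54, 3.42, 4.0, 4.0))   # -> (4.54, 3.42)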

  9. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    Science.gov (United States)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
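
    The abstract does not spell out the assignment logic, but its stated goal (uninterrupted coverage of all targets with a minimal number of sensors, subject to criteria such as distance and occlusion) has a natural greedy reading. The following Python sketch is a simplified stand-in under those assumptions, not the ACT-Vision algorithm itself.

        # Greedy camera-to-target assignment: each target gets the nearest
        # unoccluded PTZ camera. Names and the cost model are assumptions.
        import math

        def assign_targets(cameras, targets, occluded):
            """cameras/targets: dicts of id -> (x, y); occluded: set of
            (camera_id, target_id) pairs. Returns {target_id: camera_id}."""
            assignment = {}
            for tgt, tpos in targets.items():
                best, best_d = None, math.inf
                for cam, cpos in cameras.items():
                    if (cam, tgt) in occluded:
                        continue
                    d = math.dist(cpos, tpos)
                    if d < best_d:
                        best, best_d = cam, d
                if best is not None:
                    assignment[tgt] = best
            return assignment

        cams = {"ptz1": (0, 0), "ptz2": (50, 0)}
        tgts = {"person_a": (10, 5), "vehicle_b": (45, 8)}
        print(assign_targets(cams, tgts, occluded={("ptz1", "vehicle_b")}))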

  10. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible) and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the
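
    The abstract is cut off mid-sentence while describing the calibration model. Two-parameter radial fish-eye models of this family are commonly written as linking the image radius r to the angle θ between the incoming ray and the optical axis, for example θ = a·r/(1 + b·r²); treat that form, and the toy coefficients below, as assumptions rather than the paper's calibrated values. A minimal Python sketch:

        import math

        def ray_angle(r_px, a, b):
            """Angle (rad) between the optical axis and the ray of an image
            point at radius r_px from the centre, under the assumed model."""
            return a * r_px / (1.0 + b * r_px ** 2)

        # With the ~1600 px circular view mentioned above, the rim (r = 800)
        # should map to roughly 90 degrees for a 180-degree convertor.
        a, b = 0.00225, 2.0e-7   # illustrative, not calibrated, values
        print(math.degrees(ray_angle(800, a, b)))   # ~91 degrees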

  11. Face Liveness Detection Using a Light Field Camera

    Directory of Open Access Journals (Sweden)

    Sooyeon Kim

    2014-11-01

    Full Text Available A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized, from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending against spoofing face attacks, like printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from a conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves between 94.78% and 99.36% accuracy, depending on the type of spoofing attack.

  12. Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View.

    Science.gov (United States)

    Bambach, Sven; Crandall, David J; Yu, Chen

    2015-11-01

    Wearable devices are becoming part of everyday life, from first-person cameras (GoPro, Google Glass), to smart watches (Apple Watch), to activity trackers (FitBit). These devices are often equipped with advanced sensors that gather data about the wearer and the environment. These sensors enable new ways of recognizing and analyzing the wearer's everyday personal activities, which could be used for intelligent human-computer interfaces and other applications. We explore one possible application by investigating how egocentric video data collected from head-mounted cameras can be used to recognize social activities between two interacting partners (e.g. playing chess or cards). In particular, we demonstrate that just the positions and poses of hands within the first-person view are highly informative for activity recognition, and present a computer vision approach that detects hands to automatically estimate activities. While hand pose detection is imperfect, we show that combining evidence across first-person views from the two social partners significantly improves activity recognition accuracy. This result highlights how integrating weak but complementary sources of evidence from social partners engaged in the same task can help to recognize the nature of their interaction.
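
    The combination step can be pictured as late fusion of weak per-view classifiers: each partner's head camera yields a class distribution over candidate activities, and the two are multiplied and renormalised. The Python sketch below illustrates the idea; the function names and toy probabilities are illustrative and not taken from the paper.

        # Late fusion of activity evidence from two first-person views.
        def fuse(probs_view_a, probs_view_b):
            """Multiply per-class probabilities from two views, renormalise."""
            fused = {c: probs_view_a[c] * probs_view_b[c] for c in probs_view_a}
            z = sum(fused.values())
            return {c: p / z for c, p in fused.items()}

        a = {"chess": 0.45, "cards": 0.40, "puzzle": 0.15}   # weak, view A
        b = {"chess": 0.50, "cards": 0.25, "puzzle": 0.25}   # weak, view B
        print(fuse(a, b))   # chess becomes a much clearer winner after fusion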

  13. The findings of F-18 FDG camera-based coincidence PET in acute leukemia

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, S. N.; Joh, C. W.; Lee, M. H. [Ajou University School of Medicine, Suwon (Korea, Republic of)

    2002-07-01

    We evaluated the usefulness of F-18 FDG coincidence PET (CoDe-PET) using a dual-head gamma camera in the assessment of patients with acute leukemia. F-18 FDG CoDe-PET studies were performed in 8 patients with acute leukemia (6 ALL and 2 AML) before or after treatment. CoDe-PET was performed utilizing a dual-head gamma camera equipped with a 5/8 inch NaI(Tl) crystal. Image acquisition began 60 minutes after the injection of F-18 FDG in the fasting state. The whole trunk from cervical to inguinal regions or a selected region was scanned. No attenuation correction was made and image reconstruction was done using filtered back-projection. CoDe-PET studies were evaluated visually. F-18 FDG images performed in 5 patients with ALL before therapy depicted multiple lymph node involvement and diffuse increased uptake involving the axial skeleton, pelvis and femurs. F-18 FDG images done in 2 patients with AML after chemotherapy showed only diffuse increased uptake in the sternum, ribs, spine, pelvis and proximal femur, and these may be due to a G-CSF stimulation effect in view of the drug history. But bone marrow histology showed scattered blast cells suggesting incomplete remission in one and complete remission in the other. The F-18 FDG image done in 1 patient with ALL after therapy showed no abnormal uptake. CoDe-PET with F-18 FDG in acute lymphoblastic leukemia showed multiple lymph node and bone marrow involvement throughout the body. We therefore conclude that CoDe-PET with F-18 FDG is useful for evaluating the extent of disease in acute lymphoblastic leukemia. However, there was a limitation in assessing therapy effectiveness during treatment due to reactive bone marrow.

  14. [Evaluation of Iris Morphology Viewed through Stromal Edematous Corneas by Infrared Camera].

    Science.gov (United States)

    Kobayashi, Masaaki; Morishige, Naoyuki; Morita, Yukiko; Yamada, Naoyuki; Kobayashi, Motomi; Sonoda, Koh-Hei

    2016-02-01

    We previously reported that applying an infrared camera enables observation of iris morphology through edematous corneas in Peters' anomaly. The aim of this study was to observe iris morphology in bullous keratopathy or failed grafts with an infrared camera. Eleven subjects with bullous keratopathy or failed grafts (6 men and 5 women, mean age ± SD: 72.7 ± 13.0 years) were enrolled in this study. The iris morphology was observed using the visible-light mode and near-infrared mode of an infrared camera (MeibomPen). The detectability of pupil shapes, iris patterns and the presence of iridectomy was evaluated. Infrared mode observation enabled us to detect the pupil shape in 11 out of 11 cases, the iris pattern in 3 out of 11 cases, and the presence of iridectomy in 9 out of 11 cases, although visible light mode observation could not detect any iris morphological changes. Applying infrared optics was valuable for observation of the iris morphology through stromal edematous corneas.

  15. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm, by the camera software itself, and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
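
    A projection-profile signature of the kind described can be reproduced in a few lines: sum the grayscale crop along rows and columns (one value per scan line), normalise, and compare signatures by normalised correlation. The NumPy sketch below follows that reading; the crop sizes and noise level are assumptions.

        import numpy as np

        def signature(img):
            """Concatenate normalised horizontal and vertical projections."""
            h = img.sum(axis=1).astype(float)   # one value per row
            v = img.sum(axis=0).astype(float)   # one value per column
            return np.concatenate([h / h.sum(), v / v.sum()])

        def match_score(sig1, sig2):
            """Normalised correlation in [-1, 1]; higher = same vehicle."""
            a, b = sig1 - sig1.mean(), sig2 - sig2.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        cam1 = np.random.default_rng(0).integers(0, 255, (64, 96))
        cam2 = np.clip(cam1 + np.random.default_rng(1).normal(0, 8, cam1.shape),
                       0, 255)
        print(match_score(signature(cam1), signature(cam2)))   # close to 1.0

    Because a signature is only width + height values instead of width × height pixels, transmitting it in place of the image is what relaxes the data-link requirements.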

  16. An electronic pan/tilt/magnify and rotate camera system

    International Nuclear Information System (INIS)

    Zimmermann, S.; Martin, H.L.

    1992-01-01

    A new camera system has been developed for omnidirectional image-viewing applications that provides pan, tilt, magnify, and rotational orientation within a hemispherical field of view (FOV) without any moving parts. The imaging device is based on the fact that the image from a fish-eye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fish-eye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment. As a result, this device can accomplish the functions of pan, tilt, rotation, and magnification throughout a hemispherical FOV without the need for any mechanical devices. Multiple images, each with different image magnifications and pan-tilt-rotate parameters, can be obtained from a single camera.
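
    The core of such a transform is a per-pixel mapping: for every pixel of the desired (pan, tilt, zoom) view, compute the corresponding viewing ray and look up where that ray lands in the fish-eye image. The Python sketch below assumes an ideal equidistant fish-eye (image radius proportional to ray angle) and nearest-neighbour sampling; the patented device's exact transform and interpolation may differ.

        import numpy as np

        def dewarp(fish, pan, tilt, fov_deg, out_size=200):
            """Render a virtual pinhole view from a circular fish-eye image."""
            H, W = fish.shape[:2]
            cx, cy, rmax = W / 2, H / 2, min(W, H) / 2
            f = (out_size / 2) / np.tan(np.radians(fov_deg) / 2)  # pinhole focal
            u, v = np.meshgrid(np.arange(out_size) - out_size / 2,
                               np.arange(out_size) - out_size / 2)
            rays = np.stack([u, v, np.full(u.shape, f)], axis=-1)
            rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
            cp, sp, ct, st = np.cos(pan), np.sin(pan), np.cos(tilt), np.sin(tilt)
            Rp = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pan
            Rt = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])   # tilt
            d = rays @ (Rp @ Rt).T
            theta = np.arccos(np.clip(d[..., 2], -1, 1))   # angle from axis
            phi = np.arctan2(d[..., 1], d[..., 0])
            r = rmax * theta / (np.pi / 2)                 # equidistant model
            xs = np.clip((cx + r * np.cos(phi)).astype(int), 0, W - 1)
            ys = np.clip((cy + r * np.sin(phi)).astype(int), 0, H - 1)
            return fish[ys, xs]

        fisheye = np.zeros((480, 480), dtype=np.uint8)     # stand-in frame
        view = dewarp(fisheye, pan=np.radians(30), tilt=np.radians(10),
                      fov_deg=40)
        print(view.shape)                                  # (200, 200)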

  17. Fuel handling system of nuclear reactor plants

    International Nuclear Information System (INIS)

    Faulstich, D.L.

    1991-01-01

    This patent describes a fuel handling system for nuclear reactor plants comprising a reactor vessel having an openable top and removable cover for refueling and containing therein, submerged in coolant water substantially filling the reactor vessel, a fuel core including a multiplicity of fuel bundles formed of groups of sealed tube elements enclosing fissionable fuel assembled into units. It comprises a fuel bundle handling platform moveable over the open top of the reactor vessel; a fuel bundle handling mast extendable downward from the platform with a lower end projecting into the open top reactor vessel to the fuel core submerged in water; a grapple head mounted on the lower end of the mast provided with grappling hook means for attaching to and transporting fuel bundles into and out from the fuel core; and a camera with a prismatic viewing head surrounded by a radiation-resisting quartz cylinder and enclosed within the grapple head, which is provided with at least three windows, with at least two windows provided with an angled surface for aiming the camera prismatic viewing head in different directions and thereby viewing the fuel bundles of the fuel core from different perspectives, and having a cable connecting the camera with a viewing monitor located above the reactor vessel for observing the fuel bundles of the fuel core and for enabling aiming of the camera prismatic viewing head through the windows by an operator.

  18. Self-luminous event photography with the Marco M-4 image converter camera system

    International Nuclear Information System (INIS)

    Meyer, T.O.

    1980-02-01

    The camera system is shown to be applicable to self-luminous events, such as the flasher-gap-enhanced shock waves which are depicted. Successive photographs of the detonation wavefront progressing across a disc of high explosive (PBX 9404) are used for the determination of detonation velocity. Time intervals between film exposures for the four individual camera heads are easily measured and may extend from a few nanoseconds to ten milliseconds.

  19. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... When the image slices are reassembled by computer software, the result is a very detailed multidimensional view ...

  20. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in domains such as surveillance or smart environments. This paper presents the development of a multiple-camera setup with a joint view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among the different views. The computationally expensive parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time traversing the TCP/IP stack, for both the software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces the network latency by up to a factor of 100 compared to the software ORB.

  1. Improving head and body pose estimation through semi-supervised manifold alignment

    KAUST Repository

    Heili, Alexandre

    2014-10-27

    In this paper, we explore the use of a semi-supervised manifold alignment method for domain adaptation in the context of human body and head pose estimation in videos. We build upon an existing state-of-the-art system that leverages external labelled datasets for the body and head features, and the unlabelled test data with weak velocity labels, to do a coupled estimation of the body and head pose. While this previous approach showed promising results, the learning of the underlying manifold structure of the features in the train and target data, and the need to align them, were not explored, despite the fact that the pose features between two datasets may vary according to the scene, e.g. due to a different camera point of view or perspective. In this paper, we propose to use a semi-supervised manifold alignment method to bring the train and target samples closer within the resulting embedded space. To this end, we consider an adaptation set from the target data and rely on (weak) labels, given for example by the velocity direction whenever they are reliable. These labels, along with the training labels, are used to bias the manifold distance within each manifold and to establish correspondences for alignment.

  2. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12 bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16 bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  3. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth, which makes them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real-time, with the proper information about the place that is now in the camera view.

  4. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  5. Digital quality control of the camera computer interface

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.

    1983-01-01

    A brief description is given of how the gamma camera-computer interface works and what kind of errors can occur. Quality control tests of the interface are then described which include 1) tests of static performance e.g. uniformity, linearity, 2) tests of dynamic performance e.g. basic timing, interface count-rate, system count-rate, 3) tests of special functions e.g. gated acquisition, 4) tests of the gamma camera head, and 5) tests of the computer software. The tests described are mainly acceptance and routine tests. Many of the tests discussed are those recommended by an IAEA Advisory Group for inclusion in the IAEA control schedules for nuclear medicine instrumentation. (U.K.)

  6. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  7. Reconstruction of data for an experiment using multi-gap spark chambers with six-camera optics

    International Nuclear Information System (INIS)

    Maybury, R.; Daley, H.M.

    1983-06-01

    A program has been developed to reconstruct spark positions in a pair of multi-gap optical spark chambers viewed by six cameras, which were used by a Rutherford Laboratory experiment. The procedure for correlating camera views to calculate spark positions is described. Calibration of the apparatus, and the application of time- and intensity-dependent corrections are discussed. (author)

  8. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001). Nonetheless, our findings support the use of video cameras in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  9. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, little stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible-light and gamma sources. The experimental results show that the measurement error is about 3%.
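
    The key step, estimating a planar homography between the vision-camera view of the calibration pattern and the radiation-camera coordinates from point correspondences, can be sketched with OpenCV. The correspondences below are invented for illustration; the paper's actual pattern geometry is not reproduced here.

        import numpy as np
        import cv2

        # Pattern corners seen by the vision camera (pixels) ...
        vis_pts = np.array([[100, 100], [500, 110], [495, 400], [105, 390]],
                           dtype=np.float32)
        # ... and the same physical corners in radiation-camera coordinates.
        rad_pts = np.array([[20, 22], [108, 25], [105, 88], [22, 86]],
                           dtype=np.float32)

        H, _ = cv2.findHomography(vis_pts, rad_pts)

        def to_radiation_view(pt, H):
            """Map one vision-camera pixel into radiation-camera coordinates."""
            p = H @ np.array([pt[0], pt[1], 1.0])
            return p[:2] / p[2]

        print(to_radiation_view((300, 250), H))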

  10. Intelligent viewing control for robotic and automation systems

    Science.gov (United States)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide the capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated video-graphic single-screen user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  11. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes so that the phototubes can be positioned as close to the scintillator as is possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras.
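
    The position arithmetic such circuitry classically implements is Anger logic: an intensity-weighted centroid of the phototube outputs. A Python sketch with an invented 3×3 tube layout follows; the patent's actual circuit details are not reproduced here.

        def anger_centroid(tube_positions, tube_signals):
            """Estimate the scintillation event (x, y) from PMT outputs."""
            total = sum(tube_signals)
            x = sum(p[0] * s for p, s in zip(tube_positions, tube_signals)) / total
            y = sum(p[1] * s for p, s in zip(tube_positions, tube_signals)) / total
            return x, y

        # A 3x3 patch of tubes on a 50 mm pitch; the event lands near the
        # centre tube, which carries the strongest signal.
        pos = [(dx * 50, dy * 50) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        sig = [3, 12, 3, 11, 40, 10, 2, 9, 2]
        print(anger_centroid(pos, sig))   # close to (0, 0)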

  12. Physical assessment of the GE/CGR Neurocam and comparison with a single rotating gamma-camera

    International Nuclear Information System (INIS)

    Kouris, K.; Jarritt, P.H.; Costa, D.C.; Ell, P.J.

    1992-01-01

    The GE/CGR Neurocam is a triple-headed single photon emission tomography (SPET) system dedicated to multi-slice brain tomography. We have assessed its physical performance in terms of sensitivity and resolution, and its clinical efficacy in comparison with a modern, single, rotating gamma-camera (GE 400XCT). Using a water-filled cylinder containing Tc-99m, the tomographic volume sensitivity of the Neurocam was 30.0 and 50.7 kcps/MBq.ml.cm for the high-resolution and general-purpose collimators, respectively; the corresponding values for the single rotating camera were 7.6 and 12.8 kcps/MBq.ml.cm. Tomographic resolution was measured in air and in water. In air, the Neurocam resolution at the centre of the field-of-view is 9.0 and 10.7 mm full width at half-maximum (FWHM) with the two collimators, respectively, and is isotropic in the three orthogonal planes; the resolution of the GE 400XCT with its 13-cm radius of rotation is 10.3 and 11.7 mm, respectively. For the Neurocam with the HR collimator, the transaxial FWHM values in water were 9.7 mm at the centre and 9.5 mm radial (6.6 mm tangential) at 8 cm from the centre. The physical characteristics of the Neurocam enable the routine acquisition of brain perfusion data with Tc-99m hexamethyl-propylene amine oxime in about 14 min, yielding better image quality than with a single rotating camera in 40 min. (orig./HP)

  13. Miniature CCD X-Ray Imaging Camera Technology Final Report CRADA No. TC-773-94

    Energy Technology Data Exchange (ETDEWEB)

    Conder, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mummolo, F. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-10-19

    The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.

  14. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  15. ROBUST PERSON TRACKING WITH MULTIPLE NON-OVERLAPPING CAMERAS IN AN OUTDOOR ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    S. Hellwig

    2012-07-01

    Full Text Available The aim of our work is to combine multiple cameras for robust tracking of persons in an outdoor environment. Although surveillance is a well-established field, many algorithms apply various constraints, like overlapping fields of view or precise calibration of the cameras, to improve results. Applying these systems in a realistic outdoor environment is often difficult. Our aim is to be largely independent of the camera setup and the observed scene, in order to use existing cameras. Our algorithm therefore needs to be capable of working with both overlapping and non-overlapping fields of view. We propose an algorithm that allows flexible combination of different static cameras with varying properties. Another requirement of a practical application is that the algorithm is able to work online. Our system is able to process the data during runtime and to provide results immediately. In addition to seeking flexibility in the camera setup, we present a specific approach that combines state-of-the-art algorithms in order to be robust to environmental influences. We present results that indicate good performance of our introduced algorithm in different scenarios. We show its robustness to different types of image artifacts. In addition we demonstrate that our algorithm is able to match persons between cameras in a non-overlapping scenario.

  16. User-assisted visual search and tracking across distributed multi-camera networks

    Science.gov (United States)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  17. Radiation-resistant optical sensors and cameras; Strahlungsresistente optische Sensoren und Kameras

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, G. [Imaging and Sensing Technology, Bonn (Germany)

    2008-02-15

    Introducing video technology, i.e. 'TV', specifically in the nuclear field was considered at an early stage. Possibilities to view spaces in nuclear facilities by means of radiation-resistant optical sensors or cameras are presented. These systems are to enable operators to monitor and control visually the processes occurring within such spaces. Camera systems are used, e.g., for remote surveillance of critical components in nuclear power plants and nuclear facilities, and thus contribute also to plant safety. A different application of optical systems resistant to radiation is in the visual inspection of, e.g., reactor pressure vessels and in tracing small parts inside a reactor. Camera systems are also employed in remote disassembly of radioactively contaminated old plants. Unfortunately, the niche market of radiation-resistant camera systems hardly gives rise to the expectation of research funds becoming available for the development of new radiation-resistant optical systems for picture taking and viewing. Current efforts are devoted mainly to improvements of image evaluation and image quality. Other items on the agendas of manufacturers are the reduction in camera size, which is limited by the size of picture tubes, and the increased use of commercial CCD cameras together with adequate shieldings or improved lenses. Consideration is also being given to the use of periphery equipment and to data transmission by LAN, WAN, or Internet links to remote locations. (orig.)

  18. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... Computed tomography (CT) of the head uses special x-ray equipment ...

  19. Lensless imaging for wide field of view

    Science.gov (United States)

    Nagahara, Hajime; Yagi, Yasushi

    2015-02-01

    It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in the field of wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches for achieving wide FOV, such as attaching a fisheye lens and convex mirrors, require a trade-off between optics size and the FOV. We propose camera optics that achieve a wide FOV, and are at the same time small and lightweight. The proposed optics are a completely lensless and catoptric design. They contain four mirrors, two for wide viewing, and two for focusing the image on the camera sensor. The proposed optics are simple and can be simply miniaturized, since we use only mirrors for the proposed optics and the optics are not susceptible to chromatic aberration. We have implemented the prototype optics of our lensless concept. We have attached the optics to commercial charge-coupled device/complementary metal oxide semiconductor cameras and conducted experiments to evaluate the feasibility of our proposed optics.

  20. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
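
    For intuition about the objective, coverage-style placement can be approximated greedily: repeatedly pick the candidate camera pose that covers the most still-uncovered sample points. The Python sketch below is that greedy stand-in with made-up visibility sets; it is explicitly not the paper's convex BQP solver, which the authors show gives higher-quality solutions.

        def greedy_placement(candidates, budget):
            """candidates: dict pose_id -> set of covered sample points."""
            chosen, covered = [], set()
            for _ in range(budget):
                best = max(candidates,
                           key=lambda c: len(candidates[c] - covered),
                           default=None)
                if best is None or not candidates[best] - covered:
                    break
                chosen.append(best)
                covered |= candidates.pop(best)
            return chosen, covered

        cands = {"door_cam": {1, 2, 3},
                 "hall_cam": {3, 4, 5, 6},
                 "lift_cam": {6, 7}}
        print(greedy_placement(cands, budget=2))   # hall_cam, then door_cam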

  1. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in one camera has a different shape in another camera, which is a critical issue for wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  2. Performance tests of two portable mini gamma cameras for medical applications

    International Nuclear Information System (INIS)

    Sanchez, F.; Fernandez, M. M.; Gimenez, M.; Benlloch, J. M.; Rodriguez-Alvarez, M. J.; Garcia de Quiros, F.; Lerche, Ch. W.; Pavon, N.; Palazon, J. A.; Martinez, J.; Sebastia, A.

    2006-01-01

    We have developed two prototypes of portable gamma cameras for medical applications, based on a previous prototype designed and tested by our group. These cameras use a CsI(Na) continuous scintillation crystal coupled to the new flat-panel-type multianode position-sensitive photomultiplier tube, H8500 from Hamamatsu Photonics. One of the prototypes, mainly intended for intrasurgical use, has a field of view of 44×44 mm² and weighs 1.2 kg. Its intrinsic resolution is better than 1.5 mm and its energy resolution is about 13% at 140 keV. The second prototype, mainly intended for osteological, renal, mammary, and endocrine (thyroid, parathyroid, and suprarenal) scintigraphies, weighs a total of 2 kg. Its average spatial resolution is 2 mm; it has a field of view of 95×95 mm², with an energy resolution of about 15% at 140 keV. The main advantages of these gamma camera prototypes with respect to those previously reported in the literature are high portability and low weight, with no significant loss of sensitivity and spatial resolution. All the electronic components are packed inside the mini gamma cameras, and no external electronic devices are required. The cameras are only connected through the universal serial bus port to a portable PC. In this paper, we present the design of the cameras and describe the procedures that have led us to choose their configuration, together with the most important performance features of the cameras. For one of the prototypes, clinical tests on melanoma patients are presented and images are compared with those obtained with a conventional camera.

  3. Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras

    Science.gov (United States)

    Amer, Tahani R.; Goad, William K.

    2005-01-01

    Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer: written using Microsoft Visual C++ and the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.

  4. Single-frame 3D human pose recovery from multiple views

    NARCIS (Netherlands)

    Hofmann, M.; Gavrila, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body pose from multi-camera single-frame views. Pose recovery starts with a shape detection stage where candidate poses are generated based on hierarchical exemplar matching in the individual camera views. The hierarchy used in

  5. Dual-head gamma camera system for intraoperative localization of radioactive seeds

    International Nuclear Information System (INIS)

    Arsenali, B; Viergever, M A; Gilhuijs, K G A; De Jong, H W A M; Beijst, C; Dickerscheid, D B M

    2015-01-01

    Breast-conserving surgery is a standard option for the treatment of patients with early-stage breast cancer. This form of surgery may result in incomplete excision of the tumor. Iodine-125 labeled titanium seeds are currently used in clinical practice to reduce the number of incomplete excisions. It seems likely that the number of incomplete excisions can be reduced even further if intraoperative information about the location of the radioactive seed is combined with preoperative information about the extent of the tumor. The two can be combined if the location of the radioactive seed is established in a world coordinate system that can be linked to the (preoperative) image coordinate system. With this in mind, we propose a radioactive seed localization system which is composed of two static ceiling-suspended gamma camera heads and two parallel-hole collimators. Physical experiments and computer simulations which mimic realistic clinical situations were performed to estimate the localization accuracy (defined as trueness and precision) of the proposed system with respect to collimator-source distance (ranging between 50 cm and 100 cm) and imaging time (ranging between 1 s and 10 s). The goal of the study was to determine whether or not a trueness of 5 mm can be achieved if a collimator-source distance of 50 cm and imaging time of 5 s are used (these specifications were defined by a group of dedicated breast cancer surgeons). The results from the experiments indicate that the location of the radioactive seed can be established with an accuracy of 1.6 mm ± 0.6 mm if a collimator-source distance of 50 cm and imaging time of 5 s are used (these experiments were performed with a 4.5 cm thick block phantom). Furthermore, the results from the simulations indicate that a trueness of 3.2 mm or less can be achieved if a collimator-source distance of 50 cm and imaging time of 5 s are used (this trueness was achieved for all 14 breast phantoms which
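
    Geometrically, each camera head plus its parallel-hole collimator constrains the seed to a line, and two static heads therefore fix the seed at (or near) the intersection of two sight lines. A minimal Python sketch of that triangulation uses the midpoint of the shortest segment between the two lines; the head poses below are invented for illustration.

        import numpy as np

        def closest_point(p1, d1, p2, d2):
            """Midpoint of the shortest segment between lines p1 + t*d1
            and p2 + s*d2 (standard closest-approach formulas)."""
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            w0 = p1 - p2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b
            t = (b * e - c * d) / denom
            s = (a * e - b * d) / denom
            return (p1 + t * d1 + p2 + s * d2) / 2

        seed = closest_point(np.array([0., 0., 50.]), np.array([0., 0., -1.]),
                             np.array([50., 0., 0.]), np.array([-1., 0., 0.]))
        print(seed)   # ~ (0, 0, 0): where the two sight lines cross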

  6. Goniometer head

    International Nuclear Information System (INIS)

    Dzhazairov-Kakhramanov, V.; Berger, V.D.; Kadyrzhanov, K.K.; Zarifov, R.A.

    1994-01-01

    The goniometer head is an electromechanical instrument that performs independent translation of a test sample along three coordinate axes (X, Y, Z) within limits of ±8 mm and independent rotation about these axes. The instrument comprises a sample holder, a bellows component and three electrometer drives. The sample holder rotates around the axes X and Y, and is installed on the central arm, which rotates around axis Z. One characteristic of this instrument is its independence, which allows its use in any camera for research in the field of radiation physics. 2 figs

  7. An Orientation Sensor-Based Head Tracking System for Driver Behaviour Monitoring

    Directory of Open Access Journals (Sweden)

    Yifan Zhao

    2017-11-01

    Full Text Available Although at present legislation does not allow drivers in a Level 3 autonomous vehicle to engage in a secondary task, there may come a time when it does. Monitoring the behaviour of drivers engaging in various non-driving activities (NDAs) is crucial to decide how well the driver will be able to take over control of the vehicle. One limitation of the commonly used camera-based head tracking systems, which rely on the face, is that sufficient features of the face must be visible, which limits the detectable angle of head movement and thereby the measurable NDAs, unless multiple cameras are used. This paper proposes a novel orientation-sensor-based head tracking system that includes twin devices, one of which measures the movement of the vehicle while the other measures the absolute movement of the head. Measurement error in the shaking and nodding axes was less than 0.4°, while error in the rolling axis was less than 2°. Comparison with a camera-based system, through in-house tests and on-road tests, showed that the main advantage of the proposed system is the ability to detect angles larger than 20° in the shaking and nodding axes. Finally, a case study demonstrated that the measurement of the shaking and nodding angles, produced by the proposed system, can effectively characterise the drivers' behaviour while engaged in the NDAs of chatting to a passenger and playing on a smartphone.
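
    The twin-device idea reduces to quaternion algebra: if the vehicle sensor reports q_vehicle and the head sensor reports q_head, both in a common world frame, the head-relative-to-cab rotation is conj(q_vehicle) * q_head. A self-contained Python sketch follows; the sensor values are invented, and real devices would supply calibrated orientation quaternions.

        import math

        def q_conj(q):
            w, x, y, z = q
            return (w, -x, -y, -z)

        def q_mul(a, b):
            """Hamilton product of two (w, x, y, z) quaternions."""
            aw, ax, ay, az = a
            bw, bx, by, bz = b
            return (aw*bw - ax*bx - ay*by - az*bz,
                    aw*bx + ax*bw + ay*bz - az*by,
                    aw*by - ax*bz + ay*bw + az*bx,
                    aw*bz + ax*by - ay*bx + az*bw)

        def yaw_deg(q):
            """'Shaking'-axis angle extracted from a (w, x, y, z) quaternion."""
            w, x, y, z = q
            return math.degrees(math.atan2(2*(w*z + x*y), 1 - 2*(y*y + z*z)))

        q_vehicle = (0.9962, 0.0, 0.0, 0.0872)   # cab yawed ~10 degrees
        q_head    = (0.9659, 0.0, 0.0, 0.2588)   # head yawed ~30 degrees
        print(yaw_deg(q_mul(q_conj(q_vehicle), q_head)))   # ~20: head vs cab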

  8. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and comfortable viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in the research literature, there are few direct comparisons between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array in our work that can be used as a parallel camera array as well as a converged camera array, and take some images and videos with it to identify the threshold.

  9. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    Science.gov (United States)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.

  10. Characterization of a direct detection device imaging camera for transmission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Milazzo, Anna-Clare, E-mail: amilazzo@ncmir.ucsd.edu [University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Moldovan, Grigore [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Lanman, Jason [Department of Molecular Biology, The Scripps Research Institute, La Jolla, CA 92037 (United States); Jin, Liang; Bouwer, James C. [University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Klienfelder, Stuart [University of California at Irvine, Irvine, CA 92697 (United States); Peltier, Steven T.; Ellisman, Mark H. [University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Kirkland, Angus I. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Xuong, Nguyen-Huu [University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States)

    2010-06-15

    The complete characterization of a novel direct detection device (DDD) camera for transmission electron microscopy is reported, for the first time at primary electron energies of 120 and 200 keV. Unlike a standard charge coupled device (CCD) camera, this device does not require a scintillator. The DDD transfers signal up to 65 lines/mm providing the basis for a high-performance platform for a new generation of wide field-of-view high-resolution cameras. An image of a thin section of virus particles is presented to illustrate the substantially improved performance of this sensor over current indirectly coupled CCD cameras.

  11. Characterization of a direct detection device imaging camera for transmission electron microscopy

    International Nuclear Information System (INIS)

    Milazzo, Anna-Clare; Moldovan, Grigore; Lanman, Jason; Jin, Liang; Bouwer, James C.; Klienfelder, Stuart; Peltier, Steven T.; Ellisman, Mark H.; Kirkland, Angus I.; Xuong, Nguyen-Huu

    2010-01-01

    The complete characterization of a novel direct detection device (DDD) camera for transmission electron microscopy is reported, for the first time at primary electron energies of 120 and 200 keV. Unlike a standard charge coupled device (CCD) camera, this device does not require a scintillator. The DDD transfers signal up to 65 lines/mm providing the basis for a high-performance platform for a new generation of wide field-of-view high-resolution cameras. An image of a thin section of virus particles is presented to illustrate the substantially improved performance of this sensor over current indirectly coupled CCD cameras.

  12. A wide field X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.; Turner, M.J.L.; Willingale, R.

    1980-01-01

    A wide field of view X-ray camera based on the Dicke or Coded Mask principle is described. It is shown that this type of instrument is more sensitive than a pin-hole camera, or than a scanning survey of a given region of sky for all wide field conditions. The design of a practical camera is discussed and the sensitivity and performance of the chosen design are evaluated by means of computer simulations. The Wiener Filter and Maximum Entropy methods of deconvolution are described and these methods are compared with each other and cross-correlation using data from the computer simulations. It is shown that the analytic expressions for sensitivity used by other workers are confirmed by the simulations, and that ghost images caused by incomplete coding can be substantially eliminated by the use of the Wiener Filter and the Maximum Entropy Method, with some penalty in computer time for the latter. The cyclic mask configuration is compared with the simple mask camera. It is shown that when the diffuse X-ray background dominates, the simple system is more sensitive and has the better angular resolution. When sources dominate the simple system is less sensitive. It is concluded that the simple coded mask camera is the best instrument for wide field imaging of the X-ray sky. (orig.)
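
    The Dicke principle the abstract builds on can be demonstrated in one dimension: the detector records the sky convolved with the mask pattern, and correlating the detector image with the mask recovers point sources. The NumPy sketch below uses a quadratic-residue mask, a standard coded-aperture pattern with a two-valued autocorrelation (not necessarily the mask studied in the paper).

        import numpy as np

        p = 59                                    # prime with p % 4 == 3
        qr = {(i * i) % p for i in range(1, p)}   # quadratic residues mod p
        mask = np.array([1.0 if i in qr else 0.0 for i in range(p)])

        sky = np.zeros(p)
        sky[[12, 40]] = [5.0, 3.0]                # two point sources

        # Detector counts: cyclic convolution of the sky with the mask shadow.
        detector = np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(mask)))
        # Decode by cross-correlating with the mask; the QR mask's flat
        # off-peak autocorrelation makes both sources reappear as clean peaks.
        decoded = np.real(np.fft.ifft(np.fft.fft(detector) *
                                      np.conj(np.fft.fft(mask))))
        print(sorted(np.argsort(decoded)[-2:]))   # -> [12, 40]

    With a non-ideal mask the off-peak autocorrelation is not flat, producing exactly the coding ghosts that the Wiener filter and Maximum Entropy deconvolutions discussed above are designed to suppress.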

  13. Camera systems for crash and hyge testing

    Science.gov (United States)

    Schreppers, Frederik

    1995-05-01

    Since the beginning of the use of high speed cameras for crash and hyge testing, substantial changes have taken place. Both the high speed cameras and the electronic control equipment are more sophisticated nowadays. With regard to high speed equipment, a short historical retrospective will show that the improvements in high speed cameras are mainly concentrated in design details, whereas the electronic control equipment has taken full advantage of the rapid progress in electronic and computer technology over the last decades. Nowadays many companies and institutes involved in crash and hyge testing wish to perform this testing, as far as possible, as an automatic computer controlled routine in order to maintain and improve security and quality. By means of several solutions realized in practice, it will be shown how these requirements can be met.

  14. SPECT and 123I-Iodolisuride (123-I-ILIS) in extra-pyramidal syndromes. The use of different models of γ-cameras

    International Nuclear Information System (INIS)

    Ribeiro, M.J.; Jannuario, C.; Santos, A.C.; Cunha, L.; Pedroso de Lima, J.J.; Prunier-Levilion, C.; Autret, A.; Guilloteau, D.; Besnard, J.C.; Baulieu, J.L.; Chassat, F.; Bekhechi, D.; Marchand, J.; Mauclaire, L.; Catela, L.

    1999-01-01

    The aim of this work was to evaluate 123 I-ILIS as a radioligand of dopamine receptors in patients with extra-pyramidal diseases by using different cameras in two different centers. 45 patients were included and divided into 2 groups: group I (n=28): idiopathic Parkinson disease; group II (n=17): other extra-pyramidal syndromes. 123 I-ILIS, 1.7 to 2.8 MBq/kg, was injected after informed consent. Imaging was performed with a single head camera, a dual head camera, a triple head camera and a brain dedicated annular detector. The pattern of the transverse slices containing the basal ganglia was classified according to 3 types: type 1: visible basal ganglia and invisible cortex; type 2: invisible basal ganglia and visible cortex; type 3: visible basal ganglia and cortex. The striatal/frontal cortex ratio (S/FC) was calculated from standardized, geometrical ROIs. No patient showed any undesirable effect. All SPECT images were interpretable. In group I, 45/45 scintigraphic patterns were type 1 or 3; in group II, 18/23 scintigraphic patterns were type 2 or 3. S/FC was significantly lower in group II than in group I patients. We conclude that 123 I-ILIS SPECT can be performed with any conventional γ-camera. It provides functional information about the striatal dopaminergic synapse in patients with extra-pyramidal degenerative disease, and could be useful in the differential diagnosis between Parkinson disease and other extra-pyramidal syndromes. (author)

  15. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has recently been installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R ≈ 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40,500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. The edge fluid and turbulence codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor.

  16. 3D medical collaboration technology to enhance emergency healthcare

    DEFF Research Database (Denmark)

    Welch, Gregory F; Sonnenwald, Diane H.; Fuchs, Henry

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15-20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals' viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing.

  17. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
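
    The fish-eye correction stage described above can be illustrated with a short sketch (not the paper's FPGA implementation): an inverse-mapping lookup table is precomputed once for a simple polynomial radial model and then applied per frame, which is essentially what a hardware pre-processor does with per-pixel table lookups. The intrinsics and distortion coefficients below are hypothetical placeholders.

    import cv2
    import numpy as np

    def build_undistort_maps(w, h, fx, fy, cx, cy, k1, k2):
        """Precompute remap tables for a polynomial radial model.
        For each undistorted output pixel, find where to sample in the
        distorted input: r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4)."""
        u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))
        x = (u - cx) / fx                    # normalized coordinates
        y = (v - cy) / fy
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        map_x = (x * scale * fx + cx).astype(np.float32)
        map_y = (y * scale * fy + cy).astype(np.float32)
        return map_x, map_y

    # Hypothetical intrinsics for one VGA automotive camera.
    map_x, map_y = build_undistort_maps(640, 480, fx=300.0, fy=300.0,
                                        cx=320.0, cy=240.0, k1=0.25, k2=0.05)
    frame = cv2.imread("frame.png")          # one captured frame
    corrected = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)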

  18. Initial experience with a prototype dual-crystal (LSO/NaI) dual-head coincidence camera in oncology

    International Nuclear Information System (INIS)

    Joshi, Urvi; Boellaard, Ronald; Comans, Emile F.I.; Raijmakers, Pieter G.H.M.; Pijpers, Rik J.; Teule, Gerrit J.J.; Lingen, Arthur van; Hoekstra, Otto S.; Miller, Steven D.

    2004-01-01

    The aim of this study was to evaluate the in vivo performance of a prototype dual-crystal [lutetium oxyorthosilicate (LSO)/sodium iodide (NaI)] dual-head coincidence camera (DHC) for PET and SPET (LSO-PS), in comparison to BGO-PET with fluorine-18 fluorodeoxyglucose (FDG) in oncology. This follows earlier reports that LSO-PS has noise-equivalent counting (NEC) rates comparable to partial ring BGO-PET, i.e. clearly higher than standard NaI DHCs. Twenty-four randomly selected oncological patients referred for whole-body FDG-PET underwent BGO-PET followed by LSO-PS. Four nuclear medicine physicians were randomised to read a single scan modality, in terms of lesion intensity, location and likelihood of malignancy. BGO-PET was considered the gold standard. Forty-eight lesions were classified as positive with BGO-PET, of which LSO-PS identified 73% (95% CI 60-86%). There was good observer agreement for both modalities in terms of intensity, location and interpretation. Lesions were missed by LSO-PS in 13 patients in the chest (n=6), neck (n=3) and abdomen (n=4). The diameter of these lesions was estimated to be 0.5-1 cm. Initial results justify further evaluation of LSO-PS in specific clinical situations, especially if a role as an instrument of triage for PET is foreseen. (orig.)

  19. Multi-view video segmentation and tracking for video surveillance

    Science.gov (United States)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
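
    The view-to-view correspondence step above rests on a homography between overlapping views, which is exact for points on a common ground plane. A minimal OpenCV sketch, assuming a handful of matched ground-plane points between two views is already available (the coordinates are invented for illustration):

    import cv2
    import numpy as np

    # Hypothetical matched ground-plane points (pixels) in views A and B.
    pts_a = np.array([[100, 200], [400, 210], [380, 460], [120, 450]],
                     dtype=np.float32)
    pts_b = np.array([[80, 190], [390, 185], [395, 440], [110, 455]],
                     dtype=np.float32)

    # Estimate the A->B homography; RANSAC tolerates a few bad matches.
    H, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)

    # Transfer an object's foot point from view A into view B.
    foot_a = np.array([[[250.0, 430.0]]], dtype=np.float32)  # (1, 1, 2)
    foot_b = cv2.perspectiveTransform(foot_a, H)
    print("expected position in view B:", foot_b.ravel())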

  20. A new high-speed IR camera system

    Science.gov (United States)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  1. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.
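
    The movement suggestions above are driven by point uncertainties estimated within a sliding bundle adjustment. A toy version of that criterion, assuming the residual Jacobian of the current adjustment is available: first-order covariance propagation gives C = sigma0^2 (J^T J)^(-1), and candidate views can be ranked by the trace of C. The Jacobians below are random placeholders.

    import numpy as np

    def uncertainty(J, sigma0=1.0):
        """Scalar uncertainty summary from a bundle-adjustment Jacobian J
        (rows: reprojection residuals, columns: parameters)."""
        C = sigma0 ** 2 * np.linalg.inv(J.T @ J)
        return np.trace(C)

    # Hypothetical: rank two candidate camera movements by the extra
    # observations (Jacobian rows) each would contribute.
    rng = np.random.default_rng(1)
    J_now = rng.normal(size=(40, 9))
    J_cand1 = np.vstack([J_now, rng.normal(size=(10, 9))])
    J_cand2 = np.vstack([J_now, rng.normal(size=(10, 9))])
    best = min([J_cand1, J_cand2], key=uncertainty)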

  2. A detailed comparison of single-camera light-field PIV and tomographic PIV

    Science.gov (United States)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper conducts a comprehensive study of single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the differences between these two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  3. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
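
    The zero-point accuracies quoted above follow from comparing instrumental magnitudes of reference stars against catalog (here, synthetic EX-bandpass) magnitudes. A minimal sketch of such a fit, assuming linearity-corrected fluxes are already in hand; all numbers below are invented for illustration.

    import numpy as np

    # Hypothetical reference stars: linearity-corrected instrumental flux
    # (camera counts) and synthetic catalog magnitudes in the camera band.
    flux_counts = np.array([15200.0, 8400.0, 3100.0, 1250.0, 610.0])
    m_catalog = np.array([5.1, 5.8, 6.9, 7.8, 8.6])

    # Instrumental magnitude and per-star zero-point: m_cat = m_inst + ZP.
    m_inst = -2.5 * np.log10(flux_counts)
    zp_stars = m_catalog - m_inst

    zp = zp_stars.mean()
    zp_err = zp_stars.std(ddof=1) / np.sqrt(len(zp_stars))
    print(f"zero-point = {zp:.3f} +/- {zp_err:.3f} mag")

    # A meteor's calibrated magnitude then follows from its counts.
    meteor_counts = 4700.0
    m_meteor = -2.5 * np.log10(meteor_counts) + zp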

  4. Optical fiber head for providing lateral viewing

    Science.gov (United States)

    Everett, Matthew J.; Colston, Billy W.; James, Dale L.; Brown, Steve; Da Silva, Luiz

    2002-01-01

    The head of an optical fiber comprising the sensing probe of an optical heterodyne sensing device includes a planar surface that intersects the perpendicular to the axial centerline of the fiber at a polishing angle θ. The planar surface is coated with a reflective material so that light traveling axially through the fiber is reflected transverse to the fiber's axial centerline, and is emitted laterally through the side of the fiber. Alternatively, the planar surface can be left uncoated. The polishing angle θ must be no greater than 39° or must be at least 51°. The emitted light is reflected from adjacent biological tissue, collected by the head, and then processed to provide real-time images of the tissue. The method for forming the planar surface includes shearing the end of the optical fiber and applying the reflective material before removing the buffer that circumscribes the cladding and the core.

  5. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    Science.gov (United States)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation for changing from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. A range of technical benefits comes with CMOS technology: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal to noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.

  6. A television/still camera with common optical system for reactor inspection

    International Nuclear Information System (INIS)

    Hughes, G.; McBane, P.

    1976-01-01

    One of the problems of reactor inspection is to obtain permanent high quality records. Video recordings provide a record of poor quality but known content. Still cameras can be used, but the frame content is not predictable. Efforts have been made to use T.V. viewing to align a still camera, but a simple combination does not provide the same frame size. The necessity to preset the still camera controls severely restricts the flexibility of operation. A camera has, therefore, been designed which allows a search operation using the T.V. system. When an anomaly is found, the still camera controls can be remotely set, an exact record obtained, and the search operation continued without removal from the reactor. An application of this camera in the environment of the blanket gas region above the sodium region in PFR at 150 °C is described

  7. Computing Installation Parameters Of CCTV Cameras for Traffic Surveillance

    OpenAIRE

    Pratishtha Gupta; G. N. Purohit

    2013-01-01

    For properly installing CCTV cameras at an intersection for traffic surveillance, some parameters need to be determined in order to get maximum benefit. The height and angle of placement of the CCTV camera determine the view, i.e., the area that the camera will cover, and the resulting resolution. The resolution should not be so high that too little traffic is covered, nor so low that a large but hardly distinguishable amount of traffic is covered. This paper concerns the computation of the required CCTV inst...
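
    The height/angle trade-off mentioned in this (truncated) abstract can be made concrete with a little trigonometry. The sketch below, not taken from the paper, computes the near and far edges of the ground area covered by a camera of given mounting height, down-tilt, and vertical field of view.

    import math

    def ground_coverage(height_m, tilt_deg, vfov_deg):
        """Near/far ground distances seen by a downward-tilted camera.
        tilt_deg is measured from the horizontal down to the optical axis;
        the vertical field of view extends vfov/2 above and below it."""
        near_angle = math.radians(tilt_deg + vfov_deg / 2.0)  # steepest ray
        far_angle = math.radians(tilt_deg - vfov_deg / 2.0)   # shallowest ray
        near = height_m / math.tan(near_angle)
        far = math.inf if far_angle <= 0 else height_m / math.tan(far_angle)
        return near, far

    # Hypothetical installation: 8 m pole, 30° down-tilt, 40° vertical FOV.
    print(ground_coverage(8.0, 30.0, 40.0))   # -> (~6.7 m, ~45.4 m)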

  8. Cardiac and Respiratory Parameter Estimation Using Head-mounted Motion-sensitive Sensors

    Directory of Open Access Journals (Sweden)

    J. Hernandez

    2015-05-01

    This work explores the feasibility of using motion-sensitive sensors embedded in Google Glass, a head-mounted wearable device, to robustly measure physiological signals of the wearer. In particular, we develop new methods to use Glass's accelerometer, gyroscope, and camera to extract pulse and respiratory waves of 12 participants during a controlled experiment. We show it is possible to achieve a mean absolute error of 0.82 beats per minute (STD: 1.98) for heart rate and 0.6 breaths per minute (STD: 1.19) for respiration rate when considering different observation windows and combinations of sensors. Moreover, we show that a head-mounted gyroscope sensor shows improved performance versus more commonly explored sensors such as accelerometers, and demonstrate that a head-mounted camera is a novel and promising method to capture the physiological responses of the wearer. These findings included testing across sitting, supine, and standing postures before and after physical exercise.

  9. The influence of disturbing effects on the performance of a wide field coded mask X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.R.; Turner, M.J.L.; Willingale, R.

    1985-01-01

    The coded aperture telescope, or Dicke camera, is seen as an instrument suitable for many applications in X-ray and gamma ray imaging. In this paper the effects of a partially obscuring window, mask support or collimator, a detector with limited spatial resolution, and motion of the camera during image integration are considered using a computer simulation of the performance of such a camera. Cross correlation and the Wiener filter are used to deconvolve the data. It is shown that while these effects cause a degradation in performance, this is in no case catastrophic. Deterioration of the image is shown to be greatest where strong sources are present in the field of view and is quite small (≈10%) when diffuse background is the major element. A comparison between the cyclic mask camera and the single mask camera is made under various conditions, and it is shown that the single mask camera has a moderate advantage, particularly when imaging a wide field of view. (orig.)

  10. A holographic color camera for recording artifacts

    International Nuclear Information System (INIS)

    Jith, Abhay

    2013-01-01

    The advent of 3D televisions has created a new wave of public interest in images with depth. Though these technologies create moving pictures with apparent depth, they lack the visual appeal and a number of other positive aspects of color holographic images. This new wave of interest in 3D will help fuel the popularity of holograms. In view of this, a low cost and handy color holography camera was designed for recording color holograms of artifacts. It is believed that such cameras will help to record medium format color holograms outside conventional holography laboratories and to popularize color holography. The paper discusses the design and the results obtained.

  11. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  12. Efficient view based 3-D object retrieval using Hidden Markov Model

    Science.gov (United States)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view based 3-D object retrieval because of the highly discriminative properties of 3-D objects and their multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of 3-D objects and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes them inefficient for retrieval. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval which is independent of camera array settings and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, which means views are captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with the HMM. In our proposed method, the HMM is used in a twofold manner: in training (i.e., HMM estimation) and in retrieval (i.e., HMM decoding). The query model is trained by using these view clusters, and retrieval works on the basis of the query model combined with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme shows better performance than existing methods.
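
    The twofold HMM usage described above (estimation during training, scoring during retrieval) can be sketched with the hmmlearn package; the per-view feature vectors here are random placeholders for whatever descriptors are extracted from the clustered views.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    # Hypothetical: each object is a sequence of per-view feature vectors.
    rng = np.random.default_rng(0)
    query_views = rng.normal(size=(12, 16))   # 12 views, 16-D descriptors

    # Training (HMM estimation): fit a query model on the view sequence.
    model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    model.fit(query_views)

    # Retrieval (HMM scoring): rank database objects by the likelihood
    # of their view sequences under the query model.
    database = {f"obj{i}": rng.normal(size=(10, 16)) for i in range(5)}
    ranking = sorted(database, key=lambda k: model.score(database[k]),
                     reverse=True)
    print(ranking)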

  13. Image-scanning measurement using video dissection cameras

    International Nuclear Information System (INIS)

    Carson, J.S.

    1978-01-01

    A high speed dimensional measuring system capable of scanning a thin film network, and determining if there are conductor widths, resistor widths, or spaces not typical of the design for this product is described. The eye of the system is a conventional TV camera, although such devices as image dissector cameras or solid-state scanners may be used more often in the future. The analog signal from the TV camera is digitized for processing by the computer and is presented to the TV monitor to assist the operator in monitoring the system's operation. Movable stages are required when the field of view of the scanner is less than the size of the object. A minicomputer controls the movement of the stage, and communicates with the digitizer to select picture points that are to be processed. Communications with the system are maintained through a teletype or CRT terminal

  14. Heads Up: Concussion in Youth Sports

    Medline Plus


  15. Design and tests of a portable mini gamma camera

    International Nuclear Information System (INIS)

    Sanchez, F.; Benlloch, J.M.; Escat, B.; Pavon, N.; Porras, E.; Kadi-Hanifi, D.; Ruiz, J.A.; Mora, F.J.; Sebastia, A.

    2004-01-01

    Design optimization, manufacturing, and tests, both laboratory and clinical, of a portable gamma camera for medical applications are presented. This camera, based on a continuous scintillation crystal and a position-sensitive photomultiplier tube, has an intrinsic spatial resolution of ≅2 mm, an energy resolution of 13% at 140 keV, and linearities of 0.28 mm (absolute) and 0.15 mm (differential), with a useful field of view of 4.6 cm diameter. Our camera can image small organs with high efficiency and can thus address the demand for devices for specific clinical applications like thyroid and sentinel node scintigraphy as well as scintimammography and radio-guided surgery. The main advantages of the gamma camera with respect to those previously reported in the literature are high portability, low cost, and low weight (2 kg), with no significant loss of sensitivity or spatial resolution. All the electronic components are packed inside the mini gamma camera, and no external electronic devices are required. The camera is connected only through the universal serial bus port to a portable personal computer (PC), where specific software allows control of both the camera parameters and the measuring process, displaying the acquired image on the PC in real time. In this article, we present the camera and describe the procedures that have led us to choose its configuration. Laboratory and clinical tests are presented together with the diagnostic capabilities of the gamma camera

  16. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

    By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. Some of the benefits include a reduced need for on-site security and operating personnel and its patented analytics product known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006 when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare and shadows were a huge problem with analytics in the past, but that problem has been solved by IntelliView, which has its own source code, and re-programmed code. 1 fig.

  17. The first GCT camera for the Cherenkov Telescope Array

    CERN Document Server

    De Franco, A.; Allan, D.; Armstrong, T.; Ashton, T.; Balzer, A.; Berge, D.; Bose, R.; Brown, A.M.; Buckley, J.; Chadwick, P.M.; Cooke, P.; Cotter, G.; Daniel, M.K.; Funk, S.; Greenshaw, T.; Hinton, J.; Kraus, M.; Lapington, J.; Molyneux, P.; Moore, P.; Nolan, S.; Okumura, A.; Ross, D.; Rulten, C.; Schmoll, J.; Schoorlemmer, H.; Stephan, M.; Sutcliffe, P.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Varner, G.; Watson, J.; Zink, A.

    2015-01-01

    The Gamma Cherenkov Telescope (GCT) is proposed to be part of the Small Size Telescope (SST) array of the Cherenkov Telescope Array (CTA). The GCT dual-mirror optical design allows the use of a compact camera of diameter roughly 0.4 m. The curved focal plane is equipped with 2048 pixels of ~0.2° angular size, resulting in a field of view of ~9°. The GCT camera is designed to record the flashes of Cherenkov light from electromagnetic cascades, which last only a few tens of nanoseconds. Modules based on custom ASICs provide the required fast electronics, facilitating sampling and digitisation as well as first level of triggering. The first GCT camera prototype is currently being commissioned in the UK. On-telescope tests are planned later this year. Here we give a detailed description of the camera prototype and present recent progress with testing and commissioning.

  18. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and especially technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras brings a new dimension to these quality factors, and new quality features must be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how is the stitching validated? The work describes quality factors which are still valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work contains considerations of how well current measurement methods can be used with presence capture cameras.

  19. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3D Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  20. An autonomous sensor module based on a legacy CCTV camera

    Science.gov (United States)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation, a "follow" mode is implemented where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
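
    The pedestrian detection used in the second mode relies on OpenCV's stock HOG people detector; a minimal sketch of that step follows (the SAPIENT messaging, zoom optimization, and PTZ control are omitted, and the stream URL is a placeholder).

    import cv2

    # OpenCV's default HOG + linear-SVM people detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture("rtsp://camera/stream")   # hypothetical PTZ feed
    ok, frame = cap.read()
    if ok:
        rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                              padding=(8, 8), scale=1.05)
        for (x, y, w, h) in rects:
            # The foot point (bottom-centre of the box) is what a
            # ground-plane model would convert to a real-world position.
            foot = (x + w // 2, y + h)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)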

  1. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    The Scheimpflug camera offers a wide range of applications in the fields of typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because the depth of field of a Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, and PCIM are essentially equal, while the accuracy of the GNIM is slightly lower than that of the three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  2. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION (extended)

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2014-01-01

    on specific hardware. Adaptable means that human tracking and created artefact interaction in the camera field of view is relatively easily changed as one desires via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. Conclusions are that the mature, robust...

  3. Real-Time Acquisition of High Quality Face Sequences from an Active Pan-Tilt-Zoom Camera

    DEFF Research Database (Denmark)

    Haque, Mohammad A.; Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Traditional still camera-based facial image acquisition systems in surveillance applications produce low quality face images. This is mainly due to the distance between the camera and the subjects of interest. Furthermore, people in such videos usually move around, changing their head poses and facial expressions. This paper presents a pan-tilt-zoom (PTZ) camera-based real-time high-quality face image acquisition system, which utilizes the pan-tilt-zoom parameters of a camera to focus on a human face in a scene and employs a face quality assessment method to log the best quality faces from the captured frames. The system consists of four modules: face detection, camera control, face tracking, and face quality assessment before logging. Experimental results show that the proposed system can effectively log the high quality faces from the active camera in real-time (an average of 61.74 ms was spent per frame) with an accuracy of 85.27% compared to human annotated data.

  4. Integrated multi sensors and camera video sequence application for performance monitoring in archery

    Science.gov (United States)

    Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali

    2018-03-01

    This paper describes the development of a comprehensive archery performance monitoring software system consisting of three camera views and five body sensors. The five body sensors evaluate biomechanics-related variables of flexor and extensor muscle activity, heart rate, postural sway and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application which enables the user to view all the data in a single user interface. The five body sensors' data are displayed in numerical and graphical form in real-time. The information transmitted by the body sensors is processed by an embedded algorithm that automatically computes a summary of the athlete's biomechanical performance and displays it in the application interface. This performance is later compared to the psycho-fitness performance pre-computed from data previously entered into the application. All the data (camera views, body sensors, performance computations) are recorded for further analysis by a sports scientist. Our application serves as a powerful tool for assisting the coach and athletes to observe and identify any wrong technique employed during training, which gives room for correction and re-evaluation to improve overall performance in the sport of archery.

  5. Heading perception in patients with advanced retinitis pigmentosa

    Science.gov (United States)

    Li, Li; Peli, Eli; Warren, William H.

    2002-01-01

    PURPOSE: We investigated whether retinitis pigmentosa (RP) patients with a residual visual field of < 100 degrees could perceive heading from optic flow. METHODS: Four RP patients and four age-matched normally sighted control subjects viewed displays simulating an observer walking over a ground plane. In experiment 1, subjects viewed either the entire display with free fixation (full-field condition) or through an aperture with a fixation point at the center (aperture condition). In experiment 2, patients viewed displays of different durations. RESULTS: The RP patients' performance was comparable to that of the age-matched control subjects: heading judgment was better in the full-field condition than in the aperture condition. Increasing display duration from 0.5 s to 1 s improved the patients' heading performance, but giving them more time (3 s) to gather more visual information did not consistently further improve their performance. CONCLUSIONS: RP patients use active scanning eye movements to compensate for their visual field loss in heading perception; they might be able to gather sufficient optic flow information for heading perception in about 1 s.

  6. Determination of appropriate exposure angles for the reverse water's view using a head phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Min Su; Lee, Keun Ohk [Dept. of Radiology, Soonchunhyang University Hospital, Bucheon (Korea, Republic of); Choi, Jae Ho [Dept. of Radiological Technology, Ansan University, Ansan (Korea, Republic of); Jung, Jae Hong [Dept. of Biomedical Engineering, College of Medicine, The Catholic University, Seoul (Korea, Republic of)

    2017-06-15

    Early diagnosis of upper facial trauma is difficult using the standard Water's view (S-Water's) in general radiography due to overlapping anatomical structures, the uncertainty of patient positioning, and specific patient groups (obese, pediatric, elderly, or high-risk). The purpose of this study was to determine appropriate exposure angles through a comparison of two different protocols (S-Water's vs. reverse Water's view (R-Water's)) using a head phantom. A head phantom and a general radiography system were used with 75 kVp, 400 mA, 45 ms (18 mAs), and SID 100 cm. R-Water's images were obtained at angles from 0 to 50 degrees, adjusted in 1-degree intervals in the supine position. Survey elements were developed and three observers evaluated four elements: the maxillary sinus, zygomatic arch, petrous ridge, and image distortion. Krippendorff's alpha and Fleiss' kappa were used for statistical analysis. The intra-class correlation (ICC) coefficients for the three observers were high: maxillary sinus, 0.957 (0.903, 0.995); zygomatic arch, 0.939 (0.866, 0.987); petrous ridge, 0.972 (0.897, 1.000); and image distortion, 0.949 (0.830, 1.000). The exposure angles yielding high-quality images (HI) and perfect agreement (PA) were: maxillary sinus, 36-44 degrees; zygomatic arch, 33-40 degrees; petrous ridge, 32-50 degrees; and image distortion, 44-50 degrees. Consequently, appropriate exposure angles for the R-Water's view in the supine position for patients with facial trauma range from 36 to 40 degrees in this phantom study. The results of this study will be helpful for the rapid diagnosis of facial fractures with simple radiography.

  7. Determination of appropriate exposure angles for the reverse water's view using a head phantom

    International Nuclear Information System (INIS)

    Lee, Min Su; Lee, Keun Ohk; Choi, Jae Ho; Jung, Jae Hong

    2017-01-01

    Early diagnosis of upper facial trauma is difficult using the standard Water's view (S-Water's) in general radiography due to overlapping anatomical structures, the uncertainty of patient positioning, and specific patient groups (obese, pediatric, elderly, or high-risk). The purpose of this study was to determine appropriate exposure angles through a comparison of two different protocols (S-Water's vs. reverse Water's view (R-Water's)) using a head phantom. A head phantom and a general radiography system were used with 75 kVp, 400 mA, 45 ms (18 mAs), and SID 100 cm. R-Water's images were obtained at angles from 0 to 50 degrees, adjusted in 1-degree intervals in the supine position. Survey elements were developed and three observers evaluated four elements: the maxillary sinus, zygomatic arch, petrous ridge, and image distortion. Krippendorff's alpha and Fleiss' kappa were used for statistical analysis. The intra-class correlation (ICC) coefficients for the three observers were high: maxillary sinus, 0.957 (0.903, 0.995); zygomatic arch, 0.939 (0.866, 0.987); petrous ridge, 0.972 (0.897, 1.000); and image distortion, 0.949 (0.830, 1.000). The exposure angles yielding high-quality images (HI) and perfect agreement (PA) were: maxillary sinus, 36-44 degrees; zygomatic arch, 33-40 degrees; petrous ridge, 32-50 degrees; and image distortion, 44-50 degrees. Consequently, appropriate exposure angles for the R-Water's view in the supine position for patients with facial trauma range from 36 to 40 degrees in this phantom study. The results of this study will be helpful for the rapid diagnosis of facial fractures with simple radiography

  8. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
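
    Once tilt angle, focal length, and camera height are estimated, the pixel-to-meter conversion for points on the ground plane follows directly. A simplified flat-ground sketch (ignoring roll and lens distortion, which a full method would model); all calibration values below are hypothetical.

    import math

    def pixel_row_to_ground_distance(v, cy, f_px, tilt_deg, height_m):
        """Distance along the ground to the point imaged at pixel row v.
        v: image row of the point (e.g., a pedestrian's feet)
        cy: principal point row, f_px: focal length in pixels
        tilt_deg: camera down-tilt from horizontal, height_m: height."""
        # Angle of the ray below the horizontal: the tilt plus the offset
        # of the pixel row from the principal point.
        ray_angle = math.radians(tilt_deg) + math.atan2(v - cy, f_px)
        if ray_angle <= 0:
            return math.inf                  # ray at or above the horizon
        return height_m / math.tan(ray_angle)

    # Hypothetical calibration: 4 m high camera, 15° tilt, f = 1000 px.
    print(pixel_row_to_ground_distance(v=700, cy=540, f_px=1000.0,
                                       tilt_deg=15.0, height_m=4.0))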

  9. Design studies of a depth encoding large aperture PET camera

    International Nuclear Information System (INIS)

    Moisan, C.; Rogers, J.G.; Buckley, K.R.; Ruth, T.J.; Stazyk, M.W.; Tsang, G.

    1994-10-01

    The feasibility of a whole-body PET tomograph with the capacity to correct for the parallax error induced by the depth-of-interaction of γ-rays is assessed through simulation. The experimental energy, depth, and transverse position resolutions of BGO block detector candidates are the main inputs to a simulation that predicts the point source resolution of the Depth Encoding Large Aperture Camera (DELAC). The results indicate that a measured depth resolution of 7 mm (FWHM) is sufficient to correct a substantial part of the parallax error for a point source at the edge of the field-of-view. A search for the block specifications and camera ring radius that would optimize the spatial resolution and its uniformity across the field-of-view is also presented. (author). 10 refs., 1 tab., 5 figs

  10. Design and Construction of an X-ray Lightning Camera

    Science.gov (United States)

    Schaal, M.; Dwyer, J. R.; Rassoul, H. K.; Uman, M. A.; Jordan, D. M.; Hill, J. D.

    2010-12-01

    A pinhole-type camera was designed and built for the purpose of producing high-speed images of the x-ray emissions from rocket-and-wire-triggered lightning. The camera consists of 30 7.62-cm diameter NaI(Tl) scintillation detectors, each sampling at 10 million frames per second. The steel structure of the camera is encased in 1.27-cm thick lead, which blocks x-rays that are less than 400 keV, except through a 7.62-cm diameter “pinhole” aperture located at the front of the camera. The lead and steel structure is covered in 0.16-cm thick aluminum to block RF noise, water and light. All together, the camera weighs about 550-kg and is approximately 1.2-m x 0.6-m x 0.6-m. The image plane, which is adjustable, was placed 32-cm behind the pinhole aperture, giving a field of view of about ±38° in both the vertical and horizontal directions. The elevation of the camera is adjustable between 0 and 50° from horizontal and the camera may be pointed in any azimuthal direction. In its current configuration, the camera’s angular resolution is about 14°. During the summer of 2010, the x-ray camera was located 44-m from the rocket-launch tower at the UF/Florida Tech International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, FL and several rocket-triggered lightning flashes were observed. In this presentation, I will discuss the design, construction and operation of this x-ray camera.

  11. Fixed-head star tracker magnitude calibration on the solar maximum mission

    Science.gov (United States)

    Pitone, Daniel S.; Twambly, B. J.; Eudell, A. H.; Roberts, D. A.

    1990-01-01

    The sensitivity of the fixed-head star trackers (FHSTs) on the Solar Maximum Mission (SMM) is defined as the accuracy of the electronic response to the magnitude of a star in the sensor field-of-view, which is measured as intensity in volts. To identify stars during attitude determination and control processes, a transformation equation is required to convert from star intensity in volts to units of magnitude and vice versa. To maintain high accuracy standards, this transformation is calibrated frequently. A sensitivity index is defined as the observed intensity in volts divided by the predicted intensity in volts; thus, the sensitivity index is a measure of the accuracy of the calibration. Using the sensitivity index, analysis is presented that compares the strengths and weaknesses of two possible transformation equations. The effect on the transformation equations of variables, such as position in the sensor field-of-view, star color, and star magnitude, is investigated. In addition, results are given that evaluate the aging process of each sensor. The results in this work can be used by future missions as an aid to employing data from star cameras as effectively as possible.
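
    The abstract does not give the exact form of the SMM transformation equation, but a standard Pogson-style logarithmic relation between stellar magnitude and intensity in volts illustrates how the sensitivity index would be computed; the coefficients below are hypothetical placeholders, not the SMM calibration values.

    import math

    # Hypothetical transformation coefficients: m = C0 - C1 * log10(V).
    C0, C1 = 4.0, 2.5

    def predicted_volts(magnitude):
        # Invert the magnitude relation to get predicted intensity.
        return 10.0 ** ((C0 - magnitude) / C1)

    def sensitivity_index(observed_volts, magnitude):
        # Observed / predicted intensity; 1.0 means perfect calibration.
        return observed_volts / predicted_volts(magnitude)

    print(sensitivity_index(observed_volts=0.55, magnitude=4.6))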

  12. Breath-hold monitoring and visual feedback for radiotherapy using a charge-coupled device camera and a head-mounted display. System development and feasibility

    International Nuclear Information System (INIS)

    Yoshitake, Tadamasa; Nakamura, Katsumasa; Shioyama, Yoshiyuki

    2008-01-01

    The aim of this study was to present the technical aspects of the breath-hold technique with respiratory monitoring and visual feedback and to evaluate the feasibility of this system in healthy volunteers. To monitor respiration, the vertical position of a fiducial marker placed on the subject's abdomen was tracked by a machine vision system with a charge-coupled device camera. A monocular head-mounted display was used to provide the subject with visual feedback about the breathing trace. Five healthy male volunteers were enrolled in this study. They held their breath at the end-inspiration and the end-expiration phases, performing five repetitions of the same type of 15-s breath-hold with and without the head-mounted display, respectively. The standard deviation of the five mean positions of the fiducial marker during a 15-s breath-hold in each breath-hold type was used as the measure of breath-hold reproducibility. All five volunteers tolerated the breath-hold maneuver well. For the inspiration breath-hold, the standard deviations with and without visual feedback were 1.74 mm and 0.84 mm, respectively (P=0.20). For the expiration breath-hold, the standard deviations with and without visual feedback were 0.63 mm and 0.96 mm, respectively (P=0.025). Our newly developed system might help patients achieve improved breath-hold reproducibility. (author)

  13. Collimator trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    Jaszczak, Ronald J.

    1979-01-01

    An improved collimator is provided for a scintillation camera system that employs a detector head for transaxial tomographic scanning. One object of this invention is to significantly reduce the time required to obtain statistically significant data in radioisotope scanning using a scintillation camera. Another is to increase the rate of acceptance of radioactive events that contribute to the positional information obtainable from a radiation source of known strength, without sacrificing spatial resolution. A further object is to reduce the necessary scanning time without degrading the images obtained. The collimator described has apertures defined by septa of different radiation transparency. The septa are aligned to provide greater radiation shielding from gamma radiation travelling within planes perpendicular to the cranial-caudal axis and less radiation shielding from gamma radiation travelling within other planes. Septa may also define apertures such that the collimator provides high spatial resolution for gamma rays travelling within planes perpendicular to the cranial-caudal axis and directed at the detector, and high radiation sensitivity to gamma radiation travelling in other planes and directed at the detector. (LL)

  14. Camera aboard 'Friendship 7' photographs John Glenn during spaceflight

    Science.gov (United States)

    1962-01-01

    A camera aboard the 'Friendship 7' Mercury spacecraft photographs Astronaut John H. Glenn Jr. during the Mercury-Atlas 6 spaceflight (00302-3); it also photographs Glenn as he uses a photometer to view the sun during sunset on the MA-6 space flight (00304).

  15. The GCT camera for the Cherenkov Telescope Array

    Science.gov (United States)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm² pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  16. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

The linear-array push-broom imaging mode is widely used for high-resolution optical satellites (HROS). Using two cameras attached to a high-rigidity support, along with push-broom imaging, is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big-virtual-camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain both a stitched image and the corresponding high-accuracy rational function model (RFM) for subsequent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.
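
    A much-simplified sketch of the re-projection step, assuming the rigorous imaging models are available as two callables; backward_project and forward_project are hypothetical stand-ins, and the real method also involves on-orbit calibration, relative orientation and the RFM:

      import numpy as np

      def resample_to_virtual(single_image, backward_project, forward_project,
                              virtual_shape):
          """Re-project one TDI-CCD strip onto the big-virtual-camera detector.
          backward_project: (row, col) on the virtual detector -> ground point;
          forward_project: ground point -> (row, col) in the real strip image.
          Both are assumed callables derived from the rigorous imaging models.
          Bilinear resampling fills the virtual image."""
          rows, cols = virtual_shape
          virtual = np.zeros(virtual_shape, dtype=float)
          for r in range(rows):
              for c in range(cols):
                  ground = backward_project(r, c)
                  rr, cc = forward_project(ground)
                  r0, c0 = int(np.floor(rr)), int(np.floor(cc))
                  if (0 <= r0 < single_image.shape[0] - 1
                          and 0 <= c0 < single_image.shape[1] - 1):
                      dr, dc = rr - r0, cc - c0
                      patch = single_image[r0:r0 + 2, c0:c0 + 2]
                      w = np.array([[(1 - dr) * (1 - dc), (1 - dr) * dc],
                                    [dr * (1 - dc), dr * dc]])
                      virtual[r, c] = (patch * w).sum()
          return virtual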

  17. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  18. Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera

    Science.gov (United States)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on-axis field of view of 3 arc-minutes by 3 arc-minutes.

  19. Surface and volume three-dimensional displays of Tc-99m HMPAO brain SPECT images in stroke patients with three-head gamma camera

    International Nuclear Information System (INIS)

    Shih, W.J.; Slevin, J.T.; Schleenbaker, R.E.; Mills, B.J.; Magoun, S.L.; Ryo, U.Y.

    1991-01-01

This paper evaluates volume and surface 3D displays in Tc-99m HMPAO brain SPECT imaging in stroke patients. Using a triple-head gamma camera interfaced with a 64-bit supercomputer, 20 patients with stroke were studied. Each patient was imaged 30-60 minutes after an intravenous injection of 20 mCi of Tc-99m HMPAO. SPECT images as well as planar images were routinely obtained; volume and surface 3D displays were then generated, a process requiring 5-10 minutes. Volume and surface 3D displays show the brain from all angles; thus, the location and extension of lesions in the brain are much easier to appreciate. While cerebral lesions were more clearly delineated by surface 3D imaging, crossed cerebellar diaschisis in seven patients was clearly exhibited with volume 3D but not with surface 3D imaging. Volume and surface 3D displays enhance the continuity of structures and the understanding of spatial relationships.

  20. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
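
    A minimal sketch of rasterizing the piecewise-linear disparity map, assuming the vertex disparities have already been estimated and refined; a generic scipy Delaunay tessellation stands in for the paper's edge- and scale-aware triangulation:

      import numpy as np
      from scipy.spatial import Delaunay

      def piecewise_linear_disparity(vertices, vertex_disparity, shape):
          """Rasterize a piecewise-linear disparity map from a triangulation.
          vertices: (N,2) image-domain points (ideally placed along edges, as
          in the paper); vertex_disparity: (N,) disparities at the vertices.
          Barycentric interpolation fills each triangle; pixels outside the
          tessellation remain NaN."""
          tri = Delaunay(vertices)
          ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
          pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
          simplex = tri.find_simplex(pts)
          disparity = np.full(pts.shape[0], np.nan)
          valid = simplex >= 0
          trans = tri.transform[simplex[valid]]        # affine map per triangle
          bary2 = np.einsum('nij,nj->ni', trans[:, :2], pts[valid] - trans[:, 2])
          bary = np.column_stack([bary2, 1.0 - bary2.sum(axis=1)])
          corners = tri.simplices[simplex[valid]]
          disparity[valid] = (bary * vertex_disparity[corners]).sum(axis=1)
          return disparity.reshape(shape)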

  1. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

Lee, Jung Uk [Samsung Electronics, Suwon (Korea, Republic of)]; Sun, Ju Young; Won, Mooncheol [Chungnam Nat'l Univ., Daejeon (Korea, Republic of)]

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
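
    A hedged sketch of the detection-to-position step, substituting OpenCV's stock full-body HOG people detector for the paper's custom head-and-shoulder SVM; the focal length and the real height of the detected region are assumed constants, not values from the paper:

      import math
      import cv2

      # Stock HOG+SVM people detector (stand-in for the paper's classifier).
      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      FOCAL_PX = 700.0       # assumed focal length in pixels (from calibration)
      REAL_HEIGHT_M = 1.7    # assumed real-world height of the detected region

      def person_positions(frame):
          """Estimate (distance, angle) of each detected person relative to
          the camera from the size and location of the detection window,
          using a pinhole-camera approximation."""
          boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
          results = []
          for (x, y, w, h) in boxes:
              distance = FOCAL_PX * REAL_HEIGHT_M / h         # size -> range
              cx = x + w / 2.0 - frame.shape[1] / 2.0         # offset from center
              angle = math.degrees(math.atan2(cx, FOCAL_PX))  # location -> bearing
              results.append((distance, angle))
          return results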

  2. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    International Nuclear Information System (INIS)

    Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol

    2013-01-01

In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.

  3. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    Science.gov (United States)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malaria retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had a sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images and afford the best possible opportunity for reading by a remotely located reader.

  4. Issues in implementing services for a wireless web-enabled digital camera

    Science.gov (United States)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that, increasingly, it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-appliance-based communities. In addition, the rapidly increasing need to upload images to the Internet for photofinishing services, as well as the need to download software upgrades to the camera, is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications of the deployment of recurring-revenue services and enterprise connectivity for a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to enterprise systems.

  5. Camera aperture to optimize data collection in nuclear medicine

    International Nuclear Information System (INIS)

    Dupras, G.; Villeneuve, C.

    1979-01-01

Collection of data with a large field of view camera can cause problems when a small organ like the heart is to be imaged, especially when high activity is used. A simple, inexpensive mask is described that solves most of these problems. (orig.)

  6. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, a rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
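
    A hedged sketch of the coordinate transformation, assuming the retina-like layout can be approximated by OpenCV's log-polar mapping with bilinear (sub-pixel) interpolation; the actual sensor's pixel distribution may differ from this model:

      import cv2

      def retina_to_cartesian(retina_img, out_size=512):
          """Map a retina-like (assumed log-polar) image back to Cartesian
          coordinates with sub-pixel bilinear interpolation, using the
          inverse mode of cv2.warpPolar."""
          center = (out_size / 2.0, out_size / 2.0)
          max_radius = out_size / 2.0
          flags = cv2.INTER_LINEAR | cv2.WARP_POLAR_LOG | cv2.WARP_INVERSE_MAP
          return cv2.warpPolar(retina_img, (out_size, out_size), center,
                               max_radius, flags)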

  7. Fast and Practical Head Tracking in Brain Imaging with Time-of-Flight Camera

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Jensen, Rasmus Ramsbøl

    2013-01-01

    scanners. Particularly in MRI and PET, the newest generation of TOF cameras could become a method of tracking small and large scale patient movement in a fast and user friendly way required in clinical environments. We present a novel methodology for fast tracking from TOF point clouds without the need...

  8. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    Science.gov (United States)

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along some specified routes, which limits its application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care, such as abnormal gait recognition and fall risk assessment.
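
    A minimal sketch of the two quantities quoted above, assuming step lengths have already been extracted from the footprints; the values and the left-first alternation are invented for illustration:

      def step_length_symmetry_ratio(step_lengths):
          """Ratio of mean left-foot to mean right-foot step length; steps
          are assumed to alternate, starting with the left foot. A ratio
          near 1.0 indicates symmetric gait."""
          left, right = step_lengths[0::2], step_lengths[1::2]
          return (sum(left) / len(left)) / (sum(right) / len(right))

      def mape(estimated, reference):
          """Mean absolute percentage error, the accuracy measure quoted
          in the abstract."""
          return 100.0 * sum(abs(e - r) / abs(r)
                             for e, r in zip(estimated, reference)) / len(reference)

      print(step_length_symmetry_ratio([0.62, 0.60, 0.63, 0.59]))  # ~1.05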

  9. The Janus Head Article - On Quality in the Documentation Process

    Directory of Open Access Journals (Sweden)

    Henrik Andersen

    2006-03-01

Full Text Available The god Janus in Roman mythology was a two-faced god; each face had its own view of the world. Our idea behind the Janus Head Article is to give you two different and maybe even contradictory views on a certain topic. In this issue the topic is quality in the documentation process. In the first half of this issue’s Janus Head Article, translators from the international company Grundfos give us their view of quality and how quality is managed in the documentation process at Grundfos. In the second half of the Janus Head Article, scholars from the University of Southern Denmark describe and discuss quality in the documentation process at Grundfos from a researcher’s point of view.

  10. The Janus Head Article - On Quality in the Documentation Process

    Directory of Open Access Journals (Sweden)

    Henrik Andersen

    2012-08-01

Full Text Available The god Janus in Roman mythology was a two-faced god; each face had its own view of the world. Our idea behind the Janus Head Article is to give you two different and maybe even contradictory views on a certain topic. In this issue the topic is quality in the documentation process. In the first half of this issue’s Janus Head Article, translators from the international company Grundfos give us their view of quality and how quality is managed in the documentation process at Grundfos. In the second half of the Janus Head Article, scholars from the University of Southern Denmark describe and discuss quality in the documentation process at Grundfos from a researcher’s point of view.

  11. Low power multi-camera system and algorithms for automated threat detection

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithm consume additional power and increase the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
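
    A minimal sketch of the duty-cycling loop described above; the camera interface (power_on/grab/power_off) and the burst size are hypothetical stand-ins, not details from the paper:

      import itertools

      def cycle_sensors(cameras, frames_per_burst=8, bursts=1000,
                        detect=lambda frame: None):
          """Round-robin duty cycling over an N-camera array: power one
          camera at a time, grab a fixed burst of frames, run detection,
          then power it down again."""
          for cam in itertools.islice(itertools.cycle(cameras), bursts):
              cam.power_on()
              frames = [cam.grab() for _ in range(frames_per_burst)]
              cam.power_off()   # the other N-1 stay dark: ~N-fold power saving
              for frame in frames:
                  detect(frame)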

  12. Borehole camera technology and its application in the Three Gorges Project

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C.Y.; Sheng, Q.; Ge, X.R. [Chinese Academy of Sciences, Inst. of Rock and Soil Mechanics, Wuhan (China); Law, K.T. [Carleton Univ., Ottawa, ON (Canada)

    2002-07-01

China's Three Gorges Project is the world's largest hydropower project, consisting of a 1,983-meter-long, 185-meter-high dam and 26 power generating units. Borehole examination has been conducted at the site to ensure the stability of the slope of the ship lock used for navigation. This paper describes two systems for borehole inspection and viewing. Both borehole camera technologies provide a unique way for geological engineers to observe conditions inside a borehole. The Axial-View Borehole Television (AVBTV) provides a real-time frontal view of the borehole ahead of the probe, making it possible to detect where holes are blocked and to see cracks and other distinctive features in the strata. The Digital Panoramic Borehole Camera System (DPBCS) can collect, measure, save, analyze, manage and display geological information about a borehole. It can also be used to determine the orientation of discontinuities, generate unrolled images and virtual core graphs, and conduct statistical analysis. Both camera systems have been demonstrated successfully at the Three Gorges Project for qualitative description of the borehole as well as for quantitative analysis of cracks existing in the rock. It has been determined that most of the cracks dip in the same general direction as the northern slope of the permanent ship lock of the Three Gorges Project. 12 refs., 1 tab., 9 figs.

  13. Surveillance of a 2D Plane Area with 3D Deployed Cameras

    Directory of Open Access Journals (Sweden)

    Yi-Ge Fu

    2014-01-01

Full Text Available As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to get maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution constraints. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regulation item in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm.
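
    A hedged sketch of this kind of optimization loop, using the classic sigmoid-velocity binary PSO in place of the paper's probability-inspired update rule; the boolean coverage masks (one per candidate camera pose, precomputed from its 3D pose, FOV and resolution constraints) and the penalty weight are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)

      def coverage(bits, masks):
          """Fraction of ground-plane cells seen by at least one active
          camera; masks[i] is a boolean array of cells covered by camera i."""
          seen = np.zeros(masks.shape[1], dtype=bool)
          for i, on in enumerate(bits):
              if on:
                  seen |= masks[i]
          return seen.mean()

      def cost(bits, masks, lam=0.02):
          # Regulation item: reward coverage, penalize the camera count.
          return coverage(bits, masks) - lam * bits.sum()

      def binary_pso(masks, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
          """Jointly choose the number and configuration of cameras by
          maximizing the regularized coverage cost over bit vectors."""
          n = masks.shape[0]
          x = rng.integers(0, 2, (n_particles, n))
          v = rng.normal(0, 1, (n_particles, n))
          pbest = x.copy()
          pbest_val = np.array([cost(p, masks) for p in x])
          g = pbest[pbest_val.argmax()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, n))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = (rng.random((n_particles, n)) < 1 / (1 + np.exp(-v))).astype(int)
              vals = np.array([cost(p, masks) for p in x])
              improved = vals > pbest_val
              pbest[improved] = x[improved]
              pbest_val[improved] = vals[improved]
              g = pbest[pbest_val.argmax()].copy()
          return g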

  14. Modeling of a compliant joint in a Magnetic Levitation System for an endoscopic camera

    NARCIS (Netherlands)

    Simi, M.; Tolou, N.; Valdastri, P.; Herder, J.L.; Menciassi, A.; Dario, P.

    2012-01-01

    A novel compliant Magnetic Levitation System (MLS) for a wired miniature surgical camera robot was designed, modeled and fabricated. The robot is composed of two main parts, head and tail, linked by a compliant beam. The tail module embeds two magnets for anchoring and manual rough translation. The

  15. Multi-Head Very High Power Strobe System For Motion Picture Special Effects

    Science.gov (United States)

    Lovoi, P. A.; Fink, Michael L.

    1983-10-01

A very large, camera-synchronizable strobe system has been developed for motion picture special effects. This system, the largest ever built, was delivered to MGM/UA to be used in the movie "War Games". The system consists of 12 individual strobe heads and a power supply distribution system. Each strobe head operates independently and may be flashed up to 24 times per second under computer control. An energy of 480 Joules per flash is used in six strobe heads and 240 Joules per flash in the remaining six strobe heads. The beam pattern is rectangular with a FWHM of 60° × 48°.

  16. A multicenter prospective cohort study on camera navigation training for key user groups in minimally invasive surgery

    NARCIS (Netherlands)

    Graafland, Maurits; Bok, Kiki; Schreuder, Henk W. R.; Schijven, Marlies P.

    2014-01-01

Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause a suboptimal view of the operating field, thereby increasing the risk of errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents,

  17. The views of heads of schools of nursing about mental health nursing content in undergraduate programs.

    Science.gov (United States)

    Happell, Brenda; McAllister, Margaret

    2014-05-01

    Criticisms about the mental health nursing content of Bachelor of Nursing programs have been common since the introduction of comprehensive nursing education in Australia. Most criticism has come from the mental health nursing sector and the views of key stakeholders have not been systematically reported. Heads of Schools of Nursing have considerable influence over the content of nursing programs, and their perspectives must be part of ongoing discussions about the educational preparation of nurses. This article reports the findings of a qualitative exploratory study, involving in-depth interviews with Heads of Schools of Nursing from Queensland, Australia. Thematic data analysis revealed two main themes: Realising the Goal? and Influencing Factors. Overall, participants did not believe current programs were preparing graduates for beginning level practice in mental health settings. In particular, participants believed that the quality of mental health content was influenced by the overcrowded curriculum, the availability of quality clinical placements, the strength of the mental health team, and the degree of consumer focus. The findings suggest the current model of nursing education in Australia does not provide an adequate foundation for mental health nursing practice and alternative approaches should be pursued as a matter of urgency.

  18. UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER

    Directory of Open Access Journals (Sweden)

    M. Hillemann

    2017-08-01

Full Text Available Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveys. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which extends the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step that ensures an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8° and 6.4 mm.
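
    A minimal sketch of the quantitative evaluation step, comparing an estimated camera-to-camera pose against a control-point reference; the rotation deviation is returned in degrees and the translation deviation in the units of the translation vectors (assumed interfaces, not the paper's code):

      import numpy as np

      def pose_deviation(R_est, t_est, R_ref, t_ref):
          """Deviation between an estimated relative pose (R_est, t_est) and
          a reference pose (R_ref, t_ref): angle of the residual rotation
          and Euclidean distance between the translations."""
          R_delta = R_est @ R_ref.T
          cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
          angle_deg = np.degrees(np.arccos(cos_angle))
          t_dev = np.linalg.norm(np.asarray(t_est) - np.asarray(t_ref))
          return angle_deg, t_dev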

  19. Validation of quantitative brain dopamine D2 receptor imaging with a conventional single-head SPET camera

    International Nuclear Information System (INIS)

    Nikkinen, P.; Liewendahl, K.; Savolainen, S.; Launes, J.

    1993-01-01

Phantom measurements were performed with a conventional single-head single-photon emission tomography (SPET) camera in order to validate the relevance of the basal ganglia/frontal cortex iodine-123 iodobenzamide (IBZM) uptake ratios measured in patients. Inside a cylindrical phantom (diameter 22 cm), two cylinders with a diameter of 3.3 cm were inserted. The activity concentrations of the cylinders ranged from 6.0 to 22.6 kBq/ml and the cylinder/background activity ratios varied from 1.4 to 3.8. From reconstructed SPET images the cylinder/background activity ratios were calculated using three different regions of interest (ROIs). A linear relationship between the measured activity ratio and the true activity ratio was obtained. In patient studies, basal ganglia/frontal cortex IBZM uptake ratios determined from the reconstructed slices using attenuation correction prior to reconstruction were 1.30 ±0.03 in idiopathic Parkinson's disease (n = 9), 1.33 ±0.09 in infantile and juvenile neuronal ceroid lipofuscinosis (n = 7) and 1.34 ±0.05 in narcolepsy (n = 8). Patients with Huntington's disease had significantly lower ratios (1.09 ±0.04, n = 5). The corrected basal ganglia/frontal cortex ratios, determined using linear regression, were about 80 % higher. The use of dual-window scatter correction increased the measured ratios by about 10 %. Although comprehensive correction methods can further improve the resolution in SPET images, the resolution of the SPET system used by us (1.5 - 2 cm) will determine what is achievable in basal ganglia D2 receptor imaging. (orig.)

  20. Validation of quantitative brain dopamine D2 receptor imaging with a conventional single-head SPET camera

    Energy Technology Data Exchange (ETDEWEB)

Nikkinen, P [Helsinki Univ. (Finland). Dept. of Clinical Chemistry]; Liewendahl, K [Helsinki Univ. (Finland). Dept. of Clinical Chemistry]; Savolainen, S [Helsinki Univ. (Finland). Dept. of Physics]; Launes, J [Helsinki Univ. (Finland). Dept. of Neurology]

    1993-08-01

Phantom measurements were performed with a conventional single-head single-photon emission tomography (SPET) camera in order to validate the relevance of the basal ganglia/frontal cortex iodine-123 iodobenzamide (IBZM) uptake ratios measured in patients. Inside a cylindrical phantom (diameter 22 cm), two cylinders with a diameter of 3.3 cm were inserted. The activity concentrations of the cylinders ranged from 6.0 to 22.6 kBq/ml and the cylinder/background activity ratios varied from 1.4 to 3.8. From reconstructed SPET images the cylinder/background activity ratios were calculated using three different regions of interest (ROIs). A linear relationship between the measured activity ratio and the true activity ratio was obtained. In patient studies, basal ganglia/frontal cortex IBZM uptake ratios determined from the reconstructed slices using attenuation correction prior to reconstruction were 1.30 ±0.03 in idiopathic Parkinson's disease (n = 9), 1.33 ±0.09 in infantile and juvenile neuronal ceroid lipofuscinosis (n = 7) and 1.34 ±0.05 in narcolepsy (n = 8). Patients with Huntington's disease had significantly lower ratios (1.09 ±0.04, n = 5). The corrected basal ganglia/frontal cortex ratios, determined using linear regression, were about 80 % higher. The use of dual-window scatter correction increased the measured ratios by about 10 %. Although comprehensive correction methods can further improve the resolution in SPET images, the resolution of the SPET system used by us (1.5 - 2 cm) will determine what is achievable in basal ganglia D2 receptor imaging. (orig.)

  1. Design of a Day/Night Star Camera System

    Science.gov (United States)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

This paper describes the design of a camera system capable of acquiring stars during both the day and night cycles of a high altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using the MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2×10⁻⁵ W/cm²/sr/µm in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars per square degree at star magnitude limits of 8.25-9.00. The system will have an F2.8, 64.3 mm diameter lens and a 1340×1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 × 6.8 µm pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 square degrees and provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.
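
    The quoted field of view can be roughly reproduced from the stated optics, assuming the effective focal length equals the f-number times the aperture diameter (an assumption, since the focal length is not given directly):

      import math

      focal_mm = 2.8 * 64.3                       # ~180 mm effective focal length
      pix_mm = 6.8e-3                             # 6.8 um pixel pitch
      w_mm, h_mm = 1340 * pix_mm, 1037 * pix_mm   # sensor dimensions

      fov_w = 2 * math.degrees(math.atan(w_mm / (2 * focal_mm)))
      fov_h = 2 * math.degrees(math.atan(h_mm / (2 * focal_mm)))
      print(fov_w * fov_h)   # ~6.5 square degrees, close to the quoted 6.33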

  2. NUKAB system use with the PICKER DYNA CAMERA II

    International Nuclear Information System (INIS)

    Collet, H.; Faurous, P.; Lehn, A.; Suquet, P.

Present-day data processing units connected to scintillation gamma cameras can use either hard-wired or stored-program systems. The NUKAB system uses the latter technique. The central element of the data processing unit, connected to the PICKER DYNA CAMERA II output, is a DIGITAL PDP 8E computer with 12-bit words. The 12-bit word format restricts the possibilities of digitization, with 64×64 images representing the practical limit. However, the NUKAB system appears well suited to the processing of data from gamma cameras at present in service. The addition of output terminals such as plotting panels should widen the possibilities of the system. It seems that the 64×64 format is not a handicap in view of the resolving power of the detectors.

  3. Single camera photogrammetry system for EEG electrode identification and localization.

    Science.gov (United States)

    Baysal, Uğur; Sengül, Gökhan

    2010-04-01

In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are simultaneously implemented. A rotating 2 MP digital camera about 20 cm above the subject's head is used, and the images are acquired at predefined stop points separated azimuthally at equal angular displacements. In order to realize full automation, the electrodes have been labeled by colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested by using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined using three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine with about 6.5 µm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.
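
    A minimal sketch of recovering one marker's 3D position from two of the camera's stop points, assuming calibrated 3x4 projection matrices for the two stops; the color-based detector that supplies the pixel coordinates is not shown:

      import cv2
      import numpy as np

      def triangulate_electrode(P1, P2, uv1, uv2):
          """Triangulate the 3D position of one colored electrode marker.
          P1, P2: 3x4 projection matrices of the rotating camera at two stop
          points; uv1, uv2: the marker's pixel coordinates in each image."""
          pts1 = np.array(uv1, dtype=float).reshape(2, 1)
          pts2 = np.array(uv2, dtype=float).reshape(2, 1)
          X = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1
          return (X[:3] / X[3]).ravel()                   # 3D coordinates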

  4. Multiview face detection based on position estimation over multicamera surveillance system

    Science.gov (United States)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back to 3-D space for correspondence. However, inevitable false face detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is illustrated to speed up the searching process over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction on real video sequences even under serious occlusion.
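
    A minimal sketch of the projection step for one searched cube, assuming calibrated 3x4 projection matrices per camera; the face-existence test on the projected regions, which is the core of the paper's method, is not shown:

      import numpy as np

      def project_cube_center(center, projections, image_sizes):
          """Project the center of one searched 3D cube into every camera
          view. projections: list of 3x4 projection matrices; image_sizes:
          list of (width, height). Returns pixel coordinates per camera, or
          None when the cube falls outside that camera's image."""
          X = np.append(np.asarray(center, dtype=float), 1.0)   # homogeneous
          hits = []
          for P, (w, h) in zip(projections, image_sizes):
              u, v, s = P @ X
              if s <= 0:                 # behind the camera
                  hits.append(None)
                  continue
              u, v = u / s, v / s
              hits.append((u, v) if 0 <= u < w and 0 <= v < h else None)
          return hits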

  5. Full Body Pose Estimation During Occlusion using Multiple Cameras

    DEFF Research Database (Denmark)

    Fihl, Preben; Cosar, Serhan

    people is a very challenging problem for methods based on pictorials structure as for any other monocular pose estimation method. In this report we present work on a multi-view approach based on pictorial structures that integrate low level information from multiple calibrated cameras to improve the 2D...

  6. The cloud monitor by an infrared camera at the Telescope Array experiment

    International Nuclear Information System (INIS)

    Shibata, F.

    2011-01-01

The measurement of extensive air showers using fluorescence detectors (FDs) is affected by the condition of the atmosphere. In particular, the FD aperture is limited by cloudiness. If clouds exist on the light path from an extensive air shower to the FDs, fluorescence photons will be strongly absorbed. Therefore, cloudiness in an FD's field of view (FOV) is one of the important quality-cut conditions in FD analysis. In the Telescope Array (TA), an infrared (IR) camera with 320×236 pixels and a field of view of 25.8° × 19.5° has been installed at an observation site for cloud monitoring during FD observations. This IR camera measures the temperature of the sky every 30 min during FD observation. The IR camera is mounted on a steering table, which can be adjusted in elevation and azimuth. In the resulting temperature maps, clouds appear at a higher temperature than areas of cloudless sky. In this paper, we discuss the quality of the cloud monitoring data, the analysis method, and the current cloudiness quality-cut condition in FD analysis.

  7. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    OpenAIRE

    Steven Nicholas Graves, MA; Deana Saleh Shenaq, MD; Alexander J. Langerman, MD; David H. Song, MD, MBA

    2015-01-01

Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to ...

  8. STS-51B/Challenger - Isolated Launch View

    Science.gov (United States)

    1985-01-01

Live footage of various isolated launch views is seen. Views of the Space Shuttle Challenger are shown from different camera sites such as the VAB (Vehicle Assembly Building) Roof, Pad Perimeter, Helicopter, Convoy, and Midfield. Also shown from different cameras is the re-entry and landing of the shuttle at Kennedy Space Center (KSC). Footage also includes the ground recovery crew as they travel to the spacecraft. Challenger's crew, Commander Robert F. Overmyer, Pilot Frederick D. Gregory, Mission Specialists Don L. Lind, Norman E. Thagard, and William E. Thornton, and Payload Specialists Lodewijk van den Berg and Taylor G. Wang, are also seen leaving the craft.

  9. SHOK—The First Russian Wide-Field Optical Camera in Space

    Science.gov (United States)

    Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.

    2018-02-01

Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each of the cameras is placed in the gamma-ray burst detection area of other devices located onboard the "Lomonosov" spacecraft. SHOK provides measurements of optical emissions with a magnitude limit of ˜9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths in a very wide field of view (1000 square degrees per camera), and for the detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emissions from the gamma-ray burst error boxes detected by the BDRG device, initiated by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft has two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.

  10. Optical registration of spaceborne low light remote sensing camera

    Science.gov (United States)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

To meet the high-precision requirements of spaceborne low-light remote sensing camera optical registration, dual-channel optical registration of the CCD and EMCCD is achieved by a high-magnification optical registration system. This paper proposes a system-integration optical registration scheme, and an associated registration accuracy scheme, for a spaceborne low-light remote sensing camera with short focal depth and wide field of view. It also includes an analysis of the parallel misalignment of the CCD and of the accuracy of optical registration. Actual registration results show clear imaging; the MTF and the accuracy of optical registration meet requirements, providing an important guarantee of obtaining high-quality image data in orbit.

  11. The supinated mediolateral radiograph for detection of humeral head osteochondrosis in the dog

    International Nuclear Information System (INIS)

    Callahan, T.F.; Ackerman, N.

    1985-01-01

Mediolateral and supinated mediolateral radiographs of the shoulder joint were compared in 19 dogs. Twenty shoulders, representing 15 dogs (5 had bilateral lesions), had osteochondrosis of the humeral head. The flattened humeral head and subchondral defect were detectable in both views in all affected shoulders. The lesions were slightly more easily detected in the supinated view. The supinated view more consistently demonstrated the presence of a calcified cartilage flap and, therefore, could be useful in determining a course of therapy. In four dogs (8 shoulders) without osteochondrosis and six normal shoulders from affected dogs, there were no instances in which a shoulder appeared normal on one view but demonstrated a lesion on the other. The supinated view should be obtained in addition to the mediolateral view in dogs with osteochondrosis of the humeral head.

  12. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on the Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  13. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  14. Preliminary Experience with Small Animal SPECT Imaging on Clinical Gamma Cameras

    Directory of Open Access Journals (Sweden)

    P. Aguiar

    2014-01-01

Full Text Available The traditional lack of techniques suitable for in vivo imaging has induced a great interest in molecular imaging for preclinical research. Nevertheless, its use spreads slowly due to the difficulties in justifying the high cost of the current dedicated preclinical scanners. An alternative for lowering the costs is to repurpose old clinical gamma cameras for preclinical imaging. In this paper we assess the performance of a portable device working coupled to a single-head clinical gamma camera, and we present our preliminary experience in several small animal applications. Our findings, based on phantom experiments and animal studies, provided an image quality, in terms of the contrast-noise trade-off, comparable to dedicated preclinical pinhole-based scanners. We feel that our portable device offers an opportunity to recycle the widely available clinical gamma cameras in nuclear medicine departments for small animal SPECT imaging, and we hope that it can contribute to spreading the use of preclinical imaging within institutions on tight budgets.

  15. Computed Tomography (CT) -- Head

    Medline Plus

Full Text Available CT scanning of the head ... is done because a potential abnormality needs further evaluation with additional views or a special imaging technique. ...

  16. Mars Orbiter Camera Views the 'Face on Mars' - Comparison with Viking

    Science.gov (United States)

    1998-01-01

processed to remove the sensitivity differences between adjacent picture elements (calibrated). This removes the vertical streaking. The contrast and brightness of the image were adjusted, and 'filters' were applied to enhance detail at several scales. The image was then geometrically warped to match the computed position information for a Mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image for the same reason Greenland looks larger than Africa on a Mercator map of the Earth. A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately. See PIA01440-1442 for additional processing steps. Also see PIA01236 for the raw image. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  17. Adaptation Computing Parameters of Pan-Tilt-Zoom Cameras for Traffic Monitoring

    Directory of Open Access Journals (Sweden)

    Ya Lin WU

    2014-01-01

Full Text Available Closed-circuit television (CCTV) cameras have been widely used in recent years for traffic monitoring and surveillance applications. CCTV cameras can be used to automatically extract real-time traffic parameters using image processing and tracking technologies. In particular, pan-tilt-zoom (PTZ) cameras can provide flexible view selection as well as a wider observation range, which allows traffic parameters to be calculated accurately. Therefore, calibrating the parameters of PTZ cameras plays an important role in vision-based traffic applications. However, in the specific traffic scenario in which the license plate of an illegally parked car must be located, the parameters of the PTZ camera have to be updated according to the position and distance of the illegally parked car. The proposed traffic monitoring system uses an ordinary webcam and a PTZ camera. We obtain the vanishing point of the traffic lane lines in the pixel-based coordinate system from the fixed webcam. The parameters of the PTZ camera can be initialized from the monitoring distance, the specific objectives, and the vanishing point. We can then use the coordinate position of the illegally parked car to update the parameters of the PTZ camera, obtain the real-world coordinate position of the illegally parked car, and use it to compute the distance. The results show that the error between the measured distance and the real distance is only 0.2064 meter.
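
    A minimal sketch of obtaining the lane-line vanishing point, using the standard homogeneous-coordinates construction (a line through points p and q is l = p x q, and two lines meet at v = l1 x l2); the example endpoints are invented:

      import numpy as np

      def vanishing_point(line1, line2):
          """Intersect two lane lines, each given as two (x, y) pixel
          points, and return the vanishing point in pixel coordinates."""
          def to_line(p, q):
              return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
          v = np.cross(to_line(*line1), to_line(*line2))
          return v[:2] / v[2]   # dehomogenize back to pixels

      print(vanishing_point(((100, 700), (300, 400)),
                            ((900, 700), (700, 400))))   # [500. 100.]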

  18. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    Science.gov (United States)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for geometric calibration and radiometric correction are presented in the paper.

  19. NetView technical research

    Science.gov (United States)

    1993-01-01

    This is the Final Technical Report for the NetView Technical Research task. This report is prepared in accordance with Contract Data Requirements List (CDRL) item A002. NetView assistance was provided and details are presented under the following headings: NetView Management Systems (NMS) project tasks; WBAFB IBM 3090; WPAFB AMDAHL; WPAFB IBM 3084; Hill AFB; McClellan AFB AMDAHL; McClellan AFB IBM 3090; and Warner-Robins AFB.

  20. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    Science.gov (United States)

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  1. Remote removal of an obstruction from FFTF [Fast Flux Test Facility] in-service inspection camera track

    International Nuclear Information System (INIS)

    Gibbons, P.W.

    1990-11-01

    Remote techniques and special equipment were used to clear the path of a closed-circuit television camera system that travels on a monorail track around the reactor vessel support arm structure. A tangle of wire-wrapped instrumentation tubing had been inadvertently inserted through a dislocated guide-tube expansion joint and into the camera track area. An externally driven auger device, mounted on the track ahead of the camera to view the procedure, was used to retrieve the tubing. 6 figs

  2. The status of MUSIC: the multiwavelength sub-millimeter inductance camera

    Science.gov (United States)

    Sayers, Jack; Bockstiegel, Clint; Brugger, Spencer; Czakon, Nicole G.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Gill, Amandeep K.; Glenn, Jason; Golwala, Sunil R.; Hollister, Matthew I.; Lam, Albert; LeDuc, Henry G.; Maloney, Philip R.; Mazin, Benjamin A.; McHugh, Sean G.; Miller, David A.; Mroczkowski, Anthony K.; Noroozian, Omid; Nguyen, Hien Trong; Schlaerth, James A.; Siegel, Seth R.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas

    2014-08-01

    The Multiwavelength Sub/millimeter Inductance Camera (MUSIC) is a four-band photometric imaging camera operating from the Caltech Submillimeter Observatory (CSO). MUSIC is designed to utilize 2304 microwave kinetic inductance detectors (MKIDs), with 576 MKIDs for each observing band centered on 150, 230, 290, and 350 GHz. MUSIC's field of view (FOV) is 14' square, and the point-spread functions (PSFs) in the four observing bands have 45'', 31'', 25'', and 22'' full-widths at half maximum (FWHM). The camera was installed in April 2012 with 25% of its nominal detector count in each band, and has subsequently completed three short sets of engineering observations and one longer duration set of early science observations. Recent results from on-sky characterization of the instrument during these observing runs are presented, including achieved map-based sensitivities from deep integrations, along with results from lab-based measurements made during the same period. In addition, recent upgrades to MUSIC, which are expected to significantly improve the sensitivity of the camera, are described.

  3. Status of the NectarCAM camera project

    International Nuclear Information System (INIS)

    Glicenstein, J.F.; Delagnes, E.; Fesquet, M.; Louis, F.; Moudden, Y.; Moulin, E.; Nunio, F.; Sizun, P.

    2014-01-01

    NectarCAM is a camera designed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA) covering the central energy range of 100 GeV to 30 TeV. It has a modular design based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 7 to 8 degrees. Each module includes the photomultiplier bases, high-voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. Recorded events last between a few nanoseconds and tens of nanoseconds. A flexible trigger scheme allows very long events to be read out. NectarCAM can sustain a data rate of 10 kHz. The camera concept, the design and tests of the various sub-components, and results of thermal and electrical prototypes are presented. The design includes the mechanical structure, the cooling of the electronics, read-out, clock distribution, slow control, data-acquisition, trigger, monitoring and services. A 133-pixel prototype with full-scale mechanics, cooling, data acquisition and slow control will be built at the end of 2014. (authors)

  4. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
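
    The record does not include the calibration code; the per-camera intrinsic step of such a method can be sketched with OpenCV, assuming the robot has carried a chessboard pattern through the camera's field of view (paths, pattern size and square size are hypothetical):

      import glob
      import cv2
      import numpy as np

      pattern = (9, 6)      # inner chessboard corners (assumed)
      square = 0.025        # square size in metres (assumed)

      obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

      obj_pts, img_pts = [], []
      for path in glob.glob("cam03/frame_*.png"):
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          ok, corners = cv2.findChessboardCorners(gray, pattern)
          if ok:
              obj_pts.append(obj)
              img_pts.append(corners)

      # Intrinsics and distortion of this one camera; the paper couples
      # many such cameras together using the robot odometry as well.
      rms, K, dist, _, _ = cv2.calibrateCamera(
          obj_pts, img_pts, gray.shape[::-1], None, None)
      print("reprojection RMS [px]:", rms)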

  5. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  6. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. As a result of this development, the underwater camera has 2 lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, 6 or so spent-fuel IDs can be identified at a time at a distance of 1 or 1.5 m, and a 0.3 mmφ pin-hole is recognized at 1.5 m distance and 20 times zoom-up. Noise caused by radiation of less than 15 Gy/h did not affect the images. (author)

  7. Pretreatment organ function in patients with advanced head and neck cancer: clinical outcome measures and patients' views

    Directory of Open Access Journals (Sweden)

    Rasch Coen RN

    2009-11-01

    Full Text Available Abstract Background The aim of this study is to thoroughly assess pretreatment organ function in advanced head and neck cancer through various clinical outcome measures and patients' views. Methods A comprehensive, multidimensional assessment was used that included quality of life, swallowing, mouth opening, and weight changes. Fifty-five patients with stage III-IV disease were entered in this study prior to organ-preserving (chemo)radiation treatment. Results All patients showed pretreatment abnormalities or problems, identified by one or more of the outcome measures. The most frequent problems concerned swallowing, pain, and weight loss. Interestingly, clinical outcome measures and patients' perception did not always concur. For example, videofluoroscopy identified aspiration and laryngeal penetration in 18% of the patients, whereas only 7 patients (13%) perceived this as problematic; only 2 out of 7 patients with objective trismus actually perceived trismus. Conclusion The assessment identified several problems in this patient population already before treatment. A thorough assessment of both clinical measures and patients' views appears to be necessary to gain insight into all (perceived) pre-existing functional and quality of life problems.

  8. Performance characteristics of the novel PETRRA positron camera

    CERN Document Server

    Ott, R J; Erlandsson, K; Reader, A; Duxbury, D; Bateman, J; Stephenson, R; Spill, E

    2002-01-01

    The PETRRA positron camera consists of two 60 cm × 40 cm annihilation photon detectors mounted on a rotating gantry. Each detector contains large BaF2 scintillators interfaced to large-area multiwire proportional chambers filled with a photo-sensitive vapour (tetrakis-(dimethylamino)-ethylene). The spatial resolution of the camera has been measured as 6.5 ± 1.0 mm FWHM throughout the sensitive field-of-view (FoV), the timing resolution is between 7 and 10 ns FWHM, and the detection efficiency for annihilation photons is approximately 30% per detector. The count-rates obtained, from a 20 cm diameter by 11 cm long water-filled phantom containing 90 MBq of fluorine-18, were approximately 1.25 × 10^6 singles and approximately 1.1 × 10^5 cps raw coincidences, limited only by the read-out system dead-time of approximately 4 μs. The count-rate performance, sensitivity and large FoV make the camera ideal for whole-body imaging in oncology.

  9. INFLUENCE OF MECHANICAL ERRORS IN A ZOOM CAMERA

    Directory of Open Access Journals (Sweden)

    Alfredo Gardel

    2011-05-01

    Full Text Available As is well known, varying the focus and zoom of a camera lens system changes the alignment of the lens components, resulting in a displacement of the image centre and field of view. Thus, knowledge of how the image centre shifts may be important for some aspects of camera calibration. As shown in other papers, the pinhole model is not adequate for zoom lenses. To ensure a calibration model for these lenses, the calibration parameters must be adjusted. The geometrical modelling of a zoom lens is realized from its lens specifications. The influence on the calibration parameters is calculated by introducing mechanical errors in the mobile lenses. Figures are given describing the errors obtained in the principal point coordinates and also in their standard deviations. A comparison is then made with the errors that come from the incorrect detection of the calibration points. It is concluded that mechanical errors of actual zoom lenses can be neglected in the calibration process because detection errors have more influence on the camera parameters.

  10. Evaluation of a high-resolution, breast-specific, small-field-of-view gamma camera for the detection of breast cancer

    International Nuclear Information System (INIS)

    Brem, R.F.; Kieper, D.A.; Rapelyea, J.A.; Majewski, S.

    2003-01-01

    Purpose: The purpose of our study is to review the state of the art in nuclear medicine imaging of the breast (scintimammography) and to evaluate a novel, high-resolution, breast-specific gamma camera (HRBGC) for the detection of suspicious breast lesions. Materials and Methods: Fifty patients with 58 breast lesions, in whom a scintimammogram was clinically indicated, were prospectively evaluated with a general-purpose gamma camera and a HRBGC prototype. Nuclear studies were prospectively classified as negative (normal/benign) or positive (suspicious/malignant) by two radiologists, blinded to mammographic and histologic results, for both the conventional and the high-resolution systems. All lesions were confirmed by pathology. Results: Included in this study were 30 benign and 28 malignant lesions. The sensitivity for detection of breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. The specificity of both systems was 93.3% (28/30). In the 18 nonpalpable cancers, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and the HRBGC, respectively. In cancers ≤1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four of the cancers (median size, 8.5 mm) detected with the HRBGC were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with a high-resolution, breast-specific gamma camera results in improved sensitivity for the detection of cancer, with the greater improvement demonstrated for nonpalpable and ≤1 cm cancers.

  11. A novel fully integrated handheld gamma camera

    International Nuclear Information System (INIS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-01-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine in a single device the gamma-ray detector, the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results obtained confirm a rapid response of the device and an adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could easily be combined into surgical navigation systems.

  12. A novel fully integrated handheld gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Massari, R.; Ucci, A.; Campisi, C. [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy); Scopinaro, F. [University of Rome “La Sapienza”, S. Andrea Hospital, Rome (Italy); Soluri, A., E-mail: alessandro.soluri@ibb.cnr.it [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy)

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine in a single device the gamma-ray detector, the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results obtained confirm a rapid response of the device and an adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could easily be combined into surgical navigation systems.

  13. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    International Nuclear Information System (INIS)

    Cho, Jai Wan; Jeong, Kyung Min

    2012-01-01

    In the case of the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation. Two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics, such as flippers and crawlers. A CCD camera with wide-field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel. Another two CCD cameras are assigned to reading the indications of the radiation dosimeter and the instruments. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent in to investigate the situation on the unit 2 reactor building refueling floor. The camera image with the gamma ray dose-rate information is transmitted to the remote control site via the VDSL communication line. At the remote control site, the radiation situation on the unit 2 reactor building refueling floor can be perceived by monitoring the camera image. To build up the radiation profile of the surveyed refueling floor, the gamma ray dose-rate information in the image has to be converted to numerical values. In this paper, we extract the gamma ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method
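
    The record does not specify the OCR implementation; one plausible realization, shown here purely as a sketch (the region coordinates and file name are hypothetical), is to threshold the dosimeter display region and pass it to a general-purpose OCR engine such as Tesseract:

      import cv2
      import pytesseract

      frame = cv2.imread("quince2_frame.png")          # hypothetical frame
      roi = frame[220:300, 410:620]                    # assumed display area
      gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
      _, binary = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      text = pytesseract.image_to_string(
          binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
      print("dose rate [mSv/h]:", text.strip())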

  14. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In the case of the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation. Two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics, such as flippers and crawlers. A CCD camera with wide-field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel. Another two CCD cameras are assigned to reading the indications of the radiation dosimeter and the instruments. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent in to investigate the situation on the unit 2 reactor building refueling floor. The camera image with the gamma ray dose-rate information is transmitted to the remote control site via the VDSL communication line. At the remote control site, the radiation situation on the unit 2 reactor building refueling floor can be perceived by monitoring the camera image. To build up the radiation profile of the surveyed refueling floor, the gamma ray dose-rate information in the image has to be converted to numerical values. In this paper, we extract the gamma ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method

  15. Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras

    Science.gov (United States)

    Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.

    2000-01-01

    This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.

  16. Modeling of the scattered radiation from the head of a linear electron accelerator (ALE) using an extended Gaussian extrafocal source; Modelizacion de la radiacion dispersa del cabezal de un A. L. E. mediante una fuente extrafocal extendida gaussiana

    Energy Technology Data Exchange (ETDEWEB)

    Quinones Rodriguez, L. A.; Richarte Reina, J. M.; Castro Ramirez, I. J.; Iborra Oquendo, M.; Angulo Pain, E.; Urena Llinares, A.; Lupiani Castellanos, J.; Ramos Cabalalero, L. J.

    2011-07-01

    The flattening filter is the main source of scattered radiation in an accelerator; there are also important contributions from the primary collimator and, to a lesser extent, from the monitor chambers and the secondary collimation. This scattered radiation from the head can amount to up to 12% of the radiation emitted by the accelerator, and its characterization by means of an extended extrafocal source makes it possible to predict the field factors and the shape of the penumbra of the radiation profiles, based on the part of this virtual source viewed from the detector.

  17. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir camera and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated at the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras, with a size exceeding 5 μm, even though these were described as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding
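
    For reference, radial symmetric distortion of the kind analyzed above is commonly expressed as a polynomial in the radial distance from the principal point; a minimal sketch (coefficients invented, not the calibrated DigiCAM values):

      def radial_shift(x, y, k1, k2, k3=0.0):
          """Radial-symmetric distortion shift at (x, y), image units."""
          r2 = x * x + y * y
          scale = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
          return x * scale, y * scale

      # Shift at an image corner ~18 mm off-centre (hypothetical k1, k2).
      dx, dy = radial_shift(12.7, 12.7, k1=-2.4e-5, k2=3.1e-8)
      print(f"shift: {dx * 1e3:.1f} um, {dy * 1e3:.1f} um")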

  18. Omnidirectional vision applied to Unmanned Aerial Vehicles (UAVs) attitude and heading estimation

    OpenAIRE

    Mondragon, Ivan F.; Campoy, Pascual; Martinez, Carol; Olivares Mendez, Miguel Angel

    2010-01-01

    This paper presents an aircraft attitude and heading estimator using catadioptric images as a principal sensor for UAV or as a redundant system for IMU (Inertial Measure Unit) and gyro sensors. First, we explain how the unified theory for central catadioptric cameras is used for attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then, we use appearance images to obtain a visual co...

  19. Tridimensional pose estimation of a person head

    International Nuclear Information System (INIS)

    Perez Berenguer, Elisa; Soria, Carlos; Nasisi, Oscar; Mut, Vicente

    2007-01-01

    In this work, we present a method for estimating 3-D head motion parameters; this method provides an alternative approach to 3-D head pose estimation from image sequences in the current computer vision literature. The method is robust over extended sequences and large head motions, and accurately extracts the orientation angles of the head from a single view. Experimental results show that this tracking system works well for developing a human-computer interface for people with severe motor impairments

  20. View Ahead After Spirit's Sol 1861 Drive (Stereo)

    Science.gov (United States)

    2009-01-01

    [Left-eye and right-eye views of the color stereo pair for PIA11977 removed for brevity; see original site] NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right. The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate. In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in the middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  1. Trunk- and head-centred spatial coordinates do not affect free-viewing perceptual asymmetries.

    Science.gov (United States)

    Nicholls, Michael E R; Mattingley, Jason B; Bradshaw, John L; Krins, Phillip W

    2003-11-01

    Turning the trunk or head to the left can reduce the severity of leftward neglect. This study sought to determine whether turning the trunk or head to the right would reduce pseudoneglect: A phenomenon where normal participants underestimate the rightward features of a stimulus. Participants made luminance judgements of two mirror-reversed greyscales stimuli. A preference for selecting the stimulus dark on the left was found. The effect of trunk-centred coordinates was examined in Expt. 1 by facing the head toward the display and turning the trunk to the left, right or toward the display. Head-centred coordinates were examined in Expt. 2 by directing the eyes toward the display and then turning the head and trunk. No effect of rotation was observed. It was concluded that the leftward bias for the greyscales task could be based on an object-centred attentional bias or left-to-right eye scanning habits.

  2. Synchronization of streak and framing camera measurements of an intense relativistic electron beam propagating through gas

    International Nuclear Information System (INIS)

    Weidman, D.J.; Murphy, D.P.; Myers, M.C.; Meger, R.A.

    1994-01-01

    The expansion of the radius of a 5 MeV, 20 kA, 40 ns electron beam from SuperIBEX during propagation through gas is being measured. The beam is generated, conditioned, equilibrated, and then passed through a thin foil that produces Cherenkov light, which is recorded by a streak camera. At a second location, the beam hits another Cherenkov emitter, which is viewed by a framing camera. Measurements at these two locations can provide a time-resolved measure of the beam expansion. The two measurements, however, must be synchronized with each other, because the beam radius is not constant throughout the pulse due to variations in beam current and energy. To correlate the timing of the two diagnostics, several shots have been taken with both diagnostics viewing Cherenkov light from the same foil. Experimental measurements of the Cherenkov light from one foil viewed by both diagnostics will be presented to demonstrate the feasibility of correlating the diagnostics with each other. Streak camera data showing the optical fiducial, as well as the final correlation of the two diagnostics, will also be presented. Preliminary beam radius measurements from Cherenkov light measured at two locations will be shown

  3. Precise Head Tracking in Hearing Applications

    Science.gov (United States)

    Helle, A. M.; Pilinski, J.; Luhmann, T.

    2015-05-01

    The paper gives an overview of two research projects, both dealing with optical head tracking in hearing applications. As part of the project "Development of a real-time low-cost tracking system for medical and audiological problems (ELCoT)", a cost-effective single-camera 3D tracking system has been developed which enables the detection of arm and head movements of human patients. Amongst others, the measuring system is designed for a new hearing test (based on the "Mainzer Kindertisch"), which analyzes the directional hearing capabilities of children, in cooperation with the research project ERKI (Evaluation of acoustic sound source localization for children). As part of the research project framework "Hearing in everyday life (HALLO)", a stereo tracking system is being used for analyzing the head movement of human patients during complex acoustic events. Together with the consideration of biosignals like skin conductance, the speech comprehension and listening effort of persons with reduced hearing ability, especially in situations with background noise, are evaluated. For both projects the system design, accuracy aspects and results of practical tests are discussed.

  4. Real-Time View Correction for Mobile Devices.

    Science.gov (United States)

    Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc

    2017-11-01

    We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
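
    The inpainting and gradient-aware steps of the method are not reproduced here; the geometric core of such view synthesis, warping a pixel with known depth into a new camera pose, can be sketched as follows (shared intrinsics between the two views is an assumption):

      import numpy as np

      def reproject(u, v, depth, K, T_new_from_old):
          """Warp one RGB-D pixel into a new camera view."""
          p = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
          p = T_new_from_old @ np.append(p, 1.0)
          uvw = K @ p[:3]
          return uvw[:2] / uvw[2]

      K = np.array([[500.0, 0.0, 320.0],
                    [0.0, 500.0, 240.0],
                    [0.0, 0.0, 1.0]])
      T = np.eye(4)
      T[0, 3] = 0.05                          # 5 cm sideways camera shift
      print(reproject(320, 240, 2.0, K, T))   # -> [332.5, 240.0]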

  5. Correction of head motion artifacts in SPECT with fully 3-D OS-EM reconstruction

    International Nuclear Information System (INIS)

    Fulton, R.R.

    1998-01-01

    Full text: A method which relies on continuous monitoring of head position has been developed to correct for head motion in SPECT studies of the brain. Head position and orientation are monitored during data acquisition by an inexpensive head tracking system (ADL-1, Shooting Star Technology, Rosedale, British Columbia). Motion correction involves changing the projection geometry to compensate for motion (using data from the head tracker), and reconstructing with a fully 3-D OS-EM algorithm. The reconstruction algorithm can accommodate any number of movements and any projection geometry. A single iteration of 3-D OS-EM using all available projections provides a satisfactory 3-D reconstruction, essentially free of motion artifacts. The method has been validated in studies of the 3-D Hoffman brain phantom. Multiple 360-degree acquisitions, each with the phantom in a different position, were performed on a Trionix triple head camera. Movements were simulated by combining projections from the different acquisitions. Accuracy was assessed by comparison with a motion-free reconstruction, visually and by calculating mean squared error (MSE). Motion correction reduced distortion perceptibly and, depending on the motions applied, improved MSE by up to an order of magnitude. Three-dimensional reconstruction of the 128 x 128 x 128 data set took 2 minutes on a SUN Ultra 1 workstation. This motion correction technique can be retro-fitted to existing SPECT systems and could be incorporated in future SPECT camera designs. It appears to be applicable in PET as well as SPECT, to be able to correct for any head movements, and to have the potential to improve the accuracy of tomographic brain studies under clinical imaging conditions
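
    The multiplicative update at the heart of OS-EM can be sketched in a few lines; motion correction enters through the projection geometry encoded in the system matrix. A toy ML-EM example (OS-EM applies the same update to subsets of projections in turn; the authors' 3-D implementation is not reproduced):

      import numpy as np

      def mlem(A, projections, n_iter=20):
          """Basic ML-EM for emission tomography, A: (bins, voxels)."""
          x = np.ones(A.shape[1])              # uniform start image
          sens = A.sum(axis=0)                 # sensitivity image
          for _ in range(n_iter):
              ratio = projections / np.maximum(A @ x, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      # Toy 2-voxel, 3-bin system (numbers invented for illustration).
      A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
      true_img = np.array([4.0, 2.0])
      print(mlem(A, A @ true_img))             # converges towards [4, 2]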

  6. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can achieve robust and real-time detection and recognition of parking spaces. During the parking process, omnidirectional information about the environment can be obtained by using four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. In order to achieve this purpose, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine the four individual images from the fisheye cameras into one omnidirectional bird's eye view image. Second, features of the parking spaces are extracted with a Radon-transform-based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Through experimental analysis, we can see that the proposed method can obtain effective and robust real-time results in both parking space recognition and automatic parking.
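
    As a generic illustration of the Radon-transform feature step (not the authors' implementation), the dominant orientation of a straight parking-space marking can be read off the peak of the sinogram:

      import numpy as np
      from skimage.transform import radon

      def dominant_line_angle(edge_image):
          """Angle (deg) of the strongest straight line in an edge map."""
          angles = np.arange(0.0, 180.0, 0.5)
          sinogram = radon(edge_image, theta=angles)
          _, col = np.unravel_index(np.argmax(sinogram), sinogram.shape)
          return angles[col]

      # Synthetic marking: a vertical stripe in a 200 x 200 image.
      img = np.zeros((200, 200))
      img[40:160, 98:102] = 1.0
      print(dominant_line_angle(img))          # ~0 deg (vertical stripe)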

  7. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

    One of the earth observing instruments on the HY-1 satellite, which will be launched in 2001, is the multi-spectral CCD camera system developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). From a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coastal zone dynamic mapping and ocean water color monitoring, which includes pollution of offshore and coastal zones, plant cover, water color, ice, underwater terrain, suspended sediment, mudflats, soil and water vapor. The multi-spectral camera system is composed of four monocolor CCD cameras, which are line-array-based, 'push-broom' scanning cameras, each responding to one of the four spectral bands. The camera system adopts view-field registration; that is, each camera scans the same region at the same moment. Each of them contains optics, a focal plane assembly, electrical circuits, an installation structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) Offset of the central wavelength is better than 5 nm; (2) Degree of polarization is less than 0.5%; (3) Signal-to-noise ratio is about 1000; (4) Dynamic range is better than 2000:1; (5) Registration precision is better than 0.3 pixel; (6) Quantization is 12 bit.

  8. Head first Ajax

    CERN Document Server

    Riordan, Rebecca M

    2008-01-01

    Ajax is no longer an experimental approach to website development, but the key to building browser-based applications that form the cornerstone of Web 2.0. Head First Ajax gives you an up-to-date perspective that lets you see exactly what you can do -- and has been done -- with Ajax. With it, you get a highly practical, in-depth, and mature view of what is now a mature development approach. Using the unique and highly effective visual format that has turned Head First titles into runaway bestsellers, this book offers a big picture overview to introduce Ajax, and then explores the use of ind

  9. A Compton camera application for the GAMOS GEANT4-based framework

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J., E-mail: ljh@ns.ph.liv.ac.uk [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Arce, P. [Department of Basic Research, CIEMAT, Madrid (Spain); Judson, D.S.; Boston, A.J.; Boston, H.C.; Cresswell, J.R.; Dormand, J.; Jones, M.; Nolan, P.J.; Sampson, J.A.; Scraggs, D.P.; Sweeney, A. [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Lazarus, I.; Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom)

    2012-04-11

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.
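
    For reference, the Compton kinematics used for electronic collimation reduce to one standard relation: the opening angle of the Compton cone follows from the two energy deposits. A minimal sketch (assuming the scattered photon is fully absorbed in the second detector):

      import math

      ME_C2 = 510.999   # electron rest energy, keV

      def compton_cone_angle(e_scatter, e_absorb):
          """Cone angle (deg) from the scatterer and absorber deposits."""
          cos_t = 1.0 - ME_C2 * (1.0 / e_absorb
                                 - 1.0 / (e_scatter + e_absorb))
          if not -1.0 <= cos_t <= 1.0:
              raise ValueError("kinematically forbidden event")
          return math.degrees(math.acos(cos_t))

      # A 662 keV photon depositing 200 keV in the scatter detector.
      print(compton_cone_angle(200.0, 462.0))   # ~48.3 deg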

  10. Spectroscopic gamma camera for use in high dose environments

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Yuichiro, E-mail: yuichiro.ueno.bv@hitachi.com [Research and Development Group, Hitachi, Ltd., Hitachi-shi, Ibaraki-ken 319-1221 (Japan); Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi [Research and Development Group, Hitachi, Ltd., Hitachi-shi, Ibaraki-ken 319-1221 (Japan); Fujishima, Yasutake; Kometani, Yutaka [Hitachi Works, Hitachi-GE Nuclear Energy, Ltd., Hitachi-shi, Ibaraki-ken (Japan); Suzuki, Yasuhiko [Measuring Systems Engineering Dept., Hitachi Aloka Medical, Ltd., Ome-shi, Tokyo (Japan); Umegaki, Kikuo [Faculty of Engineering, Hokkaido University, Sapporo-shi, Hokkaido (Japan)

    2016-06-21

    We developed a pinhole gamma camera to measure distributions of radioactive material contaminants and to identify radionuclides in extraordinarily high dose regions (1000 mSv/h). The developed gamma camera is characterized by: (1) tolerance of high dose rate environments; (2) high spatial and spectral resolution for identifying unknown contaminating sources; and (3) good usability for being carried on a robot and remotely controlled. These are achieved by using a compact pixelated detector module with CdTe semiconductors, efficient shielding, and a fine-resolution pinhole collimator. The gamma camera weighs less than 100 kg; its field of view is an 8 m square at a distance of 10 m, and its image is divided into 256 (16×16) pixels. From the laboratory test, we found the energy resolution at the 662 keV photopeak was 2.3% FWHM, which is enough to identify the radionuclides. We found that the count rate per background dose rate was 220 cps h/mSv and the maximum count rate was 300 kcps, so the maximum dose rate of an environment in which the gamma camera can be operated was calculated as 1400 mSv/h. We investigated the reactor building of Unit 1 at the Fukushima Dai-ichi Nuclear Power Plant using the gamma camera and could identify the unknown contaminating source in a dose rate environment as high as 659 mSv/h.
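
    The quoted operating limit follows directly from the two measured figures; as a quick check:

      max_count_rate = 300e3     # cps, saturation of the read-out
      rate_per_dose = 220.0      # cps per (mSv/h) of background
      print(max_count_rate / rate_per_dose)   # ~1364, quoted as 1400 mSv/h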

  11. Piloting the feasibility of head-mounted video technology to augment student feedback during simulated clinical decision-making: An observational design pilot study.

    Science.gov (United States)

    Forbes, Helen; Bucknall, Tracey K; Hutchinson, Alison M

    2016-04-01

    Clinical decision-making is a complex activity that is critical to patient safety. Simulation, augmented by feedback, affords learners the opportunity to learn critical clinical decision-making skills. More detailed feedback following simulation exercises has the potential to further enhance student learning, particularly in relation to developing improved clinical decision-making skills. To investigate the feasibility of head-mounted video camera recordings, to augment feedback, following acute patient deterioration simulations. Pilot study using an observational design. Ten final-year nursing students participated in three simulation exercises, each focussed on detection and management of patient deterioration. Two observers collected behavioural data using an adapted version of Gaba's Clinical Simulation Tool, to provide verbal feedback to each participant, following each simulation exercise. Participants wore a head-mounted video camera during the second simulation exercise only. Video recordings were replayed to participants to augment feedback, following the second simulation exercise. Data were collected on: participant performance (observed and perceived); participant perceptions of feedback methods; and head-mounted video camera recording feasibility and capability for detailed audio-visual feedback. Management of patient deterioration improved for six participants (60%). Increased perceptions of confidence (70%) and competence (80%), were reported by the majority of participants. Few participants (20%) agreed that the video recording specifically enhanced their learning. The visual field of the head-mounted video camera was not always synchronised with the participant's field of vision, thus affecting the usefulness of some recordings. The usefulness of the video recordings, to enhance verbal feedback to participants on detection and management of simulated patient deterioration, was inconclusive. Modification of the video camera glasses, to improve

  12. Geometric database maintenance using CCTV cameras and overlay graphics

    Science.gov (United States)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.

  13. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for, e.g., surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. The misses are reduced by 37%, which is a significant improvement.

  14. Morphometric Optic Nerve Head Analysis in Glaucoma Patients: A Comparison between the Simultaneous Nonmydriatic Stereoscopic Fundus Camera (Kowa Nonmyd WX3D) and the Heidelberg Scanning Laser Ophthalmoscope (HRT III)

    Directory of Open Access Journals (Sweden)

    Siegfried Mariacher

    2016-01-01

    Full Text Available Purpose. To retrospectively investigate the agreement between morphometric optic nerve head parameters assessed with the confocal laser ophthalmoscope HRT III and the stereoscopic fundus camera Kowa nonmyd WX3D. Methods. Morphometric optic nerve head parameters of 40 eyes of 40 patients with primary open angle glaucoma were analyzed regarding their vertical cup-to-disc ratio (CDR). Vertical CDR, disc area, cup volume, rim volume, and maximum cup depth were assessed with both devices by one examiner. Mean bias and limits of agreement (95% CI) were obtained using scatter plots and Bland-Altman analysis. Results. Overall vertical CDR comparison between HRT III and Kowa nonmyd WX3D measurements showed a mean difference (limits of agreement) of −0.06 (−0.36 to 0.24). For the CDR < 0.5 group (n=24), the mean difference in vertical CDR was −0.14 (−0.34 to 0.06), and for the CDR ≥ 0.5 group (n=16), 0.06 (−0.21 to 0.34). Conclusion. This study showed a good agreement between Kowa nonmyd WX3D and HRT III with regard to widely used optic nerve head parameters in patients with glaucomatous optic neuropathy. However, data from Kowa nonmyd WX3D exhibited a tendency to measure larger CDR values than HRT III in the group with CDR < 0.5 and lower CDR values in the group with CDR ≥ 0.5.

  15. Using computer graphics to design Space Station Freedom viewing

    Science.gov (United States)

    Goldsberry, Betty S.; Lippert, Buddy O.; Mckee, Sandra D.; Lewis, James L., Jr.; Mount, Francis E.

    1993-01-01

    Viewing requirements were identified early in the Space Station Freedom program for both direct viewing via windows and indirect viewing via cameras and closed-circuit television (CCTV). These requirements reside in NASA Program Definition and Requirements Document (PDRD), Section 3: Space Station Systems Requirements. Currently, analyses are addressing the feasibility of direct and indirect viewing. The goal of these analyses is to determine the optimum locations for the windows, cameras, and CCTV's in order to meet established requirements, to adequately support space station assembly, and to operate on-board equipment. PLAID, a three-dimensional computer graphics program developed at NASA JSC, was selected for use as the major tool in these analyses. PLAID provides the capability to simulate the assembly of the station as well as to examine operations as the station evolves. This program has been used successfully as a tool to analyze general viewing conditions for many Space Shuttle elements and can be used for virtually all Space Station components. Additionally, PLAID provides the ability to integrate an anthropometric scale-modeled human (representing a crew member) with interior and exterior architecture.

  16. Radiographic examination of the equine head

    International Nuclear Information System (INIS)

    Park, R.D.

    1993-01-01

    Radiographic examinations of the equine head can be performed with portable x-ray machines. The views comprising the examination depend on the area of the head being examined. With a knowledge of radiographic anatomy and radiographic signs of disease, valuable diagnostic information can be obtained from the radiographic examination. In addition, the radiographic information can also be used to develop a prognosis and determine the most appropriate therapy

  17. Magnetic Resonance Imaging (MRI) -- Head

    Medline Plus

    Full Text Available ... practice. What are some common uses of the procedure? MR imaging of the head ... is done because a potential abnormality needs further evaluation with additional views or a special imaging technique. ...

  18. Diagnosis of myocardial viability by dual-head coincidence gamma camera fluorine-18 fluorodeoxyglucose positron emission tomography with and without non-uniform attenuation correction

    International Nuclear Information System (INIS)

    Nowak, B.; Zimmy, M.; Kaiser, H.-J.; Schaefer, W.; Reinartz, P.; Buell, U.; Schwarz, E.R.; Dahl, J. vom

    2000-01-01

    This study assessed a dual-head coincidence gamma camera (hybrid PET) equipped with single-photon transmission for myocardial fluorine-18 fluorodeoxyglucose (FDG) imaging by comparing this technique with conventional positron emission tomography (PET) using a dedicated ring PET scanner. Twenty-one patients were studied with dedicated FDG ring PET and FDG hybrid PET for evaluation of myocardial glucose metabolism, as well as technetium-99m tetrofosmin single-photon emission tomography (SPET) to estimate myocardial perfusion. All patients underwent transmitted attenuation correction using germanium-68 rod sources for ring PET and caesium-137 point sources for hybrid PET. Ring PET and hybrid PET emission scans were started 61±12 and 98±15 min, respectively, after administration of 154±31 MBq FDG. Attenuation-corrected images were reconstructed iteratively for ring PET and hybrid PET (ac-hybrid PET), and non-attenuation-corrected images for hybrid PET (non-ac-hybrid PET) only. Tracer distribution was analysed semiquantitatively using a volumetric vector sampling method dividing the left ventricular wall into 13 segments. FDG distribution in non-ac-hybrid PET and ring PET correlated with r=0.36 (P<0.0001), and in ac-hybrid PET and ring PET with r=0.79 (P<0.0001). Non-ac-hybrid PET significantly overestimated FDG uptake in the apical and supra-apical segments, and underestimated FDG uptake in the remaining segments, with the exception of one lateral segment. Ac-hybrid PET significantly overestimated FDG uptake in the apical segment, and underestimated FDG uptake in only three posteroseptal segments. A three-grade score was used to classify diagnosis of viability by FDG PET in 136 segments with reduced perfusion as assessed by SPET. Compared with ring PET, non-ac-hybrid PET showed concordant diagnoses in 80 segments (59%) and ac-hybrid PET in 101 segments (74%) (P<0.001). Agreement between ring PET and non-ac-hybrid PET was best in the basal lateral wall and in the

  19. Automatic helmet-wearing detection for law enforcement using CCTV cameras

    Science.gov (United States)

    Wonghabut, P.; Kumphong, J.; Satiennam, T.; Ung-arunyawee, R.; Leelapatra, W.

    2018-04-01

    The objective of this research is to develop an application for enforcing helmet wearing using CCTV cameras. The developed application aims to help law enforcement by police, eventually resulting in changing risk behaviours and consequently reducing the number of accidents and their severity. Conceptually, the application software, implemented in C++ using the OpenCV library, uses two CCTV cameras with different angles of view. Video frames recorded by the wide-angle CCTV camera are used to detect motorcyclists. If any motorcyclist without a helmet is found, the zoomed (narrow-angle) CCTV camera is activated to capture images of the violating motorcyclist and the motorcycle license plate in real time. Captured images are managed by a database implemented using MySQL for ticket issuing. The results show that the developed program is able to detect 81% of motorcyclists on various motorcycle types during daytime and night-time. The validation results reveal that the program achieves 74% accuracy in detecting motorcyclists without helmets.

  20. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also defines a proposal for combined benchmarking metrics, which includes both quality and speed parameters.

  1. Synthetic viewing: Comprehensive work representation making remote work more transparent to the operator

    International Nuclear Information System (INIS)

    Leinemann, K.; Katz, F.; Knueppel, H.

    1994-01-01

    To support the operator in remote handling, a number of well-developed techniques are available, such as the transmission of forces, movements, and dexterous skills in general using master-slave manipulators equipped with special tools. In addition, several types of transporters are available to position manipulators and tools. But there is a serious bottleneck in viewing: the number of cameras is restricted, and the cameras may in most cases not be positioned so as to provide sufficient information. In order to improve this situation, an integration of closed-loop TV and artificial viewing by sensor-controlled computer graphics has been introduced successfully by KfK at JET. This integrated viewing subsystem not only combines these two techniques by providing the two views, but also enhances conventional camera control by a computer-graphics, model-based control. Practical experience has shown that the concept of viewing needs to be extended. Just seeing where things are is insufficient for the operators to perform their remote handling task properly. More information is required about the status of all equipment pieces involved and about the status of the entire handling task. Viewing for remote handling applications needs to include the display of such status information in a suitable form

  2. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  3. INTRODUCING NOVEL GENERATION OF HIGH ACCURACY CAMERA OPTICAL-TESTING AND CALIBRATION TEST-STANDS FEASIBLE FOR SERIES PRODUCTION OF CAMERAS

    Directory of Open Access Journals (Sweden)

    M. Nekouei Shahraki

    2015-12-01

    The recent advances in the field of computer vision have opened the door to many opportunities for exploiting these techniques and technologies in many fields and applications. The high demand for such systems in today's and future vehicles implies a high production volume of video cameras. These criteria make it critical to design test systems that deliver fast and accurate calibration and optical-testing capabilities. In this paper we introduce a new generation of test-stands delivering high calibration quality in single-shot calibration of fisheye surround-view cameras. This incorporates important geometric features from bundle-block calibration, delivers very high (sub-pixel) calibration accuracy, enables a very fast calibration procedure (a few seconds), and realizes autonomous calibration via machines. We have used the geometrical shape of a spherical helix (type: 3D spherical spiral) with special geometrical characteristics, having a uniform radius which corresponds to uniform motion. This geometrical feature was mechanically realized using three-dimensional truncated icosahedrons, which practically allow the implementation of a spherical helix on multiple surfaces. Furthermore, the test-stand enables many other important optical tests, such as stray-light testing, allowing us to evaluate certain qualities of the camera optical module.
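    The target geometry lends itself to a compact parametric form. A small sketch generating points on a spherical helix of uniform radius (the number of turns and the radius are arbitrary example values, not the test-stand's):

    ```python
    import numpy as np

    def spherical_helix(radius=1.0, turns=8, n_points=500):
        """Spherical spiral: polar angle swept pole to pole while the
        azimuth winds 'turns' times; every point lies on the sphere."""
        t = np.linspace(0.0, 1.0, n_points)
        theta = np.pi * t                 # polar angle
        phi = 2.0 * np.pi * turns * t     # azimuth
        return np.stack([radius * np.sin(theta) * np.cos(phi),
                         radius * np.sin(theta) * np.sin(phi),
                         radius * np.cos(theta)], axis=1)

    pts = spherical_helix()
    # Uniform radius: all points are at distance 1.0 from the center.
    assert np.allclose(np.linalg.norm(pts, axis=1), 1.0)
    ```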

  4. A simple data loss model for positron camera systems

    International Nuclear Information System (INIS)

    Eriksson, L.; Dahlbom, M.

    1994-01-01

    A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead-time corrections in existing scanners, although this is possible. Instead, the model is intended for data simulations that determine the figures of merit of future camera systems based on state-of-the-art data-handling solutions. The model assumes the data loss to be factorized into two components, one describing the detector or block-detector performance and the other the remaining data handling, such as coincidence determination, data transfer, and data storage. Two modern positron camera systems have been investigated in terms of this model: the Siemens-CTI ECAT EXACT and ECAT EXACT HR systems, which both have an axial field-of-view (FOV) of about 15 cm. They both have retractable septa, can acquire data from the whole volume within the FOV, and can reconstruct volume image data. An example is given of how to use the model for live-time calculation in a futuristic large axial FOV cylindrical system.
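    The factorized form can be illustrated with one standard choice per component: a paralyzable term for the block detectors and a non-paralyzable term for the downstream data handling. The specific forms and time constants below are illustrative assumptions, not the paper's fitted parameters.

    ```python
    import math

    def live_fraction(rate, tau_block=2e-6, tau_dh=0.5e-6):
        """Factorized live-time model L(r) = L_block(r) * L_dh(r)."""
        l_block = math.exp(-rate * tau_block)  # paralyzable (detector blocks)
        l_dh = 1.0 / (1.0 + rate * tau_dh)     # non-paralyzable (data handling)
        return l_block * l_dh

    for rate in (1e4, 1e5, 1e6):  # event rate in counts per second
        print(f"rate {rate:.0e}/s -> live fraction {live_fraction(rate):.3f}")
    ```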

  5. A multi-criteria approach to camera motion design for volume data animation.

    Science.gov (United States)

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user; this approach is limited in providing optimal in-between views of the data. In computer graphics and virtual reality, camera motion planning frequently focuses on collision-free movement in a virtual walkthrough, but for semi-transparent, fuzzy, or blobby volume data the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths that establish effective animations of volume data. Our dynamic multi-criteria solver, coupled with a force-directed routing algorithm, enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach into an interactive volume visualization system reduces the effort in creating context-aware and coherent animations, freeing the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
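    The multi-criteria idea reduces to scoring candidate camera positions against weighted objectives. The criteria and weights below are generic stand-ins (the paper's solver additionally couples the scoring with force-directed path routing):

    ```python
    import numpy as np

    def score_view(cam, focus, prev_cam, weights=(0.5, 0.3, 0.2)):
        """Higher is better: preferred viewing distance, path smoothness,
        and facing the data from the front (all illustrative criteria)."""
        w_dist, w_smooth, w_front = weights
        dist = np.linalg.norm(cam - focus)
        c_dist = np.exp(-(dist - 3.0) ** 2)              # prefer distance ~3
        c_smooth = np.exp(-np.linalg.norm(cam - prev_cam))
        view_dir = (focus - cam) / max(dist, 1e-9)
        c_front = float(view_dir @ np.array([0.0, 0.0, -1.0]))
        return w_dist * c_dist + w_smooth * c_smooth + w_front * c_front

    prev = np.array([0.0, 0.0, 3.0])
    candidates = [np.array([0.5, 0.0, 3.0]), np.array([2.0, 1.0, 4.0])]
    best = max(candidates, key=lambda c: score_view(c, np.zeros(3), prev))
    ```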

  6. Intraoperative Scintigraphy Using a Large Field-of-View Portable Gamma Camera for Primary Hyperparathyroidism: Initial Experience

    Directory of Open Access Journals (Sweden)

    Nathan C. Hall

    2015-01-01

    Background. We investigated a novel technique, intraoperative 99mTc-sestamibi (MIBI) imaging of the neck and excised specimen (ES), using a large field-of-view portable gamma camera (LFOVGC), for expediting confirmation of MIBI-avid parathyroid adenoma removal. Methods. Twenty patients with MIBI-avid parathyroid adenomas were preoperatively administered MIBI and intraoperatively imaged prior to incision (neck) and immediately following resection (neck and/or ES). Preoperative and intraoperative serum parathyroid hormone monitoring (IOPTH) and pathology (path) were also performed. Results. MIBI neck activity was absent and specimen activity was present in 13/20 cases with imaging after initial ES removal. In the remaining 7/20 cases, residual neck activity and/or absent ES activity prompted excision of additional tissue, ultimately leading to complete excision of hyperfunctioning tissue. Postexcision LFOVGC ES imaging confirmed parathyroid adenoma resection in 100% of cases when postresection imaging qualitatively showed activity (ES) and/or no activity (neck). The mean ± SEM time saving using intraoperative LFOVGC data to confirm resection versus the first IOPTH or path result would have been 22.0 ± 2 minutes (specimen imaging) and 26.0 ± 3 minutes (neck imaging). Conclusion. This novel real-time intraoperative LFOVGC imaging approach can confirm MIBI-avid parathyroid adenoma removal appreciably faster than IOPTH and/or path and may provide a valuable adjunct to parathyroid surgery.

  7. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    Science.gov (United States)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

    In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to the excitation and suppression documented in electrophysiology, psychophysics, and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera: a camera agent that has a history of seeing more salient targets is more likely to obtain computational time.
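    The per-camera attention step can be approximated with off-the-shelf saliency code. The sketch below substitutes OpenCV's spectral-residual saliency (from opencv-contrib) for the Itti-Koch model the authors distribute across the cluster; the substitution is named plainly because the two algorithms differ.

    ```python
    import cv2

    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()

    def most_salient_point(frame):
        """Pixel location a camera agent would attend to next."""
        ok, sal_map = saliency.computeSaliency(frame)
        if not ok:
            return None
        _, _, _, max_loc = cv2.minMaxLoc(sal_map)  # (x, y) of the peak
        return max_loc

    frame = cv2.imread("room_view.jpg")
    print(most_salient_point(frame))  # guide pan/tilt toward this point
    ```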

  8. Performance benefits and limitations of a camera network

    Science.gov (United States)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10⁷-10⁸ 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10³-10⁵ images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10⁵ imagers, each with about 10⁴-10⁵ pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10⁹ cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  9. A wide angle view imaging diagnostic with all reflective, in-vessel optics at JET

    Energy Technology Data Exchange (ETDEWEB)

    Clever, M. [Institute of Energy and Climate Research – Plasma Physics, Forschungszentrum Jülich GmbH, Association EURATOM-FZJ, 52425 Jülich (Germany); Arnoux, G.; Balshaw, N. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Garcia-Sanchez, P. [Laboratorio Nacional de Fusion, Asociacion EURATOM-CIEMAT, Madrid (Spain); Patel, K. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Sergienko, G. [Institute of Energy and Climate Research – Plasma Physics, Forschungszentrum Jülich GmbH, Association EURATOM-FZJ, 52425 Jülich (Germany); Soler, D. [Winlight System, 135 rue Benjamin Franklin, ZA Saint Martin, F-84120 Pertuis (France); Stamp, M.F.; Williams, J.; Zastrow, K.-D. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom)

    2013-10-15

    Highlights: ► A new wide angle view camera system has been installed at JET. ► The system helps to protect the ITER-like wall plasma facing components from damage. ► The coverage of the vessel by camera observation systems was increased. ► The system comprises an in-vessel part with parabolic and flat mirrors. ► The required image quality for plasma monitoring and wall protection was delivered. -- Abstract: A new wide angle view camera system has been installed at JET in preparation for the ITER-like wall campaigns. It considerably increases the coverage of the vessel by camera observation systems and thereby helps to protect the plasma-facing components, which are more fragile than their carbon predecessors, from damage. The system comprises an in-vessel part with parabolic and flat mirrors and an ex-vessel part with beam splitters, lenses and cameras. The system delivered the image quality required for plasma monitoring and wall protection.

  10. Development and evaluation of a portable CZT coded aperture gamma-camera

    Energy Technology Data Exchange (ETDEWEB)

    Montemont, G.; Monnet, O.; Stanchina, S.; Maingault, L.; Verger, L. [CEA, LETI, Minatec Campus, Univ. Grenoble Alpes, 38054 Grenoble, (France); Carrel, F.; Lemaire, H.; Schoepff, V. [CEA, LIST, 91191 Gif-sur-Yvette, (France); Ferrand, G.; Lalleman, A.-S. [CEA, DAM, DIF, 91297 Arpajon, (France)

    2015-07-01

    We present the design and the evaluation of a CdZnTe (CZT) based gamma camera using a coded aperture mask. This camera, based on an 8 cm³ detection module, is small enough to be portable and battery-powered (4 kg weight and 4 W power dissipation). As the detector has spectral capabilities, the gamma camera allows isotope identification and colored imaging, by assigning one color channel to each identified isotope. As all data processing is done in real time, the user can directly observe the outcome of an acquisition and can immediately react to what he sees. We first present the architecture of the system, how the detector works, and its performance. Afterwards, we focus on the imaging technique used and its strengths and limitations. Finally, results concerning sensitivity, spatial resolution, field of view and multi-isotope imaging are shown and discussed. (authors)

  11. Development and evaluation of a portable CZT coded aperture gamma-camera

    International Nuclear Information System (INIS)

    Montemont, G.; Monnet, O.; Stanchina, S.; Maingault, L.; Verger, L.; Carrel, F.; Lemaire, H.; Schoepff, V.; Ferrand, G.; Lalleman, A.-S.

    2015-01-01

    We present the design and the evaluation of a CdZnTe (CZT) based gamma camera using a coded aperture mask. This camera, based on an 8 cm³ detection module, is small enough to be portable and battery-powered (4 kg weight and 4 W power dissipation). As the detector has spectral capabilities, the gamma camera allows isotope identification and colored imaging, by assigning one color channel to each identified isotope. As all data processing is done in real time, the user can directly observe the outcome of an acquisition and can immediately react to what he sees. We first present the architecture of the system, how the detector works, and its performance. Afterwards, we focus on the imaging technique used and its strengths and limitations. Finally, results concerning sensitivity, spatial resolution, field of view and multi-isotope imaging are shown and discussed. (authors)

  12. Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)

    Science.gov (United States)

    Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph

    2015-01-01

    The optical design of the wide field-of-view refractive camera (34 degrees diagonal field) for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at −75 °C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution from the initial Petzval design took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics, resulting in a reduction of average spot size by sixty percent in the final design. An external long-wavelength-pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.

  13. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer-grade digital cameras, and have concluded that consumer-grade digital cameras are expected to become a useful photogrammetric device for various close range application fields. Meanwhile, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we face the question of whether mobile phone cameras can take the place of consumer-grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, a comparative evaluation between mobile phone cameras and consumer-grade digital cameras is carried out in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer-grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer-grade digital cameras and to develop the market in digital photogrammetric fields.
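    The calibration tests in question can be reproduced with standard tools. A minimal OpenCV sketch using a checkerboard target (the board geometry and image names are example values) that recovers the intrinsics and lens-distortion coefficients compared across the two camera classes:

    ```python
    import glob

    import cv2
    import numpy as np

    cols, rows, square = 9, 6, 0.025  # inner corners and square size (m)
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob("calib_*.jpg"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, (cols, rows))
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # rms: reprojection error; dist: (k1, k2, p1, p2, k3) lens distortion.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    print(rms, dist.ravel())
    ```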

  14. Integrating multi-view transmission system into MPEG-21 stereoscopic and multi-view DIA (digital item adaptation)

    Science.gov (United States)

    Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran

    2006-10-01

    As digital broadcasting technologies have rapidly progressed, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client, and the user can then select some or all of the views according to display capabilities. However, this kind of system requires high processing power at both the server and the client, posing a difficulty for practical applications. To overcome this problem, a relatively simple method is to transmit only the two view-sequences requested by the client in order to deliver a stereoscopic video. In such a system, effective communication between the server and the client is an important aspect. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One merit is that SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in an XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, the server sends the associated view sequences. Finally, we present a method that can reduce the visual discomfort that might occur while viewing stereoscopic video; this phenomenon happens when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between the two cameras.

  15. Heading-vector navigation based on head-direction cells and path integration.

    Science.gov (United States)

    Kubie, John L; Fenton, André A

    2009-05-01

    Insect navigation is guided by heading vectors that are computed by path integration. Mammalian navigation models, on the other hand, are typically based on map-like place representations provided by hippocampal place cells. Such models compute optimal routes as a continuous series of locations that connect the current location to a goal. We propose a "heading-vector" model in which head-direction cells or their derivatives serve both as key elements in constructing the optimal route and as the straight-line guidance during route execution. The model is based on a memory structure termed the "shortcut matrix," which is constructed during the initial exploration of an environment, when a set of shortcut vectors between sequential pairs of visited waypoint locations is stored. A mechanism is proposed for calculating and storing these vectors that relies on a hypothesized cell type termed an "accumulating head-direction cell." Following exploration, shortcut vectors connecting all pairs of waypoint locations are computed by vector arithmetic and stored in the shortcut matrix. On re-entry, when local view or place representations query the shortcut matrix with a current waypoint and goal, a shortcut trajectory is retrieved. Since the trajectory direction is in head-direction compass coordinates, navigation is accomplished by tracking the firing of head-direction cells that are tuned to the heading angle. Section 1 of the manuscript describes the properties of accumulating head-direction cells and shows how they can store local vectors and perform vector arithmetic for path-integration-based homing. Section 2 describes the construction and use of the shortcut matrix for computing direct paths between any pair of locations registered in the shortcut matrix. In the discussion, we analyze the advantages of heading-based navigation over map-based navigation. Finally, we survey behavioral evidence that nonhippocampal systems support such heading-based navigation.
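    The shortcut-matrix computation is plain vector arithmetic over stored legs. A minimal 2D sketch (the accumulating head-direction cell is abstracted as the stored displacement for each leg; all values are examples):

    ```python
    import numpy as np

    waypoints = ["nest", "tree", "rock", "pond"]
    # Displacements between successive waypoints, as accumulated by path
    # integration during exploration (example values).
    legs = [np.array([3.0, 1.0]), np.array([-1.0, 4.0]), np.array([2.0, -2.0])]

    def shortcut(i, j):
        """Direct vector from waypoint i to waypoint j: sum of stored legs."""
        lo, hi = min(i, j), max(i, j)
        v = sum(legs[lo:hi], np.zeros(2))
        return v if j > i else -v

    v = shortcut(waypoints.index("nest"), waypoints.index("pond"))
    heading = np.degrees(np.arctan2(v[1], v[0]))  # compass angle to track
    print(v, heading)
    ```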

  16. Metamerism in cephalochordates and the problem of the vertebrate head.

    Science.gov (United States)

    Onai, Takayuki; Adachi, Noritaka; Kuratani, Shigeru

    2017-01-01

    The vertebrate head characteristically exhibits a complex pattern with sense organs, brain, paired eyes and jaw muscles, and the brain case is not found in other chordates. How the extant vertebrate head evolved remains enigmatic. Historically, there have been two conflicting views on the origin of the vertebrate head: the segmental and the non-segmental view. According to the segmentalists, the vertebrate head is organized as a metameric structure composed of segments equivalent to those in the trunk; a metamere in the vertebrate head was assumed to consist of a somite, a branchial arch and a set of cranial nerves, considering that the head evolved from rostral segments of amphioxus-like ancestral vertebrates. Non-segmentalists, however, considered that the vertebrate head was not segmental. In that case, the ancestral state of the vertebrate head may be non-segmented, and rostral segments in amphioxus might have been secondarily gained, or extant vertebrates might have evolved through radical modifications of an amphioxus-like ancestral vertebrate head. Comparative studies of mesodermal development in amphioxus and vertebrate gastrula embryos have revealed that mesodermal gene expression becomes segregated into two domains anteroposteriorly to specify the head mesoderm and trunk mesoderm only in vertebrates; in this segregation, key genes such as delta and hairy, involved in segment formation, are expressed in the trunk mesoderm but not in the head mesoderm, strongly suggesting that the head mesoderm of extant vertebrates is not segmented. Taken together, these findings add new insight into the origin of the vertebrate head: the vertebrate head mesoderm would have evolved through an anteroposterior polarization of the paraxial mesoderm if the ancestral vertebrate had been amphioxus-like.

  17. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging.

    Science.gov (United States)

    Benedetti, L R; Holder, J P; Perkins, M; Brown, C G; Anderson, C S; Allen, F V; Petre, R B; Hargrove, D; Glenn, S M; Simanovskaia, N; Bradley, D K; Bell, P

    2016-02-01

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross-talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied; this imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. We have developed a device that can be added to the framing-camera head to prevent these artifacts.

  18. View-Dependent Adaptive Cloth Simulation with Buckling Compensation.

    Science.gov (United States)

    Koh, Woojong; Narain, Rahul; O'Brien, James F

    2015-10-01

    This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening, while locking in extremely coarsened regions is inhibited by modifying the material model to compensate for unresolved sub-element buckling. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows. The approach produces a 2× speed-up for a single character and more than 4× for a small group as compared to view-independent adaptive simulations, and 5× and 9× speed-ups, respectively, as compared to non-adaptive simulations.
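    The view-dependent criterion amounts to measuring apparent size in the prescribed camera. A simplified sketch of such a test, using a pinhole projection and refining when a visible edge exceeds a pixel threshold (the paper's full criteria also include anticipative refinement and buckling compensation):

    ```python
    import numpy as np

    def project(p, K):
        """Pinhole projection of a 3D point given in camera coordinates."""
        q = K @ p
        return q[:2] / q[2]

    def needs_refinement(p0, p1, K, img=(1280, 720), max_px=4.0):
        """Refine an edge if it is in front of the camera, at least partly
        on screen, and its projected length exceeds max_px pixels."""
        if p0[2] <= 0 and p1[2] <= 0:
            return False  # behind the camera: keep coarse
        a, b = project(p0, K), project(p1, K)
        on_screen = any(0 <= q[0] < img[0] and 0 <= q[1] < img[1]
                        for q in (a, b))
        return on_screen and np.linalg.norm(a - b) > max_px

    K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
    print(needs_refinement(np.array([0.0, 0.0, 2.0]),
                           np.array([0.05, 0.0, 2.0]), K))  # True: ~20 px
    ```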

  19. Chryse 'Alien Head'

    Science.gov (United States)

    2005-01-01

    26 January 2004. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows an impact crater in Chryse Planitia, not too far from the Viking 1 lander site, that seems to resemble a bug-eyed head. The two odd depressions at the north end of the crater (the 'eyes') may have formed by wind or water erosion. This region has been modified by both processes, with water action occurring in the distant past via floods that poured across western Chryse Planitia from Maja Valles, and wind action a common occurrence in more recent history. This crater is located near 22.5°N, 47.9°W. The 150 meter scale bar is about 164 yards long. Sunlight illuminates the scene from the left/lower left.

  20. A study on obstacle detection method of the frontal view using a camera on highway

    Science.gov (United States)

    Nguyen, Van-Quang; Park, Jeonghyeon; Seo, Changjun; Kim, Heungseob; Boo, Kwangsuck

    2018-03-01

    In this work, we introduce an approach to detecting vehicles for driver assistance and warning systems. A driver assistance system must detect the lanes (left- and right-side lanes) and discover vehicles ahead of the test vehicle. In this study, we therefore use a camera installed on the windscreen of the test vehicle. Images from the camera are used to detect three lanes and to detect multiple vehicles. In lane detection, line detection and vanishing point estimation are used. For vehicle detection, we combine horizontal and vertical edge detection: horizontal edges are used to detect vehicle candidates, and vertical edge detection is then used to verify those candidates. The proposed algorithm works with a 480 × 640 image frame resolution. The system was tested on a highway in Korea.
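    The horizontal/vertical edge combination can be sketched with Sobel filters: strong horizontal-edge rows (bumper and underside shadow) propose candidates, and vertical edges above them (vehicle sides) confirm. The thresholds and band height are arbitrary illustration values, not the paper's.

    ```python
    import cv2
    import numpy as np

    def vehicle_base_rows(gray, row_thresh=40.0, verify_thresh=25.0):
        """Return y-coordinates of candidate vehicle bases verified by
        vertical-edge energy in the band above each candidate row."""
        horiz = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))  # d/dy
        vert = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))   # d/dx
        rows = []
        for y in range(gray.shape[0]):
            if horiz[y].mean() > row_thresh:
                band = vert[max(0, y - 40):y]
                if band.size and band.mean() > verify_thresh:
                    rows.append(y)
        return rows

    gray = cv2.imread("highway.jpg", cv2.IMREAD_GRAYSCALE)
    print(vehicle_base_rows(gray))
    ```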

  1. Fusion of Range and Intensity Information for View Invariant Gesture Recognition

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.; Fihl, Preben

    2008-01-01

    This paper presents a system for view-invariant gesture recognition. The approach is based on 3D data from a CSEM SwissRanger SR-2 camera. This camera produces both a depth map and an intensity image of a scene. Since the two information types are aligned, we can use the intensity image...

  2. Be Foil "Filter Knee Imaging" NSTX Plasma with Fast Soft X-ray Camera

    International Nuclear Information System (INIS)

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-01-01

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of an m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber-optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  3. Quantitative assessment of optic nerve head pallor

    International Nuclear Information System (INIS)

    Vilser, W; Seifert, B U; Riemer, T; Nagel, E; Weisensee, J; Hammer, M

    2008-01-01

    Ischaemia, loss of neural tissue, glial cell activation and tissue remodelling are symptoms of anterior ischaemic as well as glaucomatous optic neuropathy, leading to pallor of the optic nerve head. Here, we describe a simple method for pallor measurement using a fundus camera equipped with a colour CCD camera and a special dual-bandpass filter. The reproducibility of the determined mean pallor value was 11.7% (coefficient of variation for repeated measurements in the same subject); the variation over six healthy subjects was 14.8%. A significant difference between the mean pallor of an atrophic disc and that of the contralateral eye of the same individual was found. However, even the clinically unaffected eye showed significantly increased pallor compared to the mean of the healthy control group. Thus, optic disc pallor measurement, as described here, may be helpful in the early detection and follow-up of optic neuropathy.

  4. Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)

    Science.gov (United States)

    2009-01-01

    [Figures removed: left-eye and right-eye views of the color stereo pair for PIA11960.] NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854. West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.' This view is presented as a cylindrical-perspective projection with geometric seam correction.

  5. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    International Nuclear Information System (INIS)

    Ogunmolu, O; Gans, N; Jiang, S; Gu, X

    2015-01-01

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control is warranted.

  6. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    Energy Technology Data Exchange (ETDEWEB)

    Ogunmolu, O; Gans, N [The University of Texas at Dallas, Richardson, TX (United States); Jiang, S; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control is warranted.
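    The control loop in these two records is, at its core, proportional visual servoing: the depth camera supplies the measured head position, and the valve command follows the error. In the sketch below, read_head_position and set_valve_duty are hypothetical placeholders for the Kinect measurement and the myRIO valve interface, and the gain and target are arbitrary choices.

    ```python
    import time

    KP = 0.8            # proportional gain (illustrative)
    TARGET_MM = 120.0   # desired head displacement from reference (mm)

    def read_head_position():
        """Placeholder for the Kinect face-position measurement (mm)."""
        raise NotImplementedError

    def set_valve_duty(duty):
        """Placeholder for the myRIO valve command; duty in [-1, 1],
        positive inflates the air bladder, negative deflates it."""
        raise NotImplementedError

    def servo_step():
        error = TARGET_MM - read_head_position()  # positive: head too low
        duty = max(-1.0, min(1.0, KP * error / TARGET_MM))
        set_valve_duty(duty)
        return error

    # 30 Hz loop; stop once the head is within 1 mm of the target.
    while abs(servo_step()) > 1.0:
        time.sleep(1 / 30)
    ```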

  7. Towards a better understanding of the overall health impact of the game of squash: automatic and high-resolution motion analysis from a single camera view

    Directory of Open Access Journals (Sweden)

    Brumann Christopher

    2017-09-01

    In this paper, we present a method for locating and tracking players in the game of squash using Gaussian mixture model background subtraction and agglomerative contour clustering from a calibrated single camera view. Furthermore, we describe a method for player re-identification after near-total occlusion, based on stored color and region descriptors. For camera calibration, no additional pattern is needed, as the squash court itself can serve as a 3D calibration object. In order to exclude non-rally situations from motion analysis, we further classify each video frame into game phases using a multilayer perceptron. By considering a player's position as well as the current game phase, we are able to visualize player-individual motion patterns expressed as court coverage using pseudo-colored heat-maps. In total, we analyzed two matches (six games, 1:28 h of high-quality commercial video used in sports broadcasting) and computed high-resolution (1 cm per pixel) heat-maps. 130184 manually labeled frames (game phases and player identification) show an identification correctness of 79.28±8.99% (mean±std). Game phase classification is correct in 60.87±7.62% of frames, and the heat-map visualization correctness is 72.47±7.27%.
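    The segmentation front end maps directly onto OpenCV primitives: a Gaussian-mixture background model followed by contour extraction. The sketch below shows that stage only (the paper adds agglomerative contour clustering, re-identification, and game-phase classification on top); thresholds are illustrative.

    ```python
    import cv2

    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                            detectShadows=True)
    cap = cv2.VideoCapture("squash_match.mp4")

    ok, frame = cap.read()
    while ok:
        mask = bg.apply(frame)
        # MOG2 marks shadows as 127; keep only confident foreground.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        players = [cv2.boundingRect(c) for c in contours
                   if cv2.contourArea(c) > 800]  # area gate for player blobs
        ok, frame = cap.read()
    ```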

  8. Head-mounted display for use in functional endoscopic sinus surgery

    Science.gov (United States)

    Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.

    1995-05-01

    Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with evolution keeping pace with technological advances. The advent of low-cost charge-coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, they require the operating surgeon to be focused on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these HMD devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods; the contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS, and no adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor eliminated significant torsion of the surgeon's neck during the operation, while at the same time permitting simultaneous viewing of both the patient and the intranasal surgical field.

  9. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Sven Fleck

    2006-12-01

    Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth and computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results, which are on a higher level of abstraction, are transmitted. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. A georeferenced live visualization embedded in Google Earth is also presented.

  10. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Fleck Sven

    2007-01-01

    Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth and computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results, which are on a higher level of abstraction, are transmitted. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. A georeferenced live visualization embedded in Google Earth is also presented.
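    As a stand-in for the embedded probabilistic tracker (whose details are not given in these records), the classic histogram back-projection plus mean-shift loop illustrates the key property: only an abstracted result (a window per frame) leaves the camera, not video.

    ```python
    import cv2

    cap = cv2.VideoCapture("camera_stream.mp4")
    ok, frame = cap.read()
    x, y, w, h = 300, 200, 60, 120  # initial target window (illustrative)

    # Hue histogram of the target region as a simple appearance model.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, (x, y, w, h) = cv2.meanShift(backproj, (x, y, w, h), term)
        print((x, y, w, h))  # only this result would be transmitted
    ```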

  11. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide-semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. The fixed-focal-length objective lenses for these cameras were selected for their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  12. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    International Nuclear Information System (INIS)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S

    2016-01-01

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide-semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. The fixed-focal-length objective lenses for these cameras were selected for their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
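    The published figures are mutually consistent, as a rough check shows (assuming 2 bytes per 16-bit pixel and ignoring any container overhead):

    ```python
    px_per_frame = 1100 * 1100          # truncated field of view
    bytes_per_frame = px_per_frame * 2  # 16-bit dynamic range

    rate = bytes_per_frame * 75                            # spooling rate at 75 fps
    print(f"{rate / 1e6:.0f} MB/s per camera")             # ~182 MB/s
    per_cam_2min = rate * 120 / 1e9
    print(f"{per_cam_2min:.1f} GB per camera for 2 min")   # ~21.8 GB
    print(f"{3 * per_cam_2min:.0f} GB for all three cameras "
          f"(fits in 128 GB RAM)")                         # ~65 GB
    ```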

  13. The PETRRA positron camera: design, characterization and results of a physical evaluation

    International Nuclear Information System (INIS)

    Divoli, A; Flower, M A; Erlandsson, K; Reader, A J; Evans, N; Meriaux, S; Ott, R J; Stephenson, R; Bateman, J E; Duxbury, D M; Spill, E J

    2005-01-01

    The PETRRA positron camera is a large-area (600 mm × 400 mm sensitive area) prototype system that has been developed through a collaboration between the Rutherford Appleton Laboratory and the Institute of Cancer Research/Royal Marsden Hospital. The camera uses novel technology involving the coupling of 10 mm thick barium fluoride scintillating crystals to multi-wire proportional chambers filled with a photosensitive gas. The performance of the camera is reported here and shows that the present system has a 3D spatial resolution of ∼7.5 mm full-width-half-maximum (FWHM), a timing resolution of ∼3.5 ns (FWHM), a total coincidence count-rate performance of at least 80-90 kcps and a randoms-corrected sensitivity of ∼8-10 kcps kBq⁻¹ ml. For an average concentration of 3 kBq ml⁻¹, as expected in a patient, it is shown that, for the present prototype, ∼20% of the data would be true events. The count-rate performance is presently limited by the obsolete off-camera read-out electronics and computer system, and the sensitivity by the use of thin (10 mm thick) crystals. The prototype camera has limited scatter rejection and no intrinsic shielding and is, therefore, susceptible to high levels of scatter and out-of-field activity when imaging patients. All these factors are being addressed to improve the performance of the camera. The large axial field-of-view of 400 mm makes the camera ideally suited to whole-body PET imaging. We present examples of preliminary clinical images taken with the prototype camera. Overall, the results show the potential of this alternative technology, justifying further development.

  14. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  15. Pseudo real-time coded aperture imaging system with intensified vidicon cameras

    International Nuclear Information System (INIS)

    Han, K.S.; Berzins, G.J.

    1977-01-01

    A coded image displayed on a TV monitor was used to directly reconstruct a decoded image. Both the coded and the decoded images were viewed with intensified vidicon cameras. The coded aperture was a 15-element nonredundant pinhole array. The coding and decoding were accomplished simultaneously during the scanning of a single 16-ms TV frame.
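    Decoding for such a pinhole-array camera is, schematically, a correlation of the recorded image with a decoding array matched to the mask. A NumPy sketch of the principle (a random mask stands in for the 15-element nonredundant array, whose layout is not given in the record):

    ```python
    import numpy as np
    from numpy.fft import fft2, ifft2

    rng = np.random.default_rng(0)
    mask = (rng.random((64, 64)) < 0.05).astype(float)  # stand-in pinhole array

    def encode(scene, aperture):
        """Detector image: scene convolved (circularly) with the aperture."""
        return np.real(ifft2(fft2(scene) * fft2(aperture)))

    def decode(recorded, aperture):
        """Correlate the recorded image with the decoding array (here the
        mask itself); the scene reappears at the autocorrelation peak."""
        return np.real(ifft2(fft2(recorded) * np.conj(fft2(aperture))))

    scene = np.zeros((64, 64))
    scene[20, 30] = 1.0  # single point source
    recon = decode(encode(scene, mask), mask)
    print(np.unravel_index(recon.argmax(), recon.shape))  # -> (20, 30)
    ```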

  16. Calibration method for projector-camera-based telecentric fringe projection profilometry system.

    Science.gov (United States)

    Liu, Haibo; Lin, Huijing; Yao, Linshen

    2017-12-11

    By combining a fringe projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of a fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views, and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, the projector intrinsic matrix, and the coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm over multiple simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points of the 3D target. Because the control points are determined during the calibration, there is no need for an accurate 3D reference target, which is costly and extremely difficult to fabricate, particularly for the tiny objects used to calibrate the telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is very accurate and reliable.

  17. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Charge injection device (CID) sensors possess versatile and unique readout capabilities that have established their utility in scientific and especially radiation-field applications. A detector for neutron radiography based on a cooled CID camera offers capabilities such as the following: - Extended linear dynamic range up to 10⁹ without blooming or streaking; - Arbitrary pixel selection and nondestructive readout, which make it possible to introduce a high degree of exposure control to low-light viewing of static scenes; - Reading multiple areas of interest of an image within a given frame at higher rates; - Wide spectral response (185 nm - 1100 nm); - Tolerance of high-radiation environments up to a 3 Mrad integrated dose; - The contiguous pixel structure of CID arrays, which contributes to accurate imaging because there are virtually no opaque areas between pixels. (author)

  18. Experience of in-cell visual inspection using CCD camera in hot cell of Reprocessing Plant

    International Nuclear Information System (INIS)

    Reddy, Padi Srinivas; Amudhu Ramesh Kumar, R.; Geo Mathews, M.; Ravisankar, A.

    2013-01-01

    This paper describes the selection, customization and operating experience of the visual inspection system for the hot cell of a Reprocessing Plant. For process equipment such as the fuel chopping machine, dissolver, centrifuge, centrifugal extractors etc., viewing of operations and maintenance using manipulators is required, and for this the service of an in-cell camera is essential. The ambience of the hot cell of the Compact facility for Reprocessing of Advanced fuels in Lead cell (CORAL) for the reprocessing of fast reactor spent fuel involves high gamma radiation and acidic vapors. A black-and-white charge-coupled device (CCD) camera has been used in CORAL, incorporating in-house modifications to suit the operating ambient conditions and thereby extending the operating life of the camera. (author)

  19. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    Science.gov (United States)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held, retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50mm focal lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  20. An ordinary camera in an extraordinary location: Outreach with the Mars Webcam

    Science.gov (United States)

    Ormston, T.; Denis, M.; Scuka, D.; Griebel, H.

    2011-09-01

    The European Space Agency's Mars Express mission was launched in 2003 and was Europe's first mission to Mars. On-board was a small camera designed to provide ‘visual telemetry’ of the separation of the Beagle-2 lander. After achieving its goal it was shut down while the primary science mission of Mars Express got underway. In 2007 this camera was reactivated by the flight control team of Mars Express for the purpose of providing public education and outreach—turning it into the ‘Mars Webcam’. The camera is a small, 640×480 pixel colour CMOS camera with a wide-angle 30°×40° field of view. This makes it very similar in almost every way to the average home PC webcam. The major difference is that this webcam is not in an average location but is instead in orbit around Mars. On a strict basis of non-interference with the primary science activities, the camera is turned on to provide unique wide-angle views of the planet below. A highly automated process ensures that the observations are scheduled on the spacecraft and then uploaded to the internet as rapidly as possible. There is no intermediate stage, so that visitors to the Mars Webcam blog serve as ‘citizen scientists’. Full raw datasets and processing instructions are provided along with a mechanism to allow visitors to comment on the blog. Members of the public are encouraged to use this in either a personal or an educational context and work with the images. We then take their excellent work and showcase it back on the blog. We even apply techniques developed by them to improve the data and webcam experience for others. The accessibility and simplicity of the images also makes the data ideal for educational use, especially as educational projects can then be showcased on the site as inspiration for others. The oft-neglected target audience of space enthusiasts is also important as this allows them to participate as part of an interplanetary instrument team. This paper will cover the history of the

  1. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    Directory of Open Access Journals (Sweden)

    Nicholas Schwabe

    2017-07-01

    Full Text Available The underground mining industry, and some above-ground operations, rely on the use of heavy equipment that articulates to navigate corners in the tight confines of the tunnels. Poor line of sight (LOS) has been identified as a problem for safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360-degree view around the machine, have been implemented to improve the available LOS for the operator. A four-camera system was modeled in a computer environment to assess LOS on a 3D CAD model of a typical articulated machine. When positioned without any articulation, the system is excellent at removing blind spots for a machine driving straight forward or backward in a straight tunnel. Further analysis reveals that when the machine articulates in a simulated corner section, some camera locations are no longer useful for improving LOS into the corner. In some cases, the operator has a superior view into the corner when compared to the best available view from the camera. The work points to the need to integrate proximity detection systems at the design, build, and manufacture stage, and to consider proper policies and procedures that would address the gains and limits of the systems prior to implementation.

  2. The photothermal camera - a new non destructive inspection tool

    International Nuclear Information System (INIS)

    Piriou, M.

    2007-01-01

    The Photothermal Camera, developed by the Non-Destructive Inspection Department at AREVA NP's Technical Center, is a device created to replace penetrant testing, a method whose drawbacks include environmental pollutants, industrial complexity and potential operator exposure. We have already seen how the Photothermal Camera can work alongside or instead of conventional surface inspection techniques such as penetrant, magnetic particle or eddy-current testing. With it, users can detect, without any surface contact, ligament defects or openings measuring just a few microns on rough oxidized, machined or welded metal parts. It also enables them to work on geometrically varied surfaces, hot parts or insulating (dielectric) materials without interference from the magnetic properties of the inspected part. The Photothermal Camera method has already been used for in situ inspections of tube/plate welds on an intermediate heat exchanger of the Phenix fast reactor. It also replaced the penetrant method for weld inspections on the ITER vacuum chamber, for weld crack detection on vessel head adapter J-welds, and for detecting cracks brought on by heat crazing. What sets this innovative method apart from others is its ability to operate at distances of up to two meters from the inspected part, as well as its remote-control functionality at distances of up to 15 meters (or more via Ethernet), and its emissions-free environmental cleanliness. These make it a true alternative to penetrant testing, to the benefit of operator and environmental protection. (author) [fr

  3. The Janus Head Article - How Much Terminology Theory Can Practical Terminology Management Use?

    Directory of Open Access Journals (Sweden)

    Petra Drewer

    2007-03-01

    Full Text Available The god Janus in Roman mythology was a two-faced god; each face had its own view of the world. Our idea behind the Janus Head article is to give you two different and maybe even contradicting views on a certain topic. This issue’s Janus Head Article, however, features not two but three different views on terminology work, as researchers, professionals and students (the professionals of tomorrow) discuss “How Much Terminology Theory Can Practical Terminology Management Use?” at DaimlerChrysler AG.

  4. Catalogue of tooth brush head designs.

    Science.gov (United States)

    Voelker, Marsha A; Bayne, Stephen C; Liu, Ying; Walker, Mary P

    2013-06-01

    Manual toothbrushes (MTBs) and power toothbrushes (PTBs) are effective oral physiotherapy aids for plaque removal. End-rounded bristles are safer and reduce damage to oral tissues. Nylon bristles are more effective in plaque removal because the bristle is stiffer than natural bristles. In the last 10 years the number of options for MTBs and PTBs has expanded significantly, and there is very little information providing a reference frame for the design characteristics of the heads. The present in vitro study characterized a variety of MTB and PTB heads to provide a reference library for other research comparisons that might be made. Various commercial MTB and PTB heads were used to characterize the following: bristle size, shape, diameter, number of tufts, number of bristles per tuft and surface characteristics. Photographs were collected from the side, at 45 degrees, and from the top of each toothbrush (TB) head using a scanning electron microscope and a digital camera. Images were analyzed (Soft Imaging System) for bristle features and designs. One-way ANOVA (p ≤ 0.05) was performed to detect differences among TB types within the MTB and PTB groups and between pooled values for the MTB and PTB groups. There were significant differences (p ≤ 0.05) in toothbrush bristle diameter and bristle shape. In contrast, there were no significant differences between PTBs and MTBs with regard to bristle diameter, bristle count and tuft count. The results suggest that although there are wide variations in toothbrush head designs, significant differences were found only in relation to bristle diameter and shape.

  5. Researches on hazard avoidance cameras calibration of Lunar Rover

    Science.gov (United States)

    Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong

    2017-11-01

    China's Lunar Lander and Rover will be launched in 2013 to accomplish the mission targets of lunar soft landing and patrol exploration. The Lunar Rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion in images and the acquired images are quite warped, so conventional camera calibration algorithms no longer work well. A photogrammetric method for calibrating the geometric model of this type of fish-eye optics is investigated in this paper. In the method, the Hazcam model is represented by collinearity equations with interior-orientation and exterior-orientation parameters [1] [2]. For high-precision applications, the accurate calibration model is formulated with the radial symmetric distortion and the decentering distortion, as well as parameters to model affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the Lunar Rover.
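
    As a hedged illustration of the f-theta fisheye projection named in this record (not the authors' implementation; focal length, principal point and distortion coefficients below are made-up values), the mapping from a 3D point to pixel coordinates can be sketched in Python as follows:

        import numpy as np

        # f-theta fisheye projection with symmetric radial distortion.
        # f, cx, cy, k1, k2 are illustrative values, not calibration results.
        def ftheta_project(X, f=300.0, cx=512.0, cy=512.0, k1=-0.01, k2=0.001):
            x, y, z = X
            theta = np.arctan2(np.hypot(x, y), z)   # angle off the optical axis
            phi = np.arctan2(y, x)                  # azimuth about the axis
            theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4)  # radial term
            r = f * theta_d                         # f-theta law: r = f * theta
            return cx + r * np.cos(phi), cy + r * np.sin(phi)

        u, v = ftheta_project(np.array([0.5, 0.2, 1.0]))

    The decentering-distortion, affinity and shear terms mentioned in the record would add further parameters to this model.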

  6. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba long ago began manufacturing black-and-white radiation-resistant camera tubes employing non-browning face-plate glass for ITV cameras used in nuclear power plants. Now, in response to increasing demand in the nuclear power field, the Company is working on the development of radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  7. A pixellated γ-camera based on CdTe detectors clinical interests and performances

    International Nuclear Information System (INIS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch.; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-01-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm × 15 cm detection matrix of 2304 CdTe detector elements, each 2.83 mm × 2.83 mm × 2 mm, has been developed with European Community support to academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving γ-camera performance. But their use as γ detectors for medical imaging at high resolution requires production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has large experience in producing high-grade materials with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors arranged in 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, and holding a special 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electrical connections were performed by Phase Laboratory (CNRS, France). The compactness of the γ-camera head - thin detector matrix, electronic readout and collimator - facilitates the detection of close γ sources, with the advantage of high spatial resolution. Such equipment is intended for bedside explorations. There is a growing clinical requirement in nuclear cardiology to early assess the extent of an infarct

  8. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Agustín Ortega

    2014-07-01

    Full Text Available Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).
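
    As a sketch of the ground-plane homographies mentioned in this record (with made-up correspondences, not the paper's data), a mapping from image pixels to world points on the walking areas can be estimated with OpenCV:

        import cv2
        import numpy as np

        # >= 4 image <-> ground-plane correspondences (illustrative values).
        img_pts = np.array([[100, 400], [520, 410], [560, 220], [80, 230]],
                           dtype=np.float32)
        world_pts = np.array([[0, 0], [5, 0], [5, 8], [0, 8]], dtype=np.float32)

        H, _ = cv2.findHomography(img_pts, world_pts, method=cv2.RANSAC)

        def pixel_to_ground(u, v):
            p = H @ np.array([u, v, 1.0])
            return p[:2] / p[2]   # normalize homogeneous coordinates

        print(pixel_to_ground(300, 320))   # ground position of a detection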

  9. F-18-FDG-hybrid-camera-PET in patients with postoperative fever

    International Nuclear Information System (INIS)

    Meller, J.; Lehmann, K.; Siefker, U.; Meyer, I.; Altenvoerde, G.; Becker, W.; Sahlmann, C.O.; Schreiber, K.

    2002-01-01

    Aim: Evaluation of F-18-FDG-hybrid-camera-PET imaging in patients with undetermined postoperative fever (POF). Methods: Prospective study of 18 patients (9 women, 9 men; age 23-85 years) suffering from POF, using 2-fluoro-2'-deoxyglucose (F-18-FDG) and a dual-headed coincidence camera (DHCC). Surgery had been performed 5-94 days prior to our investigation. 13 of the 18 patients received antibiotic therapy during the time of evaluation. Ten (55%) had an infectious and eight (45%) a non-infectious cause of fever. Results: Increased F-18-FDG uptake outside the surgical wound occurred in 13 regions (infection n = 11, malignancy n = 2). The sensitivity of F-18-FDG-hybrid-camera-PET in imaging infection in areas outside the surgical wound was 86% and the specificity 100%. Antibiotic therapy did not negatively influence the results of F-18-FDG scanning. Increased F-18-FDG uptake within the surgical wound was seen in 8 of 18 patients. The sensitivity of F-18-FDG-hybrid-camera-PET in imaging infection within the surgical wound was 100% and the specificity 56%. The interval between surgery and F-18-FDG scanning was significantly shorter in patients with false-positive results compared with patients showing true-negative results (median 34 vs. 54 days; p = 0.038). Conclusion: In POF patients, F-18-FDG transaxial tomography performed with a F-18-FDG-hybrid-camera-PET is sensitive in the diagnosis of inflammation and malignant disease within and outside the surgical wound. Because of the accumulation of the tracer both in granulation tissue and infection, the specificity in detecting the focus of fever within the surgical wound is poor. (orig.) [de

  10. Development of LabVIEW Program for Lock-In Infrared Thermography

    International Nuclear Information System (INIS)

    Min, Tae Hoon; Na, Hyung Chul; Kim, Noh Yu

    2011-01-01

    A LabVIEW program has been developed, together with a simple infrared thermography (IRT) system, to control the lock-in conditions of the system efficiently. The IR imaging software was designed to operate both the infrared camera and the halogen lamp by synchronizing them with a periodic sine signal based on thyristor (SCR) circuits. The LabVIEW software was programmed to provide users with screen-menu functions by which they can change the period and energy of the heat source, operate the camera to acquire images, and monitor the state of the system on the computer screen. In the experiment, a lock-in IR image of a specimen with artificial hole defects was obtained by the developed IRT system and compared with the optical image
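
    The lock-in principle that the program implements can be sketched independently of LabVIEW; the following minimal Python version (frame stack, modulation frequency and frame rate are arbitrary placeholders) correlates each pixel's time series with sine and cosine references to form amplitude and phase images:

        import numpy as np

        def lockin_demodulate(frames, f_mod, frame_rate):
            # frames: array of shape [n_frames, height, width]
            n = frames.shape[0]
            t = np.arange(n) / frame_rate
            s = np.sin(2 * np.pi * f_mod * t)
            c = np.cos(2 * np.pi * f_mod * t)
            I = np.tensordot(s, frames, axes=(0, 0)) * 2 / n   # in-phase
            Q = np.tensordot(c, frames, axes=(0, 0)) * 2 / n   # quadrature
            return np.hypot(I, Q), np.arctan2(Q, I)            # amplitude, phase

        amp, phase = lockin_demodulate(np.random.rand(200, 64, 64), 0.5, 30.0)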

  11. The Janus Head Article - How Much Terminology Theory Can Practical Terminology Management Use?

    Directory of Open Access Journals (Sweden)

    Petra Drewer

    2012-08-01

    Full Text Available The god Janus in Roman mythology was a two-faced god; each face had its own view of the world. Our idea behind the Janus Head article is to give you two different and maybe even contradicting views on a certain topic. This issue’s Janus Head Article, however, features not two but three different views on terminology work, as researchers, professionals and students (the professionals of tomorrow) discuss “How Much Terminology Theory Can Practical Terminology Management Use?” at DaimlerChrysler AG.

  12. Development in distributed beam-view

    International Nuclear Information System (INIS)

    Bhole, R.B.; Pal, Satbajit; Dasgupta, S.

    2003-01-01

    A computerized distributed beam viewer has been developed using a PC add-on image digitizer card plugged into a Pentium PC running Windows NT. An Image Acquisition card (IMAQ-1408) from National Instruments is driven to digitise inputs from CCD cameras placed along the beam transport lines. Multiple clients situated across a switched Ethernet LAN collect the data and display beam views at a desirable window size. Only one privileged client, at the control room, has the facility to select the channel (camera), whereas image display, processing and storage facilities are provided at all other clients' ends. The client-server software, written with the Windows SDK, is implemented using Windows Sockets ver 2.0 library functions. (author)

  13. Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.

    Science.gov (United States)

    Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael

    2016-11-01

    To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.

  14. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  15. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd:glass laser acts as an ultra-fast and periodic shutter, with an opening time of a few ps. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing us to study very fast effects [fr

  16. The effect of viewing a virtual environment through a head-mounted display on balance.

    Science.gov (United States)

    Robert, Maxime T; Ballaz, Laurent; Lemay, Martin

    2016-07-01

    In the next few years, several head-mounted displays (HMDs) will be publicly released, making virtual reality more accessible. HMDs are expected to be widely popular at home for gaming but also in clinical settings, notably for training and rehabilitation. HMDs can be used in both seated and standing positions; however, presently, the impact of HMDs on balance remains largely unknown. It is therefore crucial to examine the impact of viewing a virtual environment through an HMD on standing balance. The objective was to compare static and dynamic balance in a virtual environment perceived through an HMD and in the physical environment. The visual representation of the virtual environment was based on filmed images of the physical environment and was therefore highly similar. This is an observational study in healthy adults. No significant difference was observed between the two environments for static balance. However, dynamic balance was more perturbed in the virtual environment when compared to that of the physical environment. HMDs should be used with caution because of their detrimental impact on dynamic balance. Sensorimotor conflict possibly explains the impact of HMDs on balance. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Quantitative investigation of a novel small field of view hybrid gamma camera (HGC) capability for sentinel lymph node detection

    Science.gov (United States)

    Lees, John E; Bugby, Sarah L; Jambi, Layal K; Perkins, Alan C

    2016-01-01

    Objective: The hybrid gamma camera (HGC) has been developed to enhance the localization of radiopharmaceutical uptake in targeted tissues during surgical procedures such as sentinel lymph node (SLN) biopsy. To assess the capability of the HGC, a lymph node contrast (LNC) phantom was constructed to simulate medical scenarios of varying radioactivity concentrations and SLN size. Methods: The phantom was constructed using two clear acrylic glass plates. The SLNs were simulated by circular wells of diameters ranging from 10 to 2.5 mm (16 wells in total) in one plate. The second plate contains four larger rectangular wells to simulate tissue background activity surrounding the SLNs. The activity used to simulate each SLN ranged between 0.025 and 4 MBq. The activity concentration ratio between the background and the activity injected in the SLNs was 1:10. The LNC phantom was placed at different depths of scattering material ranging between 5 and 40 mm. The collimator-to-source distance was 120 mm. Image acquisition times ranged from 60 to 240 s. Results: Contrast-to-noise ratio analysis and full-width-at-half-maximum (FWHM) measurements of the simulated SLNs were carried out for the images obtained. Over the range of activities used, the HGC detected between 87.5 and 100% of the SLNs through 20 mm of scattering material and 75–93.75% of the SLNs through 40 mm of scattering material. The FWHM of the detected SLNs ranged between 11.93 and 14.70 mm. Conclusion: The HGC is capable of detecting low accumulations of activity in small SLNs, indicating its usefulness as an intraoperative imaging system during surgical SLN procedures. Advances in knowledge: This study investigates the capability of a novel small-field-of-view (SFOV) HGC to detect low activity uptake in small SLNs. The phantom and procedure described are inexpensive and could be easily replicated and applied to any SFOV camera, to provide a comparison between systems with clinically relevant
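
    The two image metrics used in this record can be sketched generically (the paper's exact definitions are not reproduced here, so the ROI masks and pixel pitch are assumptions):

        import numpy as np

        # Contrast-to-noise ratio of a node ROI against a background ROI.
        def cnr(image, node_mask, bg_mask):
            return ((image[node_mask].mean() - image[bg_mask].mean())
                    / image[bg_mask].std())

        # FWHM of a background-subtracted 1D profile through a node.
        def fwhm(profile, pixel_pitch_mm=1.0):
            p = profile - profile.min()            # crude baseline removal
            above = np.where(p >= p.max() / 2)[0]  # samples above half maximum
            return (above[-1] - above[0]) * pixel_pitch_mm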

  18. Self-Adaptive Correction of Heading Direction in Stair Climbing for Tracked Mobile Robots Using Visual Servoing Approach

    Science.gov (United States)

    Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu

    2017-02-01

    In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To save hardware resources as far as possible, the mobile robot's wrist camera, rotated to face the stairs, is used as the only sensor. An ensemble heading-deviation detector is proposed to help the mobile robot correct its heading direction. To improve the generalization ability, a multi-scale Gabor filter is first used to preprocess the input image. The final deviation result is acquired by applying a majority-vote strategy to all the classifiers' results. The experimental results show that our detector enables the mobile robot to correct its heading direction adaptively while it is climbing the stairs.
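
    A rough Python analogue of the two ingredients named in this record (kernel scales, orientations and the decision encoding are illustrative assumptions, not the authors' parameters):

        import cv2
        import numpy as np

        # Multi-scale, multi-orientation Gabor preprocessing of the input image.
        def gabor_bank(gray):
            responses = []
            for ksize in (9, 17, 31):                         # scales
                for theta in np.arange(0, np.pi, np.pi / 4):  # orientations
                    k = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 6.0,
                                           theta=theta, lambd=ksize / 2.0,
                                           gamma=0.5, psi=0)
                    responses.append(cv2.filter2D(gray, cv2.CV_32F, k))
            return np.stack(responses)

        # Majority vote over the ensemble's per-classifier deviation decisions
        # (-1 = deviating left, 0 = on course, +1 = deviating right).
        def majority_vote(decisions):
            values, counts = np.unique(decisions, return_counts=True)
            return values[counts.argmax()]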

  19. Digitized video subject positioning and surveillance system for PET

    International Nuclear Information System (INIS)

    Picard, Y.; Thompson, C.J.

    1995-01-01

    Head motion is a significant contributor to the degradation of image quality in Positron Emission Tomography (PET) studies. Images from different studies must also be realigned digitally to be correlated when the subject position has changed. These constraints could be eliminated if the subject's head position could be monitored accurately. The authors have developed a video camera-based surveillance system to monitor the head position and motion of subjects undergoing PET studies. The system consists of two CCD (charge-coupled device) cameras placed orthogonally such that both face and profile views of the subject's head are displayed side by side on an RGB video monitor. Digitized images overlay the live images in contrasting colors on the monitor. Such a system can be used to (1) position the subject in the field of view (FOV) by displaying the position of the scanner's slices on the monitor along with the current subject position, (2) monitor head motion and alert the operator to any motion during the study, and (3) reposition the subject accurately for subsequent studies by displaying the previous position along with the current position in a contrasting color
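
    A minimal frame-differencing sketch of the motion-alert idea (not the authors' system; the camera index and thresholds are arbitrary assumptions):

        import cv2

        cap = cv2.VideoCapture(0)
        _, ref = cap.read()
        ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)   # reference position

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(gray, ref_gray)
            _, bw = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            if cv2.countNonZero(bw) > 0.01 * bw.size:      # >1% pixels changed
                print("Warning: subject head motion detected")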

  20. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the positions of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test out several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
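
    For context, the plain OpenCV intrinsic calibration that such packages wrap can be sketched as follows (board geometry and file names are assumptions; this is not the camera_calibration source):

        import cv2
        import numpy as np

        pattern = (9, 6)               # inner-corner count of the checkerboard
        square = 0.025                 # square size in meters
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

        obj_pts, img_pts = [], []
        for path in ["view_%02d.png" % i for i in range(20)]:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                                 gray.shape[::-1], None, None)
        print("reprojection RMS:", rms)
        print("intrinsic matrix K:\n", K)

    Re-running such a fit on different subsets of views and comparing the resulting K matrices is one simple way to observe the attempt-to-attempt variance the thesis investigates.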

  1. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  2. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  3. Amplified Head Rotation in Virtual Reality and the Effects on 3D Search, Training Transfer, and Spatial Orientation.

    Science.gov (United States)

    Ragan, Eric D; Scerbo, Siroberto; Bacim, Felipe; Bowman, Doug A

    2017-08-01

    Many types of virtual reality (VR) systems allow users to use natural, physical head movements to view a 3D environment. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns to view the environment, but the reduced realism could potentially interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully-surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and also varied the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user's body and awareness of the CAVE's physical environment may have contributed to the ability to use the amplification technique while keeping track of orientation.
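
    The amplification itself is just a gain applied to the tracked physical rotation; a trivial sketch (the gain value is an arbitrary example, not one from the study):

        # Map physical head yaw to virtual yaw; gain = 1 is natural viewing.
        def virtual_yaw(physical_yaw_deg, gain=1.8):
            return gain * physical_yaw_deg

        # With gain 1.8, about +/-100 degrees of real head motion covers the
        # full 360 degrees of virtual space.
        print(virtual_yaw(100.0))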

  4. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    Directory of Open Access Journals (Sweden)

    Khalil M. Ahmad Yousef

    2017-10-01

    Full Text Available Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications; for example, SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12° respectively.

  5. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    Science.gov (United States)

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

    Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications; for example, SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12° respectively.
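
    As a hedged numeric sketch of the linear relationship AX = ZB (a generic least-squares treatment of the rotation part, not necessarily the authors' exact solver): with a column-major vec(), vec(R_A R_X) = (I kron R_A) vec(R_X) and vec(R_Z R_B) = (R_B^T kron I) vec(R_Z), so each measured pose pair contributes nine homogeneous equations in the eighteen rotation unknowns.

        import numpy as np

        def solve_rwhe_rotations(As, Bs):
            # As, Bs: measured 4x4 homogeneous transforms with A X = Z B;
            # at least two independent pose pairs are needed.
            I3 = np.eye(3)
            rows = []
            for A, B in zip(As, Bs):
                RA, RB = A[:3, :3], B[:3, :3]
                rows.append(np.hstack([np.kron(I3, RA), -np.kron(RB.T, I3)]))
            _, _, Vt = np.linalg.svd(np.vstack(rows))
            v = Vt[-1]                              # null-space direction
            RX = v[:9].reshape(3, 3, order="F")
            RZ = v[9:].reshape(3, 3, order="F")
            # remove the arbitrary scale/sign of the null vector
            s = np.sign(np.linalg.det(RX)) * abs(np.linalg.det(RX)) ** (1 / 3)

            def to_SO3(R):                          # nearest rotation matrix
                U, _, V2 = np.linalg.svd(R)
                return U @ np.diag([1, 1, np.linalg.det(U @ V2)]) @ V2

            return to_SO3(RX / s), to_SO3(RZ / s)

    The translations of X and Z then follow from a second, ordinary linear least-squares solve of R_A t_X + t_A = R_Z t_B + t_Z over the same pose pairs.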

  6. Augmented reality glass-free three-dimensional display with the stereo camera

    Science.gov (United States)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera and a lenticular lens array presenting parallax content from different angles, is proposed. Compared with the previous implementation of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene have realistic and obvious stereo performance.

  7. Head Rotation Detection in Marmoset Monkeys

    Science.gov (United States)

    Simhadri, Sravanthi

    Head movement is known to have the benefit of improving the accuracy of sound localization for humans and animals. The marmoset is a small-bodied New World monkey species that has become an emerging model for studying auditory functions. This thesis aims to detect the horizontal and vertical rotation of head movement in marmoset monkeys. Experiments were conducted in a sound-attenuated acoustic chamber. Head movement of a marmoset monkey was studied under various auditory and visual stimulation conditions. With increasing complexity, these conditions are (1) idle, (2) sound alone, (3) sound and visual signals, and (4) an alert signal given by opening and closing the chamber door. All of these conditions were tested with the house light either on or off. An infrared camera with a frame rate of 90 Hz was used to capture the head movement of the monkeys. To assist the signal detection, two circular markers were attached to the top of the monkey's head. The data analysis used an image-based marker detection scheme. Images were processed using the Computer Vision Toolbox in Matlab. The markers and their positions were detected using blob detection techniques. Based on the frame-by-frame information of marker positions, the angular position, velocity and acceleration were extracted in the horizontal and vertical planes. Adaptive Otsu thresholding, Kalman filtering and bound setting for marker properties were used to overcome a number of challenges encountered during this analysis, such as finding the image segmentation threshold, continuously tracking markers during large head movements, and false-alarm detection. The results show that the blob detection method together with Kalman filtering yielded better performance than other image-based techniques like optical flow and SURF features. The median of the maximal head turn in the horizontal plane was in the range of 20 to 70 degrees and the median of the maximal velocity in the horizontal plane was in the range of a few hundreds of degrees per
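
    An illustrative OpenCV analogue of this pipeline (the original work used Matlab; all parameters here are assumptions, not values from the thesis) combines Otsu thresholding, circular blob detection and a constant-velocity Kalman filter:

        import cv2
        import numpy as np

        params = cv2.SimpleBlobDetector_Params()
        params.filterByCircularity = True
        params.minCircularity = 0.7
        detector = cv2.SimpleBlobDetector_create(params)

        kf = cv2.KalmanFilter(4, 2)                # state: x, y, vx, vy
        kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                        [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
        kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

        def track_marker(gray):
            _, bw = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            keypoints = detector.detect(bw)
            prediction = kf.predict()
            if keypoints:                          # correct with the detection
                meas = np.array(keypoints[0].pt, np.float32).reshape(2, 1)
                return kf.correct(meas)[:2].ravel()
            return prediction[:2].ravel()          # coast through missed frames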

  8. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera-type distinctions have become somewhat blurred, with a proliferation of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. In the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  9. Enhancing swimming pool safety by the use of range-imaging cameras

    Science.gov (United States)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

    Drowning is the cause of death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization. Currently, most swimming pools only use lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing us to distinguish swimmers at the surface from drowning people underwater, while keeping the large field of view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not a flat but mostly rippled surface, and that the water is transparent for visible light, but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbation. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light of 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of the reflections of a partially reflecting surface. The combination of a post-acquisition filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve the current commercial cameras. As a result, we conclude that low-cost range imagers can increase swimming pool safety, by inserting a post-processing filter and the use of another light source.

  10. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance-mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay rate of the illuminating light with distance due to the divergence of the light is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and yet low fabrication cost. The feasibility of various potential applications is also included
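
    The underlying geometry can be illustrated with a short worked example (an idealized inverse-square sketch, not the camera's actual signal chain): two point sources offset by a known baseline delta illuminate the scene in turn; since each irradiance falls off as 1/d^2, the intensity ratio encodes distance via sqrt(I1/I2) = (d + delta)/d, i.e. d = delta / (sqrt(I1/I2) - 1).

        import numpy as np

        def distance_from_ratio(I_near, I_far, delta_m=0.1):
            # I_near: intensity under the source closer to the scene by delta_m
            # I_far: intensity under the farther source
            r = np.sqrt(I_near / I_far)
            return delta_m / (r - 1)

        print(distance_from_ratio(1.21, 1.0))   # -> 1.0 m for a 0.1 m offset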

  11. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  12. A technique for the absolute measurement of activity using a gamma camera and computer

    International Nuclear Information System (INIS)

    Fleming, J.S.

    1979-01-01

    The quantity of activity of an isotope in an organ is of interest in gamma camera studies. There are problems in correcting the regional gamma camera counts for varying absorption in body tissue, particularly for thick organs. A description is given of a general method based on anterior, posterior and lateral views. The method has been applied to liver 99mTc sulphur colloid imaging. Phantom measurements showed that the smallest error to be expected was 3.2%. In practice errors would be 5 to 10%, although lower errors would be associated with estimates of liver/spleen ratios. (U.K.)
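
    The record does not spell out its formulas, but the standard conjugate-view (geometric-mean) estimate built from anterior and posterior counts illustrates the idea; all numerical values below are assumptions:

        import numpy as np

        # A = sqrt(C_ant * C_post) * exp(mu * T / 2) / S, where T is the body
        # thickness along the view axis, mu the linear attenuation coefficient
        # at the photon energy, and S the system sensitivity (cps per MBq).
        def activity_MBq(c_ant, c_post, T_cm=20.0, mu_per_cm=0.12,
                         S_cps_per_MBq=150.0):
            gm = np.sqrt(c_ant * c_post)           # geometric mean of counts
            return gm * np.exp(mu_per_cm * T_cm / 2) / S_cps_per_MBq

        print(activity_MBq(c_ant=5000.0, c_post=3500.0))

    The geometric mean makes the estimate independent of the unknown source depth d, since exp(-mu*d) * exp(-mu*(T-d)) = exp(-mu*T).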

  13. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    Science.gov (United States)

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    Study of the feasibility of commercially available action cameras in recording video of spine surgery. Recent innovation in wearable action cameras with high-definition video recording enables surgeons to use a camera in the operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro. This study is the first report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery was selected for video recording: posterior lumbar laminectomy and fusion. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device for wearing and holding throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and GoPro offers unique 2.7K or 4K resolution; the quality of video resolution was best with GoPro. Regarding field of view, GoPro can adjust the point of interest and field of view according to the surgery; the narrow-FOV option was best for recording with GoPro when sharing video clips. Google Glass has potential through the use of application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a two-way communication feature in the device. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast surgery, with development of the devices and applied programs in the future. N/A.

  14. A novel plane method to the calibration of the thermal camera

    Science.gov (United States)

    Wang, Xunsi; Huang, Wei; Nie, Qiu-hua; Xu, Tiefeng; Dai, Shixun; Shen, Xiang; Cheng, Weihai

    2009-07-01

    This paper provides an up-to-date review of research efforts in thermal cameras and target-object recognition techniques based on two-dimensional (2D) images in the infrared (IR) spectrum (8-12 μm). From the geometric point of view, a special target plate with a lamp-excited radiation source was constructed that allows these devices to be calibrated geometrically along a radiance-based approach. The calibration theory and actual experimental procedures are described, followed by automated measurement of the circular targets using an image-centroid algorithm. The key parameters of the IR camera were calibrated using the 3 interior and 6 exterior parameters of the Tsai model for thermal imaging. The subsequent data processing and analysis are then outlined. The 3D model from the successful calibration of a representative sample of the infrared array camera is presented and discussed. The results provide a new and easy route to the geometric characterization of these imagers, which can be used in automotive night-vision, medical, industrial, military, and environmental applications.

  15. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bit per channel, with a likewise variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4" by 5" view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  16. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  17. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as, computer-aided design, tele-medicine,mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  18. Family Of Calibrated Stereometric Cameras For Direct Intraoral Use

    Science.gov (United States)

    Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon

    1983-07-01

    In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.

  19. Melter viewing system for liquid-fed ceramic melters

    International Nuclear Information System (INIS)

    Westsik, J.H. Jr.; Brenden, B.B.

    1988-01-01

    Melter viewing systems are an integral component of the monitoring and control systems for liquid-fed ceramic melters. The Pacific Northwest Laboratory (PNL) has designed cameras for use with glass melters at PNL, the Hanford Waste Vitrification Plant (HWVP), and the West Valley Demonstration Project (WVDP). This report is a compilation of these designs. Operating experiences with one camera designed for the PNL melter are discussed. A camera has been fabricated and tested on the High-Bay Ceramic Melter (HBCM) and the Pilot-Scale Ceramic Melter (PSCM) at PNL. The camera proved to be an effective tool for monitoring the cold cap formed as the feed pool developed on the molten glass surface and for observing the physical condition of the melter. Originally, the camera was built to operate using the visible light spectrum in the melter. It was later modified to operate using the infrared (IR) spectrum. In either configuration, the picture quality decreases as the size of the cold cap increases. Large cold caps cover the molten glass, reducing the amount of visible light and reducing the plenum temperatures below 600 °C. This temperature corresponds to the lowest level of blackbody radiation to which the video tube is sensitive. The camera has been tested in melter environments for about 1900 h. The camera has withstood mechanical shocks and vibrations. The cooling system in the camera has proved effective in maintaining the optical and electronic components within acceptable temperature ranges. 10 refs., 15 figs

  20. Tackling the challenges of fully immersive head-mounted AR devices

    Science.gov (United States)

    Singer, Wolfgang; Hillenbrand, Matthias; Münz, Holger

    2017-11-01

    The optical requirements of fully immersive head-mounted AR devices are inherently determined by the human visual system. The etendue of the visual system is large; as a consequence, the requirements for fully immersive head-mounted AR devices exceed those of almost any high-end optical system. Two promising solutions to achieve the large etendue, and their challenges, are discussed. Head-mounted augmented reality devices have been developed for decades, mostly for application within aircraft and in combination with a heavy and bulky helmet. The established head-up displays for applications within automotive vehicles typically utilize similar techniques. Recently, the vision has emerged of eyeglasses with included augmentation, offering a large field of view and being unobtrusively all-day wearable. There seems to be no simple solution that reaches the functional performance requirements. Some known technical solution paths seem to be dead ends, and some seem to offer promising perspectives, though with severe limitations. As an alternative, unobtrusively all-day wearable devices with a significantly smaller field of view are already possible.

  1. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  2. Camera-based speckle noise reduction for 3-D absolute shape measurements.

    Science.gov (United States)

    Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen; Fischer, Andreas

    2016-05-30

    Simultaneous position and velocity measurements enable absolute 3-D shape measurements of fast rotating objects, for instance for monitoring the cutting process in a lathe. Laser Doppler distance sensors enable simultaneous position and velocity measurements with a single sensor head by evaluating the scattered light signals. However, the superposition of several speckles with equal Doppler frequency but random phase on the photodetector results in an increased velocity and shape uncertainty. In this paper, we present a novel image evaluation method that overcomes the uncertainty limitations due to the speckle effect. For this purpose, the scattered light is detected with a camera instead of single photodetectors. Thus, the Doppler frequency from each speckle can be evaluated separately, and the velocity uncertainty decreases with the square root of the number of camera lines. A reduction of the velocity uncertainty by one order of magnitude is verified by numerical simulations and experimental results. As a result, the measurement uncertainty of the absolute shape is no longer limited by the speckle effect.
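
    To make the scaling concrete, the following sketch (not from the record; the frequency and uncertainty values are illustrative) simulates averaging independent per-speckle Doppler estimates, one per camera line, and shows the standard deviation of the mean falling with the square root of the number of lines:

        import numpy as np

        rng = np.random.default_rng(0)
        f_true = 50e3         # illustrative Doppler frequency in Hz
        sigma_single = 500.0  # assumed uncertainty of a single-speckle estimate in Hz

        def velocity_uncertainty(n_lines, n_trials=10_000):
            # each camera line yields one independent Doppler estimate;
            # averaging N estimates shrinks the error by sqrt(N)
            est = f_true + sigma_single * rng.standard_normal((n_trials, n_lines))
            return est.mean(axis=1).std()

        for n in (1, 16, 64, 256):
            print(n, velocity_uncertainty(n))  # ~ sigma_single / sqrt(n)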

  3. The prototype cameras for trans-Neptunian automatic occultation survey

    Science.gov (United States)

    Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul

    2016-08-01

    The Transneptunian Automated Occultation Survey (TAOS II) is a three-telescope robotic project to detect stellar occultation events generated by trans-Neptunian objects (TNOs). TAOS II aims to monitor about 10,000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree-diameter field of view of the 1.3 m telescope with 10 mosaicked 4.5k × 2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. Because of the requirements of high performance and high speed, development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera has been developed to help commission the robotic telescope system. The prototype camera uses the smaller-format e2v CIS 107 device but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled to the operating temperature of about 200 K, as the science array will be, by a cryogenic cooler. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consist of an analog part and a Xilinx FPGA-based digital circuit. One FPGA is needed to control and process the signal from each CMOS sensor for 20 Hz region-of-interest (ROI) readout.

  4. Optical identification of sea-mines - Gated viewing three-dimensional laser radar

    DEFF Research Database (Denmark)

    Busck, Jens

    2005-01-01

    A gated-viewing, high-accuracy, monostatic laser radar has been developed for the purpose of improving the optical underwater sea-mine identification handled by the Navy. In the final stage of the sea-mine detection, classification and identification process, the Navy applies a remotely operated ... vehicle for optical identification of the bottom sea-mine. The experimental results of the thesis indicate that replacing the conventional optical video and spotlight system applied by the Navy with the gated-viewing two- and three-dimensional laser radar can improve the underwater optical sea- ... of the short laser pulses (0.5 ns), the high laser pulse repetition rate (32.4 kHz), the fast gating camera (0.2 ns), the short camera delay steps (0.1 ns), the applied optical single-mode fiber, and the applied algorithm for three-dimensional imaging. The gated-viewing laser radar system configuration ...

  5. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    The first section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two ... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: that camera movement actively contributes to the way in which we understand the sound and images on the screen ..., commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order ...

  6. A morphological comparison of the piriform sinuses in head-on and head-rotated views of seated subjects using cone-beam computed tomography

    International Nuclear Information System (INIS)

    Yamashina, Atsushi; Tanimoto, Keiji; Ohtsuka, Masahiko; Nagasaki, Toshikazu; Sutthiprapaporn, Pipop; Iida, Yukihiro; Katsumata, Akitoshi

    2008-01-01

    Food flow in the oropharynx changes when the head is rotated. The purpose of this study was to evaluate morphological differences in the upper and lower piriform sinuses in head-on (HO) versus head-rotated (HR) positions. Ten healthy adult volunteers with no previous history of dysphagia were subjected to cone-beam computed tomography (CBCT) in the HO and HR positions. Binary CBCT images were created at 50% gray scale to examine morphological changes in the lower piriform sinuses. Upon rotation to the right, the cross-sectional area of the left lower piriform sinus increased significantly (P=0.037). The depth of the right lower piriform sinus also increased significantly (P=0.011) upon rotation. The volume of the lower piriform sinuses increased significantly on both sides (right, P=0.009; left, P=0.013). The upper piriform sinuses acquired a teardrop shape, with the rotated side narrowed and opposite side enlarged. These results suggest that changes in food flow during head rotation result mainly from changes in the size and shape of the upper piriform sinuses. (author)

  7. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    Science.gov (United States)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems, ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects at specific angles, or to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in the position of components, and a theodolite can assist with accurately measuring and/or minimizing that error. The technology described here is an adapter that attaches a CCD camera with lens to the eyepiece of a Leica Wild T3000 Theodolite, enabling viewing on a connected monitor and thus simultaneous use of multiple theodolites. This removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  8. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. A ToF camera is well suited to autonomous robotics because it provides three-dimensional (3D) information at a low computational cost; after calibration and ground testing, it is mounted and integrated with the Pioneer mobile robot to extract information about obstacles. The workspace is a two-dimensional (2D) world map divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of suitable cell size; the camera data are converted into Cartesian coordinates for entry into the workspace grid map, as sketched below. An optimal camera mounting angle was determined by analysing the camera's performance discrepancies, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface; this mounting angle is recommended to be half the vertical field of view (FoV) of the PMD camera. A series of still and moving tests were conducted on the AGV to verify correct sensor operation, showing that the postulated application of the ToF camera in the AGV is not straightforward. To stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique were then implemented in a real-time experiment.
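
    A minimal sketch of the grid-map population step described above (all parameter values are assumptions, not taken from the record): each PMD depth pixel is converted to Cartesian coordinates using its per-pixel viewing angles and the recommended mounting tilt of half the vertical FoV, and returns above the ground plane mark cells as occupied.

        import numpy as np

        H_PIX, V_PIX = 64, 48                          # assumed PMD resolution
        H_FOV, V_FOV = np.radians(40.0), np.radians(30.0)
        TILT = V_FOV / 2                               # mounting angle recommended in the paper
        CAM_HEIGHT, CELL = 1.0, 0.1                    # assumed geometry, metres
        grid = np.zeros((100, 100), dtype=np.uint8)    # 10 m x 10 m workspace

        def populate(depth):
            """Mark occupied cells from a (V_PIX, H_PIX) range image in metres."""
            v, u = np.indices(depth.shape)
            az = (u / (H_PIX - 1) - 0.5) * H_FOV             # per-pixel azimuth
            el = -TILT - (v / (V_PIX - 1) - 0.5) * V_FOV     # per-pixel elevation
            x = depth * np.cos(el) * np.cos(az)              # forward distance
            y = depth * np.cos(el) * np.sin(az)              # lateral offset
            z = CAM_HEIGHT + depth * np.sin(el)              # height above ground
            obst = z > 0.05                                  # non-ground returns
            i = (x[obst] / CELL).astype(int)
            j = (y[obst] / CELL + grid.shape[1] // 2).astype(int)
            ok = (i >= 0) & (i < grid.shape[0]) & (j >= 0) & (j < grid.shape[1])
            grid[i[ok], j[ok]] = 1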

  9. Collaborative 3D Target Tracking in Distributed Smart Camera Networks for Wide-Area Surveillance

    Directory of Open Access Journals (Sweden)

    Xenofon Koutsoukos

    2013-05-01

    Full Text Available With the evolution and fusion of wireless sensor network and embedded camera technologies, distributed smart camera networks have emerged as a new class of systems for wide-area surveillance applications. Wireless networks, however, introduce a number of constraints to the system that need to be considered, notably the communication bandwidth constraints. Existing approaches for target tracking using a camera network typically utilize target handover mechanisms between cameras, or combine results from 2D trackers in each camera into 3D target estimation. Such approaches suffer from scale selection, target rotation, and occlusion, drawbacks typically associated with 2D tracking. In this paper, we present an approach for tracking multiple targets directly in 3D space using a network of smart cameras. The approach employs multi-view histograms to characterize targets in 3D space using color and texture as the visual features. The visual features from each camera along with the target models are used in a probabilistic tracker to estimate the target state. We introduce four variations of our base tracker that incur different computational and communication costs on each node and result in different tracking accuracy. We demonstrate the effectiveness of our proposed trackers by comparing their performance to a 3D tracker that fuses the results of independent 2D trackers. We also present performance analysis of the base tracker along Quality-of-Service (QoS and Quality-of-Information (QoI metrics, and study QoS vs. QoI trade-offs between the proposed tracker variations. Finally, we demonstrate our tracker in a real-life scenario using a camera network deployed in a building.

  10. Time for a Change; Spirit's View on Sol 1843 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11973 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11973 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, full-circle view of the rover's surroundings during the 1,843rd Martian day, or sol, of Spirit's surface mission (March 10, 2009). South is in the middle. North is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 36 centimeters downhill earlier on Sol 1843, but had not been able to get free of ruts in soft material that had become an obstacle to getting around the northeastern corner of the low plateau called 'Home Plate.' The Sol 1843 drive, following two others in the preceding four sols that also achieved little progress in the soft ground, prompted the rover team to switch to a plan of getting around Home Plate counterclockwise, instead of clockwise. The drive direction in subsequent sols was westward past the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  11. Integrated head area design of KNGR to reduce refueling outage duration

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Woo Tae; Park, Chi Yong; Kim, In Hwan; Kim, Dae Woong [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    In the design of KNGR (Korea Next Generation Reactor), we believe that economy is one of the most important factors to be considered. Thus, we reviewed and evaluated the consequences of designing the head area as an integrated package from an economical point of view. The refueling outage durations of the nuclear power plants currently in operation in Korea, some having and others not having an integrated head package, are compared. This paper discusses the characteristics of the head area design and the critical design issues of the KNGR head area to evaluate the effect of the head area characteristics on outage duration. 8 refs., 4 figs. (Author)

  12. Integrated head area design of KNGR to reduce refueling outage duration

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Woo Tae; Park, Chi Yong; Kim, In Hwan; Kim, Dae Woong [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    In the design of KNGR (Korea Next Generation Reactor), we believe that economy is one of the most important factors to be considered. Thus, we reviewed and evaluated the consequences of designing the head area as an integrated package from an economical point of view. The refueling outage durations of the nuclear power plants currently in operation in Korea, some having and others not having an integrated head package, are compared. This paper discusses the characteristics of the head area design and the critical design issues of the KNGR head area to evaluate the effect of the head area characteristics on outage duration. 8 refs., 4 figs. (Author)

  13. Slit-Slat Collimator Equipped Gamma Camera for Whole-Mouse SPECT-CT Imaging

    Science.gov (United States)

    Cao, Liji; Peter, Jörg

    2012-06-01

    A slit-slat collimator is developed for a gamma camera intended for small-animal imaging (mice). The tungsten housing of a roof-shaped collimator forms a slit opening, and the slats are made of lead foils separated by sparse polyurethane material. Alignment of the collimator with the camera's pixelated crystal is performed by adjusting a micrometer screw while monitoring a Co-57 point source for maximum signal intensity. For SPECT, the collimator forms a cylindrical field-of-view enabling whole mouse imaging with transaxial magnification and constant on-axis sensitivity over the entire axial direction. As the gamma camera is part of a multimodal imaging system incorporating also x-ray CT, five parameters corresponding to the geometric displacements of the collimator as well as to the mechanical co-alignment between the gamma camera and the CT subsystem are estimated by means of bimodal calibration sources. To illustrate the performance of the slit-slat collimator and to compare its performance to a single pinhole collimator, a Derenzo phantom study is performed. Transaxial resolution along the entire long axis is comparable to a pinhole collimator of same pinhole diameter. Axial resolution of the slit-slat collimator is comparable to that of a parallel beam collimator. Additionally, data from an in-vivo mouse study are presented.

  14. Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11971 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11971 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the view is toward the south-southwest. The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau. Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge. This view is presented as a cylindrical-perspective projection with geometric seam correction.

  15. "Head up and eyes out" advances in head mounted displays capabilities

    Science.gov (United States)

    Cameron, Alex

    2013-06-01

    There is a host of helmet- and head-mounted displays flooding the marketplace, providing what is essentially a mobile computer display. What sets aviators' HMDs apart is that they present the user with accurate conformal information embedded in the pilot's real-world view (a see-through display), where the information is intuitive and easy to use because it overlays the real world (a mix of sensor imagery, symbolic information and synthetic imagery) and enables pilots to stay head up and eyes out, improving their effectiveness, reducing workload and improving safety. Such systems are an enabling technology in the provision of enhanced situation awareness (SA) and reduced user workload in high-intensity situations. Safety is key: the addition of these HMD functions cannot detract from the aircrew-protection functions of conventional aircrew helmets, which also include life support and audio communications. These capabilities are finding much wider application in new types of compact man-mounted audio/visual products, enabled by the emergence of new families of microdisplays, novel optical concepts and ultra-compact low-power processing solutions. This paper attempts to capture the key drivers and needs for future head-mounted systems for aviation applications.

  16. Unified framework for recognition, localization and mapping using wearable cameras.

    Science.gov (United States)

    Vázquez-Martín, Ricardo; Bandera, Antonio

    2012-08-01

    Monocular approaches to simultaneous localization and mapping (SLAM) have recently addressed with success the challenging problem of fast computation of dense reconstructions from a single, moving camera. While these approaches initially relied on the detection of a reduced set of interest points to estimate the camera position and the map, they are currently able to reconstruct dense maps from a handheld camera while the camera coordinates are simultaneously computed. However, these maps of 3-dimensional points usually remain meaningless, that is, with no memorable items and without providing a way of encoding spatial relationships between objects and paths. In humans and in mobile robotics, landmarks play a key role in the internalization of a spatial representation of an environment. They are memorable cues that can serve to define a region of space or the location of other objects. In a topological representation of space, landmarks can be identified and located according to their structural, perceptive or semantic significance and distinctiveness, but they may be difficult to locate in a metric representation of space. Restricted to the domain of visual landmarks, this work describes an approach where the map resulting from point-based, monocular SLAM is annotated with the semantic information provided by a set of distinguished landmarks. Both features are obtained from the image; hence, they can be linked by associating to each landmark all those point-based features that are superimposed on the landmark in a given image (key-frame), as sketched below. Visual landmarks are obtained by means of an object-based, bottom-up attention mechanism, which extracts from the image a set of proto-objects. These proto-objects may not always correspond to natural objects, but they typically constitute significant parts of the scene objects and can be appropriately annotated with semantic information.
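
    The landmark-to-point association can be sketched in a few lines (all names are hypothetical; the record does not give an implementation): every map point whose key-frame projection falls inside a proto-object mask inherits that landmark's semantic label.

        def annotate_landmarks(projections, masks):
            """projections: dict point_id -> (u, v) pixel in the key-frame;
            masks: dict label -> boolean image mask of a proto-object.
            Returns dict point_id -> label for points superimposed on a landmark."""
            labels = {}
            for pid, (u, v) in projections.items():
                for label, mask in masks.items():
                    if mask[int(v), int(u)]:   # projection lies on the proto-object
                        labels[pid] = label
                        break
            return labels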

  17. Investigation of high resolution compact gamma camera module based on a continuous scintillation crystal using a novel charge division readout method

    International Nuclear Information System (INIS)

    Dai Qiusheng; Zhao Cuilan; Qi Yujin; Zhang Hualin

    2010-01-01

    The objective of this study is to investigate a high-performance, lower-cost compact gamma camera module for a multi-head small-animal SPECT system. A compact camera module was developed using a thin lutetium oxyorthosilicate (LSO) scintillation crystal slice coupled to a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). A two-stage charge division readout board, based on a novel subtractive resistive readout with a truncated center-of-gravity (TCOG) positioning method, was developed for the camera. The performance of the camera was evaluated using a flood 99mTc source with a four-quadrant bar-mask phantom. The preliminary experimental results show that the image shrinkage problem associated with a conventional resistive readout can be effectively overcome by the subtractive resistive readout with an appropriate fraction subtraction factor; the positioning principle is sketched below. The response output area (ROA) of the camera shown in the flood image was improved by up to 34%, and an intrinsic spatial resolution better than 2 mm was achieved. In conclusion, the utilization of a continuous scintillation crystal and a flat-panel PSPMT equipped with a novel subtractive resistive readout is a feasible approach for developing a high-performance, lower-cost compact gamma camera. (authors)
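
    A minimal sketch of a subtractive (truncated) centre-of-gravity position estimate, assuming an 8 × 8 anode map; the actual two-stage charge division network of the paper is not reproduced here:

        import numpy as np

        def truncated_cog(charges, anode_xy, fraction=0.1):
            """charges: 64 anode signals; anode_xy: (64, 2) anode centres in mm;
            fraction: assumed subtraction factor. Subtracting a fixed fraction
            of the peak and truncating negatives suppresses the signal tails
            that shrink the response output area in a plain COG readout."""
            q = np.asarray(charges, dtype=float) - fraction * np.max(charges)
            q[q < 0] = 0.0
            return (q[:, None] * anode_xy).sum(axis=0) / q.sum()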

  18. A technique for automatically extracting useful field of view and central field of view images.

    Science.gov (United States)

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies, by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood-source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood-source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). In case the acquired total counts are less than the prespecified total, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood-source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by applying it to simulated and real flood-source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.

  19. A technique for automatically extracting useful field of view and central field of view images

    International Nuclear Information System (INIS)

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies, by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood-source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood-source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). In case the acquired total counts are less than the prespecified total, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood-source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by applying it to simulated and real flood-source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
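
    The extraction step can be sketched as follows (a simplified illustration, not the authors' MATLAB code; the threshold and field fractions are assumptions in the spirit of the usual NEMA convention):

        import numpy as np

        def extract_fovs(flood, ufov=0.95, cfov=0.75, thresh=0.1):
            """Crop UFOV and CFOV sub-images from a flood-source image."""
            mask = flood > thresh * flood[flood > 0].mean()   # detector area
            r, c = np.where(mask)
            r0, r1, c0, c1 = r.min(), r.max(), c.min(), c.max()

            def central(frac):
                dr = int((r1 - r0 + 1) * (1 - frac) / 2)
                dc = int((c1 - c0 + 1) * (1 - frac) / 2)
                return flood[r0 + dr:r1 + 1 - dr, c0 + dc:c1 + 1 - dc]

            return central(ufov), central(cfov)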

  20. Penguin head movement detected using small accelerometers: a proxy of prey encounter rate.

    Science.gov (United States)

    Kokubun, Nobuo; Kim, Jeong-Hoon; Shin, Hyoung-Chul; Naito, Yasuhiko; Takahashi, Akinori

    2011-11-15

    Determining temporal and spatial variation in feeding rates is essential for understanding the relationship between habitat features and the foraging behavior of top predators. In this study we examined the utility of head movement as a proxy of prey encounter rates in medium-sized Antarctic penguins, under the presumption that the birds should move their heads actively when they encounter and peck prey. A field study of free-ranging chinstrap and gentoo penguins was conducted at King George Island, Antarctica. Head movement was recorded using small accelerometers attached to the head, with simultaneous monitoring for prey encounter or body angle. The main prey was Antarctic krill (>99% in wet mass) for both species. Penguin head movement coincided with a slow change in body angle during dives. Active head movements were extracted using a high-pass filter (5 Hz acceleration signals) and the remaining acceleration peaks (higher than a threshold acceleration of 1.0 g) were counted. The timing of head movements coincided well with images of prey taken from the back-mounted cameras: head movement was recorded within ±2.5 s of a prey image on 89.1±16.1% (N=7 trips) of images. The number of head movements varied largely among dive bouts, suggesting large temporal variations in prey encounter rates. Our results show that head movement is an effective proxy of prey encounter, and we suggest that the method will be widely applicable for a variety of predators.
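
    The event-counting step maps naturally onto a few lines of signal processing (a sketch under assumptions: the sampling rate is illustrative, while the 5 Hz high-pass cut-off and 1.0 g threshold are taken from the record):

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, find_peaks

        def count_head_movements(accel_g, fs=50.0, cutoff=5.0, thresh=1.0):
            """Count acceleration peaks above thresh (in g) after high-pass
            filtering, the proxy of prey-encounter events used in the study."""
            sos = butter(4, cutoff, btype="highpass", fs=fs, output="sos")
            filtered = sosfiltfilt(sos, np.asarray(accel_g, dtype=float))
            peaks, _ = find_peaks(filtered, height=thresh)
            return len(peaks)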

  1. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry, as sketched below. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
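
    Frame-to-frame motion recovery of this kind is commonly written with standard two-view geometry routines; a hedged OpenCV sketch (not the authors' software) is:

        import cv2

        def frame_to_frame_pose(pts_prev, pts_curr, K):
            """pts_prev, pts_curr: (N, 2) float32 arrays of tracked surface
            points; K: 3x3 camera intrinsic matrix. Returns rotation R and a
            unit-norm translation t; the absolute scale must be fixed
            separately, which is why the camera path still has to be tied to
            absolute patient geometry."""
            E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                              method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
            return R, t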

  2. Contact-free trans-pars-planar illumination enables snapshot fundus camera for nonmydriatic wide field photography.

    Science.gov (United States)

    Wang, Benquan; Toslak, Devrim; Alam, Minhaj Nur; Chan, R V Paul; Yao, Xincheng

    2018-06-08

    In conventional fundus photography, trans-pupillary illumination delivers illuminating light to the interior of the eye through the peripheral area of the pupil, and only the central part of the pupil can be used for collecting imaging light. The field of view of conventional fundus cameras is therefore limited, and pupil dilation is required for evaluating the retinal periphery, which is frequently affected by diabetic retinopathy (DR), retinopathy of prematurity (ROP), and other chorioretinal conditions. We report here a nonmydriatic wide-field fundus camera employing trans-pars-planar illumination, which delivers illuminating light through the pars plana, an area outside of the pupil. Trans-pars-planar illumination frees the entire pupil for imaging purposes only, so wide-field fundus photography can be achieved with less pupil dilation. For proof-of-concept testing, a prototype instrument built entirely from off-the-shelf components demonstrated 90° fundus view coverage in single-shot fundus images, without the need for pharmacologic pupil dilation.

  3. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors into the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component, an array of light-mixing chambers, with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera with traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that it will be photon-noise limited, even in bright light, with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  4. A Summer View of Russia's Lena Delta and Olenek

    Science.gov (United States)

    2004-01-01

    These views of the Russian Arctic were acquired by NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument on July 11, 2004, when the brief arctic summer had transformed the frozen tundra and the thousands of lakes, channels, and rivers of the Lena Delta into a fertile wetland, and when the usual blanket of thick snow had melted from the vast plains and taiga forests. This set of three images covers an area in the northern part of the Eastern Siberian Sakha Republic. The Olenek River wends northeast from the bottom of the images to the upper left, and the top portions of the images are dominated by the delta into which the mighty Lena River empties when it reaches the Laptev Sea. At left is a natural-color image from MISR's nadir (vertical-viewing) camera, in which the rivers appear murky due to the presence of sediment, and photosynthetically active vegetation appears green. The center image is also from MISR's nadir camera, but is a false-color view in which the predominant red color is due to the brightness of vegetation at near-infrared wavelengths. The most photosynthetically active parts of this area are the Lena Delta, in the lower half of the image, and the great stretch of land that curves across the Olenek River and extends northeast beyond the relatively barren ranges of the Volyoi mountains (the pale tan-colored area to the right of image center). The right-hand image is a multi-angle false-color view made from the red-band data of the 60° backward, nadir, and 60° forward cameras, displayed as red, green and blue, respectively. Water appears blue in this image because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. Much of the landscape and many low clouds appear purple, since these surfaces are both forward- and backward-scattering, and clouds that are further from the surface appear in a different spot for each view angle, creating a rainbow-like appearance. However, the vegetated region that is ...

  5. The NIKA2 large-field-of-view millimetre continuum camera for the 30 m IRAM telescope

    Science.gov (United States)

    Adam, R.; Adane, A.; Ade, P. A. R.; André, P.; Andrianasolo, A.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Bracco, A.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; De Petris, M.; Désert, F.-X.; Doyle, S.; Driessen, E. F. C.; Evans, R.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Leggeri, J.-P.; Lestrade, J.-F.; Macías-Pérez, J. F.; Mauskopf, P.; Mayet, F.; Maury, A.; Monfardini, A.; Navarro, S.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Rigby, A.; Ritacco, A.; Romero, C.; Roussel, H.; Ruppin, F.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.

    2018-01-01

    Context. Millimetre-wave continuum astronomy is today an indispensable tool for both general astrophysics studies (e.g. star formation, nearby galaxies) and cosmology (e.g. cosmic microwave background and high-redshift galaxies). General-purpose, large-field-of-view instruments are needed to map the sky at intermediate angular scales not accessible by the high-resolution interferometers (e.g. ALMA in Chile, NOEMA in the French Alps) or by the coarse-angular-resolution space-borne or ground-based surveys (e.g. Planck, ACT, SPT). These instruments have to be installed at the focal plane of the largest single-dish telescopes, which are placed at high altitude on selected dry observing sites. In this context, we have constructed and deployed a three-thousand-pixel dual-band (150 GHz and 260 GHz, respectively 2 mm and 1.15 mm wavelengths) camera to image an instantaneous circular field of view of 6.5 arcmin in diameter, configurable to map the linear polarisation at 260 GHz. Aims: First, we provide a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), focussing in particular on the cryogenics, optics, focal-plane arrays based on kinetic inductance detectors, and the readout electronics. The focal planes and part of the optics are cooled down to the nominal 150 mK operating temperature by means of an ad hoc dilution refrigerator. Second, we present the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30 m IRAM telescope at Pico Veleta, near Granada (Spain). Methods: We targeted a number of astronomical sources. Starting from beam maps on primary and secondary calibrators, we then moved to extended sources and faint objects. Both internal (electronic) and on-the-sky calibrations are applied; the general methods are described in the present paper. Results: NIKA2 has been successfully deployed and commissioned, performing in line with expectations.

  6. COLIBRI: partial camera readout and sliding trigger for the Cherenkov Telescope Array CTA

    International Nuclear Information System (INIS)

    Naumann, C L; Tejedor, L A; Martínez, G

    2013-01-01

    Plans for the future Cherenkov telescope array CTA include replacing the monolithic camera designs used in H.E.S.S. and MAGIC-I with one that is built up from a number of identical segments. These so-called clusters will be relatively autonomous, each containing its own triggering and readout hardware. While this choice was made for reasons of flexibility and ease of manufacture and maintenance, such a concept with semi-independent sub-units lends itself quite naturally to new, more flexible readout modes. In all previously used concepts, triggering and readout of the camera are centralised, with a single camera trigger per event that starts the readout of all pixels in the camera at the same time and within the same integration window. The limitations of such a trigger system can reduce the performance of a large array such as CTA, owing to the huge amount of useless data created by night-sky background if trigger thresholds are set low enough to achieve the desired 20 GeV energy threshold, and to image losses at high energies due to the rigid readout window. In this study, an alternative concept ("COLIBRI": Concept for an Optimised Local Image Building and Readout Infrastructure) is presented, in which only those parts of the camera likely to actually contain image data (usually a small percentage of the total pixels) are read out, as sketched below. This leads to a significant reduction of the expected data rate and of the dead times incurred in the camera. Furthermore, the quasi-independence of the individual clusters can be used to read different parts of the camera at slightly different times, thus allowing the readout to follow the slow development of the shower image across the camera field of view. This concept of flexible, partial camera readout is presented in the following, together with a description of Monte Carlo studies performed to evaluate its performance, as well as a hardware implementation proposed for CTA.
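
    The cluster-selection logic behind partial readout can be sketched as follows (a simplified illustration only; the actual trigger hardware and sliding readout windows are described in the record):

        def clusters_to_read(cluster_sums, neighbours, threshold):
            """cluster_sums: dict cluster_id -> summed pixel amplitude;
            neighbours: dict cluster_id -> iterable of adjacent cluster ids.
            Read out fired clusters plus their neighbours, so only the part
            of the camera likely to contain image data is transferred."""
            fired = {c for c, s in cluster_sums.items() if s > threshold}
            return fired | {n for c in fired for n in neighbours[c]}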

  7. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual-channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20°C, the CLASP cameras exceeded the low-noise performance requirements. We present the QE measurements performed on the CLASP cameras and discuss the facility used for vacuum ultraviolet testing of UV, EUV and soft X-ray science cameras at MSFC.

  8. Breast Imaging Utilizing Dedicated Gamma Camera and (99m)Tc-MIBI: Experience at the Tel Aviv Medical Center and Review of the Literature.

    Science.gov (United States)

    Even-Sapir, Einat; Golan, Orit; Menes, Tehillah; Weinstein, Yuliana; Lerman, Hedva

    2016-07-01

    The scope of the current article is the clinical role of gamma cameras dedicated to breast imaging used with the (99m)Tc-MIBI tumor-seeking tracer, both as a screening modality in a healthy population and as a diagnostic modality in patients with breast cancer. Such cameras are now commercially available. The technology utilizing a camera composed of a NaI(Tl) detector is termed breast-specific gamma imaging. The technology of a dual-headed camera composed of semiconductor cadmium zinc telluride detectors, which directly convert gamma-ray energy into electronic signals, is termed molecular breast imaging. A molecular breast imaging system was installed at the Department of Nuclear Medicine at the Tel Aviv Sourasky Medical Center in 2009. The article reviews the literature as well as our own experience. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many ...

  10. Evaluation of a video-based head motion tracking system for dedicated brain PET

    Science.gov (United States)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

    Unintentional head motion during positron emission tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used to capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time, as sketched below. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99 ± 0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with accuracy close to one millimeter and can help preserve the resolution of brain PET images in the presence of movement.
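
    Once the facial points are triangulated from the two stereo pairs, the six-degree-of-freedom pose follows from a rigid point-set alignment; a compact sketch using the Kabsch method (an assumption here, since the record does not name the estimator):

        import numpy as np

        def rigid_pose(p_ref, p_cur):
            """p_ref, p_cur: (N, 3) matched 3-D facial points in the reference
            and current frames. Returns R, t with p_cur ~ p_ref @ R.T + t."""
            c0, c1 = p_ref.mean(axis=0), p_cur.mean(axis=0)
            H = (p_ref - c0).T @ (p_cur - c1)        # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                        # proper rotation, no reflection
            t = c1 - R @ c0
            return R, t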

  11. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Full Text Available Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique owing to its simplicity, low cost, and the ability to view an environment the way it is in reality. This paper investigates the ability of stereo CCD cameras to support 3D reconstruction and presentation of an environment, and geometric measurement within it. For this purpose, a rotating stereo panorama system was established using two CCDs with a base-length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have appropriate geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 m distance from the camera) can be achieved.
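
    The quoted accuracy can be read against the standard normal-case stereo relations (a textbook result, not derived in the record): depth from disparity and its error propagation are

        Z = \frac{f\,B}{d}, \qquad \sigma_Z = \frac{Z^{2}}{f\,B}\,\sigma_{d},

    so with the base-length B = 350 mm the depth error grows quadratically with distance Z; the focal length f and the disparity precision \sigma_d of the CCDs determine whether roughly 40 mm at Z = 12 m is reached.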

  12. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and methods to evaluate camera speed also exist. However, speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market; the measurements are performed against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score, illustrated below. The proposed combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
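
    The combination step can be illustrated with a weighted sum of normalised metrics (a hedged sketch only; the paper's actual metric set, normalisation and weights are defined in the referenced work, and the names below are hypothetical):

        def benchmark_score(metrics, weights):
            """metrics: dict name -> value already normalised to [0, 1];
            weights: dict name -> relative importance. Returns one score."""
            total = sum(weights.values())
            return sum(weights[k] * metrics[k] for k in weights) / total

        score = benchmark_score(
            {"sharpness": 0.8, "visual_noise": 0.7, "shot_to_shot": 0.6},
            {"sharpness": 2.0, "visual_noise": 1.0, "shot_to_shot": 1.0},
        )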

  13. Rapid objective measurement of gamma camera resolution using statistical moments.

    Science.gov (United States)

    Hander, T A; Lancaster, J L; Kopp, D T; Lasher, J C; Blumhardt, R; Fox, P T

    1997-02-01

    An easy and rapid method for measuring the intrinsic spatial resolution of a gamma camera was developed. The measurement is based on the first and second statistical moments of regions of interest (ROIs) applied to bar-phantom images. This leads to an estimate of the modulation transfer function (MTF) and the full-width-at-half-maximum (FWHM) of a line spread function (LSF); the idea is sketched below. Bar-phantom images were acquired using four large field-of-view (LFOV) gamma cameras (Scintronix, Picker, Searle, Siemens). The following factors important for routine measurements of gamma camera resolution with this method were tested: ROI placement and shape, phantom orientation, spatial sampling, and procedural consistency. A 0.2% coefficient of variation (CV) between repeat measurements of MTF was observed for a circular ROI. CVs of less than 2% were observed for measured MTF values for bar orientations ranging from -10° to +10° with respect to the x and y axes of the camera acquisition matrix. A 256 × 256 matrix (1.6 mm pixel spacing) was judged sufficient for routine measurements, giving an estimate of the FWHM to within 0.1 mm of manufacturer-specified values (3% difference). Under simulated clinical conditions, the variation in measurements attributable to procedural effects yielded a CV of less than 2% in newer-generation cameras. The moments method for determining MTF correlated well with a peak-valley method, with an average difference of 0.03 across the range of spatial frequencies tested (0.11-0.17 line pairs/mm, corresponding to 4.5-3.0 mm bars). When compared with the NEMA method for measuring intrinsic spatial resolution, the moments method was found to be within 4% of the expected FWHM.
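
    A sketch of the moments idea (the exact normalisation of the published method may differ; the Gaussian LSF model and square-to-sine conversion are standard assumptions, not taken from the record):

        import numpy as np

        def fwhm_from_bar_roi(roi, f):
            """roi: bar-phantom ROI counts; f: bar frequency in line pairs/mm.
            Modulation from the ROI mean and variance (Poisson variance ~ mean
            is subtracted), converted to MTF at f, then FWHM via a Gaussian
            line-spread-function model."""
            mu, var = roi.mean(), roi.var()
            m_sq = np.sqrt(2.0 * max(var - mu, 0.0)) / mu        # square-wave modulation
            mtf = min(max((np.pi / 4.0) * m_sq, 1e-6), 0.999)    # fundamental component
            sigma = np.sqrt(-np.log(mtf) / (2.0 * np.pi**2 * f**2))
            return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma      # FWHM = 2.355 sigma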

  14. Performances evaluation of the coincidence detection on a gamma-camera

    International Nuclear Information System (INIS)

    Dreuille, O. de; Gaillard, J.F.; Brasse, D.; Bendriem, B.; Groiselle, C.; Rocchisani, J.M.; Moretti, J.L.

    2000-01-01

    The performance of the VERTEX gamma camera (ADAC) operating in coincidence mode is investigated using a protocol derived from the NEMA and IEC recommendations. With a field of view determined by two rectangular detectors (50.8 cm × 40 cm) composed of NaI crystal, this camera allows 3-D acquisition with different energy window configurations: photopeak-photopeak only (PP) and photopeak-photopeak + photopeak-Compton (PC). An energy resolution of 11% and a scatter fraction of 27% and 33% for the 3D-PP and 3D-PC modes, respectively, are the main results of our study. The spatial resolution equals 5.9 mm, and the limit of detectability ranges from 16 mm to 13 mm for a contrast of 2.5. As a function of the randoms estimation, the maximum of the noise equivalent count rate varies from 3 kcps to 4.5 kcps for the PP mode and from 3.85 kcps to 6.1 kcps for the PC mode; these maxima are reached at concentrations of 8 kBq/ml and 5 kBq/ml, respectively. These values are compared with the results obtained by other groups for the VERTEX gamma camera and for several dedicated PET systems. (authors)

  15. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    Science.gov (United States)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
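
    The quoted IFOVs are consistent with the ratio of detector pixel pitch to focal length; assuming 7.4 µm pixels for the interline CCD used by these cameras (an assumption, not stated in the record):

        \mathrm{IFOV}_{M\text{-}34} = \frac{p}{f} = \frac{7.4\ \mu\mathrm{m}}{34\ \mathrm{mm}} \approx 218\ \mu\mathrm{rad}, \qquad \mathrm{IFOV}_{M\text{-}100} = \frac{7.4\ \mu\mathrm{m}}{100\ \mathrm{mm}} = 74\ \mu\mathrm{rad}.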

  16. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20°C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and greater than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-alpha wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  17. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge-coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  18. Head-camera video recordings of trauma core competency procedures can evaluate surgical residents' technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine whether residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for vascular exposure and fasciotomy (FAS) performance skills could discriminate training status when evaluators colocated with surgeons were compared with blinded video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower-extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blinded video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency and intraclass correlation coefficients were compared for colocated versus video review of IPS, along with error recognition. Study methodology and bias were judged by the Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, or FAS whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) to 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. The intraclass correlation coefficient was 0.73 to 0.92, depending on the procedure. Correlations between video and colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated training status as well as colocated evaluators did, indicating that head-camera recordings can be used to evaluate trauma core competency. Prognostic study, level II.

  19. Astronomical Orientation Method Based on Lunar Observations Utilizing Super Wide Field of View

    Directory of Open Access Journals (Sweden)

    PU Junyu

    2018-04-01

    Full Text Available In this paper, astronomical orientation is achieved by observing the moon with a camera with a super wide field of view, and the formulae are deduced in detail. An experiment based on real observations verified the stability of the method. In this experiment, after 15 minutes of tracking shots, the internal precision was better than ±7.5" and the external precision reached approximately ±20". This camera-based method for astronomical orientation can replace the traditional mode (aiming by human eye with a theodolite), thus lowering the requirements on the operator's skill to some extent. Furthermore, a camera with a super wide field of view can continuously track the moon without complicated servo control devices. Considering that gravity exists similarly on the moon and that the earth shows phase changes when observed from the moon, once self-leveling technology is developed, this method can be extended to orientation for a lunar rover by imaging the earth.

  20. A hidden view of wildlife conservation: How camera traps aid science, research and management

    Science.gov (United States)

    O'Connell, Allan F.

    2015-01-01

    Florida panthers are among the world’s most endangered — and elusive — animals. For approximately four decades, scientists have been researching this small population of panthers that inhabit the dense forests and swamps of south Florida. Because of their wide habitat range along with an absence of clear visual features, these animals are difficult to detect and identify. In 2013, however, researchers released a study that used camera trap images collected between 2005 and 2007 to generate the first statistically reliable density estimates for the remaining population of this subspecies.

  1. An operative gamma camera for sentinel lymph node procedure in case of breast cancer

    CERN Document Server

    Salvador, S; Mathelin, C; Guyonne, J; Huss, D

    2007-01-01

    Large field-of-view gamma cameras are widely used to perform lymphoscintigraphy in the sentinel lymph node (SLN) procedure in case of breast cancer. However, they are not specified for this application and their size does not enable their use in the operating room to control the excision of all the SLN. We present the results obtained with a prototype of a new mini gamma camera developed especially for operative lymphoscintigraphy of the axillary area in case of breast cancer. This prototype is composed of a 10 mm thick parallel lead collimator, a 2 mm thick GSO:Ce inorganic scintillating crystal from Hitachi, and a Hamamatsu H8500 flat-panel multianode (64 channels) photomultiplier tube (MAPMT) equipped with dedicated electronics. Its actual field of view is 50 × 50 mm². The gamma interaction position in the GSO scintillating plate is obtained by calculating the center of gravity of the fired MAPMT channels. The measurements performed with this prototype demonstrate the usefulness of this mini gamma camera...
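
    The center-of-gravity (Anger-logic) position estimate mentioned above is a signal-weighted mean of the channel coordinates. A minimal sketch follows, assuming an 8×8 MAPMT grid; the pitch and threshold values are illustrative, not the prototype's parameters.

      import numpy as np

      def event_position(charges, pitch_mm=6.0, threshold=0.05):
          """Center-of-gravity estimate of a gamma interaction position.

          charges: (8, 8) array of MAPMT channel signals for one event.
          Returns (x, y) in mm relative to the center of the array.
          """
          q = np.asarray(charges, dtype=float)
          q = np.where(q > threshold * q.max(), q, 0.0)  # suppress noisy channels
          ys, xs = np.indices(q.shape)
          x_mm = (xs - (q.shape[1] - 1) / 2.0) * pitch_mm  # channel centers in mm
          y_mm = (ys - (q.shape[0] - 1) / 2.0) * pitch_mm
          total = q.sum()
          return (x_mm * q).sum() / total, (y_mm * q).sum() / total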

  2. Two-dimensional beam-profile monitor using the Reticon MC510A array camera

    International Nuclear Information System (INIS)

    Gottschalk, B.

    1981-08-01

    A quantitative two-dimensional beam profile may be obtained from a scintillator viewed by a Reticon camera that uses a 32 x 32 array of photodiodes as its sensing element. This note describes CAMAC-oriented data-acquisition electronics that allow one either to transmit the profile to a computer or to use the monitor in stand-alone mode.

  3. 100-ps framing-camera tube

    International Nuclear Information System (INIS)

    Kalibjian, R.

    1978-01-01

    The optoelectronic framing-camera tube described is capable of recording two-dimensional image frames with high spatial resolution in the <100-ps range. Framing is performed by streaking a two-dimensional electron image across narrow slits. The resulting dissected electron line images from the slits are restored into framed images by a restorer deflector operating synchronously with the dissector deflector. The number of framed images on the tube's viewing screen equals the number of dissecting slits in the tube. Performance has been demonstrated in a prototype tube by recording 135-ps-duration framed images of 2.5-mm patterns at the cathode. The limitation on framing speed lies in the external drivers for the deflectors, not in the tube design characteristics. Faster frame speeds in the <100-ps range can be obtained by use of faster deflection drivers.

  4. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained through a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for the implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
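
    At each frame, marker-based tracking of the kind described reduces to a perspective-n-point problem between the marker's known geometry and its detected image corners. A minimal OpenCV sketch is below; the marker size, camera intrinsics and corner ordering are placeholder assumptions, not the authors' MonoSLAM/EKF pipeline.

      import numpy as np
      import cv2

      # Known 3D corner layout of a square AR marker (meters), marker-centered.
      MARKER_SIZE = 0.05
      OBJ_PTS = np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]],
                         dtype=np.float32) * (MARKER_SIZE / 2.0)

      # Placeholder pinhole intrinsics for a 320x240 camera.
      K = np.array([[300.0, 0.0, 160.0],
                    [0.0, 300.0, 120.0],
                    [0.0, 0.0, 1.0]])
      DIST = np.zeros(5)  # assume negligible lens distortion

      def camera_pose_from_marker(corners_px):
          """corners_px: (4, 2) detected corners, ordered like OBJ_PTS.
          Returns (R, t): marker-to-camera rotation and translation."""
          ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corners_px.astype(np.float32),
                                        K, DIST)
          if not ok:
              raise RuntimeError("PnP failed")
          R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
          return R, tvec.reshape(3)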

  5. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video from the surgeon's point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  6. Opportunity's View After Drive on Sol 1806 (Stereo)

    Science.gov (United States)

    2009-01-01

    [figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11816] NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction. The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008). This view is presented as a cylindrical-perspective projection with geometric seam correction.

  7. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other using images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created as well. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between...
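
    Tying the two point clouds together, as described, amounts to estimating a rigid transform from point correspondences. A minimal SVD-based (Kabsch-style) sketch is given below as an illustration; it is not the authors' photogrammetric pipeline, and the correspondences are assumed given.

      import numpy as np

      def rigid_transform(src, dst):
          """Least-squares rigid transform mapping src points onto dst.

          src, dst: (N, 3) arrays of corresponding 3D points.
          Returns (R, t) such that dst ~ src @ R.T + t.
          """
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                    # proper rotation, det = +1
          t = c_dst - R @ c_src
          return R, t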

  8. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  9. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
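
    One naive way to picture combining such streams (purely illustrative, not the authors' enhancement method) is to upsample each high-frame-rate frame and add back the high-frequency detail from the nearest high-resolution keyframe; a real system must additionally handle scene motion between keyframes.

      import numpy as np
      from scipy.ndimage import zoom, gaussian_filter

      def fuse(lowres_frame, hires_key, scale):
          """Naive detail-transfer fusion of a low-res, high-fps frame with
          the nearest high-res keyframe. Assumes aligned grayscale images."""
          base = zoom(lowres_frame.astype(float), scale, order=3)  # upsample
          hi = hires_key.astype(float)
          detail = hi - gaussian_filter(hi, sigma=scale)  # keyframe high frequencies
          h = min(base.shape[0], detail.shape[0])
          w = min(base.shape[1], detail.shape[1])
          return np.clip(base[:h, :w] + detail[:h, :w], 0, 255)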

  10. View invariant gesture recognition using the CSEMSwissRanger SR-2 camera

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.; Fihl, Preben

    2008-01-01

    by a hysteresis bandpass filter. Gestures are represented by concatenating harmonic shape contexts over time. This representation allows for a view invariant matching of the gestures. The system is trained on gestures from one viewpoint and evaluated on gestures from other viewpoints. The results show...

  11. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the bidimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical deduction of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
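
    The replication problem follows from sampling: recording the hologram on only one color plane of the RGB mosaic is equivalent to multiplying the field by a sparse sampling comb, which replicates the spectrum in the Fourier domain. The toy NumPy demo below illustrates the effect; the fringe frequencies and 2× subsampling factor are illustrative assumptions, not the paper's experimental parameters.

      import numpy as np

      N = 256
      x, y = np.meshgrid(np.arange(N), np.arange(N))
      # Toy hologram: a single fringe pattern at an exact FFT frequency.
      hologram = 1.0 + np.cos(2 * np.pi * (16 * x + 8 * y) / N)

      # One Bayer color plane (e.g. red) exists only at every 2nd pixel.
      bayer = np.zeros_like(hologram)
      bayer[::2, ::2] = hologram[::2, ::2]

      def peak_count(img, thresh_frac=0.1):
          spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
          spec[N // 2, N // 2] = 0.0  # ignore the central DC term
          return int((spec > thresh_frac * spec.max()).sum())

      # Subsampling adds shifted spectral copies ("replicas") of both the
      # fringe peaks and the DC term:
      print("peaks, full sampling :", peak_count(hologram))  # 2
      print("peaks, Bayer sampling:", peak_count(bayer))     # 11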

  12. Exploring Virtual Worlds With Head-Mounted Displays

    Science.gov (United States)

    Chung, James C.; Harris, Mark R.; Brooks, Frederick P.; Fuchs, Henry; Kelley, Michael T.; Hughes, John W.; Ouh-Young, Ming; Cheung, Clement; Holloway, Richard L.; Pique, Michael

    1989-09-01

    For nearly a decade the University of North Carolina at Chapel Hill has been conducting research in the use of simple head-mounted displays in "real-world" applications. Such units provide the user with non-holographic true three-dimensional information, since the kinetic depth effect, stereoscopy, and other visual cues combine to immerse the user in a "virtual world" which behaves like the real world in some respects. UNC's head-mounted display was built inexpensively from commercially available off-the-shelf components. Tracking of the user's head position and orientation is performed by a Polhemus Navigation Sciences 3SPACE tracker. The host computer uses the tracking information to generate updated images corresponding to the user's new left-eye and right-eye views. The images are broadcast to two liquid crystal television screens (220x320 pixels) mounted on a horizontal shelf at the user's forehead. The user views these color screens through half-silvered mirrors, enabling the computer-generated image to be superimposed upon the user's real physical environment. The head-mounted display has been incorporated into existing molecular modeling and architectural applications being developed at UNC. In molecular structure studies, chemists are presented with a room-sized molecule with which they can interact in a manner more intuitive than that provided by conventional two-dimensional displays and dial boxes. Walking around and through the large molecule may provide quicker understanding of its structure, and such problems as drug-enzyme docking may be approached with greater insight. In architecture, the head-mounted display enables clients to better appreciate three-dimensional designs, which may be misinterpreted in their conventional two-dimensional form by untrained eyes. The addition of a treadmill to the system provides additional kinesthetic input for the understanding of building size and scale.
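
    Generating the left- and right-eye views from the tracker data comes down to offsetting the tracked head pose by half the interpupillary distance for each eye and rendering from the resulting poses. A minimal sketch follows; the IPD value and matrix conventions are assumptions for illustration, not UNC's actual pipeline.

      import numpy as np

      IPD = 0.064  # interpupillary distance in meters (illustrative value)

      def eye_poses(head_R, head_t):
          """Left/right eye camera poses from a tracked head pose.

          head_R: 3x3 head rotation in world coordinates.
          head_t: 3-vector head position in world coordinates.
          Both eyes share the head's rotation."""
          right_axis = head_R[:, 0]  # head's local +x ("right") direction
          t_left = head_t - right_axis * (IPD / 2.0)
          t_right = head_t + right_axis * (IPD / 2.0)
          return (head_R, t_left), (head_R, t_right)

      def view_matrix(R, t):
          """World-to-eye matrix for a renderer, from an eye pose."""
          V = np.eye(4)
          V[:3, :3] = R.T        # inverse rotation
          V[:3, 3] = -R.T @ t    # inverse translation
          return V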

  13. A practical head tracking system for motion correction in neurological SPECT and PET

    International Nuclear Information System (INIS)

    Fulton, R.R.; Eberl, S.; Meikle, S.; Hutton, B.F.; Braun, M.

    1998-01-01

    Full text: Patient motion during data acquisition can degrade the quality of SPECT and PET images. Techniques for motion correction in neurological studies in both modalities, based on continuous monitoring of head position, have been proposed. However, difficulties in developing suitable head tracking systems have so far impeded clinical implementations. We have developed a head tracking system based on the mechanical ADL-1 tracker (Shooting Star Technology, Rosedale, Canada) on a Trionix triple-head SPECT camera. A software driver running on a SUN Sparc host computer communicates with the tracker over a serial line, providing up to 300 updates per second with angular and positional resolutions of 0.05 degrees and 0.2 mm respectively. The SUN Sparc workstation which acquires the SPECT study also communicates with the tracker, eliminating synchronisation problems. For motion correction, the motion parameters provided by the tracker within its own coordinate system must be converted to the camera's coordinate system. The conversion requires knowledge of the rotational relationship between the two coordinate systems and the displacement of their origins, both of which are determined from a calibration procedure. The tracker has been tested under clinical SPECT imaging conditions with a 3D Hoffman brain phantom. Multiple SPECT acquisitions were performed. After each acquisition the phantom was moved to a new position and orientation. Motion parameters reported by the tracker for each applied movement were compared with those obtained by applying an automated image registration program to the sequential reconstructed studies. Maximum differences were < 0.5 degrees and < 2 mm, within the expected errors of the registration procedure. We conclude that this tracking system will be suitable for clinical evaluation of motion correction in SPECT and PET
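
    Converting motion reported in the tracker's coordinate system into the camera's, as described, is a change of basis using the calibrated rotation and origin displacement. A small sketch with 4×4 homogeneous transforms is below; the names and the direction of the calibration transform are illustrative assumptions.

      import numpy as np

      def make_T(R, t):
          """Pack rotation R (3x3) and translation t (3,) into a 4x4 transform."""
          T = np.eye(4)
          T[:3, :3], T[:3, 3] = R, t
          return T

      def motion_in_camera_frame(T_cal, T_motion_tracker):
          """Re-express a rigid head motion in the camera's coordinate system.

          T_cal: tracker-to-camera transform from the calibration procedure
          (rotational relationship plus displacement of the origins).
          T_motion_tracker: head motion measured in tracker coordinates.
          A motion M expressed in another frame is the conjugation C M C^-1."""
          return T_cal @ T_motion_tracker @ np.linalg.inv(T_cal)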

  14. Virtual View Image over Wireless Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Gamantyo Hendrantoro

    2011-12-01

    Full Text Available In general, visual sensors are applied to build virtual view images. As the number of visual sensors increases, the quantity and quality of the information improve. However, virtual view image generation is a challenging task in a Wireless Visual Sensor Network environment, due to energy restrictions, computational complexity, and bandwidth limitations. Hence this paper presents a new method of generating virtual view images from selected cameras on a Wireless Visual Sensor Network. The aim of the paper is to meet bandwidth and energy limitations without reducing information quality. The experimental results showed that this method could minimize the number of transmitted images while retaining sufficient information.

  15. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    Science.gov (United States)

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, finding objects, and so on. Therefore, we developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind user through an earphone. The user is able to recognize the type, motion state, and location of objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
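
    The paper's auditory coding algorithms are customizable and not detailed here, but the basic idea of mapping an object's direction and range onto a stereo signal can be pictured with constant-power panning plus distance attenuation, as in this hypothetical sketch.

      import numpy as np

      def stereo_gains(azimuth_rad, distance_m, ref_m=1.0):
          """Map object direction/range to (left, right) channel gains.

          azimuth_rad: -pi/2 (far left) .. +pi/2 (far right).
          Constant-power panning keeps loudness stable across directions;
          an inverse-distance factor cues range. Illustrative only."""
          pan = (azimuth_rad + np.pi / 2) / np.pi  # 0 (left) .. 1 (right)
          attenuation = ref_m / max(distance_m, ref_m)
          left = np.cos(pan * np.pi / 2) * attenuation
          right = np.sin(pan * np.pi / 2) * attenuation
          return left, right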

  16. Single view reflectance capture using multiplexed scattering and time-of-flight imaging

    OpenAIRE

    Zhao, Shuang; Velten, Andreas; Raskar, Ramesh; Bala, Kavita; Naik, Nikhil Deepak

    2011-01-01

    This paper introduces the concept of time-of-flight reflectance estimation, and demonstrates a new technique that allows a camera to rapidly acquire reflectance properties of objects from a single view-point, over relatively long distances and without encircling equipment. We measure material properties by indirectly illuminating an object by a laser source, and observing its reflected light indirectly using a time-of-flight camera. The configuration collectively acquires dense angular, but l...

  17. Evaluation of intrafraction patient movement for CNS and head and neck IMRT

    International Nuclear Information System (INIS)

    Kim, Siyong; Akpati, Hilary C.; Kielbasa, Jerrold E.; Li, Jonathan G.; Liu, Chihray; Amdur, Robert J.; Palta, Jatinder R.

    2004-01-01

    Intrafraction patient motion is much more likely in intensity-modulated radiation therapy (IMRT) than in conventional radiotherapy, primarily because of the longer beam delivery times in IMRT treatment. In this study, we evaluated the uncertainty of intrafraction patient displacement in CNS and head and neck IMRT patients. Immobilization is performed in three steps: (1) the patient is immobilized with a thermoplastic facemask, (2) patient displacement is monitored using a commercial stereotactic infrared (IR) camera (ExacTrac, BrainLab) during treatment, and (3) repositioning is carried out as needed. The displacement data were recorded during beam-on time for the entire treatment duration for 5 patients using the camera system. We used the concept of cumulative time versus patient position uncertainty, referred to as an uncertainty time histogram (UTH), to analyze the data. UTH is a plot of the accumulated time during which a patient stays within the corresponding movement uncertainty. The University of Florida immobilization procedure showed effective immobilization capability for CNS and head and neck IMRT patients, keeping patient displacement below 1.5 mm for 95% of treatment time (1.43 mm for one patient, 1.02 mm for another, and less than 1.0 mm for the remaining three patients). The maximum displacement was 2.0 mm.
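
    An uncertainty time histogram as defined above can be computed directly from sampled displacement magnitudes: for each uncertainty level, accumulate the time spent at or below it. A short sketch follows; the sampling interval and level grid are illustrative assumptions.

      import numpy as np

      def uncertainty_time_histogram(displacements_mm, dt_s, levels_mm):
          """Fraction of beam-on time the patient stays within each level.

          displacements_mm: per-sample displacement magnitudes during beam-on.
          dt_s: sampling interval of the tracking system in seconds.
          levels_mm: increasing movement-uncertainty levels to evaluate."""
          d = np.asarray(displacements_mm, dtype=float)
          total_time = d.size * dt_s
          within = np.array([(d <= u).sum() * dt_s for u in levels_mm])
          return within / total_time

      # Example: check a "within 1.5 mm for 95% of treatment time" metric.
      d = np.abs(np.random.normal(0.0, 0.6, size=6000))  # fake 0.1 s samples
      uth = uncertainty_time_histogram(d, dt_s=0.1, levels_mm=[0.5, 1.0, 1.5, 2.0])
      print(dict(zip([0.5, 1.0, 1.5, 2.0], np.round(uth, 3))))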

  18. Improvement of brain single photon emission tomography (SPET) using transmission data acquisition in a four-head SPET scanner

    International Nuclear Information System (INIS)

    Murase, Kenya; Tanada, Shuji; Inoue, Takeshi; Sugawara, Yoshifumi; Hamamoto, Ken

    1993-01-01

    Attenuation coefficient maps (μ-maps) are a useful way to compensate for non-uniform attenuation when performing single photon emission tomography (SPET). A new method was developed to record single photon transmission data, and a μ-map for the brain was produced using a four-head SPET scanner. Transmission data were acquired by the gamma camera opposite a flood radioactive source attached to one of the four gamma cameras in the four-head SPET scanner. Attenuation correction was performed using the iterative expectation maximization algorithm and the μ-map. Phantom studies demonstrated that this method could reconstruct the distribution of radioactivity more accurately than conventional methods, even for a severely non-uniform μ-map, and could improve the quality of SPET images. Clinical application to technetium-99m hexamethyl-propylene amine oxime (HMPAO) brain SPET also demonstrated the usefulness of this method. Thus, this method appears promising for improving the image quality and quantitative accuracy of brain SPET. (orig.)
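
    The iterative expectation-maximization correction mentioned above follows the standard MLEM multiplicative update, with attenuation factors from the μ-map folded into the system matrix. A toy dense-matrix sketch is shown below; the tiny system size and random data are purely illustrative, not the authors' implementation.

      import numpy as np

      def mlem(A, y, n_iter=50):
          """Standard MLEM reconstruction for y ~ Poisson(A @ x).

          A: (n_bins, n_voxels) system matrix; attenuation factors derived
          from the mu-map are assumed folded into its elements.
          y: measured projection counts."""
          x = np.ones(A.shape[1])   # uniform non-negative start
          sens = A.sum(axis=0)      # sensitivity image, A^T 1
          for _ in range(n_iter):
              ratio = y / np.maximum(A @ x, 1e-12)  # avoid divide-by-zero
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      # Tiny self-check on synthetic data:
      rng = np.random.default_rng(0)
      A = rng.uniform(0.0, 1.0, size=(40, 16))
      x_true = rng.uniform(0.5, 2.0, size=16)
      y = rng.poisson(A @ x_true)
      print(np.round(mlem(A, y, n_iter=200) / x_true, 2))  # ratios near 1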

  19. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  20. On-Line High Dose-Rate Gamma Ray Irradiation Test of the CCD/CMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In this paper, test results of gamma ray irradiation of CCD/CMOS cameras are described. From the CAMS (containment atmospheric monitoring system) data of the Fukushima Dai-ichi nuclear power plant station, we found that the gamma ray dose rate when the hydrogen explosions occurred in nuclear reactors 1-3 was about 160 Gy/h. If it is assumed that an emergency response robot for the management of a severe accident has been sent into the reactor area to grasp the situation inside the reactor building and to take precautionary measures against the release of radioactive materials, the CCD/CMOS cameras loaded on the robot serve as the eyes of the emergency response robot. In the case of the Japanese Quince robot system, which was sent to investigate the situation on the refueling floor of the unit 2 reactor building, 7 CCD/CMOS cameras were used. 2 CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation. And 2 CCD (or CMOS) cameras are used for monitoring the status of front-end and back-end motion mechanics such as flippers and crawlers. A CCD camera with wide field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel. And another 2 CCD cameras are assigned to reading the indication values of the radiation dosimeter and the instruments. Under the preceding assumptions, a major problem which arises when dealing with CCD/CMOS cameras in severe accident situations of a nuclear power plant is the presence of high dose-rate gamma irradiation fields. In the case of the DBA (design basis accident) situations of a nuclear power plant, in order to use a CCD/CMOS camera as an ad-hoc monitoring unit in the vicinity of high-radioactivity structures and components of the nuclear reactor area, a robust survivability of the camera in such intense gamma-radiation fields should therefore be verified. The CCD/CMOS cameras of various types were gamma irradiated at a...

  1. Evaluation of misplaced event count rate using a scintillation camera

    International Nuclear Information System (INIS)

    Yanagimoto, Shin-ichi; Tomomitsu, Tatsushi; Muranaka, Akira

    1985-01-01

    Misplaced event count rates were evaluated using an acrylic scatter body of various thicknesses and a gamma camera. The count rate in the region of interest (ROI) within the camera view field, which was thought to represent part of the misplaced event count rate, increased as the thickness of the scatter body was increased to 5 cm, followed by a steep decline in the count rate. On the other hand, the ratio of the count rate in the ROI to the total count rate continuously increased as the thickness of the scatter body was increased. As the thickness of the scatter body was increased, the count rates increased, and the increments were greater in the lower energy region of the photopeak than in the higher energy region. In energy ranges other than the photopeak, the influence of the scatter body on the count rate in the ROI was greatest at 76 keV, the lowest energy we examined. (author)

  2. 200 ps FWHM and 100 MHz repetition rate ultrafast gated camera for optical medical functional imaging

    Science.gov (United States)

    Uhring, Wilfried; Poulet, Patrick; Hanselmann, Walter; Glazenborg, René; Zint, Virginie; Nouizi, Farouk; Dubois, Benoit; Hirschi, Werner

    2012-04-01

    The paper describes the realization of a complete optical imaging device for clinical applications such as brain functional imaging by time-resolved, spectroscopic diffuse optical tomography. The entire instrument is assembled in a single setup that includes a light source, an ultrafast time-gated intensified camera and all the electronic control units. The light source is composed of four near-infrared laser diodes driven by a nanosecond electrical pulse generator working in a sequential mode at a repetition rate of 100 MHz. The resulting light pulses, at four wavelengths, are less than 80 ps FWHM. They are injected into a four-furcated optical fiber ended with a frontal light distributor to obtain a uniform illumination spot directed towards the head of the patient. Photons back-scattered by the subject are detected by the intensified CCD camera; they are resolved according to their time of flight inside the head. The very core of the intensified camera system is the image intensifier tube and its associated electrical pulse generator. The ultrafast generator produces 50 V pulses at a repetition rate of 100 MHz and with a width corresponding to the requested 200 ps gate. The photocathode and the micro-channel plate of the intensifier have been specially designed to enhance the electromagnetic wave propagation and reduce the power loss and heat that are prejudicial to the quality of the image. The whole instrumentation system is controlled by an FPGA-based module. The timing of the light pulses and the photocathode gating is precisely adjustable with a step of 9 ps. All the acquisition parameters are configurable via software through a USB plug, and the image data are transferred to a PC via an Ethernet link. The compactness of the device makes it well suited for bedside clinical applications.

  3. TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope

    Science.gov (United States)

    Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.

    Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle helium cryo-cooled imaging camera equipped with a Raytheon 512×512 pixel InSb Aladdin III Quadrant focal plane array (FPA) sensitive to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on the calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera has been released to the Indian and Belgian astronomical community for science observations since 2017 May. The camera offers a field of view (FoV) of ~86.5″ × 86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond, with the best seeing of ~0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands comparable to other 4 m class telescopes available world-wide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λcen ~ 3.59 μm). Hence, the camera could be a good complementary instrument to observe bright nbL-band sources that are saturated in the Spitzer Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.
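
    As a quick consistency check of the quoted numbers, the pixel scale multiplied by the array size reproduces the stated field of view:

      pixels = 512
      pixel_scale = 0.169            # arcsec per pixel
      print(pixels * pixel_scale)    # ~86.5 arcsec, matching the quoted FoV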

  4. Intrajejunal Infusion of Levodopa-Carbidopa Gel Can Continuously Reduce the Severity of Dropped Head in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Hiroshi Kataoka

    2017-10-01

    Full Text Available Dropped head can occur in patients with Parkinson’s disease and makes their quality of life unpleasant because they cannot obtain a frontal view. Pathophysiologic involvement of dopamine agonists or of central or peripheral mechanisms has been proposed. Levodopa therapy with withdrawal of dopamine agonists was sometimes effective, but in most patients the effect did not persist for the entire day. We describe a patient with Parkinson’s disease whose dropped head responded throughout the day to continuous intrajejunal infusion of levodopa-carbidopa intestinal gel (LCIG). During off-periods before treatment with LCIG, severe akinesia and freezing of gait were evident, and she could not continuously obtain a frontal view because of the dropped head. About 20 min after the intrajejunal infusion of LCIG, these features remarkably improved, and she could obtain a frontal view. The angle of the dropped head improved from 39.39° to 14.04°. This case suggests that infusion of LCIG can reduce the severity of dropped head for a longer period than oral levodopa.

  5. Improving head and body pose estimation through semi-supervised manifold alignment

    KAUST Repository

    Heili, Alexandre; Varadarajan, Jagannadan; Ghanem, Bernard; Ahuja, Narendra; Odobez, Jean-Marc

    2014-01-01

    structure of the features in the train and target data and the need to align them were not explored despite the fact that the pose features between two datasets may vary according to the scene, e.g. due to different camera point of view or perspective

  6. Design and Expected Performance of GISMO-2, a Two Color Millimeter Camera for the IRAM 30 m Telescope

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Dwek, Eli; Hilton, Gene; Fixsen, Dale J.; Irwin, Kent; Jhabvala, Christine; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; hide

    2014-01-01

    We present the main design features of the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based backshort-under-grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  7. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored, shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera, and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possibly exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  8. Gamma camera based FDG PET in oncology

    International Nuclear Information System (INIS)

    Park, C. H.

    2002-01-01

    Positron Emission Tomography (PET) was introduced as a research tool in the 1970s, and it took about 20 years before PET became a useful clinical imaging modality. In the USA, insurance coverage for PET procedures in the 1990s was, I believe, the turning point for this progress. Initially PET was used in neurology, but recently more than 80% of PET procedures are oncological applications. I firmly believe that in the 21st century one cannot manage cancer patients properly without PET, and that PET is a very important medical imaging modality in basic and clinical sciences. PET is grouped into 2 categories: conventional (c) and gamma camera based (CB) PET. CB PET is more readily available to many medical centers at low cost to patients, utilizing dual-head gamma cameras and commercially available FDG. In fact there are more CB PET systems in operation than cPET in the USA. CB PET is inferior to cPET in its performance, but clinical studies in oncology are feasible without expensive infrastructure such as staffing, rooms and equipment. At Ajou University Hospital, CB PET was installed in late 1997 for the first time in Korea as well as in Asia, and the system has been used successfully and effectively in oncological applications. Ours was the fourth PET operation in Korea, and I believe this may have been instrumental in getting other institutions interested in clinical PET. The following is a brief description of our clinical experience of FDG CB PET in oncology.

  9. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system, and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested for the safety interlock which shuts down the camera and pan-and-tilt inside the tank vapor space during loss of purge pressure, and to verify that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  10. Neutral-beam performance analysis using a CCD camera

    International Nuclear Information System (INIS)

    Hill, D.N.; Allen, S.L.; Pincosy, P.A.

    1986-01-01

    We have developed an optical diagnostic system suitable for characterizing the performance of energetic neutral beams. An absolutely calibrated CCD video camera is used to view the neutral beam as it passes through a relatively high pressure (10⁻⁵ Torr) region outside the neutralizer: collisional excitation of the fast deuterium atoms produces Hα emission (λ = 6561 Å) that is proportional to the local atomic current density, independent of the species mix of accelerated ions over the energy range 5 to 20 keV. Digital processing of the video signal provides profile and aiming information for beam optimization. 6 refs., 3 figs
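
    Because the Hα brightness is proportional to the local current density, profile and aiming information reduce to moments of the calibrated image: row and column sums give the profiles, and the first moment gives the beam centroid for aiming. A minimal sketch follows; the function and array names, and the assumption of a uniformly calibrated image, are illustrative.

      import numpy as np

      def beam_profile_stats(image):
          """Profiles and centroid from a calibrated H-alpha beam image.

          image: 2D array proportional to local beam current density.
          Returns (x_profile, y_profile, (x_c, y_c)) in pixel units."""
          img = np.asarray(image, dtype=float)
          x_profile = img.sum(axis=0)   # integrate over rows
          y_profile = img.sum(axis=1)   # integrate over columns
          total = img.sum()
          x_c = (np.arange(img.shape[1]) * x_profile).sum() / total
          y_c = (np.arange(img.shape[0]) * y_profile).sum() / total
          return x_profile, y_profile, (x_c, y_c)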

  11. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high-precision faint-light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). [ESO PR Photo 22a/09: the CCD220 detector; ESO PR Photo 22b/09: the OCam camera; ESO PR Video 22a/09: OCam images] "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component of the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these...

  12. Concert Viewing Headphones

    Directory of Open Access Journals (Sweden)

    Kazuya Atsuta

    2011-01-01

    Full Text Available An audiovisual interface equipped with a projector, an inclination sensor, and a distance sensor for zoom control has been developed that enables a user to selectively view and listen to specific performers in a video-taped group performance. Dubbed Concert Viewing Headphones, it has both image and sound processing functions. The image processing extracts the portion of the image indicated by the user and projects it free of distortion on the front and side walls. The sound processing creates imaginary microphones for those performers without one so that the user can hear the sound from any performer. Testing using images and sounds captured with a fisheye-lens camera and 37 lavalier microphones showed that sound localization was fastest when an inverse square function was used for the sound mixing and that the zoom function was useful for locating the desired sound performance.
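
    The finding that an inverse square function gave the fastest sound localization can be pictured as distance-weighted gains for the imaginary microphones; the sketch below is a hypothetical illustration, not the authors' mixer.

      import numpy as np

      def mix_gains(distances_m, eps=1e-6):
          """Inverse-square mixing weights for performers at the given
          distances from the user's viewing direction, normalized to 1."""
          w = 1.0 / (np.asarray(distances_m, dtype=float) ** 2 + eps)
          return w / w.sum()

      # A performer 2 m away dominates one 6 m away by a factor of 9:
      print(mix_gains([2.0, 6.0]))  # ~[0.9, 0.1]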

  13. Visibility of children behind 2010-2013 model year passenger vehicles using glances, mirrors, and backup cameras and parking sensors.

    Science.gov (United States)

    Kidd, David G; Brethwaite, Andrew

    2014-05-01

    This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 2010-2013 model year passenger vehicles with a backup camera or a backup camera plus parking sensor system. The average blind zone for a 12-15-month-old was twice as large as it was for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been under way at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with t...