WorldWideScience

Sample records for camera path reconstruction

  1. Space and camera path reconstruction for omni-directional vision

    CERN Document Server

    Knill, Oliver

    2007-01-01

    In this paper, we address the inverse problem of reconstructing a scene as well as the camera motion from the image sequence taken by an omni-directional camera. Our structure from motion results give sharp conditions under which the reconstruction is unique. For example, if there are three points in general position and three omni-directional cameras in general position, a unique reconstruction is possible up to a similarity. We then look at the reconstruction problem with m cameras and n points, where n and m can be large and the over-determined system is solved by least squares methods. The reconstruction is robust and generalizes to the case of a dynamic environment where landmarks can move during the movie capture. Possible applications of the result are computer-assisted scene reconstruction, 3D scanning, autonomous robot navigation, medical tomography, and city reconstruction.
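
    The least-squares step for the over-determined m-camera, n-point system can be illustrated by intersecting bearing rays from several omnidirectional cameras. A minimal sketch (the function name and scenario are mine, not the paper's):

```python
import numpy as np

def triangulate_from_bearings(centers, dirs):
    """Least-squares intersection of bearing rays: camera i at center c_i
    observes the point along unit direction d_i. Minimizes the sum of
    squared perpendicular distances to the rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector onto plane normal to d
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Three cameras in general position observing one landmark.
point = np.array([1.0, 2.0, 3.0])
centers = [np.array([0.0, 0.0, 0.0]),
           np.array([5.0, 0.0, 1.0]),
           np.array([0.0, 6.0, -1.0])]
dirs = [(point - c) / np.linalg.norm(point - c) for c in centers]
est = triangulate_from_bearings(centers, dirs)
```

    With noise-free bearings the normal equations are solved exactly; with noisy bearings the same solve returns the least-squares point, which is why the approach scales to large m and n.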

  2. Robust Video Stabilization Using Particle Keypoint Update and l₁-Optimized Camera Path.

    Science.gov (United States)

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-02-10

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems.
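
    Step (ii) can be illustrated in miniature. The paper minimizes temporal TV with an optimization; the sketch below uses simple moving-average smoothing as a stand-in (all names are mine), which already reduces the TV of a shaky path:

```python
import numpy as np

def smooth_path(signal, radius=5):
    """Moving-average smoothing of a 1D camera-path signal (e.g. the x
    translation per frame). A stand-in for the paper's TV-minimizing
    optimization: any low-pass smoothing reduces temporal total variation."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(signal, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def total_variation(signal):
    """Temporal TV: sum of absolute frame-to-frame changes."""
    return np.abs(np.diff(signal)).sum()

rng = np.random.default_rng(0)
shaky = np.cumsum(rng.normal(0, 1, 200))   # jittery camera trajectory
stable = smooth_path(shaky)
```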

  3. Nonholonomic catheter path reconstruction using electromagnetic tracking

    Science.gov (United States)

    Lugez, Elodie; Sadjadi, Hossein; Akl, Selim G.; Fichtinger, Gabor

    2015-03-01

    Catheter path reconstruction is a necessary step in many clinical procedures, such as cardiovascular interventions and high-dose-rate brachytherapy. To overcome limitations of standard imaging modalities, electromagnetic tracking has been employed to reconstruct catheter paths. However, tracking errors pose a challenge to accurate path reconstruction. We address this challenge by means of a filtering technique incorporating the electromagnetic measurements with the nonholonomic motion constraints of the sensor inside a catheter. The nonholonomic motion model of the sensor within the catheter and the electromagnetic measurement data were integrated using an extended Kalman filter. The performance of our proposed approach was experimentally evaluated using Ascension's 3D Guidance trakStar electromagnetic tracker. Sensor measurements were recorded during insertions of an electromagnetic sensor (model 55) along ten predefined ground-truth paths. Our method was implemented in MATLAB and applied to the measurement data. Our reconstruction results were compared to raw measurements as well as filtered measurements provided by the manufacturer. The mean of the root-mean-square (RMS) errors along the ten paths was 3.7 mm for the raw measurements, and 3.3 mm with the manufacturer's filters. Our approach effectively reduced the mean RMS error to 2.7 mm. Compared to other filtering methods, our approach successfully improved the path reconstruction accuracy by exploiting the sensor's nonholonomic motion constraints in its formulation. Our approach seems promising for a variety of clinical procedures involving reconstruction of a catheter path.
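
    The filter structure can be sketched with a generic EKF whose prediction step enforces a nonholonomic unicycle model (the sensor can only advance along its heading). This is a toy 2D version under my own noise settings, not the paper's 3D formulation:

```python
import numpy as np

def ekf_path(meas, v, dt, sigma_r, sigma_q):
    """Extended Kalman filter with a nonholonomic unicycle model: the state
    [x, y, theta] can only advance along its current heading, which is the
    constraint the paper exploits. Measurements are noisy (x, y) positions."""
    x = np.array([meas[0, 0], meas[0, 1], 0.0])
    P = np.eye(3)
    R = sigma_r**2 * np.eye(2)
    Q = sigma_q**2 * np.eye(3)
    H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    out = [x[:2].copy()]
    for z in meas[1:]:
        # predict: advance v*dt along the current heading
        x = x + np.array([v * dt * np.cos(x[2]), v * dt * np.sin(x[2]), 0.0])
        F = np.array([[1.0, 0.0, -v * dt * np.sin(x[2])],
                      [0.0, 1.0,  v * dt * np.cos(x[2])],
                      [0.0, 0.0,  1.0]])
        P = F @ P @ F.T + Q
        # update with the position measurement
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(3) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

rng = np.random.default_rng(1)
t = np.arange(200) * 0.1
truth = np.c_[t, np.sin(0.5 * t)]              # gently curving ground truth
noisy = truth + rng.normal(0, 0.3, truth.shape)
filtered = ekf_path(noisy, v=1.0, dt=0.1, sigma_r=0.3, sigma_q=0.05)
```

    As in the paper's evaluation, the sanity check is that the filtered path has a lower RMS error against the ground truth than the raw measurements.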

  4. 3D Surface Reconstruction and Automatic Camera Calibration

    Science.gov (United States)

    Jalobeanu, Andre

    2004-01-01

    Illustrations in this view-graph presentation are presented on a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  5. Reconstructing spectral reflectance from digital camera through samples selection

    Science.gov (United States)

    Cao, Bin; Liao, Ningfang; Yang, Wenming; Chen, Haobo

    2016-10-01

    Spectral reflectance provides the most fundamental information about objects and is recognized as their "fingerprint," since reflectance is independent of illumination and viewing conditions. However, reconstructing high-dimensional spectral reflectance from relatively low-dimensional camera outputs is an ill-posed problem, and most methods require the camera's spectral responsivity. We propose a method to reconstruct spectral reflectance from digital camera outputs without prior knowledge of the camera's spectral responsivity. This method averages the reflectances of subsets selected from the main training samples by prescribing a limit on the tolerable color difference between the training samples and the camera outputs. Different tolerable color differences of training samples were investigated with Munsell chips under a D65 light source. Experimental results show that the proposed method outperforms the classic PI method in terms of multiple evaluation criteria between the actual and the reconstructed reflectances. Moreover, the reconstructed spectral reflectances lie between 0 and 1, which gives them actual physical meaning, an advantage over traditional methods.
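
    The sample-selection idea can be sketched as follows. Euclidean RGB distance stands in for the paper's color-difference formula, and the toy data and names are mine:

```python
import numpy as np

def reconstruct_reflectance(query_rgb, train_rgb, train_refl, tol=10.0):
    """Average the reflectances of training samples whose camera output is
    within a color-difference tolerance of the query. No camera spectral
    responsivity is needed; only (camera output, reflectance) pairs."""
    d = np.linalg.norm(train_rgb - query_rgb, axis=1)
    sel = d <= tol
    if not sel.any():                # fall back to the nearest sample
        sel = d == d.min()
    return train_refl[sel].mean(axis=0)

rng = np.random.default_rng(2)
train_refl = rng.uniform(0, 1, (50, 31))             # 31-band reflectances
train_rgb = train_refl @ rng.uniform(0, 1, (31, 3))  # toy camera responses
query = train_rgb[0] + rng.normal(0, 1, 3)           # noisy camera output
refl = reconstruct_reflectance(query, train_rgb, train_refl, tol=15.0)
```

    Because the output is an average of measured reflectances, it automatically stays in [0, 1], which is the physical-validity property the abstract highlights.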

  6. Registration of Sub-Sequence and Multi-Camera Reconstructions for Camera Motion Estimation

    Directory of Open Access Journals (Sweden)

    Michael Wand

    2010-08-01

    This paper presents different application scenarios for which the registration of sub-sequence reconstructions or multi-camera reconstructions is essential for successful camera motion estimation and 3D reconstruction from video. The registration is achieved by merging unconnected feature point tracks between the reconstructions. One application is drift removal for sequential camera motion estimation of long sequences. The state-of-the-art in drift removal is to apply a RANSAC approach to find unconnected feature point tracks. In this paper an alternative spectral algorithm for pairwise matching of unconnected feature point tracks is used. It is then shown that the algorithms can be combined and applied to novel scenarios where independent camera motion estimations must be registered into a common global coordinate system. In the first scenario multiple moving cameras, which capture the same scene simultaneously, are registered. A second new scenario occurs in situations where the tracking of feature points during sequential camera motion estimation fails completely, e.g., due to large occluding objects in the foreground, and the unconnected tracks of the independent reconstructions must be merged. In the third scenario image sequences of the same scene, which are captured under different illuminations, are registered. Several experiments with challenging real video sequences demonstrate that the presented techniques work in practice.

  7. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    Science.gov (United States)

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L₁ camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.

  8. Iterative reconstruction of detector response of an Anger gamma camera

    Science.gov (United States)

    Morozov, A.; Solovov, V.; Alves, F.; Domingos, V.; Martins, R.; Neves, F.; Chepel, V.

    2015-05-01

    Statistical event reconstruction techniques can give better results for gamma cameras than the traditional centroid method. However, implementation of such techniques requires detailed knowledge of the photomultiplier tube light-response functions. Here we describe an iterative method which allows one to obtain the response functions from flood irradiation data without imposing strict requirements on the spatial uniformity of the event distribution. A successful application of the method for medical gamma cameras is demonstrated using both simulated and experimental data. An implementation of the iterative reconstruction technique capable of operating in real time is presented. We show that this technique can also be used for monitoring photomultiplier gain variations.

  9. Robust 3D reconstruction with an RGB-D camera.

    Science.gov (United States)

    Wang, Kangkan; Zhang, Guofeng; Bao, Hujun

    2014-11-01

    We present a novel 3D reconstruction approach using a low-cost RGB-D camera such as the Microsoft Kinect. Compared with previous methods, our scanning system can work well in challenging cases with large repeated textures and significant missing depth. For robust registration, we propose to utilize both visual and geometric features and combine them with structure-from-motion (SFM) techniques to enhance the robustness of feature matching and camera pose estimation. In addition, a novel prior-based multi-candidate RANSAC is introduced to efficiently estimate the model parameters and significantly speed up camera pose estimation under multiple correspondence candidates. Even when severe depth data are missing, our method can still successfully register all frames together. Loop closures can also be robustly detected and handled to eliminate the drift problem. The missing geometry can be completed by combining multiview stereo and mesh deformation techniques. A variety of challenging examples demonstrate the effectiveness of the proposed approach.
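
    The registration core, robust estimation of a rigid transform between frames, can be sketched with a generic RANSAC loop around the Kabsch algorithm. This is a plain stand-in for the paper's prior-based multi-candidate variant; all names and the synthetic data are mine:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R @ p + t - q|| over 3D
    correspondences (Kabsch/Procrustes via SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # D guards against reflections
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, thresh=0.05, seed=0):
    """Minimal RANSAC over 3-point samples, then a refit on all inliers."""
    rng = np.random.default_rng(seed)
    best, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_count:
            best_count, best = inliers.sum(), inliers
    return kabsch(P[best], Q[best])

rng = np.random.default_rng(3)
P = rng.uniform(-1, 1, (100, 3))
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
Q[:20] += rng.uniform(1, 2, (20, 3))    # 20% gross outliers
R_est, t_est = ransac_rigid(P, Q)
```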

  10. An experimental study of reconstruction accuracy using a 12-Camera Tomo-PIV system

    NARCIS (Netherlands)

    Lynch, K.; Scarano, F.

    2013-01-01

    A tomographic PIV system composed of a large number of cameras is used to experimentally investigate the relation between image particle density, the number of cameras, and the reconstruction quality. The large number of cameras allows the determination of an asymptotic behavior for the object reconstruction ove

  11. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yunsu Bok

    2014-11-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  12. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    Science.gov (United States)

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  13. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building the cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e., launching, dragging, water injection, and sinking. To increase construction safety, a photogrammetric technique is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct 4D animations. In this study, several Australis© coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The camera orientations are computed by space resection, in which the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e., performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, while relative orientation computation offers flexibility for dynamic motion analysis and is easier and more efficient.

  14. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    Science.gov (United States)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building the cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e., launching, dragging, water injection, and sinking. To increase construction safety, a photogrammetric technique is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The camera orientations are computed by space resection, in which the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e., performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, while relative orientation computation offers flexibility for dynamic motion analysis and is easier and more efficient.

  15. Filtered backprojection proton CT reconstruction along most likely paths

    Energy Technology Data Exchange (ETDEWEB)

    Rit, Simon; Dedes, George; Freud, Nicolas; Sarrut, David; Letang, Jean Michel [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Universite Lyon 1, Centre Leon Berard, 69008 Lyon (France)

    2013-03-15

    Purpose: Proton CT (pCT) has the potential to accurately measure the electron density map of tissues at low doses, but the spatial resolution is prohibitive if the curved paths of protons in matter are not accounted for. The authors propose to account for an estimate of the most likely path of protons in a filtered backprojection (FBP) reconstruction algorithm. Methods: The energy loss of protons is first binned in several proton radiographs at different distances to the proton source to exploit the depth-dependency of the estimate of the most likely path. This process is named the distance-driven binning. A voxel-specific backprojection is then used to select the adequate radiograph in the distance-driven binning in order to propagate in the pCT image the best achievable spatial resolution in proton radiographs. The improvement in spatial resolution is demonstrated using Monte Carlo simulations of resolution phantoms. Results: The spatial resolution in the distance-driven binning depended on the distance of the objects from the source and was optimal in the binned radiograph corresponding to that distance. The spatial resolution in the reconstructed pCT images decreased with the depth in the scanned object but it was always better than previous FBP algorithms assuming straight line paths. In a water cylinder with 20 cm diameter, the observed range of spatial resolutions was 0.7-1.6 mm compared to 1.0-2.4 mm at best with a straight line path assumption. The improvement was strongly enhanced in shorter 200° scans. Conclusions: Improved spatial resolution was obtained in pCT images with filtered backprojection reconstruction using most likely path estimates of protons. The improvement in spatial resolution combined with the practicality of FBP algorithms compared to iterative reconstruction algorithms makes this new algorithm a candidate of choice for clinical pCT.
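
    For contrast with the paper's most-likely-path variant, the straight-line parallel-beam FBP baseline it improves on can be sketched in a few lines; the disk phantom and all names are mine:

```python
import numpy as np

def fbp(sinogram, angles, s):
    """Parallel-beam filtered backprojection assuming straight-line paths:
    ramp-filter each projection in the Fourier domain, then smear it back
    across the image plane at its acquisition angle."""
    ramp = np.abs(np.fft.fftfreq(len(s), d=s[1] - s[0]))
    x, y = np.meshgrid(s, s)
    recon = np.zeros((len(s), len(s)))
    for theta, proj in zip(angles, sinogram):
        filt = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        pos = x * np.cos(theta) + y * np.sin(theta)  # detector coordinate
        recon += np.interp(pos, s, filt)             # backproject
    return recon * np.pi / len(angles)

# A centered disk of radius r projects to 2*sqrt(r^2 - s^2) at every
# angle, so its sinogram can be written down analytically.
s = np.linspace(-1.0, 1.0, 128)
r = 0.5
proj = 2.0 * np.sqrt(np.clip(r**2 - s**2, 0.0, None))
angles = np.linspace(0.0, np.pi, 90, endpoint=False)
sino = np.tile(proj, (len(angles), 1))
recon = fbp(sino, angles, s)
```

    The reconstructed disk should read near its true density of 1 inside and near 0 outside; the paper's contribution is to replace the straight-line backprojection geometry with depth-dependent most-likely-path estimates.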

  16. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    Science.gov (United States)

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique of calibrating camera motions in basketball videos. Our method particularly transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique.

  17. zePPeLIN: Distributed Path Planning Using an Overhead Camera Network

    Directory of Open Access Journals (Sweden)

    Andreagiovanni Reina

    2014-08-01

    We introduce zePPeLIN, a distributed system designed to address the challenges of path planning in large, cluttered, dynamic environments. The objective is to define a sequence of instructions to precisely move a ground object (e.g., a mobile robot) from an initial to a final configuration in an environment. zePPeLIN is based on a set of wirelessly networked overhead cameras. While each camera only covers a limited environment portion, the camera set fully covers the environment through the union of its fields of view. Path planning is performed in a fully distributed and cooperative way, based on potential diffusion over local Voronoi skeletons and local message exchanging. Additionally, the control of the moving object is fully distributed: it receives movement instructions from each camera when it enters that camera's field of view. The overall task is made particularly challenging by intrinsic errors in the overlap in cameras' fields of view. We study the performance of the system as a function of these errors, as well as its scalability for the size and density of the camera network. We also propose a few heuristics to improve performance and computational and communication efficiency. The reported results include both extensive simulation experiments and validation using a real camera network planning for a two-robot system.
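
    The potential-diffusion idea can be sketched on a single occupancy grid. zePPeLIN distributes this across cameras and diffuses over Voronoi skeletons; this single-map BFS version is my simplification:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Grid wavefront planner: diffuse a potential (BFS distance) outward
    from the goal over free cells, then descend it from the start."""
    rows, cols = len(grid), len(grid[0])
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    dist = {goal: 0}
    q = deque([goal])
    while q:                                   # potential diffusion
        r, c = q.popleft()
        for dr, dc in steps:
            n = (r + dr, c + dc)
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and grid[n[0]][n[1]] == 0 and n not in dist):
                dist[n] = dist[(r, c)] + 1
                q.append(n)
    if start not in dist:
        return None                            # goal unreachable
    path, cur = [start], start
    while cur != goal:                         # descend the potential
        neighbors = [(cur[0] + dr, cur[1] + dc) for dr, dc in steps]
        cur = min((n for n in neighbors if n in dist), key=dist.get)
        path.append(cur)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]                          # 1 = obstacle
path = plan_path(grid, (0, 0), (2, 3))
```

    Each BFS layer plays the role of one diffusion step; descending the potential is guaranteed to reach the goal because every free cell with positive distance has a neighbor one step closer.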

  18. Stereo Reconstruction of Atmospheric Cloud Surfaces from Fish-Eye Camera Images

    Science.gov (United States)

    Katai-Urban, G.; Otte, V.; Kees, N.; Megyesi, Z.; Bixel, P. S.

    2016-06-01

    In this article a method for reconstructing atmospheric cloud surfaces using a stereo camera system is presented. The proposed camera system utilizes fish-eye lenses in a flexible wide baseline camera setup. The entire workflow from the camera calibration to the creation of the 3D point set is discussed, but the focus is mainly on cloud segmentation and on the image processing steps of stereo reconstruction. Speed requirements, geometric limitations, and possible extensions of the presented method are also covered. After evaluating the proposed method on artificial cloud images, this paper concludes with results and discussion of possible applications for such systems.

  19. Influence of camera calibration conditions on the accuracy of 3D reconstruction.

    Science.gov (United States)

    Poulin-Girard, Anne-Sophie; Thibault, Simon; Laurendeau, Denis

    2016-02-01

    For stereoscopic systems designed for metrology applications, the accuracy of camera calibration dictates the precision of the 3D reconstruction. In this paper, the impact of various calibration conditions on the reconstruction quality is studied using a virtual camera calibration technique and the design file of a commercially available lens. This technique enables the study of the statistical behavior of the reconstruction task in selected calibration conditions. The data show that the mean reprojection error should not always be used to evaluate the performance of the calibration process and that a low quality of feature detection does not always lead to a high mean reconstruction error.
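
    For reference, the mean reprojection error the authors caution about is the average pixel distance between projected calibration points and detected features. A minimal pinhole-model sketch (names and numbers are mine):

```python
import numpy as np

def mean_reprojection_error(K, R, t, points3d, points2d):
    """Project 3D calibration points through the pinhole model x ~ K[R|t]X
    and average the pixel distance to the detected feature locations."""
    cam = points3d @ R.T + t                  # world -> camera coordinates
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]         # perspective division
    return np.linalg.norm(proj - points2d, axis=1).mean()

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
pts3d = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]])
cam = pts3d @ R.T + t
pts2d = (cam @ K.T)[:, :2] / (cam @ K.T)[:, 2:3]   # perfect detections
err = mean_reprojection_error(K, R, t, pts3d, pts2d)
```

    Perfect detections give zero error, yet the paper's point is that a small value of this figure alone does not guarantee a small 3D reconstruction error.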

  20. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    Science.gov (United States)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera, enabling reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in solving coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results will be demonstrated, and an improved version of this modified plenoptic camera will be discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide systems in adaptive optics to make intelligent analysis and corrections.
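
    The slope-to-wavefront reconstruction step shared by Shack-Hartmann sensors and such cameras can be sketched with the standard Fourier-domain least-squares integrator (a generic textbook method, not necessarily the authors' algorithm; the test wavefront is mine):

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Least-squares wavefront from measured per-sample slopes via the
    Fourier (Frankot-Chellappa) method. The piston (mean) term is
    unobservable from slopes, so it is removed."""
    n, m = gx.shape
    fx = np.fft.fftfreq(m)[None, :]            # cycles per sample, axis 1
    fy = np.fft.fftfreq(n)[:, None]            # cycles per sample, axis 0
    denom = (2 * np.pi) ** 2 * (fx**2 + fy**2)
    denom[0, 0] = 1.0                          # avoid 0/0 at DC
    num = -2j * np.pi * (fx * np.fft.fft2(gx) + fy * np.fft.fft2(gy))
    phi = np.real(np.fft.ifft2(num / denom))
    return phi - phi.mean()

# Smooth periodic test wavefront and its analytic per-sample slopes.
n = 64
u = np.arange(n) * 2 * np.pi / n
X, Y = np.meshgrid(u, u)
phi_true = np.sin(X) * np.cos(Y)
step = 2 * np.pi / n                           # radians per sample
gx = np.cos(X) * np.cos(Y) * step
gy = -np.sin(X) * np.sin(Y) * step
phi = integrate_gradients(gx, gy)
```

    For a periodic wavefront whose slopes are pure Fourier modes, the recovery is exact up to the piston term.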

  1. Uncalibrated Path Planning in the Image Space for the Fixed Camera Configuration

    Institute of Scientific and Technical Information of China (English)

    LIANG Xin-Wu; HUANG Xin-Han; WANG Min

    2013-01-01

    Image-based visual servoing can be used to efficiently control the motion of robot manipulators. When the initial and the desired configurations are distant, however, as pointed out by many researchers, such a control approach can suffer from convergence and stability problems due to its local properties. By specifying adequate image feature trajectories to be followed in the image, we can take advantage of the local convergence and stability of image-based visual servoing to avoid these problems. Hence, path planning in the image space has been an active research topic in robotics in recent years. However, almost all of the related results are established for the case of the camera-in-hand configuration. In this paper, we propose an uncalibrated visual path planning algorithm for the case of the fixed-camera configuration. This algorithm computes the trajectories of image features directly in the projective space such that they are compatible with rigid-body motion. By decomposing the projective representations of the rotation and the translation into their respective canonical forms, we can easily interpolate their paths in the projective space. Then, the trajectories of image features in the image plane can be generated via the projective paths. In this way, knowledge of the feature point structures and the camera intrinsic parameters is not required. To validate the feasibility and performance of the proposed algorithm, simulation results based on the PUMA 560 robot manipulator are given in this paper.
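
    The canonical-form interpolation idea can be illustrated for rotations: take the axis-angle logarithm, scale it by the path parameter, and exponentiate back. This is a generic SO(3) geodesic sketch, not the paper's full projective-space algorithm:

```python
import numpy as np

def log_so3(R):
    """Axis-angle (canonical) form of a rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2 * np.sin(theta))

def exp_so3(w):
    """Rodrigues' formula: axis-angle vector back to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def interpolate_pose(R, t, s):
    """Geodesic interpolation between identity and (R, t): rotate by a
    fraction s of the full rotation, translate by s*t."""
    return exp_so3(s * log_so3(R)), s * t

a = 0.8                                        # 0.8 rad rotation about z
R_full = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1.0]])
t_full = np.array([2.0, 0.0, 1.0])
R_half, t_half = interpolate_pose(R_full, t_full, 0.5)
```

    Because the interpolation stays on the rotation group, the intermediate poses are always valid rigid-body motions, which is the compatibility property the abstract emphasizes.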

  2. Path-based Iterative Reconstruction (PBIR) for X-ray Computed Tomography

    CERN Document Server

    Wu, Meng; Yang, Qiao; Fahrig, Rebecca

    2015-01-01

    Model-based iterative reconstruction (MBIR) techniques have demonstrated many advantages in X-ray CT image reconstruction. The MBIR approach is often modeled as a convex optimization problem including a data fitting function and a penalty function. The tuning parameter value that regulates the strength of the penalty function is critical for achieving good reconstruction results but difficult to choose. In this work, we describe two path seeking algorithms that are capable of efficiently generating a series of MBIR images with different strengths of the penalty function. The root-mean-square differences of the proposed path seeking algorithms are below 4 HU throughout the entire reconstruction path. With the efficient path seeking algorithm, we suggest a path-based iterative reconstruction (PBIR) to obtain complete information from the scanned data and reconstruction model.
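
    The path idea can be illustrated with a quadratic toy problem whose solution is available in closed form for every penalty strength. The actual PBIR works with MBIR objectives and seeks the path efficiently; the 1D signal and names here are mine:

```python
import numpy as np

def smoothness_path(y, lambdas):
    """Closed-form reconstructions x = argmin ||x - y||^2 + lam * ||D x||^2
    for a sweep of penalty strengths - a quadratic toy version of returning
    the whole regularization path rather than one image."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # finite-difference operator
    return np.array([np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
                     for lam in lambdas])

rng = np.random.default_rng(4)
y = np.sin(np.linspace(0, 3, 100)) + rng.normal(0, 0.2, 100)
images = smoothness_path(y, [0.1, 1.0, 10.0, 100.0])
rough = [np.sum(np.diff(x) ** 2) for x in images]   # penalty value per image
```

    Along the path the penalty value decreases monotonically as the regularization strengthens, which is what lets a user inspect the whole series and pick the preferred trade-off.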

  3. 3D-guided CT reconstruction using time-of-flight camera

    Science.gov (United States)

    Ismail, Mahmoud; Taguchi, Katsuyuki; Xu, Jingyan; Tsui, Benjamin M. W.; Boctor, Emad M.

    2011-03-01

    We propose the use of a time-of-flight (TOF) camera to obtain the patient's body contour in a 3D-guided image reconstruction scheme for CT and C-arm imaging systems with truncated projections. In addition to pixel intensity, a TOF camera provides the 3D coordinates of each point in the captured scene with respect to the camera coordinates. Information from the TOF camera was used to obtain a digitized surface of the patient's body. The digitization points are transformed to X-ray detector coordinates by registering the two coordinate systems. A set of points corresponding to the slice of interest is segmented to form a 2D contour of the body surface. The Radon transform is applied to the contour to generate the 'trust region' for the projection data. The generated 'trust region' is integrated as an input to augment the projection data. It is used to estimate the truncated, unmeasured projections using linear interpolation. Finally, the image is reconstructed using the combination of the estimated and the measured projection data. The proposed method is evaluated using a physical phantom. Projection data for the phantom were obtained using a C-arm system. Significant improvement in the reconstructed image quality near the truncation edges was observed using the proposed method as compared to that without truncation correction. This work shows that the proposed 3D-guided CT image reconstruction using a TOF camera represents a feasible solution to the projection data truncation problem.

  4. Stereo camera-based intelligent UGV system for path planning and navigation

    Science.gov (United States)

    Lee, Jung-Suk; Ko, Jung-Hwan; Chung, Dal-Do

    2006-08-01

    In this paper, a new real-time and intelligent mobile robot system for path planning and navigation using a stereo camera embedded on a pan/tilt system is proposed. In the proposed system, the face area of a moving person is detected from a sequence of stereo image pairs by using the YCbCr color model, and depth information can be obtained from the disparity map computed from the left and right images captured by the pan/tilt-controlled stereo camera system. The distance between the mobile robot system and the face of the moving person can then be calculated from the detected depth information. Accordingly, based on the analysis of these data, three-dimensional objects can be detected. Finally, using these detected data, a 2D spatial map is constructed for a visually guided robot that can plan paths, navigate around surrounding objects, and explore an indoor environment. In experiments on target tracking with 480 frames of sequential stereo images, the error ratio between the calculated and measured values of the relative position is found to be a very low 1.4% on average. The proposed target tracking system also achieves a high speed of 0.04 sec/frame for target detection and 0.06 sec/frame for target tracking.
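
    The distance computation from the detected disparity follows the standard rectified-stereo relation Z = f*B/d. A minimal sketch with assumed focal length and baseline values (not the paper's calibration):

```python
def stereo_distance(disparity_px, focal_px, baseline_m):
    """Depth from the rectified-stereo relation Z = f * B / d:
    disparity in pixels, focal length in pixels, baseline in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A face with 40 px of disparity, f = 800 px, B = 0.1 m:
z = stereo_distance(40, 800, 0.1)   # 2.0 m
```

    Larger disparity means a closer subject, which is what lets the robot convert the face-region disparity into a range for path planning.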

  5. 3D Image Reconstruction from Compton camera data

    CERN Document Server

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (the cone, or Compton, transform) that maps a function on $\mathbb{R}^3$ to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so-called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  6. Generic camera model and its calibration for computational integral imaging and 3D reconstruction.

    Science.gov (United States)

    Li, Weiming; Li, Youfu

    2011-03-01

    Integral imaging (II) is an important 3D imaging technology. To reconstruct 3D information of the viewed objects, modeling and calibrating the optical pickup process of II are necessary. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that offers greater generality. This model is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with the images reconstructed using the PAM, these images present higher fidelity and preserve more high-spatial-frequency components. To the best of our knowledge, this is the first attempt to apply a generic camera model to an II system.

  7. A fast 3D reconstruction system with a low-cost camera accessory.

    Science.gov (United States)

    Zhang, Yiwei; Gibson, Graham M; Hay, Rebecca; Bowman, Richard W; Padgett, Miles J; Edgar, Matthew P

    2015-06-09

    Photometric stereo is a three-dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D scanning, it comes with a number of advantages, such as a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
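The photometric-stereo reconstruction routine mentioned above can be sketched as follows, assuming a Lambertian surface and four known LED directions; the light directions and least-squares solver here are illustrative assumptions, not the paper's calibrated setup:

```python
import numpy as np

# Four known lighting directions (rows), one per LED exposure.
L = np.array([[ 0.5,  0.0, 0.866],
              [-0.5,  0.0, 0.866],
              [ 0.0,  0.5, 0.866],
              [ 0.0, -0.5, 0.866]])

def normals_from_intensities(I):
    """I: (4, N) stack of per-pixel intensities; returns (3, N) unit normals.

    Lambertian model: I = albedo * (L @ n), solved per pixel by least squares.
    """
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G = albedo * n
    rho = np.linalg.norm(G, axis=0)             # recovered albedo
    return G / np.maximum(rho, 1e-12)

# Synthetic check: a flat patch facing the camera, normal n = [0, 0, 1].
n_true = np.array([[0.0], [0.0], [1.0]])
I = L @ n_true * 0.8                            # albedo 0.8
n_est = normals_from_intensities(I)
```

The recovered normal field is then integrated (e.g. by Frankot–Chellappa or Poisson integration) to obtain the height map.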

  8. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    Science.gov (United States)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate indoor space 3D visual reconstruction system, which can be operated in any given environment without GPS, has been developed using a human-operated mobile cart equipped with a laser scanner, a CCD camera, an omnidirectional camera and a computer. Using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons, and so forth.

  9. Neurolucida Lucivid versus Neurolucida camera: A quantitative and qualitative comparison of three-dimensional neuronal reconstructions.

    Science.gov (United States)

    Anderson, Kaeley; Yamamoto, Erin; Kaplan, Joshua; Hannan, Markus; Jacobs, Bob

    2010-02-15

    A critical issue in quantitative neuromorphology is the accuracy and subsequent reliability of the tracing techniques employed to characterize neuronal components. Historically, the camera lucida was the only option for such investigations. In 1987, MBF Bioscience, Inc. (Williston, VT) developed the integrative Neurolucida computer-microscope system, replacing the camera lucida drawing tube with a Lucivid cathode ray tube, thereby allowing computer overlays directly on the view through microscope oculars. Subsequent advances in digital cameras have allowed the Lucivid system to be replaced so that microscope images can be traced by viewing the digital image on a computer monitor. Indeed, the camera systems now outsell Lucivid systems 9 to 1 (J. Glaser, personal communication, 08/2008). Nevertheless, researchers seldom note which of these configurations are being used (which may confound the accuracy of data sharing), and there have been no published comparisons of the Lucivid and camera configurations. The present study thus assesses the relative accuracy of these two hardware configurations by examining reconstructions of human pyramidal neurons. We report significant differences with respect to dendritic spines, with the camera estimates of spine counts being greater than those obtained with the Lucivid system. Potential underlying reasons (e.g., magnification, illumination, and resolution, as well as observer ergonomic differences between the two systems) for these quantitative findings are explored here, along with qualitative observations on the relative strengths of each configuration. Copyright 2009 Elsevier B.V. All rights reserved.

  10. Remarks on 3D human body posture reconstruction from multiple camera images

    Science.gov (United States)

    Nagasawa, Yusuke; Ohta, Takako; Mutsuji, Yukiko; Takahashi, Kazuhiko; Hashimoto, Masafumi

    2007-12-01

    This paper proposes a human body posture estimation method based on back projection of human silhouette images extracted from multi-camera images. To achieve real-time 3D human body posture estimation, a server-client system is introduced into the multi-camera system, and improvements to the background subtraction and back projection are investigated. To evaluate the feasibility of the proposed method, 3D estimation experiments of human body posture are carried out. An experimental system with six CCD cameras is assembled, and the experimental results confirm both the feasibility and the effectiveness of the proposed system for real-time 3D human body posture estimation. Using the 3D reconstruction of human body posture, a simple walk-through application of a virtual reality system is demonstrated.
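The silhouette back-projection idea can be illustrated with a toy voxel-carving fragment: a voxel is kept only if its projection falls inside every view's silhouette. For brevity this uses two orthographic views rather than six calibrated CCD cameras; all names are hypothetical:

```python
import numpy as np

def visual_hull(sil_xy, sil_xz):
    """Carve an (N, N, N) occupancy grid from two orthographic silhouettes:
    a top view over (x, y) and a front view over (x, z), both (N, N) bool."""
    n = sil_xy.shape[0]
    occ = np.ones((n, n, n), dtype=bool)
    occ &= sil_xy[:, :, None]     # voxel survives only if its (x, y) ...
    occ &= sil_xz[:, None, :]     # ... and (x, z) projections are inside
    return occ

n = 8
sil_xy = np.zeros((n, n), dtype=bool); sil_xy[2:6, 2:6] = True
sil_xz = np.zeros((n, n), dtype=bool); sil_xz[2:6, 1:7] = True
hull = visual_hull(sil_xy, sil_xz)
```

With perspective cameras, the per-voxel test becomes a projection through each camera matrix instead of an axis-aligned slice, but the intersection logic is the same.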

  11. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have achieved a large consensus for recreational purposes due to ongoing cost decreases, image resolution and frame rate increases, along with plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes, and characterizing and optimizing such a configuration makes it mandatory to assess the instrumental errors of both volumes. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations; each camera configuration was then compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (with respect to the true distance between the two testing markers) was less than 3 mm and the error related to the working volume diagonal was in the range of 1:2000 (3 × 1.3 × 1.5 m³) to 1:7000 (4.5 × 2.2 × 1.5 m³) in agreement with the
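The accuracy metric used above (deviation of the reconstructed inter-marker distance from the known one) can be sketched as follows; the 250 mm bar length, frame count and noise level are illustrative assumptions standing in for real reconstructed marker trajectories:

```python
import numpy as np

def bar_length_errors(p1, p2, true_len_mm):
    """p1, p2: (T, 3) reconstructed marker positions over T frames, in mm.
    Returns per-frame absolute error of the reconstructed bar length."""
    lengths = np.linalg.norm(p1 - p2, axis=1)
    return np.abs(lengths - true_len_mm)

# Synthetic stand-in: a 250 mm rigid bar reconstructed with ~0.5 mm noise.
rng = np.random.default_rng(0)
p1 = rng.normal(0.0, 0.5, (100, 3))
p2 = np.array([250.0, 0.0, 0.0]) + rng.normal(0.0, 0.5, (100, 3))
errors = bar_length_errors(p1, p2, true_len_mm=250.0)
```

Reporting the mean and maximum of `errors` per camera configuration and environment gives exactly the kind of sub-3 mm figure quoted in the abstract.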

  12. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras.

    Directory of Open Access Journals (Sweden)

    Jair E Garcia

    Full Text Available The ability to discriminate between two similar or progressively dissimilar colours is important for many animals as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images and digital imaging devices with many more than three channels, hyperspectral cameras, have recently been developed to produce image spectrophotometers to recover reflectance spectra at individual pixel locations. We compare a linearised RGB and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. (1) The colour discrimination power of the RGB device is dependent on colour similarity between the samples, whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently from their chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides means to explore other important aspects of colour vision like the perception of certain types of camouflage and colour constancy where multiple, narrow-band sensors increase resolution.
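Spectral reconstruction from linearised RGB responses is commonly posed as a minimum-norm inverse of the camera's spectral sensitivities; the under-determination of that inverse (31 unknowns from 3 responses) is precisely the metamerism problem the abstract describes. A sketch under that assumption, with Gaussian sensitivity curves standing in for a measured calibration:

```python
import numpy as np

wl = np.linspace(400, 700, 31)                  # wavelengths, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Stand-in R, G, B spectral sensitivities (3 x 31).
S = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)])

def reconstruct_spectrum(rgb):
    """Minimum-norm reflectance estimate from a linear RGB response."""
    return np.linalg.pinv(S) @ rgb

flat = np.full_like(wl, 0.5)                    # a flat grey reflectance
rgb = S @ flat                                  # simulated camera response
spec = reconstruct_spectrum(rgb)
```

Any reflectance differing from `spec` by a vector in the null space of `S` produces the same RGB triple, which is why all such metamers are indistinguishable to the 3-channel device while a hyperspectral camera separates them.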

  13. Semantically Documenting Virtual Reconstruction: Building a Path to Knowledge Provenance

    Science.gov (United States)

    Bruseker, G.; Guillem, A.; Carboni, N.

    2015-08-01

    The outcomes of virtual reconstructions of archaeological monuments are not just images for aesthetic consumption but rather present a scholarly argument and decision making process. They are based on complex chains of reasoning grounded in primary and secondary evidence that enable a historically probable whole to be reconstructed from the partial remains left in the archaeological record. This paper will explore the possibilities for documenting and storing in an information system the phases of the reasoning, decision and procedures that a modeler, with the support of an archaeologist, uses during the virtual reconstruction process and how they can be linked to the reconstruction output. The goal is to present a documentation model such that the foundations of evidence for the reconstructed elements, and the reasoning around them, are made not only explicit and interrogable but also can be updated, extended and reused by other researchers in future work. Using as a case-study the reconstruction of a kitchen in a Roman domus in Grand, we will examine the necessary documentation requirements, and the capacity to express it using semantic technologies. For our study we adopt the CIDOC-CRM ontological model, and its extensions CRMinf, CRMBa and CRMgeo as a starting point for modelling the arguments and relations.

  14. First use of mini gamma cameras for intra-operative robotic SPECT reconstruction.

    Science.gov (United States)

    Matthies, Philipp; Sharma, Kanishka; Okur, Aslı; Gardiazabal, José; Vogel, Jakob; Lasser, Tobias; Navab, Nassir

    2013-01-01

    Different types of nuclear imaging systems have been used in the past, starting with pre-operative gantry-based SPECT systems and gamma cameras for 2D imaging of radioactive distributions. The main applications are concentrated on diagnostic imaging, since traditional SPECT systems and gamma cameras are bulky and heavy. With the development of compact gamma cameras with good resolution and high sensitivity, it is now possible to use them without a fixed imaging gantry. Mounting the camera onto a robot arm solves the weight issue, while also providing a highly repeatable and reliable acquisition platform. In this work we introduce a novel robotic setup performing scans with a mini gamma camera, along with the required calibration steps, and show the first SPECT reconstructions. The results are extremely promising, both in terms of image quality as well as reproducibility. In our experiments, the novel setup outperformed a commercial fhSPECT system, reaching accuracies comparable to state-of-the-art SPECT systems.

  15. Linear stratified approach using full geometric constraints for 3D scene reconstruction and camera calibration.

    Science.gov (United States)

    Kim, Jae-Hean; Koo, Bon-Ki

    2013-02-25

    This paper presents a new linear framework to obtain 3D scene reconstruction and camera calibration simultaneously from uncalibrated images using scene geometry. Our strategy uses the constraints of parallelism, coplanarity, collinearity, and orthogonality, which occur frequently in general man-made scenes. This approach can give more stable results with fewer images and allows us to obtain the results with only linear operations. It is shown that all the geometric constraints used only independently in previous works can be implemented easily in the proposed linear method. We also study the situations that cannot be dealt with by previous approaches and show that the proposed method, which can handle these cases, is more flexible in use. The proposed method uses a stratified approach, in which affine reconstruction is performed first, followed by metric reconstruction. In this procedure, the additional constraints newly extracted in this paper play an important role in affine reconstruction in practical situations.

  16. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    Science.gov (United States)

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
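The linear blend skinning (LBS) step central to Realtime SCAPE can be sketched in a few lines: each vertex is moved by a weighted sum of bone transforms. The two-bone rig and weight matrix below are toy stand-ins for the skinning weights learned from the body shape database:

```python
import numpy as np

def lbs(vertices, weights, transforms):
    """Linear blend skinning.
    vertices: (N, 3); weights: (N, B), rows sum to 1; transforms: (B, 4, 4)."""
    v_h = np.hstack([vertices, np.ones((len(vertices), 1))])   # homogeneous
    out = np.zeros_like(v_h)
    for b, T in enumerate(transforms):
        out += weights[:, b:b + 1] * (v_h @ T.T)               # blend per bone
    return out[:, :3] / out[:, 3:4]

def translation(t):
    T = np.eye(4); T[:3, 3] = t
    return T

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5]])        # second vertex blends both bones
Ts = np.stack([translation([0, 0, 0]), translation([0, 1, 0])])
skinned = lbs(verts, w, Ts)
```

With the weights fixed offline, each online frame reduces (as in the abstract) to solving for the per-bone transforms `Ts` that best explain the observed depth data.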

  17. Fast 3D-EM reconstruction using Planograms for stationary planar positron emission mammography camera.

    Science.gov (United States)

    Motta, A; Del Guerra, A; Belcari, N; Moehrs, S; Panetta, D; Righi, S; Valentini, D

    2005-12-01

    At the University of Pisa we are building a PEM prototype, the YAP-PEM camera, consisting of two opposite 6 x 6 x 3 cm3 detector heads of 30 x 30 YAP:Ce finger crystals, 2 x 2 x 30 mm3 each. The camera will be equipped with breast compressors, and the acquisition will be stationary. Compared with a whole-body PET scanner, a planar Positron Emission Mammography (PEM) camera allows a better, easier and more flexible positioning around the breast in the vicinity of the tumor: this increases the sensitivity and solid angle coverage, and reduces cost. To avoid software rejection of data during the reconstruction, which would reduce sensitivity, we adopted a 3D-EM reconstruction that uses all of the collected Lines Of Response (LORs). This avoids the PSF distortion introduced by data rebinning procedures and/or Fourier methods. Traditional 3D-EM reconstruction requires repeated computation of the LOR-voxel correlation matrix, or probability matrix {p(ij)}, and is therefore highly time-consuming. We use the sparseness and symmetry properties of the matrix {p(ij)} to perform fast 3D-EM reconstruction. Geometrically, a 3D grid of cubic voxels (FOV) is crossed by several divergent 3D line sets (LORs). Symmetries occur when tracing different LORs produces the same p(ij) value. Parallel LORs of different sets cross the FOV in the same way, and the repetition of p(ij) values depends on the ratio between the tube and voxel sizes. By optimizing this ratio, the occurrence of symmetries is increased. We identify a nucleus of symmetry of LORs: for each set of symmetrical LORs we choose just one LOR to be put in the nucleus, while the others lie outside. All of the possible p(ij) values are obtainable by tracking only the LORs of this nucleus. The coordinates of the voxels of all of the other LORs are given by means of simple translation rules. 
Before making the reconstruction, we trace the LORs of the nucleus to find the intersecting voxels, whose p(ij) values are computed and

  18. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    Science.gov (United States)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational cost of SfM methods, there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) that determines the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree-bounded maximum spanning tree to generate the SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into SfM to produce a novel SCN-SfM method, which performs tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments, with images from two fixed-wing UAVs and an octocopter UAV. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images when our method is used, which leads to lower computational cost. At the same time, the achieved scene completeness and geometric accuracy are comparable.
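The spanning-tree idea behind the SCN can be illustrated with plain Kruskal on a toy image-connectivity graph, where edge weights stand for predicted overlap between image pairs; the paper's hierarchical degree bound and 3-view guarantee are omitted in this sketch:

```python
def max_spanning_tree(n, edges):
    """Kruskal maximum spanning tree with union-find.
    edges: list of (weight, u, v); returns the kept (u, v) pairs."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    kept = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest overlap first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            kept.append((u, v))
    return kept

# Toy connectivity graph over 4 images; weight = predicted pairwise overlap.
edges = [(0.9, 0, 1), (0.8, 1, 2), (0.7, 0, 2), (0.6, 2, 3), (0.2, 0, 3)]
tree = max_spanning_tree(4, edges)
```

Tie-point matching then runs only on the `n - 1` kept pairs instead of all `n(n-1)/2` candidates, which is the source of the computational savings the abstract reports.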

  19. 3D Reconstruction from a Single Still Image Based on Monocular Vision of an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available We propose a framework combining machine learning with dynamic optimization for reconstructing a scene in 3D automatically from a single still image of an unstructured outdoor environment, based on the monocular vision of an uncalibrated camera. After a first segmentation of the image, a search-tree strategy based on Bayes' rule is used to identify the occlusion hierarchy of all areas. After a second, superpixel segmentation, the AdaBoost algorithm is applied to integrate depth cues from lighting, texture and material. Finally, all the factors above are optimized under constrained conditions, yielding the whole depth map of the image. The source image is integrated with its depth map, in point-cloud or bilinear interpolation styles, to realize the 3D reconstruction. Experiments comparing our method with typical methods on an associated database demonstrate that it improves, to a certain extent, the plausibility of the estimated overall 3D architecture of the image's scene. Moreover, it needs neither manual assistance nor any camera model information.

  20. Moving beyond flat earth: dense 3D scene reconstruction from a single FL-LWIR camera

    Science.gov (United States)

    Stone, K.; Keller, J. M.; Anderson, D. T.

    2013-06-01

    In previous work an automatic detection system for locating buried explosive hazards in forward-looking longwave infrared (FL-LWIR) and forward-looking ground penetrating radar (FL-GPR) data was presented. This system consists of a prescreener, an ensemble of trainable size-contrast filters, coupled with a secondary classification step that extracts cell-structured image space features, such as local binary patterns (LBP), histograms of oriented gradients (HOG), and edge histogram descriptors (EHD), from multiple looks and classifies the resulting feature vectors using a support vector machine. Previously, this system performed image space to UTM coordinate mapping under a flat-earth assumption. This limited its applicability to flat terrain and short standoff distances. This paper demonstrates a technique for dense 3D scene reconstruction from a single vehicle-mounted FL-LWIR camera. This technique utilizes multiple views and standard stereo vision algorithms such as polar rectification and optimal correction. Results for the detection algorithm using this 3D scene reconstruction approach on data from recent collections at an arid US Army test site are presented. These results are compared to those obtained under the flat-earth assumption, with special focus on rougher terrain and longer standoff distances than in previous experiments. The most recent collection also allowed comparison between uncooled and cooled FL-LWIR cameras for buried explosive hazard detection.

  1. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera.

    Science.gov (United States)

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho

    2015-12-10

    In this paper, we develop a real-time depth controllable integral imaging system. With a high-frame-rate camera and a focus controllable lens, light fields from various depth ranges can be captured. According to the image plane of the light field camera, the objects in virtual and real space are recorded simultaneously. The captured light field information is converted to the elemental image in real time without pseudoscopic problems. In addition, we derive characteristics and limitations of the light field camera as a 3D broadcasting capturing device with precise geometry optics. With further analysis, the implemented system provides more accurate light fields than existing devices without depth distortion. We adapt an f-number matching method at the capture and display stage to record a more exact light field and solve depth distortion, respectively. The algorithm allows the users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method presents a possibility of a handheld real-time 3D broadcasting system in a cheaper and more applicable way as compared to the previous methods.

  2. Calibration of time-of-flight cameras for accurate intraoperative surface reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Mersmann, Sven; Seitel, Alexander; Maier-Hein, Lena [Division of Medical and Biological Informatics, Junior Group Computer-assisted Interventions, German Cancer Research Center (DKFZ), Heidelberg, Baden-Wurttemberg 69120 (Germany); Erz, Michael; Jähne, Bernd [Heidelberg Collaboratory for Image Processing (HCI), University of Heidelberg, Baden-Wurttemberg 69115 (Germany); Nickel, Felix; Mieth, Markus; Mehrabi, Arianeb [Department of General, Visceral and Transplant Surgery, University of Heidelberg, Baden-Wurttemberg 69120 (Germany)

    2013-08-15

    Purpose: In image-guided surgery (IGS), intraoperative image acquisition of tissue shape, motion, and morphology is one of the main challenges. Recently, time-of-flight (ToF) cameras have emerged as a new means for fast range image acquisition that can be used for multimodal registration of the patient anatomy during surgery. The major drawbacks of ToF cameras are systematic errors in the image acquisition technique that compromise the quality of the measured range images. In this paper, we propose a calibration concept that, for the first time, accounts for all known systematic errors affecting the quality of ToF range images. Laboratory and in vitro experiments assess its performance in the context of IGS. Methods: For calibration, the camera-related error sources depending on the sensor, the sensor temperature and the set integration time are corrected first, followed by the scene-specific errors, which are modeled as functions of the measured distance, the amplitude and the radial distance to the principal point of the camera. Accounting for the high accuracy demands in IGS, we use a custom-made calibration device to provide the reference distance data with which the cameras are calibrated. To evaluate the mitigation of the error, the residual error remaining after ToF depth calibration was compared with that arising from using the manufacturer routines for several state-of-the-art ToF cameras. The accuracy of reconstructed ToF surfaces was investigated after multimodal registration with computed tomography (CT) data of liver models by assessing the target registration error (TRE) of markers introduced in the livers. Results: For the inspected distance range of up to 2 m, our calibration approach yielded a mean residual error with respect to the reference data ranging from 1.5 ± 4.3 mm for the best camera to 7.2 ± 11.0 mm. When compared to the data obtained from the manufacturer routines, the residual error was reduced by at least 78% from worst calibration result to most accurate
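The depth-calibration principle (fit a correction from reference distances, then apply it to new measurements) can be sketched as below; a simple linear gain/offset error model and synthetic reference data stand in for the paper's full sensor-, temperature- and amplitude-dependent model:

```python
import numpy as np

# Synthetic calibration data: true distances plus a systematic gain/offset
# error (illustrative; real ToF errors are more complex).
ref = np.linspace(0.5, 2.0, 16)          # ground-truth distances, m
measured = 1.05 * ref + 0.02             # raw ToF readings with systematic bias

# Fit the measured -> true correction from the calibration data.
coeffs = np.polyfit(measured, ref, deg=1)

def calibrate(d):
    """Apply the fitted depth correction to raw ToF distances."""
    return np.polyval(coeffs, d)

residual_before = np.abs(measured - ref).mean()
residual_after = np.abs(calibrate(measured) - ref).mean()
```

In practice the correction would be fitted per pixel (or as a function of amplitude and radial position) and validated on reference distances held out from the fit.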

  3. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    Science.gov (United States)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, constituted of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of the matching of interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, the matches between consecutive images are detected and a sparse Euclidean 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, more suitable for this kind of camera movement, was applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.

  4. IPED: Inheritance Path-based Pedigree Reconstruction Algorithm Using Genotype Data

    Science.gov (United States)

    Wang, Zhanyong; Han, Buhm; Parida, Laxmi; Eskin, Eleazar

    2013-01-01

    The problem of inference of family trees, or pedigree reconstruction, for a group of individuals is a fundamental problem in genetics. Various methods have been proposed to automate the process of pedigree reconstruction given the genotypes or haplotypes of a set of individuals. Current methods, unfortunately, are very time-consuming and inaccurate for complicated pedigrees, such as pedigrees with inbreeding. In this work, we propose an efficient algorithm that is able to reconstruct large pedigrees with reasonable accuracy. Our algorithm reconstructs the pedigrees generation by generation, backward in time from the extant generation. We predict the relationships between individuals in the same generation using an inheritance path-based approach implemented with an efficient dynamic programming algorithm. Experiments show that our algorithm runs in linear time with respect to the number of reconstructed generations, and therefore, it can reconstruct pedigrees that have a large number of generations. Indeed it is the first practical method for reconstruction of large pedigrees from genotype data. PMID:24093229

  5. 3D Reconstruction of Static Human Body with a Digital Camera

    Science.gov (United States)

    Remondino, Fabio

    2003-01-01

    Nowadays, the 3D reconstruction and modeling of real humans is one of the most challenging problems and a topic of great interest. The human models are used for movies, video games or ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still video camera or a camcorder. First, the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally, the 3D coordinates of the matched points are computed, with a mean accuracy of ca. 2 mm, by forward ray intersection. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.
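The forward ray intersection used for the final 3D coordinates can be sketched as a least-squares closest point to two viewing rays; the geometry below is a toy two-ray case, whereas the paper's adjusted image block would intersect three or more rays per point:

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Least-squares closest point to two rays p = c + t*d (d unit vectors).

    Minimizes the summed squared perpendicular distance to both rays by
    stacking the projection constraints (I - d d^T)(p - c) = 0.
    """
    P1 = np.eye(3) - np.outer(d1, d1)
    P2 = np.eye(3) - np.outer(d2, d2)
    A = np.vstack([P1, P2])
    b = np.concatenate([P1 @ c1, P2 @ c2])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Toy setup: two camera centres observing the same target point.
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([1.0, 0.0, 0.0])
target = np.array([0.5, 0.5, 2.0])
d1 = (target - c1) / np.linalg.norm(target - c1)
d2 = (target - c2) / np.linalg.norm(target - c2)
p = intersect_rays(c1, d1, c2, d2)
```

With noisy, non-intersecting rays the same solve returns the point minimizing the total perpendicular distance, which is what drives the quoted ca. 2 mm accuracy.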

  6. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    Science.gov (United States)

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which this method has a unique solution is provided. An extended application of the method not only achieves reconstruction of the 3D trajectory but also captures the orientation of the moving object, which cannot be obtained by PnP methods owing to a lack of features. This develops the intersection measurement in videometrics from the traditional "point intersection" to "trajectory intersection". The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object under poor conditions can also be calculated. The condition required for a definite solution is derived from equivalence relations among the orders of the trajectory equations of the moving object, which specifies the applicable conditions of the method. Simulation and experimental results show that the method applies not only to objects moving along a straight line, a conic, or another simple trajectory, but also yields good results for more complicated trajectories, making it widely applicable.

  7. PCA-based 3D Shape Reconstruction of Human Foot Using Multiple Viewpoint Cameras

    Institute of Scientific and Technical Information of China (English)

    Edmée Amstutz; Tomoaki Teshima; Makoto Kimura; Masaaki Mochimaru; Hideo Saito

    2008-01-01

This paper describes a multiple-camera method to reconstruct the 3D shape of a human foot. From a foot database, an initial 3D model of the foot, represented by a cloud of points, is built. The shape parameters, which can characterize more than 92% of a foot, are defined using the principal component analysis method. Then, using active shape models, the initial 3D model is adapted to the real foot captured in multiple images by applying constraints (edge points' distance and color variance). We focus here on the experimental part, where we demonstrate the efficiency of the proposed method on a plastic foot model and on real human feet with various shapes. We propose and compare different ways of texturing the foot, which is needed for reconstruction, and, based on the results of these experiments, propose two improvements to the final 3D shape's accuracy. The first is the densification of the cloud of points used to represent the initial model and the foot database. The second concerns the projected patterns used to texture the foot. We conclude by showing the results obtained for a human foot, with an average computed shape error of only 1.06 mm.
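The PCA shape parameterization the paper relies on can be sketched on synthetic data. The foot database, point-to-point correspondence, and the 92% figure come from the paper; the toy data below are invented for illustration:

```python
import numpy as np

def fit_pca_shape_model(shapes, n_components):
    """shapes: (n_samples, 3*n_points) matrix of flattened point clouds,
    assumed to be in point-to-point correspondence across samples."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal shape modes
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    explained = var.cumsum() / var.sum()
    return mean, vt[:n_components], explained[n_components - 1]

def synthesize(mean, modes, params):
    """Generate a shape from a few shape parameters."""
    return mean + params @ modes

rng = np.random.default_rng(0)
# toy "database": 50 shapes varying mostly along two hidden modes
base = rng.normal(size=(2, 30))
shapes = rng.normal(size=(50, 2)) @ base + rng.normal(scale=0.01, size=(50, 30))
mean, modes, frac = fit_pca_shape_model(shapes, 2)
# here two modes capture essentially all of the variance (frac close to 1)
```

Fitting then amounts to searching over the low-dimensional `params` vector instead of every point coordinate, which is what makes the active-shape-model adaptation tractable.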

  8. 3D reconstruction of a compressible flow by synchronized multi-camera BOS

    Science.gov (United States)

    Nicolas, F.; Donjat, D.; Léon, O.; Le Besnerais, G.; Champagnat, F.; Micheli, F.

    2017-05-01

    This paper investigates the application of a 3D density reconstruction from a limited number of background-oriented schlieren (BOS) images as recently proposed in Nicolas et al. (Exp Fluids 57(1):1-21, 2016), to the case of compressible flows, such as underexpanded jets. First, an optimization of a 2D BOS setup is conducted to mitigate the intense local blurs observed in raw BOS images and caused by strong density gradients present in the jets. It is demonstrated that a careful choice of experimental conditions enables one to obtain sharp deviation fields from 2D BOS images. Second, a 3DBOS experimental bench involving 12 synchronized cameras is specifically designed for the present study. It is shown that the 3DBOS method can provide physically consistent 3D reconstructions of instantaneous and mean density fields for various underexpanded jet flows issued into quiescent air. Finally, an analysis of the density structure of a moderately underexpanded jet is conducted through phase-averaging, highlighting the development of a large-scale coherent structure associated with a jet shear layer instability.

  9. Iterative reconstruction of SiPM light response functions in a square-shaped compact gamma camera.

    Science.gov (United States)

    Morozov, A; Alves, F; Marcos, J; Martins, R; Pereira, L; Solovov, V; Chepel, V

    2017-02-13

Compact gamma cameras with a square-shaped monolithic scintillator crystal and an array of silicon photomultipliers (SiPMs) are actively being developed for applications in areas such as small animal imaging, cancer diagnostics and radiotracer guided surgery. Statistical methods of position reconstruction, which are potentially superior to the traditional centroid method, require accurate knowledge of the spatial response of each photomultiplier. Using both Monte Carlo simulations and experimental data obtained with a camera prototype, we show that the spatial response of all photomultipliers (light response functions) can be parameterized with axially symmetric functions obtained iteratively from flood field irradiation data. The study was performed with a camera prototype equipped with a 30 × 30 × 2 mm³ LYSO crystal and an 8 × 8 array of SiPMs for 140 keV gamma rays. The simulations demonstrate that the images, reconstructed with the maximum likelihood method using the response obtained with the iterative approach, exhibit only minor distortions: the average difference between the reconstructed and the true positions in X and Y directions does not exceed 0.2 mm in the central area of 22 × 22 mm² and 0.4 mm at the periphery of the camera. A similar level of image distortions is shown experimentally with the camera prototype.

  10. Iterative reconstruction of SiPM light response functions in a square-shaped compact gamma camera

    Science.gov (United States)

    Morozov, A.; Alves, F.; Marcos, J.; Martins, R.; Pereira, L.; Solovov, V.; Chepel, V.

    2017-05-01

Compact gamma cameras with a square-shaped monolithic scintillator crystal and an array of silicon photomultipliers (SiPMs) are actively being developed for applications in areas such as small animal imaging, cancer diagnostics and radiotracer guided surgery. Statistical methods of position reconstruction, which are potentially superior to the traditional centroid method, require accurate knowledge of the spatial response of each photomultiplier. Using both Monte Carlo simulations and experimental data obtained with a camera prototype, we show that the spatial response of all photomultipliers (light response functions) can be parameterized with axially symmetric functions obtained iteratively from flood field irradiation data. The study was performed with a camera prototype equipped with a 30 × 30 × 2 mm³ LYSO crystal and an 8 × 8 array of SiPMs for 140 keV gamma rays. The simulations demonstrate that the images, reconstructed with the maximum likelihood method using the response obtained with the iterative approach, exhibit only minor distortions: the average difference between the reconstructed and the true positions in X and Y directions does not exceed 0.2 mm in the central area of 22 × 22 mm² and 0.4 mm at the periphery of the camera. A similar level of image distortions is shown experimentally with the camera prototype.

  11. Iterative reconstruction of SiPM light response functions in a square-shaped compact gamma camera

    CERN Document Server

    Morozov, A; Marcos, J; Martins, R; Pereira, L; Solovov, V; Chepel, V

    2016-01-01

    Compact gamma cameras with a square-shaped monolithic scintillator crystal and an array of silicon photomultipliers (SiPMs) are actively being developed for applications in areas such as small animal imaging, cancer diagnostics and radiotracer guided surgery. Statistical methods of position reconstruction, which are potentially superior to the traditional centroid method, require accurate knowledge of the spatial response of each photomultiplier. Using both Monte Carlo simulations and experimental data obtained with a camera prototype, we show that the spatial response of all photomultipliers (light response functions) can be parameterized with axially symmetric functions obtained iteratively from flood field irradiation data. The study was performed with a camera prototype equipped with a 30 x 30 x 2 mm3 LYSO crystal and an 8 x 8 array of SiPMs for 140 keV gamma rays. The simulations demonstrate that the images, reconstructed with the maximum likelihood method using the response obtained with the iterative a...
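The traditional centroid (center-of-gravity) method that these statistical approaches aim to improve upon can be sketched as follows, assuming the 30 × 30 mm crystal and 8 × 8 SiPM array geometry from the paper; the Gaussian light spread used to fake an event is illustrative:

```python
import numpy as np

def centroid_position(signals, pitch=3.75, size=30.0):
    """Centre-of-gravity estimate on an n x n photomultiplier array.

    signals: (n, n) array of SiPM amplitudes; returns (x, y) in mm with
    the origin at the crystal centre (30 x 30 mm, 8 x 8 array assumed).
    """
    n = signals.shape[0]
    coords = (np.arange(n) + 0.5) * pitch - size / 2   # pixel centres, mm
    w = signals.sum()
    x = (signals.sum(axis=0) * coords).sum() / w       # column profile
    y = (signals.sum(axis=1) * coords).sum() / w       # row profile
    return x, y

# fake a scintillation event near (3.0, -1.5) mm with a Gaussian light spread
n = 8
pitch, size = 30.0 / n, 30.0
c = (np.arange(n) + 0.5) * pitch - size / 2
gx = np.exp(-(c - 3.0) ** 2 / (2 * 6.0 ** 2))
gy = np.exp(-(c + 1.5) ** 2 / (2 * 6.0 ** 2))
signals = np.outer(gy, gx)
x, y = centroid_position(signals, pitch, size)
```

For off-centre events the estimate is pulled toward the array centre by the finite detector aperture, one of the known shortcomings that motivates maximum-likelihood reconstruction with calibrated light response functions.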

  12. Reconstruction of an effective magnon mean free path distribution from spin Seebeck measurements in thin films

    Science.gov (United States)

    Chavez-Angel, E.; Zarate, R. A.; Fuentes, S.; Guo, E. J.; Kläui, M.; Jakob, G.

    2017-01-01

A thorough understanding of the mean-free-path (MFP) distribution of the energy carriers is crucial to engineer and tune the transport properties of materials. In this context, a significant body of work has investigated the phonon and electron MFP distributions; however, similar studies of the magnon MFP distribution have not yet been carried out. In this work, we used thickness-dependent measurements of the longitudinal spin Seebeck effect (LSSE) of yttrium iron garnet films to reconstruct the cumulative distribution of an SSE-related effective magnon MFP. Using the experimental data reported by Guo et al (2016 Phys. Rev. X 6 031012), we adapted the phonon MFP reconstruction algorithm proposed by Minnich (2012 Phys. Rev. Lett. 109 205901) and applied it to magnons. The reconstruction showed that magnons with different MFPs contribute differently to the total LSSE and that the effective magnon MFP distribution spreads far beyond its typical averaged values.

  13. Tridimensional Reconstruction Applied to Cultural Heritage with the Use of Camera-Equipped UAV and Terrestrial Laser Scanner

    Directory of Open Access Journals (Sweden)

    Zhihua Xu

    2014-10-01

No single sensor can acquire complete information on a cultural object, even with one or several multi-surveys. For instance, a terrestrial laser scanner (TLS) usually obtains information on building facades, whereas aerial photogrammetry is capable of providing the perspective for building roofs. In this study, a camera-equipped unmanned aerial vehicle (UAV) system and a TLS were used in an integrated design to capture 3D point clouds and thus acquire complete information on a cultural heritage object of interest. A camera network is proposed to modify the image-based 3D reconstruction, or structure from motion (SfM), method by taking full advantage of the flight control data acquired by the UAV platform. The camera network improves SfM performance in terms of image matching efficiency and the reduction of mismatches. This camera-network-modified SfM is employed to process the overlapping UAV image sets and recover the scene geometry. The SfM output covers most of the building roofs but has sparse resolution. A dense multi-view 3D reconstruction algorithm is then applied to improve in-depth detail. The two groups of point clouds, from image reconstruction and TLS scanning, are registered from coarse to fine using an iterative method. This methodology has been tested on a historical monument in Fujian Province, China. Results show a final point cloud with complete coverage and in-depth details. Moreover, the findings demonstrate that these two platforms, which integrate the scanning principle and image reconstruction methods, can complement each other in terms of coverage, sensing resolution, and model accuracy to create high-quality 3D recordings and presentations.

  14. Sensing and reconstruction of arbitrary light-in-flight paths by a relativistic imaging approach

    Science.gov (United States)

    Laurenzis, Martin; Klein, Jonathan; Bacher, Emmanuel; Metzger, Nicolas; Christnacher, Frank

    2016-10-01

Transient light imaging is an emerging technology and an interesting sensing approach for fundamental multidisciplinary research ranging from computer science to remote sensing. Recent developments in sensor technologies and computational imaging have made this emerging sensing approach a candidate for next-generation sensor systems with rapidly increasing maturity, although it still relies on laboratory technology demonstrations. At ISL, transient light sensing is investigated by time-correlated single photon counting (TCSPC). An eye-safe shortwave infrared (SWIR) TCSPC setup, consisting of an avalanche photodiode array and a pulsed fiber laser source, is used to investigate light sparsely scattered while propagating through air. Fundamental investigations of light in flight are carried out with the aim of reconstructing arbitrary light propagation paths. Light pulses are observed in flight at various propagation angles and distances. As demonstrated, arbitrary light paths can be distinguished owing to a relativistic effect that distorts their temporal signatures. A novel method analyzing the time difference of arrival (TDOA) is used to determine the propagation angle and distance with respect to this relativistic effect. Based on our results, the performance of future laser warning receivers can be improved by the use of single-photon-counting imaging devices: they can detect laser light even when the laser does not directly hit the sensor or passes at a certain distance.

  15. Influence of different path length computation models and iterative reconstruction algorithms on the quality of transmission reconstruction in Tomographic Gamma Scanning

    Science.gov (United States)

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2017-07-01

This paper studies the influence of different path length computation models and iterative reconstruction algorithms on the quality of transmission reconstruction in Tomographic Gamma Scanning. The purpose is to quantify and localize heterogeneous matrices while investigating the recovery of linear attenuation coefficient (LAC) maps in 200 liter drums. Two different path length computation models, the "point to point" (PP) model and the "point to detector" (PD) model, are coupled with two different transmission reconstruction algorithms, the Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM), forming four modes: ART-PP, ART-PD, MLEM-PP and MLEM-PD. The transmission reconstruction qualities of these four modes are compared for heterogeneous matrices in radioactive waste drums. Results illustrate that the MLEM algorithm yields better transmission reconstruction quality than the ART algorithm, producing LAC maps in good agreement with reference data simulated by Monte Carlo. Moreover, the PD model can assay higher-density waste drums and has a greater scope of application than the PP model in TGS.

  16. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Science.gov (United States)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
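A minimal sketch of maximum-likelihood position reconstruction from known light response functions, one of the algorithms the module offers: a grid search over candidate positions with a Poisson log-likelihood. This is not the ANTS implementation; the PMT layout and Gaussian LRF below are illustrative assumptions.

```python
import numpy as np

def ml_position(signals, lrf, grid):
    """Maximum-likelihood position estimate by exhaustive grid search.

    For each candidate (x, y), lrf(x, y) gives the expected signal of
    every PMT; the Poisson log-likelihood of the measured signals
    (up to a constant) is sum(s * log(mu) - mu), and the best grid
    point is returned.
    """
    best, best_ll = None, -np.inf
    for x, y in grid:
        mu = lrf(x, y)
        ll = np.sum(signals * np.log(mu) - mu)
        if ll > best_ll:
            best, best_ll = (x, y), ll
    return best

# four hypothetical PMTs at the corners, with a Gaussian light response
pmts = np.array([[-15.0, -15.0], [-15.0, 15.0], [15.0, -15.0], [15.0, 15.0]])

def lrf(x, y):
    d2 = ((pmts - np.array([x, y])) ** 2).sum(axis=1)
    return 100.0 * np.exp(-d2 / (2.0 * 10.0 ** 2))

grid = [(float(gx), float(gy)) for gx in range(-5, 6) for gy in range(-5, 6)]
signals = lrf(2.0, -1.0)            # noiseless event at (2, -1) mm
pos = ml_position(signals, lrf, grid)
```

In the noiseless case the likelihood peaks exactly where the expected signals match the measured ones, so the search returns the true grid point; with real data the peak broadens according to photon statistics.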

  17. Reduction in camera-specific variability in [(123)I]FP-CIT SPECT outcome measures by image reconstruction optimized for multisite settings

    DEFF Research Database (Denmark)

    Buchert, Ralph; Kluge, Andreas; Tossici-Bolt, Livia

    2016-01-01

    reconstruction algorithm for its ability to reduce camera-specific intersubject variability in [(123)I]FP-CIT SPECT. The secondary aim was to evaluate binding in whole brain (excluding striatum) as a reference for quantitative analysis. METHODS: Of 73 healthy subjects from the European Normal Control Database...... of [(123)I]FP-CIT recruited at six centres, 70 aged between 20 and 82 years were included. SPECT images were reconstructed using the QSPECT software package which provides fully automated detection of the outer contour of the head, camera-specific correction for scatter and septal penetration...... by transmission-dependent convolution subtraction, iterative OSEM reconstruction including attenuation correction, and camera-specific "to kBq/ml" calibration. LINK and HERMES reconstruction were used for head-to-head comparison. The specific striatal [(123)I]FP-CIT binding ratio (SBR) was computed using...

  18. Light-field camera-based 3D volumetric particle image velocimetry with dense ray tracing reconstruction technique

    Science.gov (United States)

    Shi, Shengxian; Ding, Junfei; New, T. H.; Soria, Julio

    2017-07-01

    This paper presents a dense ray tracing reconstruction technique for a single light-field camera-based particle image velocimetry. The new approach pre-determines the location of a particle through inverse dense ray tracing and reconstructs the voxel value using multiplicative algebraic reconstruction technique (MART). Simulation studies were undertaken to identify the effects of iteration number, relaxation factor, particle density, voxel-pixel ratio and the effect of the velocity gradient on the performance of the proposed dense ray tracing-based MART method (DRT-MART). The results demonstrate that the DRT-MART method achieves higher reconstruction resolution at significantly better computational efficiency than the MART method (4-50 times faster). Both DRT-MART and MART approaches were applied to measure the velocity field of a low speed jet flow which revealed that for the same computational cost, the DRT-MART method accurately resolves the jet velocity field with improved precision, especially for the velocity component along the depth direction.
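The MART update at the core of both methods multiplies each voxel by a ray-wise correction factor raised to the voxel's weight along that ray. A minimal sketch on a toy system (not the DRT-MART code; the system matrix is invented) is:

```python
import numpy as np

def mart(W, p, n_iter=50, relax=1.0):
    """Multiplicative Algebraic Reconstruction Technique (MART).

    Voxel values v are updated so projections W @ v approach the
    measured pixel intensities p. W[i, j] is the weight of voxel j
    along ray i; relax is the relaxation factor.
    """
    v = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(len(p)):
            wi = W[i]
            proj = wi @ v
            if proj > 0:
                # multiplicative correction, confined to voxels on ray i
                v *= (p[i] / proj) ** (relax * wi)
    return v

# toy 2-voxel / 3-ray system with the exact solution v = [2, 3]
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
v_true = np.array([2.0, 3.0])
p = W @ v_true
v = mart(W, p)
```

The multiplicative form keeps voxel values non-negative by construction, which is why MART is popular for tomographic PIV intensity fields; the dense ray tracing step in the paper serves to shrink the set of voxels that need updating at all.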

  19. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  20. Filtered back-projection reconstruction for attenuation proton CT along most likely paths.

    Science.gov (United States)

    Quiñones, C T; Létang, J M; Rit, S

    2016-05-07

    This work investigates the attenuation of a proton beam to reconstruct the map of the linear attenuation coefficient of a material which is mainly caused by the inelastic interactions of protons with matter. Attenuation proton computed tomography (pCT) suffers from a poor spatial resolution due to multiple Coulomb scattering (MCS) of protons in matter, similarly to the conventional energy-loss pCT. We therefore adapted a recent filtered back-projection algorithm along the most likely path (MLP) of protons for energy-loss pCT (Rit et al 2013) to attenuation pCT assuming a pCT scanner that can track the position and the direction of protons before and after the scanned object. Monte Carlo simulations of pCT acquisitions of density and spatial resolution phantoms were performed to characterize the new algorithm using Geant4 (via Gate). Attenuation pCT assumes an energy-independent inelastic cross-section, and the impact of the energy dependence of the inelastic cross-section below 100 MeV showed a capping artifact when the residual energy was below 100 MeV behind the object. The statistical limitation has been determined analytically and it was found that the noise in attenuation pCT images is 411 times and 278 times higher than the noise in energy-loss pCT images for the same imaging dose at 200 MeV and 300 MeV, respectively. Comparison of the spatial resolution of attenuation pCT images with a conventional straight-line path binning showed that incorporating the MLP estimates during reconstruction improves the spatial resolution of attenuation pCT. Moreover, regardless of the significant noise in attenuation pCT images, the spatial resolution of attenuation pCT was better than that of conventional energy-loss pCT in some studied situations thanks to the interplay of MCS and attenuation known as the West-Sherwood effect.
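The projection data in attenuation pCT are line integrals of the linear attenuation coefficient, obtained from proton counts via the Beer-Lambert law. A minimal sketch (the value of μ and the thickness are illustrative, and the most-likely-path binning is omitted):

```python
import numpy as np

def attenuation_projection(n_in, n_out):
    """Line integral of the linear attenuation coefficient along a path,
    from the fraction of protons surviving inelastic interactions:
    integral(mu dl) = -ln(n_out / n_in)."""
    return -np.log(n_out / n_in)

# illustrative: 100 mm of material with mu = 0.01 /mm removes ~63% of protons
mu, length = 0.01, 100.0
n_in = 1.0e6
n_out = n_in * np.exp(-mu * length)
line_integral = attenuation_projection(n_in, n_out)   # equals mu * length = 1.0
```

Because each projection value comes from counting surviving protons rather than measuring their residual energy, the relative noise per path is set by binomial counting statistics, which is the origin of the large noise penalty relative to energy-loss pCT quoted in the abstract.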

  1. Data acquisition, event building and signal reconstruction for Compton camera imaging

    OpenAIRE

    Nurdan, Kivanc

    2006-01-01

The Compton camera is a new detector system for imaging radioactive tracers in the gamma-ray range which can, in principle, deliver particularly good spatial resolution and high sensitivity. This is of great importance for modern biomedical research and for diagnostic procedures based on it. To this end, a new concept of software-based coincidence measurement has been developed, which is based on the b...

  2. Data Acquisition and Image Reconstruction Systems from the miniPET Scanners to the CARDIOTOM Camera

    Science.gov (United States)

    Valastván, I.; Imrek, J.; Hegyesi, G.; Molnár, J.; Novák, D.; Bone, D.; Kerek, A.

    2007-11-01

    Nuclear imaging devices play an important role in medical diagnosis as well as drug research. The first and second generation data acquisition systems and the image reconstruction library developed provide a unified hardware and software platform for the miniPET-I, miniPET-II small animal PET scanners and for the CARDIOTOM™.

  3. Summer Student Project Report. Parallelization of the path reconstruction algorithm for the inner detector of the ATLAS experiment.

    CERN Document Server

    Maldonado Puente, Bryan Patricio

    2014-01-01

The inner detector of the ATLAS experiment has two types of silicon detectors used for tracking: the Pixel Detector and the SCT (semiconductor tracker). Once a proton-proton collision occurs, the resulting particles pass through these detectors and are recorded as hits on the detector surfaces. A medium- to high-energy particle passes through seven different surfaces of the two detectors, leaving seven hits, while lower-energy particles can leave many more hits as they circle through the detector. For a typical event under the expected operational conditions, the sensors record 30 000 hits on average. Only high-energy particles are of interest for physics analysis and are taken into account for path reconstruction; thus, a filtering process helps to discard the low-energy particles produced in the collision. The following report presents a solution for increasing the speed of the filtering process in the path reconstruction algorithm.

  4. On the accuracy in birds' paths reconstruction using movement direction recorders : some results from a computer based simulation

    OpenAIRE

    Bramanti, Mauro

    1992-01-01

In recent years there has been a growing interest in the use of movement direction recorders (MDR) to reconstruct the paths of free-ranging animals, particularly of birds (Bramanti, Dall'Antonia & Papi, 1988; Wilson & Wilson, 1988; Wilson, R. P., Wilson, M.-P., Link, Mempel & Adams, 1991). The MDR is a device able to measure and store the direction of the animal's attitude with respect to an external fixed reference, for example, the local earth magnetic field. The direction measures a...

  5. Pose and Shape Reconstruction of a Noncooperative Spacecraft Using Camera and Range Measurements

    Directory of Open Access Journals (Sweden)

    Renato Volpe

    2017-01-01

Recent interest in on-orbit proximity operations has pushed towards the development of autonomous GNC strategies. In this sense, optical navigation enables a wide variety of possibilities, as it can provide information not only about the kinematic state but also about the shape of the observed object. Various mission architectures have been either tested in space or studied on Earth. The present study deals with on-orbit relative pose and shape estimation using a monocular camera and a distance sensor. The goal is to develop a filter which estimates an observed satellite's relative position, velocity, attitude, and angular velocity, along with its shape, from the measurements obtained by a camera and a distance sensor mounted on board a chaser on a relative trajectory around the target. The filter's efficiency is demonstrated with a simulation on a virtual target object. The results of the simulation, even though relevant to a simplified scenario, show that the estimation process is successful and can be considered a promising strategy for a correct and safe docking maneuver.

  6. Real-time 3D Eye Performance Reconstruction for RGBD Cameras.

    Science.gov (United States)

    Wen, Quan; Xu, Feng; Yong, Jun-Hai

    2016-12-19

    This paper proposes a real-time method for 3D eye performance reconstruction using a single RGBD sensor. Combined with facial surface tracking, our method generates more pleasing facial performance with vivid eye motions. In our method, a novel scheme is proposed to estimate eyeball motions by minimizing the differences between a rendered eyeball and the recorded image. Our method considers and handles different appearances of human irises, lighting variations and highlights on images via the proposed eyeball model and the L0-based optimization. Robustness and real-time optimization are achieved through the novel 3D Taylor expansion-based linearization. Furthermore, we propose an online bidirectional regression method to handle occlusions and other tracking failures on either of the two eyes from the information of the opposite eye. Experiments demonstrate that our technique achieves robust and accurate eye performance reconstruction for different iris appearances, with various head/face/eye motions, and under different lighting conditions.

  7. Comparative analysis of iterative reconstruction algorithms with resolution recovery and new solid state cameras dedicated to myocardial perfusion imaging.

    Science.gov (United States)

    Brambilla, Marco; Lecchi, Michela; Matheoud, Roberta; Leva, Lucia; Lucignani, Giovanni; Marcassa, Claudio; Zoccarato, Orazio

    2017-03-23

New technologies are available in myocardial perfusion imaging. They include new software that recovers image resolution and limits image noise, multifocal collimators, and dedicated cardiac cameras in which solid-state detectors are used and all available detectors are constrained to imaging just the cardiac field of view. These innovations have resulted in shortened study times or reduced administered activity to patients, while preserving image quality. Many single-center and some multicenter studies have been published during the introduction of these innovations into clinical practice. Most of these studies were conducted in the framework of "agreement studies" between different methods of clinical measurement. They aimed to demonstrate that these new software/hardware solutions allow the acquisition of images with reduced acquisition time or administered activity, with results comparable (in image quality, image interpretation, perfusion defect quantification, left ventricular volumes, and ejection fraction) to standard-time or standard-dose SPECT acquired with a conventional gamma camera and reconstructed with the traditional FBP method, considered the gold standard. The purpose of this review is to provide the reader with a comprehensive understanding of the pros and cons of the different approaches, summarizing the achievements reached so far and the issues that need further investigation.

8. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

Building a fine 3D model from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas, whereas a 3D model should contain detailed descriptions of both the appearance and the internal structure of a building, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones: the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  9. A particle filter to reconstruct a free-surface flow from a depth camera

    Science.gov (United States)

    Combés, Benoit; Heitz, Dominique; Guibert, Anthony; Mémin, Etienne

    2015-10-01

We investigate the combined use of a Kinect depth sensor and a stochastic data assimilation (DA) method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only. This particle filter accounts for model and observation errors. The DA scheme is enhanced by using two observations in the correction step instead of the classical single observation. We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example, and a flow in a suddenly expanding flume as a more realistic flow. The robustness of the method to depth data errors and to initial and inflow conditions is considered. We illustrate the interest of using two observations instead of one, especially for unknown inflow boundary conditions. Then, the performance of the Kinect sensor in capturing temporal sequences of depth observations is investigated. Finally, the efficiency of the algorithm is assessed for a wave in a real rectangular flat-bottomed tank. It is shown that, for basic initial conditions, the particle filter rapidly and remarkably reconstructs the velocity and height of the free-surface flow based on noisy measurements of the elevation alone.

  10. A particle filter to reconstruct a free-surface flow from a depth camera

    Energy Technology Data Exchange (ETDEWEB)

    Combés, Benoit; Heitz, Dominique; Guibert, Anthony [IRSTEA, UR TERE, 17 avenue de Cucillé, F-35044 Rennes Cedex (France); Mémin, Etienne, E-mail: dominique.heitz@irstea.fr, E-mail: etienne.memin@inria.fr [INRIA, Fluminance group, Campus universitaire de Beaulieu, F-35042 Rennes Cedex (France)

    2015-10-15

    We investigate the combined use of a Kinect depth sensor and a stochastic data assimilation (DA) method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter to reconstruct the complete state of free-surface flows from a sequence of depth images alone. This particle filter accounts for both model and observation errors. The DA scheme is further enhanced by using two observations in the correction step instead of the single observation used classically. We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example and a flow in a suddenly expanding flume as a more realistic case. The robustness of the method to depth-data errors, as well as to initial and inflow conditions, is considered. We illustrate the benefit of using two observations rather than one in the correction step, especially for unknown inflow boundary conditions. The performance of the Kinect sensor in capturing temporal sequences of depth observations is then investigated. Finally, the efficiency of the algorithm is qualified for a wave in a real rectangular flat-bottomed tank. It is shown that, for basic initial conditions, the particle filter rapidly and remarkably reconstructs the velocity and height of the free-surface flow from noisy measurements of the elevation alone.

  11. Probabilistic models and numerical calculation of system matrix and sensitivity in list-mode MLEM 3D reconstruction of Compton camera images.

    Science.gov (United States)

    Maxim, Voichita; Lojacono, Xavier; Hilaire, Estelle; Krimmer, Jochen; Testa, Etienne; Dauvergne, Denis; Magnin, Isabelle; Prost, Rémy

    2016-01-01

    This paper addresses the problem of evaluating the system matrix and the sensitivity for iterative reconstruction in Compton camera imaging. The proposed models and numerical calculation strategies are compared through the influence they have on the reconstructed three-dimensional images. The study addresses four questions. First, it proposes an analytic model for the system matrix. Second, it suggests a method for its numerical validation with Monte Carlo simulated data. Third, it compares analytical models of the sensitivity factors with Monte Carlo simulated values. Finally, it shows how the system-matrix and sensitivity calculation strategies influence the quality of the reconstructed images.
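
    The role the system matrix and sensitivity play in reconstruction can be seen from the list-mode MLEM update itself, λ_j ← (λ_j / s_j) Σ_i t_ij / Σ_l t_il λ_l. A toy numerical sketch, not the paper's model: the event matrix below is fabricated and the sensitivity is assumed uniform, which real Compton geometries are not:

```python
import numpy as np

def lm_mlem(T, sensitivity, n_iter=50):
    """List-mode MLEM: T[i, j] is the system-matrix element linking
    recorded event i to image voxel j; sensitivity[j] is s_j."""
    lam = np.ones(T.shape[1])            # uniform initial image
    for _ in range(n_iter):
        forward = T @ lam                # expected intensity per event
        lam = lam / sensitivity * (T.T @ (1.0 / forward))
    return lam

# Three fabricated events, each pointing mostly at voxel 0 of 2
T = np.array([[0.90, 0.10],
              [0.80, 0.20],
              [0.85, 0.15]])
img = lm_mlem(T, sensitivity=np.ones(2))   # uniform sensitivity assumed
```

    With every event favouring voxel 0, the multiplicative updates concentrate the reconstructed intensity there; a poor sensitivity model would bias this ratio directly, which is exactly what the paper quantifies.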

  12. Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction.

    Science.gov (United States)

    Albiol, Francisco; Corbi, Alberto; Albiol, Alberto

    2016-08-01

    We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials present in the scene. After calibration, equivalent points of interest can be easily identified with the help of the epipolar geometry. The same procedure also allows the measurement of real anatomic lengths and angles and yields accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the necessary supporting frames), which can sometimes be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: an X-ray anode that shifts spatially around the patient/object, and a patient that moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom and a real brachytherapy session have been carried out. The results show that it is possible to identify common points with a proper level of accuracy and retrieve three-dimensional locations, lengths and shapes with millimetric precision. The presented approach is simple, compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and can represent a good, inexpensive alternative to other radiological modalities such as CT.
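
    Once the two views are calibrated, recovering a 3D location from a pair of corresponding points is standard linear (DLT) triangulation. A sketch under assumed camera matrices; the intrinsics, baseline and test point below are invented, not from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                     # dehomogenise

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical calibrated setup: reference camera and a 0.5 m-shifted camera
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])
X_true = np.array([0.2, -0.1, 3.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    With exact correspondences the recovered point matches the ground truth; with noisy clicks the same machinery returns the least-squares intersection of the two rays.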

  13. A practical iterative two-view metric reconstruction with uncalibrated cameras

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a practical iterative algorithm for two-view metric reconstruction without any prior knowledge of the scene or motion, in a nonsingular geometric configuration. The principal point is assumed to lie at the image centre with zero skew and unit aspect ratio, and the intrinsic parameters are fixed, so self-calibration reduces to focal-length calibration. Existing focal-length calibration methods directly solve a quadratic formed from the fundamental matrix, which is sensitive to noise. A quaternion-based linear iterative least-squares method is proposed in this paper; a one-dimensional search for the optimal focal length over a constrained region, instead of solving an optimization problem with inequality constraints, is applied to reduce computational complexity, after which a unique rotation matrix and translation vector are recovered. Experiments with simulated data and real images verify the algorithm.
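
    The one-dimensional focal-length search can be illustrated with a standard self-calibration cost (not necessarily the authors' exact formulation): for the correct focal length, the essential matrix K^T F K has two equal nonzero singular values, so their normalised difference is minimised at the true focal length. Sketch on a synthetic fundamental matrix with an invented camera configuration:

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def focal_cost(F, f, cx, cy):
    """For the correct f, E = K^T F K has two equal nonzero singular
    values, so this normalised difference vanishes."""
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
    s = np.linalg.svd(K.T @ F @ K, compute_uv=False)
    return (s[0] - s[1]) / (s[0] + s[1])

# Build a synthetic fundamental matrix from a known configuration
f_true, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f_true, 0, cx], [0, f_true, cy], [0, 0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # small rotation about y
t = np.array([1.0, 0.2, 0.1])
E = skew(t) @ R                                     # essential matrix
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)       # fundamental matrix

# One-dimensional search over a constrained focal-length interval
grid = np.arange(400.0, 1200.0, 1.0)
f_hat = grid[np.argmin([focal_cost(F, f, cx, cy) for f in grid])]
```

    The grid search replaces a constrained optimisation exactly as the abstract describes; on noisy fundamental matrices the cost curve flattens, which is why robustness matters.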

  14. Multi-Camera Reconstruction of Fine Scale High Speed Auroral Dynamics

    Science.gov (United States)

    Hirsch, M.; Semeter, J. L.; Zettergren, M. D.; Dahlgren, H.; Goenka, C.; Akbari, H.

    2014-12-01

    The fine spatial structure of dispersive aurora is known to have ground-observable scales of less than 100 meters. The lifetime of prompt emissions is much less than 1 millisecond, and high-speed cameras have observed auroral forms with millisecond-scale morphology. Satellite observations have corroborated these spatial and temporal findings. Satellite platforms give a valuable yet passing glance at the auroral region and the precipitation driving the aurora. To gain further insight into the fine structure of accelerated particles driven into the ionosphere, ground-based optical instruments staring at the same region of sky can capture processes evolving on time scales from milliseconds to many hours, with continuous sample rates of 100 Hz or more. Legacy auroral tomography systems have used baselines of hundreds of kilometers, capturing a "side view" of the field-aligned auroral structure. We show that short-baseline (less than 10 km), high-speed optical observations fill a measurement gap between legacy long-baseline optical observations and incoherent scatter radar. The ill-conditioned inverse problem typical of auroral tomography, accentuated by short-baseline optical ground stations, is tackled with contemporary data inversion algorithms. We leverage the disruptive electron-multiplying charge-coupled device (EMCCD) imaging technology and solve the inverse problem via eigenfunctions obtained from a first-principles 1-D electron penetration ionospheric model. We present the latest analysis of observed auroral events from the Poker Flat Research Range near Fairbanks, Alaska. We discuss the system-level design and performance verification measures needed to ensure consistent performance for nightly multi-terabyte data acquisition synchronized between stations to better than 1 millisecond.
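
    An ill-conditioned tomographic inversion of this kind is typically stabilised by regularisation. A generic Tikhonov sketch, not the paper's eigenfunction method: the projection operator here is random with nearly collinear columns, mimicking the near-parallel rays of a short-baseline geometry:

```python
import numpy as np

def tikhonov_solve(L, b, alpha):
    """Regularised least squares for an ill-conditioned operator L:
    minimise ||L x - b||^2 + alpha^2 ||x||^2 via an augmented system."""
    n = L.shape[1]
    A = np.vstack([L, alpha * np.eye(n)])
    rhs = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x

# Toy forward model with near-collinear columns (short-baseline-like)
rng = np.random.default_rng(1)
L = rng.normal(size=(40, 20))
L[:, 10:] = L[:, :10] + 0.01 * rng.normal(size=(40, 10))
x_true = rng.normal(size=20)
b = L @ x_true + 0.01 * rng.normal(size=40)   # noisy projections
x_hat = tikhonov_solve(L, b, alpha=0.1)
```

    Without the α-term the near-collinear columns would amplify the projection noise enormously; the penalty trades a small bias for that stability, which is the same trade the model-based eigenfunction basis makes in the paper.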

  15. A particle filter to reconstruct a free-surface flow from a depth camera

    CERN Document Server

    Combès, Benoit; Guibert, Anthony; Mémin, Etienne

    2016-01-01

    We investigate the combined use of a Kinect depth sensor and a stochastic data assimilation method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter to reconstruct the complete state of free-surface flows from a sequence of depth images alone. This particle filter accounts for both model and observation errors. The data assimilation scheme is further enhanced by using two observations in the correction step instead of the single observation used classically. We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example and a flow in a suddenly expanding flume as a more realistic case. The robustness of the method to depth-data errors, as well as to initial and inflow conditions, is considered. We illustrate the benefit of using two observations rather than one in the correction step, especially for unknown inflow boundary conditions. Then, the performance of the Kinect sensor in capturing temporal sequences of depth observations is investigated. Finally,...

  16. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. The simulation takes all realistic contributions to the spatial resolution into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, with a distance of 100 mm between the field of view and the first detector plane, corresponding to a realistic nuclear medicine environment.
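
    LM-OSEM accelerates list-mode MLEM by splitting the recorded events into ordered subsets and applying one multiplicative update per subset, so the image is refreshed several times per pass over the data. A toy sketch with fabricated events and an assumed uniform sensitivity (real Compton sensitivities are geometric, not uniform):

```python
import numpy as np

def lm_osem(T, sensitivity, n_subsets=3, n_iter=10):
    """List-mode OSEM: one multiplicative update per ordered subset
    of the recorded events; T[i, j] links event i to voxel j."""
    n_events = T.shape[0]
    lam = np.ones(T.shape[1])
    subsets = [np.arange(s, n_events, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            Ts = T[idx]
            lam = lam / (sensitivity / n_subsets) * (Ts.T @ (1.0 / (Ts @ lam)))
    return lam

rng = np.random.default_rng(2)
T = rng.uniform(0.01, 0.1, size=(90, 4))   # 90 fabricated events, 4 voxels
T[:, 1] += 0.5                             # every event favours voxel 1
img = lm_osem(T, sensitivity=np.ones(4))   # uniform sensitivity assumed
```

    With 3 subsets each full pass performs 3 updates instead of 1, which is the source of OSEM's speed-up over plain MLEM for the very large channel counts mentioned above.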

  17. Terminal area automatic navigation, guidance, and control research using the Microwave Landing System (MLS). Part 4: Transition path reconstruction along a straight line path containing a glideslope change waypoint

    Science.gov (United States)

    Pines, S.

    1982-01-01

    The algorithms needed to reconstruct the flight path through a glideslope-change waypoint along a straight line, in the event the aircraft receives a valid MLS update and transitions in the terminal approach area, are presented. Results of a simulation of the Langley B737 aircraft using these algorithms are given. The method is shown to reconstruct the necessary flight path during MLS transition with zero cross-track error, zero track-angle error, and zero altitude error, thus requiring minimal aircraft response.
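
    The error quantities that the reconstruction drives to zero are simple functions of the straight-line leg. A minimal flat-earth sketch; the waypoints, position and track below are invented for illustration:

```python
import numpy as np

def path_errors(wp1, wp2, pos, track_deg):
    """Cross-track distance and track-angle error of an aircraft at
    `pos` relative to the straight leg from waypoint wp1 to wp2.
    Positions are local flat-earth (east, north) coordinates in metres."""
    d = np.asarray(wp2, float) - np.asarray(wp1, float)
    course = np.degrees(np.arctan2(d[0], d[1])) % 360.0   # bearing of the leg
    r = np.asarray(pos, float) - np.asarray(wp1, float)
    # Signed perpendicular distance from the leg (positive to the right)
    cross_track = (r[0] * d[1] - r[1] * d[0]) / np.hypot(*d)
    # Track-angle error wrapped to (-180, 180] degrees
    track_error = (track_deg - course + 180.0) % 360.0 - 180.0
    return cross_track, track_error

# Northbound 1000 m leg; aircraft 50 m east of it, tracking due north
xtk, trk = path_errors((0, 0), (0, 1000), (50, 500), track_deg=0.0)
```

    Here the aircraft is 50 m right of the leg with zero track-angle error; a reconstructed path is correct precisely when both quantities (plus the altitude error against the glideslope) are zero.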

  18. Evaluation of unmanned aerial vehicle shape, flight path and camera type for waterfowl surveys: disturbance effects and species recognition.

    Science.gov (United States)

    McEvoy, John F; Hall, Graham P; McDonald, Paul G

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for ecological research has grown rapidly in recent years, but few studies have assessed the disturbance impacts of these tools on focal subjects, particularly when observing easily disturbed species such as waterfowl. In this study we assessed the level of disturbance that a range of UAV shapes and sizes had on free-living, non-breeding waterfowl surveyed in two sites in eastern Australia between March and May 2015, as well as the capability of airborne digital imaging systems to provide adequate resolution for unambiguous species identification of these taxa. We found little or no obvious disturbance effects on wild, mixed-species flocks of waterfowl when UAVs were flown at least 60 m above the water level (fixed-wing models) or 40 m above individuals (multirotor models). Disturbance in the form of swimming away from the UAV, through to leaving the water surface and flying away from the UAV, was visible at lower altitudes and when fixed-wing UAVs either approached subjects directly or rapidly changed altitude and/or direction near animals. Using tangential approach flight paths that did not cause disturbance, commercially available onboard optical equipment was able to capture images of sufficient quality to identify waterfowl and even much smaller taxa such as swallows. Our results show that with proper planning of take-off and landing sites, flight paths and careful UAV model selection, UAVs can provide an excellent tool for accurately surveying wild waterfowl populations and provide archival data with fewer logistical issues than traditional methods such as manned aerial surveys.

  19. Evaluation of unmanned aerial vehicle shape, flight path and camera type for waterfowl surveys: disturbance effects and species recognition

    Directory of Open Access Journals (Sweden)

    John F. McEvoy

    2016-03-01

    The use of unmanned aerial vehicles (UAVs) for ecological research has grown rapidly in recent years, but few studies have assessed the disturbance impacts of these tools on focal subjects, particularly when observing easily disturbed species such as waterfowl. In this study we assessed the level of disturbance that a range of UAV shapes and sizes had on free-living, non-breeding waterfowl surveyed in two sites in eastern Australia between March and May 2015, as well as the capability of airborne digital imaging systems to provide adequate resolution for unambiguous species identification of these taxa. We found little or no obvious disturbance effects on wild, mixed-species flocks of waterfowl when UAVs were flown at least 60 m above the water level (fixed-wing models) or 40 m above individuals (multirotor models). Disturbance in the form of swimming away from the UAV, through to leaving the water surface and flying away from the UAV, was visible at lower altitudes and when fixed-wing UAVs either approached subjects directly or rapidly changed altitude and/or direction near animals. Using tangential approach flight paths that did not cause disturbance, commercially available onboard optical equipment was able to capture images of sufficient quality to identify waterfowl and even much smaller taxa such as swallows. Our results show that with proper planning of take-off and landing sites, flight paths and careful UAV model selection, UAVs can provide an excellent tool for accurately surveying wild waterfowl populations and provide archival data with fewer logistical issues than traditional methods such as manned aerial surveys.

  20. 2-D reconstruction of atmospheric concentration peaks from horizontal long path DOAS tomographic measurements: parametrisation and geometry within a discrete approach

    Directory of Open Access Journals (Sweden)

    A. Hartl

    2005-11-01

    In this study, we theoretically investigate the reconstruction of 2-D cross sections through Gaussian concentration distributions, e.g. emission plumes, from long-path DOAS measurements along a limited number of light paths. This is done systematically with respect to the spatial extent of up to four peaks and for six different measurement setups with 2–4 telescopes and 36 light paths each. We distinguish between cases with and without additional background concentrations. Our approach parametrises the unknown distribution by local piecewise-constant or piecewise-linear functions on a regular grid and solves the resulting discrete linear system by a least-squares minimum-norm principle. We show that the linear parametrisation not only allows better representation of the distributions in terms of discretisation errors, but also better inversion of the system. We calculate area integrals of the concentration field (i.e. total emission rates for non-vanishing perpendicular wind speed components) and show that reconstruction errors and reconstructed area integrals within the peaks for narrow distributions depend crucially on the resolution of the reconstruction grid. A recently suggested grid-translation method for the piecewise-constant basis functions, which combines reconstructions from several shifted grids, is modified for the linear basis functions and shown to reduce overall reconstruction errors, but not the uncertainty of the concentration integrals. We suggest a procedure to subtract additional background concentration fields before inversion. We find large differences in reconstruction quality between the geometries and conclude that, in general, for a constant number of light paths, increasing the number of telescopes leads to better reconstruction results. Geometries that give better results for negligible measurement errors, and parts of a geometry that are better resolved, are also less sensitive to increasing measurement errors.
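
    The discrete forward model is linear: each light path contributes one row whose entries are the path lengths inside each grid cell, and the measured column densities are inverted by a least-squares minimum-norm solve. A sketch for the piecewise-constant basis; the grid size, concentration peak and the randomly placed paths are invented:

```python
import numpy as np

def path_lengths(grid_n, p0, p1, n_samples=2000):
    """Approximate the length of a light path inside each cell of a
    grid_n x grid_n unit-square grid by dense sampling along the path."""
    ts = (np.arange(n_samples) + 0.5) / n_samples
    pts = np.outer(1 - ts, p0) + np.outer(ts, p1)
    cells = np.minimum((pts * grid_n).astype(int), grid_n - 1)
    seg = np.linalg.norm(np.subtract(p1, p0)) / n_samples
    A_row = np.zeros(grid_n * grid_n)
    np.add.at(A_row, cells[:, 1] * grid_n + cells[:, 0], seg)
    return A_row

# Piecewise-constant concentrations on a 4x4 grid with one broad peak
grid_n = 4
conc = np.zeros((grid_n, grid_n))
conc[1, 1] = 2.0
conc[1, 2] = conc[2, 1] = 1.0
x_true = conc.ravel()

# Column densities along many light paths crossing the square
rng = np.random.default_rng(3)
rows = [path_lengths(grid_n, rng.uniform(0, 1, 2), rng.uniform(0, 1, 2))
        for _ in range(60)]
A = np.array(rows)
b = A @ x_true
# Least-squares minimum-norm inversion via the pseudoinverse
x_hat = np.linalg.pinv(A) @ b
```

    Cells crossed by no path are unconstrained, which is exactly the grid-resolution and geometry sensitivity the study quantifies; the minimum-norm solution simply assigns them the smallest consistent values.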

  1. 2-D reconstruction of atmospheric concentration peaks from horizontal long path DOAS tomographic measurements: parametrisation and geometry within a discrete approach

    Directory of Open Access Journals (Sweden)

    A. Hartl

    2006-01-01

    In this study, we theoretically investigate the reconstruction of 2-D cross sections through Gaussian concentration distributions, e.g. emission plumes, from long-path DOAS measurements along a limited number of light paths. This is done systematically with respect to the spatial extent of up to four peaks and for six different measurement setups with 2–4 telescopes and 36 light paths each. We distinguish between cases with and without additional background concentrations. Our approach parametrises the unknown distribution by local piecewise-constant or piecewise-linear functions on a regular grid and solves the resulting discrete linear system by a least-squares minimum-norm principle. We show that the linear parametrisation not only allows better representation of the distributions in terms of discretisation errors, but also better inversion of the system. We calculate area integrals of the concentration field (i.e. total emission rates for non-vanishing perpendicular wind speed components) and show that reconstruction errors and reconstructed area integrals within the peaks for narrow distributions depend crucially on the resolution of the reconstruction grid. A recently suggested grid-translation method for the piecewise-constant basis functions, which combines reconstructions from several shifted grids, is modified for the linear basis functions and shown to reduce overall reconstruction errors, but not the uncertainty of the concentration integrals. We suggest a procedure to subtract additional background concentration fields before inversion. We find large differences in reconstruction quality between the geometries and conclude that, in general, for a constant number of light paths, increasing the number of telescopes leads to better reconstruction results. Geometries that give better results for negligible measurement errors, and parts of a geometry that are better resolved, are also less sensitive to increasing measurement errors.

  2. A Proposal and Implement of Detection and Reconstruction Method of Contact Shape with Horizon View Camera for Calligraphy Education Support System

    Science.gov (United States)

    Tobitani, Kensuke; Yamamoto, Kazuhiko; Kato, Kunihito

    In this study, we are concerned with a calligraphy education support system. In current calligraphy education in Japan, teachers evaluate characters written by students and teach the correct writing process based on that evaluation. Professionals in calligraphy can estimate the writing process and the balance of a character, the key points in its evaluation, by estimating the movement of the contact shape (the contact face between paper and brush). But with this way of teaching, it takes students a long time to learn how to write characters correctly. If teachers and students could see the movement of the contact shape, calligraphy education would be more efficient. However, it is difficult to detect the contact shape in images captured by cameras at ordinary angles, because brush and ink are both black, so the contact shape is hidden under the brush. In this paper, we propose a new camera system consisting of four Horizon View Cameras (HVC), a special camera arrangement for detecting and reconstructing the contact shape; we experiment with this system and compare the movement of the contact shape for professionals and amateurs.

  3. A new method of reconstructing current paths in HTS tapes with defects

    Science.gov (United States)

    Podlivaev, Alexey; Rudnev, Igor

    2017-03-01

    We propose a new method for calculating current paths in high-temperature superconducting (HTS) tapes with various defects, including cracks, non-superconducting inclusions, and superconducting inclusions with lower local critical current density. The calculation is based on a critical-state model that takes into account the dependence of the critical current on the magnetic field. The method allows us to calculate the spatial distribution of currents flowing through a defective HTS tape, both for currents induced by an external magnetic field and for transport currents from an external source. For both cases, we simulated the current distributions in tapes with different types of defects and showed that the combined action of magnetic field and transport current leads to a more detailed identification of the boundaries and shape of the defects. The proposed method is adapted to calculations for modern superconductors in real superconducting devices and can be more useful than conventional magnetometric diagnostics, in which the tape is affected by the magnetic field only.

  4. Accelerating fDOT image reconstruction based on path-history fluorescence Monte Carlo model by using three-level parallel architecture.

    Science.gov (United States)

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Luo, Qingming

    2015-10-05

    The excessive time required by fluorescence diffuse optical tomography (fDOT) image reconstruction based on path-history fluorescence Monte Carlo model is its primary limiting factor. Herein, we present a method that accelerates fDOT image reconstruction. We employ three-level parallel architecture including multiple nodes in cluster, multiple cores in central processing unit (CPU), and multiple streaming multiprocessors in graphics processing unit (GPU). Different GPU memories are selectively used, the data-writing time is effectively eliminated, and the data transport per iteration is minimized. Simulation experiments demonstrated that this method can utilize general-purpose computing platforms to efficiently implement and accelerate fDOT image reconstruction, thus providing a practical means of using path-history-based fluorescence Monte Carlo model for fDOT imaging.

  5. Digital X-ray camera for quality evaluation and three-dimensional topographic reconstruction of single crystals of biological macromolecules

    Science.gov (United States)

    Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)

    2008-01-01

    The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.

  6. Gamma-ray detection and Compton camera image reconstruction with application to hadron therapy; Detection des rayons gamma et reconstruction d'images pour la camera Compton: Application a l'hadrontherapie

    Energy Technology Data Exchange (ETDEWEB)

    Frandes, M.

    2010-09-15

    A novel radiotherapy technique, hadron therapy, irradiates tumors with a beam of protons or carbon ions. Hadron therapy is an effective technique for cancer treatment, since it enables accurate dose deposition thanks to the Bragg peak at the end of the particles' range. Knowing the fall-off position of the dose with millimeter accuracy is critical, since hadron therapy has proved its efficiency for tumors that are deep-seated, close to vital organs, or radio-resistant. A major challenge for hadron therapy is quality assurance of dose delivery during irradiation. Current systems applying positron emission tomography (PET) technologies exploit gamma rays from the annihilation of positrons emitted during the beta decay of radioactive isotopes. However, the generated PET images provide only post-therapy information about the deposited dose, and they are not in direct coincidence with the Bragg peak. A solution is to image the complete spectrum of the emitted gamma rays, including nuclear gamma rays emitted in inelastic interactions of the hadrons with the generated nuclei. This emission is isotropic and has a spectrum ranging from 100 keV up to 20 MeV. However, the measurement of these energetic gamma rays from nuclear reactions exceeds the capability of all existing medical imaging systems. An advanced Compton scattering detection method with electron-tracking capability is proposed and modeled to reconstruct the high-energy gamma-ray events. This Compton detection technique was initially developed to observe gamma rays for astrophysical purposes. A device illustrating the method was designed and adapted to Hadron Therapy Imaging (HTI). It consists of two main sub-systems: a tracker, where Compton-recoiled electrons are measured, and a calorimeter, where the scattered gamma rays are absorbed via the photoelectric effect. Considering a hadron therapy scenario, the analysis of the generated data was performed, passing through the complete

  7. Research on governance paths for rural settlement reconstruction patterns

    Institute of Scientific and Technical Information of China (English)

    夏方舟; 严金明; 刘建生

    2014-01-01

    To supplement and enrich domestic theoretical research on models of governance paths for rural settlement reconstruction, to explore directions for choosing governance paths under different reconstruction patterns, and to improve the practical performance of rural settlement reconstruction governance, this study: 1) compared the distinguishing characteristics of different forms of rural settlement reconstruction and identified three typical current patterns, "demolition and relocation", "construction" and "preservation"; 2) after briefly reviewing two classical theories, constructed a new analytical framework for governance paths, the "spectrum-ladder governance path"; and 3) selected three representative cases in Liuzhou, Guangxi, one for each reconstruction pattern, to compare governance paths and their performance. The study found that each reconstruction pattern is relatively suited to a particular mixed governance path; the choice of path should consider how well subjective and objective conditions match, and should fully embody polycentric governance of public resources by government, market and the public. Under present conditions in China, only by organically uniting government promotion, market allocation and farmers' own plans, and by additionally strengthening policy support, guidance and supervision along the governance path, can farmers' position as the principal actors in reconstruction be ensured, the governance of rural settlement reconstruction be genuinely optimised, and large-scale, intensive land use be fully realised. At present, rural settlement reconstruction has already become an important way to balance urban and rural development, and an inevitable trend of modern rural development, shouldering the historical mission of ecological civilization construction. Promoting rural settlement reconstruction has become a social consensus. However, the performance of different patterns of rural settlement reconstruction is still subpar in practice. Therefore, the purpose of this article was to enrich the domestic study on the theory of the governance path of rural settlements reconstruction, and to explore choices and directions for the governance path on the basis of the recognition of different rural settlements reconstruction patterns, and to promote governance performance of rural settlement reconstruction

  8. Development of event reconstruction algorithm for full-body gamma-camera based on SiPMs

    Science.gov (United States)

    Philippov, D. E.; Belyaev, V. N.; Buzhan, P. Zh; Ilyin, A. L.; Popova, E. V.; Stifutkin, A. A.

    2016-02-01

    The gamma camera is a detector for nuclear medical imaging in which the photomultiplier tubes (PMTs) can be replaced by silicon photomultipliers (SiPMs). Common systems have an energy resolution of about 10% and an intrinsic spatial resolution of about 3 mm (FWHM). To achieve the required energy and spatial resolution, the classical Anger logic must be modified. For a standard monolithic thallium-activated sodium iodide scintillator (500×400×10 mm³) with SiPM readout, this can be done by identifying clusters. We show that this approach gives good results on simulated data.
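
    Classical Anger logic estimates the scintillation position as the signal-weighted centroid of the photodetector coordinates; the cluster-based modification restricts the centroid to the pixels above a threshold, suppressing the dark-count noise that SiPMs add. A toy sketch on a 3×3 SiPM patch (positions, signals and threshold are invented):

```python
import numpy as np

def anger_position(signals, x_pos, y_pos):
    """Classical Anger logic: the event position is the signal-weighted
    centroid of the photodetector coordinates."""
    w = signals / signals.sum()
    return float(w @ x_pos), float(w @ y_pos)

def cluster_position(signals, x_pos, y_pos, threshold):
    """Modified logic for SiPM readout: restrict the centroid to the
    cluster of pixels above threshold, suppressing the noise floor."""
    keep = signals >= threshold
    return anger_position(signals[keep], x_pos[keep], y_pos[keep])

# Flattened 3x3 SiPM patch (mm); a flash centred near (10, 20)
xs = np.array([5.0, 10, 15, 5, 10, 15, 5, 10, 15])
ys = np.array([15.0, 15, 15, 20, 20, 20, 25, 25, 25])
sig = np.array([1.0, 4, 1, 2, 8, 2, 1, 4, 1]) + 0.2   # uniform noise floor
x, y = cluster_position(sig, xs, ys, threshold=2.0)
```

    The plain centroid over all nine pixels would be pulled by the noise floor; clustering removes that bias, which is the modification the abstract refers to.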

  9. A new target reconstruction method considering atmospheric refraction

    Science.gov (United States)

    Zuo, Zhengrong; Yu, Lijuan

    2015-12-01

    In this paper, a new target reconstruction method that accounts for atmospheric refraction is presented to improve 3D reconstruction accuracy in long-range surveillance systems. The basic idea is to partition the atmosphere between the camera and the target radially into several thin layers, within each of which the density is regarded as uniform. The light propagation path is then traced in reverse from sensor to target by applying Snell's law at the interfaces between layers, and the average of the target positions traced from the different cameras is taken as the reconstructed position. Reconstruction experiments were carried out, and the results showed that the new method achieves much better reconstruction accuracy than the traditional stereoscopic reconstruction method.
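
    The layer-by-layer reverse tracing rests on Snell's law, n₁ sin θ₁ = n₂ sin θ₂, applied at each interface. A one-dimensional sketch of the angle bookkeeping; the refractive-index profile below is illustrative, not a real atmospheric model:

```python
import numpy as np

def trace_through_layers(theta0_deg, n_layers):
    """Trace a ray through stacked uniform layers by applying Snell's
    law at each interface; returns the angle from the layer normal
    inside every layer, in degrees."""
    angles = [theta0_deg]
    for n1, n2 in zip(n_layers[:-1], n_layers[1:]):
        s = n1 / n2 * np.sin(np.radians(angles[-1]))
        if abs(s) > 1.0:
            raise ValueError("total internal reflection")
        angles.append(np.degrees(np.arcsin(s)))
    return angles

# Refractive index decreasing with altitude (denser air near the ground)
n_layers = [1.000292, 1.000260, 1.000230, 1.000200]
angles = trace_through_layers(60.0, n_layers)
```

    As the index decreases, the ray bends slightly away from the normal; accumulating these tiny deflections over a long slant path is what shifts the apparent target position that the method corrects for.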

  10. Contribution to the tracking and 3D reconstruction of scenes composed of toric objects from image sequences acquired by a moving camera; Contribution au suivi et a la reconstruction de scenes constituees d'objets toriques a partir de sequences d'images acquises par une camera mobile

    Energy Technology Data Exchange (ETDEWEB)

    Naudet, S

    1997-01-31

    Three-dimensional perception of the environment is often necessary for a robot to perform its tasks correctly. One solution, based on dynamic vision, consists in analysing time-varying monocular images to estimate the spatial geometry of the scene. This thesis deals with the reconstruction of toric objects by dynamic vision. Though this object class is restrictive, it makes it possible to tackle the problem of reconstructing the bent pipes usually encountered in industrial environments. The proposed method is based on the evolution of the apparent contours of objects through the sequence. Using the expression of the torus limb boundaries, the object's three-dimensional parameters can be estimated recursively by minimising the error between the predicted projected contours and the image contours. This process, performed by a Kalman filter, requires neither precise knowledge of the camera displacement nor any matching of the two limbs belonging to the same object. To complete this work, a temporal object-tracking approach that handles occlusion situations is proposed. It consists in modelling and interpreting the apparent motion of objects in successive images. The motion interpretation, based on a simplified representation of the scene, recovers pertinent three-dimensional information that is used to manage occlusions. Experiments on synthetic and real images prove the validity of the tracking and reconstruction processes. (author) 127 refs.

  11. Reconstruction

    Directory of Open Access Journals (Sweden)

    Stefano Zurrida

    2011-01-01

    Breast cancer is the most common cancer in women. Primary treatment is surgery, with mastectomy the main treatment for most of the twentieth century. Over that time, however, the extent of the procedure varied, and less extensive mastectomies are employed today than in the past, as excessively mutilating procedures did not improve survival. Today, many women receive breast-conserving surgery, usually with radiotherapy to the residual breast, instead of mastectomy, as it has been shown to be as effective as mastectomy in early disease. The relatively new skin-sparing mastectomy, often with immediate breast reconstruction, improves aesthetic outcomes and is oncologically safe. Nipple-sparing mastectomy is newer and increasingly used, with better acceptance by patients, and again appears to be oncologically safe. Breast reconstruction is an important adjunct to mastectomy, as it has a positive psychological impact on the patient, contributing to improved quality of life.

  12. Research on the Path Reconstruction of Chinese Professional Sports

    Institute of Scientific and Technical Information of China (English)

    王振亚; 刘苏

    2015-01-01

    Professional sports represent a new stage in the marketization, industrialization and professionalization of competitive sports; their most prominent characteristic is that they seek their living space through the market. The path on which China's competitive sports developed does not guarantee that its professional sports will likewise develop well. Professional sports require both inheritance of and change to the established development path of competitive sports: the objective law of path dependence should be followed, but importance should also be attached to path reconstruction.

  13. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    suggested in to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are more models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Work suggested to generate models by randomized sampling as in RANSAC but to use soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike in, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to the 98.5 % contamination of mismatches with comparable effort as simple RANSAC does for the contamination by 84 %. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In we have introduced a technique for measuring the size of camera translation relatively to the observed scene which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric, e.g. the ground plane, constraint such as does for the detection of pedestrians. 
Using the camera trajectories, perspective cutouts with stabilized horizon are constructed and an
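The soft (kernel) voting idea described above — every sampled minimal model casts a vote for its parameter, and the densest region of the accumulator wins rather than the single model with maximal support — can be sketched on a toy 1D problem. The line-slope setup, contamination level, and bin layout below are illustrative stand-ins for the epipolar geometries and the 2D motion-direction accumulator of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the motion-direction estimate: points on a line
# y = 0.75*x contaminated with 70% gross outliers.
m_true = 0.75
x_in = rng.uniform(-1, 1, 60)
inliers = np.column_stack([x_in, m_true * x_in + rng.normal(0, 0.005, 60)])
outliers = rng.uniform(-1, 1, (140, 2))
pts = np.vstack([inliers, outliers])

# Soft (kernel) voting: each minimal 2-point sample yields a model whose
# parameter (slope) is voted into an accumulator; the answer is the densest
# bin, not the model with the largest support set.
bins = np.linspace(-3, 3, 61)              # slope accumulator, bin width 0.1
acc = np.zeros(len(bins) - 1)
for _ in range(5000):
    i, j = rng.choice(len(pts), 2, replace=False)
    dx = pts[j, 0] - pts[i, 0]
    if abs(dx) < 0.2:                      # skip near-degenerate samples
        continue
    m = (pts[j, 1] - pts[i, 1]) / dx       # minimal-sample model
    if -3 < m < 3:
        acc[np.searchsorted(bins, m) - 1] += 1
peak = np.argmax(acc)
m_est = 0.5 * (bins[peak] + bins[peak + 1])
```

Votes from outlier pairs spread across the whole accumulator, while inlier pairs pile up in the bin around the true slope, so the estimate survives contamination levels where a single RANSAC run would be unreliable.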

  14. How physics teachers approach innovation: An empirical study for reconstructing the appropriation path in the case of special relativity

    Directory of Open Access Journals (Sweden)

    Anna De Ambrosis

    2010-08-01

    Full Text Available This paper concerns an empirical study carried out with a group of high school physics teachers engaged in the Module on relativity of a Master course on the teaching of modern physics. The study is framed within the general research issue of how to promote innovation in school via teachers’ education and how to foster fruitful interactions between research and school practice via the construction of networks of researchers and teachers. In the paper, the problems related to innovation are addressed by focusing on the phase during which teachers analyze an innovative teaching proposal in the perspective of designing their own paths for the class work. The proposal analyzed in this study is Taylor and Wheeler’s approach for teaching special relativity. The paper aims to show that the roots of problems known in the research literature about teachers’ difficulties in coping with innovative proposals, and usually related to the implementation process, can be found and addressed already when teachers approach the proposal and try to appropriate it. The study is heuristic and has been carried out in order to trace the “appropriation path,” followed by the group of teachers, in terms of the main steps and factors triggering the progressive evolution of teachers’ attitudes and competences.

  15. Scene Reconstruction Based on View Clustering via Camera Auxiliary Information

    Institute of Scientific and Technical Information of China (English)

    郭复胜; 许华荣; 高伟; 胡占义

    2013-01-01

    One of the most efficient ways to tackle the scalability problem in large-scene reconstruction is to break the scene into a number of sub-problems, reconstruct each sub-problem independently, and finally merge the partial reconstructions. Image clustering without any camera or scene prior information is a difficult problem in 3D reconstruction: it is inherently time consuming, and generally no satisfactory results can be achieved. This paper explores how to use the auxiliary information of cameras, which is inaccurate and usually neglected but readily available, to substantially simplify image clustering in 3D scene reconstruction. First, the view overlap is computed; then a view-overlap-based clustering approach is proposed; finally, the clusters are independently reconstructed and merged. Experiments on several sets of real images show that, compared with the image-retrieval-based method and the Samantha method, scene reconstruction via camera auxiliary information performs satisfactorily in terms of efficiency, robustness and scalability.
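One possible reading of the overlap-then-cluster pipeline, sketched with invented auxiliary data (rough positions and viewing directions); the scoring function, thresholds and clustering rule are our assumptions, not the paper's:

```python
import numpy as np

def view_overlap(pos_i, dir_i, pos_j, dir_j, max_dist=10.0, max_angle_deg=45.0):
    """Coarse overlap score between two views from auxiliary camera data
    (rough GPS position + compass direction). A purely geometric proxy:
    nearby cameras looking in similar directions likely share content."""
    d = np.linalg.norm(np.asarray(pos_i, float) - np.asarray(pos_j, float))
    cos_a = np.clip(np.dot(dir_i, dir_j) /
                    (np.linalg.norm(dir_i) * np.linalg.norm(dir_j)), -1, 1)
    ang = np.degrees(np.arccos(cos_a))
    if d > max_dist or ang > max_angle_deg:
        return 0.0
    return (1 - d / max_dist) * (1 - ang / max_angle_deg)

def cluster_views(positions, directions, thresh=0.2):
    """Group views into clusters as connected components of the overlap graph
    (union-find); each cluster is then reconstructed independently."""
    n = len(positions)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if view_overlap(positions[i], directions[i],
                            positions[j], directions[j]) >= thresh:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two spatially separated groups of cameras, all facing the same way:
positions = [(0, 0), (1, 0), (2, 0), (50, 0), (51, 0)]
directions = [(0, 1)] * 5
clusters = cluster_views(positions, directions)
```

The two distant camera groups end up in separate clusters, which is exactly the partition a subsequent per-cluster reconstruction and merge step would consume.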

  16. On the Institutional Path of Reconstructing Governance Structure in Higher Vocational Colleges

    Institute of Scientific and Technical Information of China (English)

    魏寒柏; 彭晓兰; 张海峰

    2016-01-01

    Institutional problems cannot be avoided in any organizational change, and the reconstruction of governance structures in higher vocational colleges is no exception. From the perspective of new institutional economics, reconstructing the governance structure of higher vocational colleges is a reconstruction of institutions. This matters for clarifying the responsibilities and legitimate interest claims of each actor, restraining the self-serving behavior of individuals or small groups, and achieving the organizational goals of higher vocational colleges. The necessity of exploring an institutional path to the reconstruction of the governance structure is mainly reflected in three aspects: breaking the "unit system", changing "path dependence", and implementing "cross-border management" all require an institutional path. The system path, the mechanism path and the cultural path are three concrete institutional paths for reconstructing the governance structure of higher vocational colleges.

  17. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    Science.gov (United States)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that exploit typically multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology adaptively exploits, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  18. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; 等

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as a calibration reference. In this paper, we present a method of camera calibration in which the camera parameters are determined by a set of 3D lines. A set of constraints on the camera parameters is derived in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located more easily and accurately than points, the use of lines as a calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration, along with stereo reconstruction, are reported.
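The linear computation from line correspondences can be sketched as follows. Since every point on a 3D line must project onto the corresponding image line l, each line contributes two equations l^T P X = 0 (one per sampled point on the line), and with six or more lines the 3x4 matrix P follows from the null space of the stacked system. This is a generic DLT-style sketch consistent with the abstract, not the paper's exact formulation; the synthetic P below is a random matrix used only to exercise the algebra:

```python
import numpy as np

def calibrate_from_lines(lines_3d, lines_2d):
    """Linear camera calibration from 3D-line / image-line correspondences.

    Each 3D line is given by two homogeneous points (X1, X2); each image
    line by its 3-vector l. From l^T P X = 0 for every X on the line, each
    correspondence yields two linear equations in the 12 entries of P;
    with >= 6 lines, P is recovered (up to scale) via SVD.
    """
    rows = []
    for (X1, X2), l in zip(lines_3d, lines_2d):
        for X in (X1, X2):
            # l^T P X = sum_ij l_i P_ij X_j  ->  coefficient row = kron(l, X)
            rows.append(np.kron(l, X))
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)        # null vector, row-major P

# Synthetic check: recover a known (random) projection from 6 random lines.
rng = np.random.default_rng(0)
P_true = rng.normal(size=(3, 4))
lines_3d, lines_2d = [], []
for _ in range(6):
    X1 = np.append(rng.normal(size=3), 1.0)
    X2 = np.append(rng.normal(size=3), 1.0)
    x1, x2 = P_true @ X1, P_true @ X2
    lines_3d.append((X1, X2))
    lines_2d.append(np.cross(x1, x2))  # image line through both projections
P_est = calibrate_from_lines(lines_3d, lines_2d)
```

Six lines give 12 equations for the 11 degrees of freedom of P, matching the minimum line count stated in the abstract.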

  19. SU-E-J-141: Activity-Equivalent Path Length Approach for the 3D PET-Based Dose Reconstruction in Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Attili, A; Vignati, A; Giordanengo, S [Istituto Nazionale di Fisica Nucleare, Sez. Torino, Torino (Italy); Kraan, A [Istituto Nazionale di Fisica Nucleare, Sez. Pisa, Pisa (Italy); Universita degli Studi di Pisa, Pisa (Italy); Dalmasso, F [Istituto Nazionale di Fisica Nucleare, Sez. Torino, Torino (Italy); Universita degli Studi di Torino, Torino (Italy); Battistoni, G [Istituto Nazionale di Fisica Nucleare, Sez. Milano, Milano (Italy)

    2015-06-15

    Purpose: Ion beam therapy is sensitive to uncertainties from treatment planning and dose delivery. PET imaging of induced positron emitter distributions is a practical approach for in vivo, in situ verification of ion beam treatments. Treatment verification is usually done by comparing measured activity distributions with reference distributions, evaluated in nominal conditions. Although such comparisons give valuable information on treatment quality, a proper clinical evaluation of the treatment ultimately relies on the knowledge of the actual delivered dose. Analytical deconvolution methods relating activity and dose have been studied in this context, but were not clinically applied. In this work we present a feasibility study of an alternative approach for dose reconstruction from activity data, which is based on relating variations in accumulated activity to tissue density variations. Methods: First, reference distributions of dose and activity were calculated from the treatment plan and CT data. Then, the actual measured activity data were cumulatively matched with the reference activity distributions to obtain a set of activity-equivalent path lengths (AEPLs) along the rays of the pencil beams. Finally, these AEPLs were used to deform the original dose distribution, yielding the actual delivered dose. The method was tested by simulating a proton therapy treatment plan delivering 2 Gy on a homogeneous water phantom (the reference), which was compared with the same plan delivered on a phantom containing inhomogeneities. Activity and dose distributions were calculated by means of the FLUKA Monte Carlo toolkit. Results: The main features of the observed dose distribution in the inhomogeneous situation were reproduced using the AEPL approach. Variations in particle range were reproduced and the positions, where these deviations originated, were properly identified. Conclusions: For a simple inhomogeneous phantom the 3D dose reconstruction from PET
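A 1D toy version of the AEPL idea may clarify the cumulative-matching step; the Gaussian depth profiles and the 2 cm range shift below are invented stand-ins for the simulated activity and dose distributions:

```python
import numpy as np

z = np.linspace(0, 30, 301)                        # depth [cm], 0.1 cm grid

def gaussian_profile(z, peak, width):
    return np.exp(-0.5 * ((z - peak) / width) ** 2)

a_ref = gaussian_profile(z, peak=15.0, width=3.0)  # reference activity
d_ref = gaussian_profile(z, peak=16.0, width=2.0)  # reference dose
shift = 2.0                                        # density change shortens range
a_meas = gaussian_profile(z, peak=15.0 - shift, width=3.0)

# Cumulative matching: AEPL(z) is the reference depth with the same
# accumulated activity as the measured profile has at depth z.
C_ref = np.cumsum(a_ref)
C_meas = np.cumsum(a_meas)
aepl = np.interp(C_meas, C_ref, z)

# Deform the reference dose with the AEPL mapping -> estimated delivered dose
d_est = np.interp(aepl, z, d_ref)
```

Because the measured activity peak sits 2 cm shallower than the reference, the AEPL mapping is approximately z + 2 cm in the interior, and the estimated dose peak moves from 16 cm to 14 cm, i.e. the range shortening is recovered from activity alone.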

  20. A Compton camera application for the GAMOS GEANT4-based framework

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J., E-mail: ljh@ns.ph.liv.ac.uk [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Arce, P. [Department of Basic Research, CIEMAT, Madrid (Spain); Judson, D.S.; Boston, A.J.; Boston, H.C.; Cresswell, J.R.; Dormand, J.; Jones, M.; Nolan, P.J.; Sampson, J.A.; Scraggs, D.P.; Sweeney, A. [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Lazarus, I.; Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom)

    2012-04-11

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script-based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.
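The electronic collimation mentioned above rests on the Compton scattering formula: the energies deposited in the scatterer and absorber constrain the incident photon to a cone around the scatter axis. A minimal sketch of the angle computation (the function name and keV interface are our own, not part of GAMOS):

```python
import math

ME_C2_KEV = 511.0  # electron rest energy [keV]

def compton_scatter_angle(e_initial_kev, e_deposited_kev):
    """Scattering angle (degrees) from Compton kinematics:

        cos(theta) = 1 - me*c^2 * (1/E' - 1/E0),  E' = E0 - E_deposited.

    In a Compton camera the two measured interaction energies give this
    angle, which defines a cone of possible source directions; overlapping
    many cones reconstructs the source location."""
    e_scattered = e_initial_kev - e_deposited_kev
    cos_t = 1.0 - ME_C2_KEV * (1.0 / e_scattered - 1.0 / e_initial_kev)
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.degrees(math.acos(cos_t))
```

For a 662 keV photon (Cs-137) depositing about 373.6 keV in the scatterer, the formula yields a scattering angle of roughly 90 degrees.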

  1. Tower Camera

    Data.gov (United States)

    Oak Ridge National Laboratory — The tower camera in Barrow provides hourly images of ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for...

  2. MACS-Himalaya: A photogrammetric aerial oblique camera system designed for highly accurate 3D-reconstruction and monitoring in steep terrain and under extreme illumination conditions

    Science.gov (United States)

    Brauchle, Joerg; Berger, Ralf; Hein, Daniel; Bucher, Tilman

    2017-04-01

    The DLR Institute of Optical Sensor Systems has developed the MACS-Himalaya, a custom built Modular Aerial Camera System specifically designed for the extreme geometric (steep slopes) and radiometric (high contrast) conditions of high mountain areas. It has an overall field of view of 116° across-track consisting of a nadir and two oblique looking RGB camera heads and a fourth nadir looking near-infrared camera. This design provides the capability to fly along narrow valleys and simultaneously cover ground and steep valley flank topography with similar ground resolution. To compensate for extreme contrasts between fresh snow and dark shadows in high altitudes a High Dynamic Range (HDR) mode was implemented, which typically takes a sequence of 3 images with graded integration times, each covering 12 bit radiometric depth, resulting in a total dynamic range of 15-16 bit. This enables dense image matching and interpretation for sunlit snow and glaciers as well as for dark shaded rock faces in the same scene. Small and lightweight industrial grade camera heads are used and operated at a rate of 3.3 frames per second with 3-step HDR, which is sufficient to achieve a longitudinal overlap of approximately 90% per exposure time at 1,000 m above ground at a velocity of 180 km/h. Direct georeferencing and multitemporal monitoring without the need of ground control points is possible due to the use of a high end GPS/INS system, a stable calibrated inner geometry of the camera heads and a fully photogrammetric workflow at DLR. In 2014 a survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried in a wingpod by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at altitudes up to 9,200 m. 
Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced in regions and outcrops normally inaccessible to

  3. Cardiac cameras.

    Science.gov (United States)

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner that slowly transmitted, in a row-by-row fashion, cardiac count distributions onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and development of powerful computers to analyze, display, and quantify data has been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid state cameras, fundamentally different from the Anger camera, show promise to provide higher counting efficiency and resolution, leading to better image quality, more patient comfort and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, ie, hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  4. The image camera of the 17 m diameter air Cherenkov telescope MAGIC

    CERN Document Server

    Ostankov, A P

    2001-01-01

    The image camera of the 17 m diameter MAGIC telescope, an air Cherenkov telescope currently under construction to be installed at the Canary island La Palma, is described. The main goal of the experiment is to cover the unexplored energy window from approx 10 to approx 300 GeV in gamma-ray astrophysics. In its first phase with a classical PMT camera the MAGIC telescope is expected to reach an energy threshold of approx 30 GeV. The operational conditions, the special characteristics of the developed PMTs and their use with light concentrators, the fast signal transfer scheme using analog optical links, the trigger and DAQ organization as well as image reconstruction strategy are described. The different paths being explored towards future camera improvements, in particular the constraints in using silicon avalanche photodiodes and GaAsP hybrid photodetectors in air Cherenkov telescopes are discussed.

  5. Spectral Reconstruction Algorithm of Digital Camera Based on BP Neural Network and Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    王勇; 陈梅

    2014-01-01

    Reconstructing the spectral reflectance of an object surface from the RGB signals of a digital camera is one of the important topics in spectral color management. A new algorithm based on a back-propagation (BP) neural network and principal component analysis (PCA) is proposed to reconstruct the surface spectral reflectance of color atlases. The optimal structure of the BP neural network and the best number of principal components are studied in spectral reflectance reconstruction experiments on three color atlases, and the accuracy of the algorithm is verified. The experimental results show that the new algorithm, combining an appropriate BP neural network with PCA, can accurately reconstruct the surface spectral reflectance of the same kind of color atlas.
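The PCA half of the pipeline can be sketched as follows, substituting a plain least-squares solve for the paper's BP neural network and using invented reflectance data and camera sensitivities:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)                 # wavelengths [nm], 31 bands

# Invented training reflectances lying in a smooth 3-dimensional subspace
basis = np.stack([np.ones_like(wl),
                  np.sin(np.pi * (wl - 400) / 300),
                  np.cos(np.pi * (wl - 400) / 300)])
train = 0.5 + rng.uniform(-0.2, 0.2, (200, 3)) @ basis

# Invented RGB camera sensitivities (3 x 31 Gaussian channels)
def gauss(mu, s):
    return np.exp(-0.5 * ((wl - mu) / s) ** 2)
M = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)])

# PCA of the training reflectances
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:3].T                                   # 31 x 3 principal-component basis

def reconstruct(rgb):
    """Recover a reflectance from camera RGB: rgb = M @ (mean + B @ w),
    solved for the PCA weights w by least squares. (The paper trains a
    BP network for this RGB -> weights mapping instead.)"""
    w, *_ = np.linalg.lstsq(M @ B, rgb - M @ mean, rcond=None)
    return mean + B @ w

r_true = train[0]
r_rec = reconstruct(M @ r_true)                # 31-band spectrum from 3 values
```

Because the toy spectra really do live in a 3D subspace, three RGB values suffice to recover the full 31-band spectrum; real reflectances only approximately satisfy this, which is where the trained network earns its keep.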

  6. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  7. Holographic motion picture camera with Doppler shift compensation

    Science.gov (United States)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera is reported for producing three dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster moving objects.

  8. Reduction in camera-specific variability in [{sup 123}I]FP-CIT SPECT outcome measures by image reconstruction optimized for multisite settings: impact on age-dependence of the specific binding ratio in the ENC-DAT database of healthy controls

    Energy Technology Data Exchange (ETDEWEB)

    Buchert, Ralph; Lange, Catharina [Charite - Universitaetsmedizin Berlin, Department of Nuclear Medicine, Berlin (Germany); Kluge, Andreas; Bronzel, Marcus [ABX-CRO advanced pharmaceutical services Forschungsgesellschaft m.b.H., Dresden (Germany); Tossici-Bolt, Livia [University Hospital Southampton NHS Foundation Trust, Department of Medical Physics, Southampton (United Kingdom); Dickson, John [University College London Hospital NHS Foundation Trust, Institute of Nuclear Medicine, London (United Kingdom); Asenbaum, Susanne [Medical University of Vienna, Department of Nuclear Medicine, Vienna (Austria); Booij, Jan [University of Amsterdam, Department of Nuclear Medicine, Academic Medical Centre, Amsterdam (Netherlands); Kapucu, L. Oezlem Atay [Gazi University, Department of Nuclear Medicine, Faculty of Medicine, Ankara (Turkey); Svarer, Claus [Rigshospitalet and University of Copenhagen, Neurobiology Research Unit, Copenhagen (Denmark); Koulibaly, Pierre-Malick [University of Nice-Sophia Antipolis, Nuclear Medicine Department, Centre Antoine Lacassagne, Nice (France); Nobili, Flavio [University of Genoa, Department of Neuroscience (DINOGMI), Clinical Neurology Unit, Genoa (Italy); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Sabri, Osama [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Sera, Terez [University of Szeged, Department of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Tatsch, Klaus [Municipal Hospital of Karlsruhe Inc, Department of Nuclear Medicine, Karlsruhe (Germany); Borght, Thierry vander [CHU Namur, IREC, Nuclear Medicine Division, Universite catholique de Louvain, Yvoir (Belgium); Laere, Koen van [University Hospital and K.U. 
Leuven, Nuclear Medicine, Leuven (Belgium); Varrone, Andrea [Karolinska University Hospital, Department of Clinical Neuroscience, Centre for Psychiatry Research, Karolinska Institutet, Stockholm (Sweden); Iida, Hidehiro [National Cerebral and Cardiovascular Center - Research Institute, Osaka (Japan)

    2016-07-15

    Quantitative estimates of dopamine transporter availability, determined with [{sup 123}I]FP-CIT SPECT, depend on the SPECT equipment, including both hardware and (reconstruction) software, which limits their use in multicentre research and clinical routine. This study tested a dedicated reconstruction algorithm for its ability to reduce camera-specific intersubject variability in [{sup 123}I]FP-CIT SPECT. The secondary aim was to evaluate binding in whole brain (excluding striatum) as a reference for quantitative analysis. Of 73 healthy subjects from the European Normal Control Database of [{sup 123}I]FP-CIT recruited at six centres, 70 aged between 20 and 82 years were included. SPECT images were reconstructed using the QSPECT software package which provides fully automated detection of the outer contour of the head, camera-specific correction for scatter and septal penetration by transmission-dependent convolution subtraction, iterative OSEM reconstruction including attenuation correction, and camera-specific ''to kBq/ml'' calibration. LINK and HERMES reconstruction were used for head-to-head comparison. The specific striatal [{sup 123}I]FP-CIT binding ratio (SBR) was computed using the Southampton method with binding in the whole brain, occipital cortex or cerebellum as the reference. The correlation between SBR and age was used as the primary quality measure. The fraction of SBR variability explained by age was highest (1) with QSPECT, independently of the reference region, and (2) with whole brain as the reference, independently of the reconstruction algorithm. QSPECT reconstruction appears to be useful for reduction of camera-specific intersubject variability of [{sup 123}I]FP-CIT SPECT in multisite and single-site multicamera settings. Whole brain excluding striatal binding as the reference provides more stable quantitative estimates than occipital or cerebellar binding. (orig.)

  9. HIGH SPEED KERR CELL FRAMING CAMERA

    Science.gov (United States)

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10/sup -8/ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  10. Path Dependency

    OpenAIRE

    Mark Setterfield

    2015-01-01

    Path dependency is defined, and three different specific concepts of path dependency – cumulative causation, lock in, and hysteresis – are analyzed. The relationships between path dependency and equilibrium, and path dependency and fundamental uncertainty are also discussed. Finally, a typology of dynamical systems is developed to clarify these relationships.

  11. Novel double path shearing interferometer in corneal topography measurements

    Science.gov (United States)

    Licznerski, Tomasz J.; Jaronski, Jaroslaw; Kosz, Dariusz

    2005-09-01

    The paper presents an approach for measuring corneal topography by use of a patent-pending double path shearing interferometer (DPSI). Laser light reflected from the surface of the cornea is divided and directed to the inputs of two interferometers. The interferometers use lateral shearing of wavefronts in two orthogonal directions. A tilt of one of the mirrors in each interferometric setup, perpendicular to the lateral shear, introduces parallel carrier-frequency fringes at the output of each interferometer. The laser light in the two paths of the DPSI has orthogonal linear polarizations. Two images of the fringe patterns are recorded by a high-resolution digital camera. The obtained fringe patterns are used for phase-difference reconstruction, and the phase of the wavefront is reconstructed by algorithms for a large grid based on discrete integration. The in vivo method can also be used for tear film stability measurement and for testing artificial tears and contact lenses.

  12. Smooth Reconstruction

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Eighty percent of the reconstruction projects in Sichuan Province will be completed by the end of the year. Despite ruins still seen everywhere in the earthquake-hit areas of Sichuan Province, new buildings have been completed, and many people have moved into new houses. Through the cameras of the media, the faces, once painful and melancholy after last year's earthquake, now look confident and firm, gratifying people all over the

  13. Calibration method for a central catadioptric-perspective camera system.

    Science.gov (United States)

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    A central catadioptric-perspective camera system is widely used nowadays. A critical problem is that current calibration methods cannot determine the extrinsic parameters between the central catadioptric camera and a perspective camera effectively. We present a novel calibration method for a central catadioptric-perspective camera system, in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera and faced toward the calibration pattern. The transformation between the virtual camera and the calibration pattern could be computed first and the extrinsic parameters between the central catadioptric camera and the calibration pattern could be obtained. Three-dimensional reconstruction results of the calibration pattern show a high accuracy and validate the feasibility of our method.

  14. SPEIR: A Ge Compton Camera

    Energy Technology Data Exchange (ETDEWEB)

    Mihailescu, L; Vetter, K M; Burks, M T; Hull, E L; Craig, W W

    2004-02-11

    The SPEctroscopic Imager for {gamma}-Rays (SPEIR) is a new concept of a compact {gamma}-ray imaging system of high efficiency and spectroscopic resolution with a 4-{pi} field-of-view. The system behind this concept employs double-sided segmented planar Ge detectors accompanied by the use of list-mode photon reconstruction methods to create a sensitive, compact Compton scatter camera.

  15. High-resolution light field reconstruction using a hybrid imaging system.

    Science.gov (United States)

    Wang, Xiang; Li, Lin; Hou, GuangQi

    2016-04-01

    Recently, light field cameras have drawn much attention for their innovative performance in photographic and scientific applications. However, narrow baselines and constrained spatial resolution of current light field cameras impose restrictions on their usability. Therefore, we design a hybrid imaging system containing a light field camera and a high-resolution digital single lens reflex camera, and these two kinds of cameras share the same optical path with a beam splitter so as to achieve the reconstruction of high-resolution light fields. The high-resolution 4D light fields are reconstructed with a phase-based perspective variation strategy. First, we apply complex steerable pyramid decomposition on the high-resolution image from the digital single lens reflex camera. Then, we perform phase-based perspective-shift processing with the disparity value, which is extracted from the upsampled light field depth map, to create high-resolution synthetic light field images. High-resolution digital refocused images and high-resolution depth maps can be generated in this way. Furthermore, controlling the magnitude of the perspective shift enables us to change the depth-of-field rendering in the digital refocused images. We show several experimental results to demonstrate the effectiveness of our approach.

  16. Phase-Space Reconstruction: a Path Towards the Next Generation of Nonlinear Differential Equation Based Models and Its Implications Towards Non-Uniform Sampling Theory

    Energy Technology Data Exchange (ETDEWEB)

    Charles R. Tolle; Mark Pengitore

    2009-08-01

    This paper explores the overlaps between the Controls community's work on System Identification (SysID) and the Physics, Mathematics, Chaos, and Complexity communities' work on phase-space reconstruction via time-delay embedding. There are numerous overlaps between the goals of the two communities. Nevertheless, the Controls community can gain new insight, as well as some very powerful new tools for SysID, from the latest developments within the Physics, Mathematics, Chaos, and Complexity communities. These insights are gained via the work on phase-space reconstruction of nonlinear dynamics. New methods for discovering nonlinear differential equations that evolve from embedding operations can shed new light on hybrid-systems theory, the Nyquist-Shannon sampling theorems, and network-based control theory. This paper strives to guide the Controls community towards a closer inspection of the tools and additional insights being developed within the Physics, Mathematics, Chaos, and Complexity communities for the discovery of system dynamics, the first step in control-system development. The paper introduces the concepts of phase-space reconstruction via time-delay embedding (made famous by the theorems of Whitney, Takens, and Sauer), integrate-and-fire embedding, and nonlinear differential equation discovery based on Perona's method.
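    Time-delay embedding, as introduced above, maps a scalar time series x(t) to vectors (x(t), x(t+τ), …, x(t+(d−1)τ)) whose trajectory reconstructs the phase-space geometry. A minimal NumPy sketch, with embedding dimension and delay chosen arbitrarily for illustration:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase-space trajectory from a scalar time series x
    using time-delay embedding with dimension `dim` and delay `tau`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Embed a sampled sine wave in 2-D: the trajectory traces out an ellipse,
# recovering the cyclic topology of the underlying harmonic oscillator.
t = np.linspace(0, 8 * np.pi, 800)
traj = delay_embed(np.sin(t), dim=2, tau=25)
print(traj.shape)  # (775, 2)
```

    In practice the delay and dimension are chosen with criteria such as mutual information and false nearest neighbours rather than by hand.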

  17. Moving Human Path Tracking Based on Video Surveillance in 3D Indoor Scenarios

    Science.gov (United States)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications: analysing human behaviour, discovering and avoiding crowded areas, monitoring human traffic, and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrate video surveillance data with a 3D indoor model of the building and develop a single-person path-tracking method. We process the surveillance videos to detect individual moving traces; then we match the depth information of the 3D scenes to the constructed 3D indoor network model and locate the human traces in 3D indoor space. Finally, the traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. The reported experiments with a single person verify the effectiveness and robustness of the method.
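    Connecting per-camera traces through the connectivity of an indoor network model can be sketched as a shortest-path search over the model's graph. The node names and topology below are hypothetical, not taken from the paper:

```python
from collections import deque

# Hypothetical 3-D indoor network: nodes are spaces/camera zones, edges are
# connectivity (doors, corridors) taken from the building model.
network = {
    "cam1_exit": ["corridor_A"],
    "corridor_A": ["cam1_exit", "stairs", "corridor_B"],
    "stairs": ["corridor_A", "corridor_B"],
    "corridor_B": ["corridor_A", "stairs", "cam2_entry"],
    "cam2_entry": ["corridor_B"],
}

def connect_traces(graph, end_of_trace, start_of_next):
    """Bridge the gap between two per-camera traces with the shortest
    path through the indoor connectivity network (breadth-first search)."""
    queue, seen = deque([[end_of_trace]]), {end_of_trace}
    while queue:
        path = queue.popleft()
        if path[-1] == start_of_next:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(connect_traces(network, "cam1_exit", "cam2_entry"))
# ['cam1_exit', 'corridor_A', 'corridor_B', 'cam2_entry']
```

    The real system would weight edges by walking distance and time plausibility rather than hop count.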

  18. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams that detect moving objects from two different viewing angles, with the video frames directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming occlusions: an object in partial or full occlusion in one camera may be fully visible in the other. Object registration is achieved by locating common features of the moving object across simultaneous frames, and perspective differences are adjusted. Combining information from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting the anomalies caused by the objects' movement across frames in each stream and in the combined video information. The path of each object is determined heuristically. Detection accuracy depends on the speed of the object as well as variations in its direction of motion; fast cameras increase accuracy but limit the allowable speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by adding cameras such that the scenes from at least two nearby cameras overlap; an object can then be tracked continuously over long distances or across multiple cameras, applicable, for example, in wireless sensor networks for surveillance or navigation.
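    Identifying moving objects by the anomalies they cause across time-correlated frames can be sketched with simple frame differencing; the threshold and the synthetic frames below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def detect_motion(prev_frame, frame, thresh=0.2):
    """Return the centroid (row, col) of pixels that changed between two
    consecutive frames, or None if nothing moved."""
    moved = np.abs(frame.astype(float) - prev_frame.astype(float)) > thresh
    if not moved.any():
        return None
    rows, cols = np.nonzero(moved)
    return rows.mean(), cols.mean()

# Two synthetic, time-synchronized frames: a bright blob moves 3 pixels right.
f0 = np.zeros((32, 32)); f0[10:13, 5:8] = 1.0
f1 = np.zeros((32, 32)); f1[10:13, 8:11] = 1.0
print(detect_motion(f0, f1))  # (11.0, 7.5): centroid of the changed region
```

    Running the same detector on the second camera's stream and registering the two centroids over common features is what lets the combined system ride out single-view occlusions.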

  19. Analyzing storage media of digital camera

    OpenAIRE

    Chow, KP; Tse, KWH; Law, FYW; Ieong, RSC; Kwan, MYK; Tse, H.; Lai, PKY

    2009-01-01

    Digital photography has become popular in recent years, and photographs have become common tools for people to record the small details of their daily lives. By analyzing the storage media of a digital camera, crime investigators may extract a lot of useful information with which to reconstruct events. In this work, we discuss several approaches to analyzing such storage media, using a hypothetical crime case as a case study to demonstrate the concepts. © 2009 IEEE.

  20. Camera calibration for multidirectional flame chemiluminescence tomography

    Science.gov (United States)

    Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun

    2017-04-01

    Flame chemiluminescence tomography (FCT), which combines computerized tomography theory with multidirectional chemiluminescence emission measurements, can provide instantaneous three-dimensional (3-D) diagnostics of flames with high spatial and temporal resolution. One critical step of FCT is recording the projections with multiple cameras from different view angles. High-accuracy reconstruction requires that the extrinsic parameters (positions and orientations) and intrinsic parameters (especially the image distances) of the cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method is presented for FCT, and a 3-D calibration pattern is designed to solve for the parameters. The precision of the method was evaluated by reprojecting feature points onto the cameras with the calibration results: the maximum root-mean-square error is 1.42 pixels for the feature points' positions and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results show that the FCT system provides reasonable reconstruction accuracy using the camera calibration results.
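    Evaluating calibration precision by reprojecting feature points, as described above, amounts to projecting the known 3-D points with the calibrated parameters and comparing against the observed pixel coordinates. A minimal pinhole-model sketch with made-up numbers (not the paper's 3-D calibration pattern or modified model):

```python
import numpy as np

def reprojection_rmse(K, R, t, points_3d, observed_px):
    """Root-mean-square reprojection error (pixels) of calibrated camera
    parameters against observed feature-point locations."""
    cam = R @ points_3d.T + t[:, None]      # world -> camera frame
    proj = (K @ cam)[:2] / (K @ cam)[2]     # pinhole projection + dehomogenize
    return np.sqrt(np.mean(np.sum((proj.T - observed_px) ** 2, axis=1)))

# Hypothetical numbers: identity pose, 1000 px focal length, offset principal point.
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.1, 0.0, 2.0], [0.0, 0.1, 2.5]])
obs = np.array([[370.0, 240.0], [320.0, 280.0]])
print(reprojection_rmse(K, R, t, pts, obs))  # 0.0 for these exact observations
```

    With real detections the residual is nonzero, and a figure like the paper's 1.42-pixel maximum is read off this kind of statistic.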

  1. Culminating paths

    Directory of Open Access Journals (Sweden)

    Mireille Bousquet-Mélou

    2008-04-01

    Full Text Available Let a and b be two positive integers. A culminating path is a path in ℤ² that starts from (0,0), consists of steps (1,a) and (1,-b), stays above the x-axis, and ends at the highest ordinate it ever reaches. These paths were first encountered in bioinformatics, in the analysis of similarity search algorithms. They are also related to certain models of Lorentzian gravity in theoretical physics. We first show that the language on a two-letter alphabet that naturally encodes culminating paths is not context-free. Then, we focus on the enumeration of culminating paths. A step-by-step approach, combined with the kernel method, provides a closed-form expression for the generating function of culminating paths ending at a (generic) height k. In the case a = b, we derive from this expression the asymptotic behaviour of the number of culminating paths of length n. When a > b, we obtain the asymptotic behaviour by a simpler argument. When a < b, we only determine the exponential growth of the number of culminating paths. Finally, we study the uniform random generation of culminating paths via various methods. The rejection approach, coupled with a symmetry argument, gives an algorithm that is linear when a ≥ b, with no precomputation stage nor non-linear storage required. The choice of the best algorithm is not as clear when a < b. An elementary recursive approach yields a linear algorithm after a precomputation stage involving O(n³) arithmetic operations, but we also present some alternatives that may be more efficient in practice.
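    The definition can be checked by brute force for small lengths. The sketch below makes two assumptions that the paper's exact conventions may not share: "stays above the x-axis" is read as heights ≥ 0, and the final height is allowed to tie an earlier maximum:

```python
from itertools import product

def count_culminating(n, a, b):
    """Count length-n paths with steps (1, a) and (1, -b) that start at
    height 0, never drop below the x-axis (heights >= 0 assumed here),
    and end at the highest ordinate they ever reach (ties allowed)."""
    count = 0
    for steps in product((a, -b), repeat=n):
        h, heights = 0, []
        for s in steps:
            h += s
            heights.append(h)
        if min(heights) >= 0 and heights[-1] == max(heights):
            count += 1
    return count

print([count_culminating(n, 1, 1) for n in range(1, 5)])  # [1, 1, 2, 3]
```

    Exhaustive enumeration is exponential in n; the paper's kernel-method generating function and recursive random-generation algorithms are what make large n tractable.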

  2. Reconstruction and Comparison of P-T Paths in the Andrelândia Nappe System, Southern Brasília Fold Belt, MG

    Directory of Open Access Journals (Sweden)

    Rafael Gonçalves da Motta

    2010-10-01

    Full Text Available The Andrelândia Nappe System consists of three main nappes, from bottom to top: the lower Andrelândia nappe, the middle Liberdade nappe and the upper Três Pontas-Varginha nappe with its associated klippen (Pouso Alto, Aiuruoca, Carvalhos and Serra da Natureza). In the Andrelândia Nappe System, metamorphism increases from north to south and east to west, with the highest temperatures and pressures recorded in rocks of the Três Pontas-Varginha nappe and associated klippen. Samples of pelitic and mafic rocks were selected from the three nappes to determine the conditions of metamorphism using the program Thermocalc. In this study, peak metamorphic conditions were calculated for the following samples: one sample of the Andrelândia nappe (688 ± 35 °C and 5.63 ± 0.9 kbar), one sample of the Liberdade nappe (648 ± 23 °C and 7.41 ± 1 kbar), and three samples of the Carvalhos Klippe (845 ± 53 °C and 15.7 ± 5.2 kbar and 847 ± 45 °C and 13.6 ± 5.8 kbar for two samples of pelitic granulite, respectively, and 854 ± 71 °C and 15 ± 1.4 kbar for one sample of mafic granulite). P-T paths inferred on the basis of the observed textures are clockwise and typical of collisional belts.

  3. Path Sensitization

    Institute of Scientific and Technical Information of China (English)

    赵著行; 闵应骅; et al.

    1997-01-01

    For different delay models, the concept of sensitization can be very different. Traditional concepts of sensitization cannot precisely describe circuit behavior when the input vectors change very fast. Using a Boolean process approach, this paper presents a new definition of sensitization for arbitrary input waveforms. With this new concept it is found that if the inputs of a combinational circuit can change at any time, and each gate's delay varies within an interval (bounded gate delay model), then every path, which is not necessarily a single topological path, is sensitizable. The experimental results show that all paths that are non-sensitizable under the traditional concepts can actually propagate transitions for some input waveforms. However, the specified time between input transitions (STBIT) and the minimum permissible pulse width (ε) are two major factors that make some paths non-sensitizable.

  4. Depth Estimation Using a Sliding Camera.

    Science.gov (United States)

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. Conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved with a large camera array, which is expensive and inconvenient in many applications. Another popular choice is structure-from-motion with arbitrarily placed camera(s); however, due to the many degrees of freedom, its computational cost is heavy and its accuracy is limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works satisfactorily with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which takes advantage of the continuously pose-changing imaging and also greatly reduces computation time. The proposed algorithm can easily be extended to handle less constrained situations, such as a camera mounted on a moving robot or vehicle. Experimental results on both synthetic and real-world data illustrate the effectiveness of the proposed algorithm.
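    The benefit of choosing frames adaptively along the slide can be seen from the rectified-stereo depth relation Z = fB/d: for the same measurable disparity, a longer effective baseline reaches farther depths. A toy sketch with illustrative numbers (not from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d.
    A sliding camera can pick the baseline B per pixel/scene, trading
    depth range against accuracy."""
    return focal_px * baseline_m / disparity_px

# The same 8-pixel disparity measured with two different slide baselines.
print(depth_from_disparity(800, 0.10, 8))  # 10.0 m (short slide, near pixels)
print(depth_from_disparity(800, 0.40, 8))  # 40.0 m (longer slide, far pixels)
```

    Selecting, per depth band, the frame pair whose baseline keeps the disparity well above the matching noise floor is the intuition behind the adaptive frame choice described above.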

  5. Robust 4 Camera 3D Synthetic Aperture PIV

    Science.gov (United States)

    Bajpayee, Abhishek; Techet, Alexandra

    2016-11-01

    We present novel processing techniques that allow for robust 4-camera 3D synthetic aperture (SA) PIV. These pre- and post-processing techniques, applied to raw images and reconstructed volumes, significantly improve SA reconstruction SNR values and consequently allow for accurate SAPIV velocity fields. SA, or light field, PIV has typically required 8 or 9 cameras in order to achieve high reconstruction quality and velocity-field reconstruction quality values, Q and Qv respectively, primarily because the effective signal-to-noise ratio (SNR) of refocused images, when using traditional multiplicative or additive refocusing techniques, increases with the number of cameras. However, tomographic reconstruction (used with TomoPIV) achieves relatively high-SNR reconstructions using 4 or 5 cameras, owing to its iterative but significantly more computationally expensive algorithm. Our processing techniques facilitate better recovery of the relevant information in SA reconstructions using only 4 views. As a result, we no longer have to trade setup cost and complexity (number of cameras) for computational speed of the reconstruction algorithm.
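    The additive refocusing mentioned above can be sketched as shift-and-add: each view is translated by the parallax of the chosen focal plane and the results are averaged, so targets on that plane reinforce while off-plane content blurs. This is a baseline illustration with integer-pixel shifts, not the authors' pre/post-processing:

```python
import numpy as np

def additive_refocus(images, shifts):
    """Additive synthetic-aperture refocusing: shift each camera's image
    by the parallax of the chosen focal plane and average."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, shifts):
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(images)

# 4 views of a point target with a 1-pixel parallax step between cameras.
views = []
for i in range(4):
    v = np.zeros((16, 16)); v[8, 4 + i] = 1.0
    views.append(v)
shifts = [(0, -i) for i in range(4)]   # undo the per-view parallax
refocused = additive_refocus(views, shifts)
print(refocused[8, 4])  # 1.0: all four views reinforce at the target
```

    The multiplicative variant multiplies the shifted views instead of averaging them, which suppresses off-plane ghosts more aggressively at the cost of sensitivity to dropouts.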

  6. Path Planning Control for Mobile Robot

    Directory of Open Access Journals (Sweden)

    Amenah A.H. Salih

    2011-01-01

    Full Text Available Autonomous motion planning is an important area of robotics research. This type of planning relieves the human operator from the tedious job of motion planning, reducing the possibility of human error and increasing the efficiency of the whole process. This research presents a new algorithm to plan a path for an autonomous mobile robot based on image-processing techniques, using a wireless camera that provides an image of the unknown environment. The proposed algorithm is applied to this image to obtain a near-optimal path for the robot. It is based on observation and analysis of the obstacles lying on the straight path between the start and the goal point: the obstacles are detected, their shapes, positions and points of intersection with the straight path are analyzed, and a near-optimal path connecting the start and the goal point is found. The work has a theoretical part and an experimental part. The theoretical part is a MATLAB program applied to the environment image to find the near-optimal path; a MATLAB to C++.NET interface then supplies the path information to a C++.NET program that drives the Pioneer mobile robot along the desired path. In the experimental part, the wireless camera takes an image of the environment and sends it to the computer, which processes the image and sends the resulting path information over a wireless connection to the robot, which is programmed in C++.NET to follow this path. The overall system can thus be represented as: wireless camera – computer – wireless connection – mobile robot. Experiments show that the developed mobile robot (Pioneer P3-DX) travels successfully from the start point to the goal point along the near-optimal path (in terms of time and power) obtained from the proposed path-planning algorithm.
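    The obstacle-avoiding planning step can be illustrated with a standard grid search. The paper's algorithm analyzes obstacle shapes along the straight start-goal line; the sketch below instead simply runs breadth-first search on a hypothetical occupancy grid extracted from the camera image:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free,
    1 = obstacle), found with breadth-first search."""
    queue, prev = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell); cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                               # goal unreachable

# An obstacle blocks the straight line between start and goal.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
route = plan_path(grid, (0, 0), (2, 3))
print(route)
```

    Weighting cells by estimated traversal time or power cost would steer the search toward the paper's time-and-power notion of optimality.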

  7. Interconnected network of cameras

    Science.gov (United States)

    Hosseini Kamal, Mahdad; Afshari, Hossein; Leblebici, Yusuf; Schmid, Alexandre; Vandergheynst, Pierre

    2013-02-01

    The real-time development of multi-camera systems is a great challenge. Synchronization and the large data rates of the cameras add to the complexity of these systems, and the complexity increases further with the number of cameras. The customary approach to implementing such systems is centralized: all the raw streams from the cameras are first stored and then processed for the target application. An alternative approach is to embed smart cameras in these systems instead of ordinary cameras with limited or no processing capability. Smart cameras with intra- and inter-camera processing capability, programmable at the software and hardware level, offer the right platform for distributed and parallel processing in real-time multi-camera applications. Inter-camera processing requires the interconnection of smart cameras in a network arrangement. A novel hardware emulation platform is introduced to demonstrate the concept of an interconnected network of cameras, together with a methodology for constructing and analyzing the interconnection network. A sample application is developed and demonstrated.

  8. Integrating Scene Parallelism in Camera Auto-Calibration

    Institute of Scientific and Technical Information of China (English)

    LIU Yong (刘勇); WU ChengKe (吴成柯); Hung-Tat Tsui

    2003-01-01

    This paper presents an approach for camera auto-calibration from uncalibrated video sequences taken by a hand-held camera. The novelty of this approach lies in transforming line parallelism into constraints on the absolute quadric during camera auto-calibration. This makes some critical cases solvable and the reconstruction closer to Euclidean. The approach is implemented and validated using simulated data and real image data, and the experimental results show its effectiveness.

  9. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  11. Vacuum Camera Cooler

    Science.gov (United States)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring inexpensive video in a vacuum environment was previously impossible because cameras overheat in the absence of a cooling medium. A water-jacketed camera cooler enclosure, machined and assembled from copper plate and tube, has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum; with some modifications, the unit can be thermally connected when mounted in the cup portion of the camera cooler. Thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of the video camera at operating temperature. This development allowed video recording of an in-progress test within a vacuum environment.

  12. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  13. Lag Camera: A Moving Multi-Camera Array for Scene-Acquisition

    Directory of Open Access Journals (Sweden)

    Yi Xu

    2007-04-01

    Full Text Available Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as lightfields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.

  14. A Survey of Catadioptric Omnidirectional Camera Calibration

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2013-02-01

    Full Text Available For over a decade, computer vision has grown increasingly popular; omnidirectional cameras, with their large field of view, have been widely used in many fields, such as robot navigation, visual surveillance, virtual reality, and three-dimensional reconstruction. Camera calibration is an essential step in obtaining three-dimensional geometric information from a two-dimensional image. Meanwhile, omnidirectional camera images exhibit catadioptric distortion, which needs to be corrected in many applications; thus the study of calibration methods for such cameras has important theoretical significance and practical applications. This paper first introduces the research status of catadioptric omnidirectional imaging systems; then the image-formation process of catadioptric omnidirectional imaging systems is described; finally, a simple classification of omnidirectional imaging methods is given, and the advantages and disadvantages of these methods are discussed.

  15. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a position-sensitive photomultiplier tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that led us to choose its configuration and image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  16. Evaluating intensified camera systems

    Energy Technology Data Exchange (ETDEWEB)

    S. A. Baker

    2000-07-01

    This paper describes image evaluation techniques used to standardize camera system characterizations. Key areas of performance include resolution, noise, and sensitivity. This team has developed a set of analysis tools, in the form of image processing software used to evaluate camera calibration data, to aid an experimenter in measuring a set of camera performance metrics. These performance metrics identify capabilities and limitations of the camera system, while establishing a means for comparing camera systems. Analysis software is used to evaluate digital camera images recorded with charge-coupled device (CCD) cameras. Several types of intensified camera systems are used in the high-speed imaging field. Electro-optical components are used to provide precise shuttering or optical gain for a camera system. These components, including microchannel-plate or proximity-focused diode image intensifiers, electrostatic image tubes, and electron-bombarded CCDs, affect system performance. It is important to quantify camera system performance in order to qualify a system as meeting experimental requirements. The camera evaluation tool is designed to provide side-by-side camera comparison and system modeling information.

  17. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  18. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  19. Path Dependence

    DEFF Research Database (Denmark)

    Madsen, Mogens Ove

    The concept of Path Dependence was originally developed within New Institutional Economics by, among others, David, Arthur and North. The concept has spread widely across the social sciences and undergone development. This paper argues that the concept has developed so extensively that one can now speak of a first and a second generation of the Path Dependence concept. The most recent development of the concept has relevance for the methodology discussions in relation to Keynes...

  20. Flight Path Measurement During the Takeoff and Landing Phase Based on a High-Speed Camera Array

    Institute of Scientific and Technical Information of China (English)

    张杰; 冯巧宁

    2013-01-01

    GPS provides position information at a low sampling frequency and with poor dynamic accuracy. To obtain higher-frequency, higher-precision position and velocity parameters for aircraft takeoff and landing performance testing, the motion of the aircraft during the takeoff and landing phase is photographed in relay by a high-speed camera array. By analyzing and processing the image sequences captured at each station, the position and velocity are solved using digital image processing and the three-dimensional direct linear transformation (DLT) of close-range digital photogrammetry. This method obtains the position and velocity of the aircraft from photographs and can serve as a reference for trajectory measurement of similar flying objects.
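    The three-dimensional direct linear transformation step can be illustrated with classic two-view linear triangulation: each pixel observation contributes two linear constraints on the homogeneous 3-D point, solved by SVD. The camera matrices and point below are invented for the example, not the flight-test geometry:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two camera
    projection matrices P1, P2 (3x4) and pixel observations x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical camera stations 1 m apart, both looking down the z-axis.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.5, 0.2, 10.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate_dlt(P1, P2, x1, x2))  # recovers [0.5, 0.2, 10.0]
```

    Differentiating the triangulated positions across successive relay frames then yields the velocity estimate.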

  1. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [University of Alaska--Fairbanks; Bailey, J [University of Alaska--Fairbanks

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera's field of view is aligned to a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10-angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
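    Deriving fall speed from successive triggers reduces to dividing the vertical separation of the emitter pairs by the time between the upper and lower triggers. The separation value below is a placeholder assumption, not the documented MASC geometry:

```python
def fall_speed(t_upper, t_lower, emitter_gap_m=0.032):
    """Fall speed (m/s) from the two IR trigger times along the fall path.
    The 32 mm emitter separation is an illustrative placeholder, not the
    instrument's documented geometry."""
    return emitter_gap_m / (t_lower - t_upper)

# A hydrometeor crossing the two beams 40 ms apart.
print(fall_speed(0.000, 0.040))  # 0.8 m/s
```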

  2. Path Creation

    DEFF Research Database (Denmark)

    Karnøe, Peter; Garud, Raghu

    2012-01-01

    This paper employs path creation as a lens to follow the emergence of the Danish wind turbine cluster. Supplier competencies, regulations, user preferences and a market for wind power did not pre-exist; all had to emerge in a transformative manner involving multiple actors and artefacts. Competencies emerged through processes and mechanisms such as co-creation that implicated multiple learning processes. The process was not an orderly linear one, as emergent contingencies influenced the learning processes. An implication is that public policy to catalyse clusters cannot be based...

  3. Microchannel plate streak camera

    Science.gov (United States)

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of, or in combination with, the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma rays) than the conventional x-ray streak camera, which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps and has provided sensitivity sufficient for 1000 keV x-rays.

  4. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from the patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious-signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  5. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design; describe the methodology for fabricating, coating, mounting, and testing the lenses and filters; and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  6. Ringfield lithographic camera

    Science.gov (United States)

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  7. Single camera stereo using structure from motion

    Science.gov (United States)

    McBride, Jonah; Snorrason, Magnus; Goodsell, Thomas; Eaton, Ross; Stevens, Mark R.

    2005-05-01

    Mobile robot designers frequently look to computer vision to solve navigation, obstacle avoidance, and object detection problems such as those encountered in parking lot surveillance. Stereo reconstruction is a useful technique in this domain and can be done in two ways. The first requires a fixed stereo camera rig to provide two side-by-side images; the second uses a single camera in motion to provide the images. While stereo rigs can be accurately calibrated in advance, they rely on a fixed baseline distance between the two cameras. The advantage of a single-camera method is the flexibility to change the baseline distance to best match each scenario. This directly increases the robustness of the stereo algorithm and increases the effective range of the system. The challenge comes from accurately rectifying the images into an ideal stereo pair. Structure from motion (SFM) can be used to compute the camera motion between the two images, but its accuracy is limited and small errors can cause rectified images to be misaligned. We present a single-camera stereo system that incorporates a Levenberg-Marquardt minimization of rectification parameters to bring the rectified images into alignment.

  8. CCD Luminescence Camera

    Science.gov (United States)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronics devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed, where luminescence is typically found.

  9. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique start at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between...... to establish analysis as a continued, iterative movement of transcultural dialogue and critique.

  10. Camera Operator and Videographer

    Science.gov (United States)

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  11. Ice and thermal cameras for stream flow observations

    Science.gov (United States)

    Tauro, Flavia; Petroselli, Andrea; Grimaldi, Salvatore

    2016-04-01

    Flow measurements are instrumental in establishing discharge rating curves and enabling flood risk forecasting. Further, they are crucial to study erosion dynamics and to comprehend the organization of drainage networks in natural catchments. Flow observations are typically executed with intrusive instrumentation, such as current meters or acoustic devices. Alternatively, non-intrusive instruments, such as radars and microwave sensors, are applied to estimate surface velocity. Both approaches enable flow measurements over areas of limited extent, and their implementation can be costly. Optical methods, such as large scale particle image velocimetry, have proved beneficial for non-intrusive and spatially-distributed environmental monitoring. In this work, a novel optical-based approach is utilized for surface flow velocity observations based on the combined use of a thermal camera and ice dices. Different from RGB imagery, thermal images are relatively unaffected by illumination conditions and water reflections. Therefore, such high-quality images allow one to readily identify and track tracers against the background. Further, the optimal environmental compatibility of ice dices and their relative ease of preparation and storage suggest that the technique can be easily implemented to rapidly characterize surface flows. To demonstrate the validity of the approach, we present a set of experiments performed on the Brenta stream, Italy. In the experimental setup, the axis of the camera is maintained perpendicular to the water surface to circumvent image orthorectification through ground reference points. Small amounts of ice dices are deployed onto the stream water surface during image acquisition. Particle tracers' trajectories are reconstructed off-line by analyzing thermal images with a particle tracking velocimetry (PTV) algorithm. Given the optimal visibility of the tracers and their low seeding density, PTV allows for efficiently following tracers' paths in
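
    The PTV step described above can be sketched as a greedy nearest-neighbour linking of tracer centroids between consecutive frames. The following is a hedged, minimal illustration: the function names, the `max_disp` gate, and the greedy matching strategy are our assumptions, not the algorithm actually used on the Brenta data.

```python
import numpy as np

def link_particles(frame_a, frame_b, max_disp):
    """Greedily link each tracer centroid in frame_a to its nearest
    neighbour in frame_b; pairs farther apart than max_disp are dropped."""
    links, used = [], set()
    for i, p in enumerate(frame_a):
        d = np.linalg.norm(frame_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:
            links.append((i, j))
            used.add(j)
    return links

def velocities(frame_a, frame_b, links, dt, scale):
    """Per-tracer surface velocity: pixel displacement * (metres/pixel) / dt."""
    return [(frame_b[j] - frame_a[i]) * scale / dt for i, j in links]
```

    With tracer centroids extracted from two thermal frames, the pixel-to-metre scale, and the inter-frame time, this yields one surface-velocity vector per matched tracer.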

  12. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...... a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection......, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras....

  13. Dry imaging cameras

    Directory of Open Access Journals (Sweden)

    I K Indrajit

    2011-01-01

    Full Text Available Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences like computers, mechanics, thermal physics, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technology. When compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and its important issues that impact radiology workflow.

  14. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method has simple operation and good flexibility, especially for onsite multiple cameras without a common field of view.
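
    Registering the sphere centres that each camera reconstructs into the auxiliary camera's frame is, at its core, a rigid-transform fit between two corresponding 3-D point sets. The paper does not spell out its estimator, so the sketch below uses the standard SVD-based (Kabsch) solution as an assumed illustration.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ≈ Q[i]
    (Kabsch / orthogonal Procrustes). P, Q: (n, 3) corresponding points, n >= 3."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

    Applied to at least three non-collinear sphere centres seen by both a measurement camera and the auxiliary camera, this returns the pose that maps the first camera's coordinates into the auxiliary frame.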

  15. Radiometric calibration for MWIR cameras

    Science.gov (United States)

    Yang, Hyunjin; Chun, Joohwan; Seo, Doo Chun; Yang, Jiyeon

    2012-06-01

    Korean Multi-purpose Satellite-3A (KOMPSAT-3A), which weighs about 1,000 kg, is scheduled to be launched in 2013 and will be located in a sun-synchronous orbit (SSO) at an altitude of 530 km. This is Korea's first satellite to orbit with a mid-wave infrared (MWIR) image sensor, which is currently being developed at the Korea Aerospace Research Institute (KARI). The missions envisioned include forest fire surveillance, measurement of the ocean surface temperature, national defense, and crop harvest estimation. In this paper, we explain the MWIR scene generation software and atmospheric compensation techniques for the infrared (IR) camera that we are currently developing. The MWIR scene generation software we have developed takes into account sky thermal emission, path emission, target emission, sky solar scattering and ground reflection based on MODTRAN data. This software will be used for generating the radiation image at the satellite camera, which requires an atmospheric compensation algorithm and validation of the accuracy of the temperature obtained in our result. The image visibility restoration algorithm is a method for removing the effect of the atmosphere between the camera and an object. This algorithm works between the satellite and the Earth, to predict the object temperature contaminated by the Earth's atmosphere and solar radiation. Commonly, to compensate for the atmospheric effect, software such as MODTRAN is used for modeling the atmosphere. Our algorithm does not require additional software to obtain the surface temperature. However, it needs adjustment of the visibility restoration parameters, and the precision of the result still needs to be studied.
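
    In its simplest band-averaged form, the atmospheric compensation described above reduces to subtracting the path radiance and dividing by the transmittance. The sketch below illustrates only that two-term model; the real MWIR chain also includes scattered and reflected terms, and the transmittance and path radiance would come from an atmosphere model such as MODTRAN, so treat the function and its arguments as assumptions.

```python
def surface_radiance(l_sensor, tau, l_path):
    """Invert the simplified along-path model  L_sensor = tau * L_surf + L_path
    for the surface-leaving radiance. tau: band-averaged transmittance in (0, 1];
    l_path: path (atmospheric) radiance in the same units as l_sensor."""
    if not 0.0 < tau <= 1.0:
        raise ValueError("transmittance must lie in (0, 1]")
    return (l_sensor - l_path) / tau
```

    The recovered surface radiance would then be converted to a brightness temperature through the sensor's calibrated radiance-to-temperature curve.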

  16. Discrete algebraic reconstruction technique: a new approach for superresolution reconstruction of license plates

    Science.gov (United States)

    Zarei Zefreh, Karim; van Aarle, Wim; Batenburg, K. Joost; Sijbers, Jan

    2013-10-01

    A new superresolution algorithm is proposed to reconstruct a high-resolution license plate image from a set of low-resolution camera images. The reconstruction methodology is based on the discrete algebraic reconstruction technique (DART), a recently developed reconstruction method. While DART has already been successfully applied in tomographic imaging, it has not yet been transferred to the field of camera imaging. DART is introduced for camera imaging through a demonstration of how prior knowledge of the colors of the license plate can be directly exploited during the reconstruction of a high-resolution image from a set of low-resolution images. Simulation experiments show that DART can reconstruct images with superior quality compared to conventional reconstruction methods.
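
    DART's core loop alternates a continuous algebraic solver with segmentation to the known discrete grey values (here, the license-plate colors). The sketch below illustrates that loop on a generic linear system with a Landweber/SIRT-style inner solver; the published method additionally restricts the re-solve to boundary pixels, which we omit for brevity, so this is a simplified assumption-laden sketch, not the authors' implementation.

```python
import numpy as np

def sirt(A, b, x, n_iter=100, relax=0.9):
    """Damped Landweber/SIRT-style continuous solver for A x ≈ b."""
    step = relax / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        x = x + step * A.T @ (b - A @ x)
    return x

def dart(A, b, grey_values, n_outer=5):
    """Minimal DART-like loop: continuous solve, snap each unknown to the
    nearest discrete grey value, re-solve from the segmented estimate."""
    g = np.asarray(grey_values, dtype=float)
    snap = lambda v: g[np.argmin(np.abs(v[:, None] - g[None, :]), axis=1)]
    x = sirt(A, b, np.zeros(A.shape[1]))
    for _ in range(n_outer):
        x = sirt(A, b, snap(x), n_iter=20)
    return snap(x)
```

    For superresolution, `A` would encode the blur/downsampling from the high-resolution plate image to each low-resolution camera image, and `grey_values` the few known plate colors.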

  17. A method for selecting training samples based on camera response

    Science.gov (United States)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
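
    The selection rule and the Wiener step can be sketched as follows. This is a minimal illustration assuming a Euclidean sphere in camera-response space and a standard regularized Wiener (cross-covariance) estimator; the function names and the `radius`/`reg` parameters are ours, not the paper's.

```python
import numpy as np

def select_in_sphere(train_resp, test_resp, radius):
    """Indices of training samples whose camera responses lie inside a
    Euclidean sphere centred on the test sample's response."""
    d = np.linalg.norm(train_resp - test_resp, axis=1)
    return np.nonzero(d <= radius)[0]

def wiener_reconstruct(train_resp, train_refl, test_resp, reg=1e-8):
    """Wiener estimate of spectral reflectance from a camera response:
    R_hat = mean_R + C_RC (C_CC + reg I)^(-1) (c - mean_c)."""
    C = train_resp - train_resp.mean(axis=0)         # centred responses
    R = train_refl - train_refl.mean(axis=0)         # centred reflectances
    W = (R.T @ C) @ np.linalg.inv(C.T @ C + reg * np.eye(C.shape[1]))
    return train_refl.mean(axis=0) + W @ (test_resp - train_resp.mean(axis=0))
```

    In the method described above, `wiener_reconstruct` would be fed only the subset returned by `select_in_sphere` rather than the full chart.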

  18. Structured light optical microscopy for three-dimensional reconstruction of technical surfaces

    Science.gov (United States)

    Kettel, Johannes; Reinecke, Holger; Müller, Claas

    2016-04-01

    In microsystems technology, quality control of micro structured surfaces with different surface properties is playing an ever more important role. The process of quality control incorporates three-dimensional (3D) reconstruction of specular- and diffusive-reflecting technical surfaces. Due to the demand for high measurement accuracy and data acquisition rates, structured light optical microscopy has become a valuable solution to solve this problem, providing high vertical and lateral resolution. However, 3D reconstruction of specular-reflecting technical surfaces still remains a challenge to optical measurement principles. In this paper we present a measurement principle based on structured light optical microscopy which enables 3D reconstruction of specular- and diffusive-reflecting technical surfaces. It is realized using two light paths of a stereo microscope equipped with different magnification levels. The right optical path of the stereo microscope is used to project structured light onto the object surface. The left optical path is used to capture the structured illuminated object surface with a camera. Structured light patterns are generated by a Digital Light Processing (DLP) device in combination with a high power Light Emitting Diode (LED). Structured light patterns are realized as a matrix of discrete light spots to illuminate defined areas on the object surface. The introduced measurement principle is based on multiple and parallel processed point measurements. Analysis of the measured Point Spread Function (PSF) by pattern recognition and model fitting algorithms enables the precise calculation of 3D coordinates. Using exemplary technical surfaces we demonstrate the successful application of our measurement principle.

  19. Inversion of signature for paths of bounded variation

    CERN Document Server

    Lyons, Terry

    2011-01-01

    We develop two methods to reconstruct a path of bounded variation from its signature. The first method gives a simple and explicit expression of any axis path in terms of its signature, but it does not apply directly to more general ones. The second method, based on an approximation scheme, recovers any tree-reduced path from its signature as the limit of a uniformly convergent sequence of lattice paths.

  20. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
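
    For contrast with the BQP formulation, the "greedy treatment" the abstract benchmarks against can be sketched as classic greedy maximum coverage: repeatedly pick the candidate camera that sees the most not-yet-covered important locations, up to a camera budget. This is our illustrative reading of such a baseline, not the authors' exact method.

```python
def greedy_placement(coverage, budget):
    """coverage[i] = set of location ids that camera candidate i sees.
    Greedily choose up to `budget` cameras maximizing newly covered locations."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(range(len(coverage)),
                   key=lambda i: len(coverage[i] - covered) if i not in chosen else -1,
                   default=None)
        if best is None or not (coverage[best] - covered):
            break                      # nothing new can be covered
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```

    The greedy choice is fast but myopic; the paper's convex BQP relaxation can additionally reward multi-view coverage of the same location, which this baseline cannot express.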

  1. Do Speed Cameras Reduce Collisions?

    OpenAIRE

    Skubic, Jeffrey; Johnson, Steven B.; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods – before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not indepe...

  2. Do speed cameras reduce collisions?

    Science.gov (United States)

    Skubic, Jeffrey; Johnson, Steven B; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods - before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions.

  3. A distributed topological camera network representation for tracking applications.

    Science.gov (United States)

    Lobaton, Edgar; Vasudevan, Ramanarayan; Bajcsy, Ruzena; Sastry, Shankar

    2010-10-01

    Sensor networks have been widely used for surveillance, monitoring, and tracking. Camera networks, in particular, provide a large amount of information that has traditionally been processed in a centralized manner employing a priori knowledge of camera location and of the physical layout of the environment. Unfortunately, these conventional requirements are far too demanding for ad-hoc distributed networks. In this article, we present a simplicial representation of a camera network called the camera network complex (CN-complex), which accurately captures topological information about the visual coverage of the network. This representation provides a coordinate-free calibration of the sensor network and demands no localization of the cameras or objects in the environment. A distributed, robust algorithm, validated via two experimental setups, is presented for the construction of the representation using only binary detection information. We demonstrate the utility of this representation in capturing holes in the coverage, performing tracking of agents, and identifying homotopic paths.

  4. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved, and the data obtained in their various applications. The Advanced Development Group Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  5. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The successive photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potentialities of the proposed methodology.

  6. LSST camera heat requirements using CFD and thermal seeing modeling

    Science.gov (United States)

    Sebag, Jacques; Vogiatzis, Konstantinos

    2010-07-01

    The LSST camera is located above the LSST primary/tertiary mirror and in front of the secondary mirror in the shadow of its central obscuration. Due to this position within the optical path, heat released from the camera has a potential impact on the seeing degradation that is larger than traditionally estimated for Cassegrain or Nasmyth telescope configurations. This paper presents the results of thermal seeing modeling combined with Computational Fluid Dynamics (CFD) analyses to define the thermal requirements on the LSST camera. Camera power output fluxes are applied to the CFD model as boundary conditions to calculate the steady-state temperature distribution on the camera and the air inside the enclosure. Using a previously presented post-processing analysis to calculate the optical seeing based on the mechanical turbulence and temperature variations along the optical path, the optical performance resulting from the seeing is determined. The CFD simulations are repeated for different wind speeds and orientations to identify the worst case scenario and generate an estimate of seeing contribution as a function of camera-air temperature difference. Finally, after comparing with the corresponding error budget term, a maximum allowable temperature for the camera is selected.

  7. TOUCHSCREEN USING WEB CAMERA

    Directory of Open Access Journals (Sweden)

    Kuntal B. Adak

    2015-10-01

    Full Text Available In this paper we present a web-camera-based touchscreen system which uses a simple technique to detect and locate a finger. We have used a camera and a regular screen to achieve our goal. By capturing video and calculating the position of the finger on the screen, we can determine the touch position and perform some function at that location. Our method is very easy and simple to implement, and our system requirements are less expensive compared to other techniques.

  8. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used...... such as the circular camera movement. Keywords: embodied perception, embodied style, explicit narration, interpretation, style pattern, television style...

  9. Volumetric particle image velocimetry with a single plenoptic camera

    Science.gov (United States)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single camera plenoptic PIV to produce a 3D/3C vector field, where it was found that lateral displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera
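
    The MART update at the heart of the reconstruction scales each voxel multiplicatively by the ratio of measured to re-projected ray intensity, raised to a weight-dependent power, which keeps the volume non-negative. Below is a hedged, generic small-system sketch of that update; the relaxation value and loop structure are ours, not the paper's plenoptic implementation.

```python
import numpy as np

def mart(A, b, n_iter=200, relax=0.5, x0=None):
    """Multiplicative ART for a non-negative system A x ≈ b (A, b >= 0,
    weights a_ij in [0, 1]). Per ray i, each voxel j is scaled by
    (b_i / (A x)_i) ** (relax * a_ij)."""
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.astype(float)
    for _ in range(n_iter):
        for i in range(m):
            proj = A[i] @ x                      # re-projected ray intensity
            if proj > 0 and b[i] > 0:
                x *= (b[i] / proj) ** (relax * A[i])
    return x
```

    In the plenoptic setting, each row of `A` would hold the weights of one microlens sub-image ray through the voxel grid, and `b` the recorded ray intensities.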

  10. Reconstruction of 2 significant typhoon paths in the 17th century Japan

    Institute of Scientific and Technical Information of China (English)

    小林雄河; 潘威

    2014-01-01

    The Western North Pacific (WNP), which includes China and Japan, is a typhoon-prone area, and changes of typhoon tracks in this area would have an acute influence. Almost no quantitative meteorological data exist for Japan before the Meiji Restoration, so typhoon events can only be rebuilt from historical documents. Because the number of surviving 17th-century Japanese documents is limited, the author also used typhoon records from the modern instrumental observation period alongside the historical documents to infer historical typhoon tracks and sizes. In the context of global climate change, there is currently no consensus on whether typhoon trajectories in East Asia will change; through the reconstruction of historical typhoon tracks, this issue can be discussed in depth. The author estimated the tracks of the typhoons that affected Japan on 11 Sep. AD 1650 and on 15 to 17 Sep. AD 1674. The main disaster-causing factor of the typhoon of 11 Sep. AD 1650 was the tides; the worst-hit area was the northern coast of Ariake Bay (Ariake-kai). The typhoons that affected Japan on 17-18 Sep. AD 1828, 25 Aug. AD 1914 and 27 Sep. AD 1991 followed similar paths, and those typhoons also caused tidal disasters in the Ariake-kai. Typhoons affecting the Ariake-kai frequently pass from present-day Yamaguchi Prefecture into the sea, and this typhoon likewise entered the Sea of Japan about 3 to 5 hours after landfall. The typhoon of 15 to 17 Sep. AD 1674 affected a wide area: much of the western half of Japan, and the part of the eastern half near the typhoon track, suffered wind damage; some places in the eastern half of Japan also suffered flood damage. This is a typical disaster pattern for typhoons in Japan. This typhoon took about 1 day to travel from the northern Kyushu region to the Hokuriku region. On the basis of this reconstruction case study, the author makes clear that the existing compiled data are incomplete and

  11. A cosmic ray muon going through CMS with the magnet at full field. The line shows the path of the muon reconstructed from information recorded in the various detectors.

    CERN Multimedia

    Ianna, Osborne

    2007-01-01

    The event display of event 3981 from MTCC run 2605. The data were taken with a magnetic field of 3.8 T. A detailed model of the magnetic field corresponding to 4 T is shown as a color gradient from 4 T in the center (red) to 0 T outside of the detector (blue). The cosmic muon was detected by all four detectors participating in the run: the drift tubes, the HCAL, the tracker and the ECAL subdetectors, and it was reconstructed online. The event display shows the reconstructed 4D segments in the drift tubes (magenta), the reconstructed hits in HCAL (blue), the locally reconstructed track in the tracker (green), and the uncalibrated rec hits in ECAL (light green). A muon track was reconstructed in the drift tubes and extrapolated back into the detector taking the magnetic field into account (green).

  12. Uncooled radiometric camera performance

    Science.gov (United States)

    Meyer, Bill; Hoelter, T.

    1998-07-01

    Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions to applications currently using traditional, photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have realized highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high resolution video imaging creates some unique challenges when using uncooled detectors. A temperature controlled, field-of-view limiting aperture (cold shield) is not typically included in the small volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the Germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.

  13. Neutron counting with cameras

    Energy Technology Data Exchange (ETDEWEB)

    Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo [Institut Laue Langevin, Grenoble (France)]

    2015-07-01

    A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transits smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras, as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, as well as increased frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, thus allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras does not allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results. (authors)
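The counting-mode idea above, individual conversion events sitting well above the camera noise floor, can be sketched as thresholding a frame and labelling connected blobs. A toy NumPy illustration (the threshold, noise level and event amplitudes are invented for the example, not taken from the paper):

```python
import numpy as np

def count_events(frame, threshold):
    """Label connected bright blobs above threshold and return their
    centroids (4-connectivity flood fill, pure NumPy/stdlib)."""
    mask = frame > threshold
    visited = np.zeros_like(mask)
    events = []
    for i, j in zip(*np.nonzero(mask)):
        if visited[i, j]:
            continue
        stack, blob = [(i, j)], []
        visited[i, j] = True
        while stack:
            y, x = stack.pop()
            blob.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not visited[ny, nx]:
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*blob)
        events.append((float(np.mean(ys)), float(np.mean(xs))))
    return events

# Synthetic frame: Gaussian read noise plus two bright conversion events
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (64, 64))
frame[10:12, 10:12] += 50.0   # event 1
frame[40:42, 30:32] += 50.0   # event 2
print(len(count_events(frame, threshold=10.0)))  # → 2
```

At higher rates, blobs start overlapping and the same frames can simply be integrated, which is the smooth transition to imaging mode the abstract describes.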

  14. The Dark Energy Camera

    CERN Document Server

    Flaugher, B; Honscheid, K; Abbott, T M C; Alvarez, O; Angstadt, R; Annis, J T; Antonik, M; Ballester, O; Beaufore, L; Bernstein, G M; Bernstein, R A; Bigelow, B; Bonati, M; Boprie, D; Brooks, D; Buckley-Geer, E J; Campa, J; Cardiel-Sas, L; Castander, F J; Castilla, J; Cease, H; Cela-Ruiz, J M; Chappa, S; Chi, E; Cooper, C; da Costa, L N; Dede, E; Derylo, G; DePoy, D L; de Vicente, J; Doel, P; Drlica-Wagner, A; Eiting, J; Elliott, A E; Emes, J; Estrada, J; Neto, A Fausti; Finley, D A; Flores, R; Frieman, J; Gerdes, D; Gladders, M D; Gregory, B; Gutierrez, G R; Hao, J; Holland, S E; Holm, S; Huffman, D; Jackson, C; James, D J; Jonas, M; Karcher, A; Karliner, I; Kent, S; Kessler, R; Kozlovsky, M; Kron, R G; Kubik, D; Kuehn, K; Kuhlmann, S; Kuk, K; Lahav, O; Lathrop, A; Lee, J; Levi, M E; Lewis, P; Li, T S; Mandrichenko, I; Marshall, J L; Martinez, G; Merritt, K W; Miquel, R; Munoz, F; Neilsen, E H; Nichol, R C; Nord, B; Ogando, R; Olsen, J; Palio, N; Patton, K; Peoples, J; Plazas, A A; Rauch, J; Reil, K; Rheault, J -P; Roe, N A; Rogers, H; Roodman, A; Sanchez, E; Scarpine, V; Schindler, R H; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Schurter, P; Scott, L; Serrano, S; Shaw, T M; Smith, R C; Soares-Santos, M; Stefanik, A; Stuermer, W; Suchyta, E; Sypniewski, A; Tarle, G; Thaler, J; Tighe, R; Tran, C; Tucker, D; Walker, A R; Wang, G; Watson, M; Weaverdyck, C; Wester, W; Woods, R; Yanny, B

    2015-01-01

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2kx4k CCDs for imaging and 12 2kx2k CCDs for guiding and focus. The CCDs have 15 micron x 15 micron pixels with a plate scale of 0.263 arc sec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construct...
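The quoted pixel pitch and plate scale are linked by the small-angle relation scale[arcsec/px] = 206265 × pitch / focal length. The sketch below inverts the abstract's numbers to get the implied effective focal length (a derived figure, not one stated in the text):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ≈ 206264.8

def plate_scale_arcsec_per_px(pixel_pitch_m, focal_length_m):
    # Small-angle plate scale: arcsec subtended by one pixel
    return ARCSEC_PER_RAD * pixel_pitch_m / focal_length_m

# Invert the quoted numbers (15 micron pixels, 0.263"/px) to get the
# effective prime-focus focal length implied by the abstract:
f = ARCSEC_PER_RAD * 15e-6 / 0.263
print(round(f, 2))                                     # → 11.76 (metres)
print(round(plate_scale_arcsec_per_px(15e-6, f), 3))   # → 0.263
```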

  15. CAOS-CMOS camera.

    Science.gov (United States)

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
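Dynamic range figures like the 51.3 dB and 82.06 dB quoted above translate to linear contrast ratios via DR = 20·log10(brightest/faintest). A quick illustration:

```python
import math

def dynamic_range_db(brightest, faintest):
    # Optical dynamic range in decibels: 20*log10(brightest/faintest)
    return 20.0 * math.log10(brightest / faintest)

# The 51.3 dB CMOS-only figure corresponds to a contrast ratio of about
# 370:1, while the demonstrated 82.06 dB corresponds to roughly 12,700:1.
print(round(10 ** (51.3 / 20)))
print(round(10 ** (82.06 / 20)))
```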

  16. The Dark Energy Camera

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; et al.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  17. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is possible.

  18. Cloud photogrammetry with dense stereo for fisheye cameras

    Science.gov (United States)

    Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens

    2016-11-01

    We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km2 using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows us to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry, global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.
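The different projection function mentioned above is the crux: a pinhole camera maps a ray at angle θ to image radius r = f·tan θ, which diverges toward 90°, while a typical fisheye follows an approximately equidistant model r = f·θ and can therefore image a full hemisphere. A minimal sketch (the equidistant model is a common idealization; the paper's calibrated fisheye model may differ):

```python
import numpy as np

def pinhole_radius(theta, f):
    # Perspective (pinhole) projection: image radius r = f * tan(theta)
    return f * np.tan(theta)

def equidistant_radius(theta, f):
    # Common fisheye idealization: radius grows linearly with ray angle
    return f * theta

thetas = np.deg2rad([10.0, 45.0, 80.0])
print(np.round(pinhole_radius(thetas, 1.0), 3))      # diverges toward 90 deg
print(np.round(equidistant_radius(thetas, 1.0), 3))  # stays bounded
```

This mismatch is why the epipolar rectification model for fisheye cameras is needed before pinhole-style dense matchers can be applied.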

  19. ACL reconstruction

    Science.gov (United States)

    ACL reconstruction is surgery to reconstruct the ligament in ... (MedlinePlus: //medlineplus.gov/ency/article/007208.htm)

  20. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual...... camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  1. Development of X-ray CCD camera based X-ray micro-CT system.

    Science.gov (United States)

    Sarkar, Partha S; Ray, N K; Pal, Manoj K; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y; Sinha, A; Gadkari, S C

    2017-02-01

    Availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible for high resolution microtomography studies to be performed outside the purview of synchrotron. In this paper, we present the work towards the use of an external shutter on a high resolution microtomography system using X-ray CCD camera as a detector. During micro computed tomography experiments, the X-ray source is continuously ON and owing to the readout mechanism of the CCD detector electronics, the detector registers photons reaching it during the read-out period too. This introduces a shadow like pattern in the image known as smear whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been incorporated with a synchronized shutter just in front of the X-ray source. This is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality and hence the same is reflected in the reconstructed images.
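The readout smear described above can also be modelled to first order in software (the synchronized hardware shutter remains the paper's solution; the model below is a common textbook approximation, not the authors' method):

```python
import numpy as np

def correct_smear(frame, t_row_shift, t_exposure):
    """First-order smear removal for a shutterless full-frame CCD: during
    vertical transfer each pixel passes every illuminated row for
    t_row_shift seconds, so every pixel of a column picks up an extra
    pedestal proportional to that column's summed signal."""
    smear = (t_row_shift / t_exposure) * frame.sum(axis=0, keepdims=True)
    return frame - smear

# Synthetic test: a bright 5x5 spot smeared into a vertical streak
scene = np.zeros((100, 100))
scene[20:25, 50:55] = 1000.0
t_row, t_exp = 10e-6, 0.5
smeared = scene + (t_row / t_exp) * scene.sum(axis=0, keepdims=True)
restored = correct_smear(smeared, t_row, t_exp)
print(float(np.abs(restored - scene).max()))  # small first-order residual
```

Software correction amplifies noise in bright columns, which is one reason blocking the beam during readout, as done here, gives cleaner reconstructions.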

  2. Development of X-ray CCD camera based X-ray micro-CT system

    Science.gov (United States)

    Sarkar, Partha S.; Ray, N. K.; Pal, Manoj K.; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y.; Sinha, A.; Gadkari, S. C.

    2017-02-01

    Availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible for high resolution microtomography studies to be performed outside the purview of synchrotron. In this paper, we present the work towards the use of an external shutter on a high resolution microtomography system using X-ray CCD camera as a detector. During micro computed tomography experiments, the X-ray source is continuously ON and owing to the readout mechanism of the CCD detector electronics, the detector registers photons reaching it during the read-out period too. This introduces a shadow like pattern in the image known as smear whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been incorporated with a synchronized shutter just in front of the X-ray source. This is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality and hence the same is reflected in the reconstructed images.

  3. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM), orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. Calibration research was conducted using a non-metric camera, different calibration tests and various software. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras or a description of different optical distortions. The second part of the paper describes the camera calibration process and details of the calibration methods and models that have been used. Sony Nex 5 camera calibration has been done using the software Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. For the study, 2D test fields have been used. As part of the research, a comparative analysis of the results has been done.
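One of the optical distortions such a calibration estimates is radial (barrel or pincushion) distortion, commonly modelled with a Brown-Conrady polynomial; a minimal sketch with illustrative, not calibrated, coefficients:

```python
import numpy as np

def apply_radial_distortion(xy_norm, k1, k2):
    """Brown-Conrady radial term on normalized image coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4), the dominant lens distortion a
    non-metric camera calibration has to estimate."""
    r2 = np.sum(xy_norm ** 2, axis=-1, keepdims=True)
    return xy_norm * (1.0 + k1 * r2 + k2 * r2 ** 2)

# Barrel distortion (negative k1) pulls points toward the image centre
pt = np.array([0.5, 0.0])
print(apply_radial_distortion(pt, k1=-0.1, k2=0.01))
```

Calibration tools such as those named in the abstract estimate k1, k2 (plus tangential and interior orientation parameters) from images of a test field.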

  4. CALIBRATION OF LOW COST RGB AND NIR UAV CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Fryskowska

    2016-06-01

    Full Text Available Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM), orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. Calibration research was conducted using a non-metric camera, different calibration tests and various software. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras or a description of different optical distortions. The second part of the paper describes the camera calibration process and details of the calibration methods and models that have been used. Sony Nex 5 camera calibration has been done using the software Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. For the study, 2D test fields have been used. As part of the research, a comparative analysis of the results has been done.

  5. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  6. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  7. Underwater camera with depth measurement

    Science.gov (United States)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations of the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results of the structured light camera system show that the camera system requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it will require a complete re-design of the light source component. The ToF camera system, instead, allows an arbitrary placement of light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by putting the LEDs into an array configuration, and the LEDs can be modulated comfortably with any waveform and frequencies required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
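For a continuous-wave ToF camera like the one evaluated above, depth follows from the measured phase shift of the modulated light, d = c·Δφ/(4π·f_mod), with an unambiguous range of c/(2·f_mod). A sketch (the 20 MHz modulation frequency is illustrative, not taken from the paper):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(phase_rad, f_mod_hz):
    """Continuous-wave ToF: the round-trip phase shift of the modulated
    light maps to distance as d = c * phase / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    # Phase wraps at 2*pi, so d and d + c/(2*f_mod) are indistinguishable
    return C / (2.0 * f_mod_hz)

print(round(tof_distance(math.pi, 20e6), 3))   # → 3.747 (m)
print(round(unambiguous_range(20e6), 3))       # → 7.495 (m) at 20 MHz
```

Because only the phase of the returned modulation matters, the light source can be placed anywhere convenient, which is the placement freedom the abstract highlights.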

  8. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 2k x 4k CCDs are needed. The pixels are square, of 15 μm size. The optical characteristics of the prime focus corrector deliver a field-of-view where eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arc minutes. The rest of the CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, hosting at most 16 such filters. Those are located inside the cryostat, a few millimeters in front of the CCDs when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  9. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    Science.gov (United States)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, which has been estimated from the light-field image, and the metric object distance. These two methods are compared to a well known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced.
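The Kalman-like update of per-pixel depth hypotheses can be sketched as an inverse-variance weighted fusion (a generic formulation of such an update, not the authors' exact filter):

```python
def fuse_depth(d1, var1, d2, var2):
    """Kalman-like fusion of two independent depth estimates: the
    inverse-variance weighted mean, with the combined (smaller) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    d = (w1 * d1 + w2 * d2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return d, var

# A confident estimate (small variance) dominates an uncertain one
d, var = fuse_depth(2.0, 0.01, 3.0, 1.0)
print(round(d, 3), round(var, 4))  # → 2.01 0.0099
```

Repeating this over the micro-images that see the same scene patch yields the probabilistic depth map described above, where each pixel carries both a depth and a variance.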

  10. Image Sensors Enhance Camera Technologies

    Science.gov (United States)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  11. Camera calibration correction in shape from inconsistent silhouette

    Science.gov (United States)

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  12. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  13. A stereo camera system for autonomous maritime navigation (AMN) vehicles

    Science.gov (United States)

    Zhang, Weihong; Zhuang, Ping; Elkins, Les; Simon, Rick; Gore, David; Cogar, Jeff; Hildebrand, Kevin; Crawford, Steve; Fuller, Joe

    2009-05-01

    Spatial Integrated System (SIS), Rockville, Maryland, in collaboration with NSWC Combatant Craft Division (NSWCCD), is applying 3D imaging technology, artificial intelligence, sensor fusion, behaviors-based control, and system integration to a prototype 40 foot, high performance Research and Development Unmanned Surface Vehicle (USV). This paper focuses on the development of the stereo camera system for USV navigation, which currently consists of two high-resolution cameras and will incorporate an array of cameras in the near future. The objectives of the camera system are to reconstruct 3D objects and detect them on the sea surface. The paper reviews two critical technological components, namely camera calibration and stereo matching. In stereo matching, a comprehensive study is presented to compare the algorithmic performance resulting from the various information sources (intensity, RGB values, Gaussian gradients and Gaussian Laplacians), patching schemes (single windows, and multiple windows with same/different centers), and correlation metrics (convolution, absolute difference, and histogram). To enhance system performance, a sub-pixel edge detection technique has been introduced to address the precision requirement, and a noise removal post-processing step added to eliminate noisy points from the reconstructed 3D point clouds. Finally, experimental results are reported to demonstrate the performance of the stereo camera system.
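The window-based matching with an absolute-difference metric that the comparison study covers can be sketched as follows (single fixed window and integer disparities only; the paper additionally evaluates multi-window schemes, other metrics, and sub-pixel refinement):

```python
import numpy as np

def match_sad(left, right, y, x, half, max_disp):
    """Scanline block matching with the sum-of-absolute-differences
    metric: slide a (2*half+1)^2 window from the left image along the
    same row of the right image and return the lowest-cost disparity."""
    ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - half < 0:
            break  # candidate window would leave the right image
        cand = right[y - half:y + half + 1,
                     x - d - half:x - d + half + 1].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic rectified pair: the right image is the left shifted by 4 px
rng = np.random.default_rng(2)
left = rng.random((40, 60))
right = np.roll(left, -4, axis=1)
print(match_sad(left, right, y=20, x=30, half=3, max_disp=10))  # → 4
```

Triangulating each matched disparity with the calibrated baseline then produces the 3D point cloud that the noise-removal post-processing cleans up.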

  14. Iterative procedure for camera parameters estimation using extrinsic matrix decomposition

    Science.gov (United States)

    Goshin, Yegor V.; Fursov, Vladimir A.

    2016-03-01

    This paper addresses the problem of 3D scene reconstruction in cases when the extrinsic parameters (rotation and translation) of the camera are unknown. This problem is both important and urgent because the accuracy of the camera parameters significantly influences the resulting 3D model. A common approach is to determine the fundamental matrix from corresponding points on two views of a scene and then to use singular value decomposition for camera projection matrix estimation. However, this common approach is very sensitive to fundamental matrix errors. In this paper we propose a novel approach in which camera parameters are determined directly from the equations of the projective transformation by using corresponding points on the views. The proposed decomposition allows us to use an iterative procedure for determining the parameters of the camera. This procedure is implemented in two steps: the translation determination and the rotation determination. The experimental results of the camera parameters estimation and 3D scene reconstruction demonstrate the reliability of the proposed approach.
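The common approach the authors contrast against, recovering rotation and translation from the essential matrix by singular value decomposition, can be sketched as follows (textbook formulation; the cheirality test needed to pick among the four candidate poses is omitted):

```python
import numpy as np

def decompose_essential(E):
    """Classic SVD decomposition of E = [t]x R: returns the translation
    direction (up to sign) and two rotation candidates; a cheirality
    test on real correspondences selects the physically valid pair."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    return U[:, 2], U @ W @ Vt, U @ W.T @ Vt

# Round-trip check with a known pose
t_true = np.array([1.0, 0.2, 0.1])
t_true /= np.linalg.norm(t_true)
a = 0.3
Rz = np.array([[np.cos(a), -np.sin(a), 0.],
               [np.sin(a),  np.cos(a), 0.],
               [0., 0., 1.]])
tx = np.array([[0., -t_true[2], t_true[1]],
               [t_true[2], 0., -t_true[0]],
               [-t_true[1], t_true[0], 0.]])
E = tx @ Rz
t, R1, R2 = decompose_essential(E)
print(np.allclose(np.abs(t), np.abs(t_true), atol=1e-8))  # → True
```

The sensitivity of this pipeline to errors in the estimated fundamental/essential matrix is exactly what motivates the paper's direct, iterative alternative.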

  15. Waterproof camera case for intraoperative photographs.

    Science.gov (United States)

    Raigosa, Mauricio; Benito-Ruiz, Jesús; Fontdevila, Joan; Ballesteros, José R

    2008-03-01

    Accurate photographic documentation has become essential in reconstructive and cosmetic surgery for both clinical and scientific purposes. Intraoperative photographs are important not only for record purposes, but also for teaching, publications, and presentations. Communication using images proves to be the superior way to persuade audiences. This article presents a simple and easy method for taking intraoperative photographs that uses a presterilized waterproof camera case. This method allows the user to take very good quality pictures with the photographic angle matching the surgeon's view, minimal interruption of the operative procedure, and minimal risk of contaminating the operative field.

  16. KINECT V2 AND RGB STEREO CAMERAS INTEGRATION FOR DEPTH MAP ENHANCEMENT

    OpenAIRE

    Ravanelli, R.; Nascetti, A.; Crespi, M.

    2016-01-01

    Today range cameras are widespread low-cost sensors based on two different principles of operation: we can distinguish between Structured Light (SL) range cameras (Kinect v1, Structure Sensor, ...) and Time Of Flight (ToF) range cameras (Kinect v2, ...). Both types are easy-to-use 3D scanners, able to reconstruct dense point clouds at high frame rates. However, the depth maps obtained are often noisy and not accurate enough; therefore it is generally essential to improve their qual...

  17. The Compton Camera - medical imaging with higher sensitivity Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    The Compton Camera reconstructs the origin of Compton-scattered X-rays using electronic collimation with Silicon pad detectors instead of the heavy conventional lead collimators in Anger cameras, reaching up to 200 times better sensitivity and a factor of two improvement in resolution. Possible applications are in cancer diagnosis, neurology, neurobiology, and cardiology.
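Electronic collimation works because Compton kinematics constrains the source to lie on a cone around the scatter axis; the cone half-angle follows directly from the measured energies (the energies below are illustrative):

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy, keV

def compton_cone_angle(e_initial_kev, e_scattered_kev):
    """Compton kinematics: cos(theta) = 1 - me*c^2*(1/E' - 1/E).
    The source lies on a cone of this half-angle about the scatter
    direction; intersecting cones from many events localizes it."""
    cos_t = 1.0 - M_E_C2_KEV * (1.0 / e_scattered_kev - 1.0 / e_initial_kev)
    return math.degrees(math.acos(cos_t))

# A 140 keV photon (99mTc) depositing 10 keV in the silicon scatterer
print(round(compton_cone_angle(140.0, 130.0), 1))  # → 44.0 (degrees)
```

Because no photons are absorbed in a mechanical collimator, far more events contribute, which is the source of the quoted sensitivity gain.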

  18. Practical intraoperative stereo camera calibration.

    Science.gov (United States)

    Pratt, Philip; Bergeles, Christos; Darzi, Ara; Yang, Guang-Zhong

    2014-01-01

    Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Invalidating any prior calibration procedure, this presents a significant problem for image guidance applications as they typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.

  19. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides a new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the base line between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in one Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible measurements off-site. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted from video into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  20. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needed to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in Visual Studio 2010. Experimental results show that the system realizes simultaneous acquisition and display from both cameras.

  1. Combustion pinhole camera system

    Science.gov (United States)

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. An additional feature of the port assembly is that it is not flush with the interior of the combustor.

  2. Face Liveness Detection Using a Light Field Camera

    Directory of Open Access Journals (Sweden)

    Sooyeon Kim

    2014-11-01

    Full Text Available A light field camera is a sensor that can record the directions as well as the colors of incident rays. Such cameras are used in applications ranging from 3D reconstruction to face and iris recognition. In this paper, we propose a novel approach for defending against face spoofing attacks, such as printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from a conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves between 94.78% and 99.36% accuracy under different types of spoofing attacks.

  3. Epipolar rectification method for a stereovision system with telecentric cameras

    Science.gov (United States)

    Liu, Haibo; Zhu, Zhaokun; Yao, Linshen; Dong, Jin; Chen, Shengyi; Zhang, Xiaohu; Shang, Yang

    2016-08-01

    3D metrology with a stereovision system requires epipolar rectification to be performed before dense stereo matching. In this study, we propose an epipolar rectification method for a stereovision system with two telecentric lens-based cameras. Given the orthographic projection matrices of each camera, the new projection matrices are computed by determining the new camera coordinate system in affine space and imposing constraints on the intrinsic parameters. Then, the transformation that maps the old image planes onto the new image planes is derived. Experiments are performed to validate the performance of the proposed rectification method. The test results show that the perpendicular distance and the 3D reconstruction deviation obtained from the rectified images are not significantly higher than the corresponding values obtained from the original images. Considering the roughness of the extracted corner points and the calibrated camera parameters, we conclude that the proposed method provides sufficiently accurate rectification results.

  4. The Star Formation Camera

    CERN Document Server

    Scowen, Paul A; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah; Rhoads, James; Roberge, Aki; Siegmund, Oswald; Shaklan, Stuart; Smith, Nathan; Stern, Daniel; Tumlinson, Jason; Windhorst, Rogier; Woodruff, Robert

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'x19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517 nm) and a red (517-1075 nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant to the births and life cycles of stars and their planetary systems, and to investigate and understand the range of environments, feedback mechanisms, and other factors that most affect the outcome of the star and planet formation process. This program addresses the origins and evolution of stars, galaxies, and cosmic structure and has direct relevance for the formation and survival of planetary systems like our Solar System and planets like Earth. We present the design and performance specifications resulting from the implementation study of the camera, conducted ...

  5. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the p-type upper layer, the intermediate layer, and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  6. Hemispherical Laue camera

    Science.gov (United States)

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of sphere of a hemispherical, X-radiation sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.

  7. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the p-type upper layer, the intermediate layer, and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  8. Lights, Camera, Reflection!

    Science.gov (United States)

    Mourlam, Daniel

    2013-01-01

    There are many ways to critique teaching, but few are more effective than video. Personal reflection through the use of video allows one to see what really happens in the classrooms--good and bad--and provides a visual path forward for improvement, whether it be in one's teaching, work with a particular student, or learning environment. This…

  9. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy corner extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
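    The radial and decentering (tangential) distortion considered in the calibration follows the Brown-Conrady model that OpenCV also uses; a minimal sketch in plain Python (the coefficient values in the demo are illustrative, not calibrated):

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply Brown-Conrady radial (k1, k2) and decentering/tangential
    (p1, p2) distortion to normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the mapping is the identity.
assert distort(0.3, -0.2, 0.0, 0.0, 0.0, 0.0) == (0.3, -0.2)
```

    Calibration fits k1, k2, p1, p2 (together with the intrinsics) by minimizing the reprojection error of the detected checkerboard corners under this model.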

  10. Multiple-plane particle image velocimetry using a light-field camera.

    Science.gov (United States)

    Skupsch, Christoph; Brücker, Christoph

    2013-01-28

    Planar velocity fields in flows are determined simultaneously on parallel measurement planes by means of an in-house manufactured light-field camera. The planes are defined by illuminating light sheets with constant spacing. Particle positions are reconstructed from a single 2D recording taken by a CMOS-camera equipped with a high-quality doublet lens array. The fast refocusing algorithm is based on synthetic-aperture particle image velocimetry (SAPIV). The reconstruction quality is tested via ray-tracing of synthetically generated particle fields. The introduced single-camera SAPIV is applied to a convective flow within a measurement volume of 30 x 30 x 50 mm³.

  11. Measuring SO2 ship emissions with an ultraviolet imaging camera

    Science.gov (United States)

    Prata, A. J.

    2014-05-01

    Over the last few years, fast-sampling ultraviolet (UV) imaging cameras have been developed for measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2, with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where SO2 emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) from a single camera. Despite the ease of use and the ability to determine SO2 emission rates with the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method of monitoring ship emissions for regulatory purposes. A dual-camera system, or a single dual-filter camera, is required in order to properly correct for the effects of particulates in ship plumes.
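    The emission-rate retrieval described above reduces to integrating the SO2 path concentration along an image transect across the plume and multiplying by the plume speed; a hedged sketch in Python (the function name and the numbers in the demo are illustrative, not taken from the paper):

```python
def emission_rate(path_conc_g_m2, pixel_width_m, plume_speed_m_s):
    """Estimate an SO2 emission rate (kg/s) from a transect of retrieved
    path concentrations (g/m^2) sampled across the plume at a known
    ground pixel width, advected at the plume speed."""
    # Integrate across the plume: mass of SO2 per metre of plume length.
    line_density_kg_m = sum(path_conc_g_m2) * pixel_width_m / 1000.0
    return line_density_kg_m * plume_speed_m_s

# A weak ship plume: 50 pixels at 0.5 g/m^2, 2 m pixels, 5 m/s plume speed.
rate = emission_rate([0.5] * 50, 2.0, 5.0)
assert abs(rate - 0.25) < 1e-9  # ~0.25 kg/s, within the quoted ship range
```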

  12. Breast Reconstruction

    Science.gov (United States)

    ... rebuild the shape of the breast. Instead of breast reconstruction, you could choose to wear a breast form ... one woman may not be right for another. Breast reconstruction may be done at the same time as ...

  13. Adaptive compressive sensing camera

    Science.gov (United States)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simplest concept that each pixel is a charge bucket whose charge comes from the Einstein photoelectric conversion effect. Applying the manufactory design principle, we allow altering each working component by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, and the order of magnitude of the saving is inversely proportional to the target angular speed. We designed two new CCD camera components. Owing to the matured CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as a dual photon detector (PD) analog circuit for change detection that decides whether to skip a frame or go forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at each bucket pixel: the charge-transport bias voltage either steers the charge toward neighborhood buckets or, if not, sends it to the ground drainage. Since a snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor a powerful wavelet wrapper, at the sensor level. We compare (i) pre-processing with an FFT, a threshold on significant Fourier mode components, and an inverse FFT to check the PSNR; and (ii) post-processing image recovery, done selectively by the CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the degree of information (d.o.i.) K(t), determined by the SAH circuitry in new frame selection, dictates the purely random linear sparse combination of measurement data via [Φ]M,N, with M(t) = K(t) log N(t).

  14. Evaluation of guidewire path reproducibility.

    Science.gov (United States)

    Schafer, Sebastian; Hoffmann, Kenneth R; Noël, Peter B; Ionita, Ciprian N; Dmochowski, Jacek

    2008-05-01

    The number of minimally invasive vascular interventions is increasing. In these interventions, a variety of devices are directed to and placed at the site of intervention. The device used in almost all of these interventions is the guidewire, which acts as a monorail for all devices delivered to the intervention site. However, even with the guidewire in place, clinicians still experience difficulties during interventions. As a first step toward understanding these difficulties and facilitating guidewire and device guidance, we have investigated the reproducibility of the final guidewire path in vessel phantom models as a function of different factors: user, material, and geometry. Three vessel phantoms (vessel diameter approximately 4 mm) with tortuosity similar to the internal carotid artery were constructed from silicone tubing and encased in Sylgard elastomer. Several trained users repeatedly passed two guidewires of different flexibility through the phantoms under pulsatile flow conditions. After each guidewire placement, rotational C-arm image sequences were acquired (9 in. II mode, 0.185 mm pixel size), and the phantom and guidewire were reconstructed (512³ voxels, 0.288 mm voxel size). The reconstructed volumes were aligned, and the centerlines of the guidewire and the phantom vessel were then determined using region-growing techniques. Guidewire paths appear similar across users but not across materials. The average root-mean-square difference of repeated placements was 0.17 ± 0.02 mm (plastic-coated guidewire), 0.73 ± 0.55 mm (steel guidewire), and 1.15 ± 0.65 mm (steel versus plastic-coated). For a given guidewire, these results indicate that the guidewire path is relatively reproducible in shape and position.
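    The reported root-mean-square differences between repeated placements can be computed from corresponding centerline points; a minimal sketch (it assumes the centerlines are already aligned and equally sampled, which the paper achieves by volume registration):

```python
import math

def rms_difference(path_a, path_b):
    """Root-mean-square distance between two guidewire centerlines given
    as lists of corresponding 3D points (same length, same sampling)."""
    assert len(path_a) == len(path_b)
    squared = [sum((a - b) ** 2 for a, b in zip(pa, pb))
               for pa, pb in zip(path_a, path_b)]
    return math.sqrt(sum(squared) / len(squared))

# Two centerlines offset by 0.1 mm along y.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
b = [(0.0, 0.1, 0.0), (1.0, 0.1, 0.0), (2.0, 0.1, 0.0)]
assert abs(rms_difference(a, b) - 0.1) < 1e-12
```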

  15. Digital camera in ophthalmology

    Directory of Open Access Journals (Sweden)

    Ashish Mitra

    2015-01-01

    Full Text Available Ophthalmology is an expensive field in which imaging is an indispensable modality, and in developing countries including India it is not possible for every ophthalmologist to afford a slit-lamp photography unit. We present our experience of slit-lamp photography using a digital camera. Good-quality pictures of anterior and posterior segment disorders were captured using readily available devices. It can be used as a good teaching tool for residents learning ophthalmology and as a method to document lesions, which is often necessary for medicolegal purposes. The technique is simple, inexpensive, and has a short learning curve.

  16. Photorealistic image synthesis and camera validation from 2D images

    Science.gov (United States)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then add the reconstructed object to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explores different properties in the reconstruction of the scenes, including light, color, texture, shape, and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and depths of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented in MATLAB. The technique presented here also lets us simulate short, simple videos by reconstructing a sequence of multiple scenes separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.

  17. Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis at the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near-IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
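    The quoted pixel scales can be roughly checked from the FOV and the 1024-pixel detector width under a simple pinhole model; a sketch (the tangent-based on-axis estimate below is only an approximation and deviates substantially for the wide-angle Hazcam optics):

```python
import math

def center_pixel_scale_mrad(fov_deg, pixels):
    """Approximate on-axis pixel scale (mrad/pixel) of a pinhole camera
    with a square field of view of fov_deg degrees across `pixels` pixels."""
    half_angle = math.radians(fov_deg / 2.0)
    return 2.0 * math.tan(half_angle) / pixels * 1000.0

# Navcam: 45 degrees over 1024 pixels -> roughly 0.8 mrad/pixel,
# close to the quoted 0.82 mrad/pixel.
assert abs(center_pixel_scale_mrad(45.0, 1024) - 0.81) < 0.01
```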

  18. HONEY -- The Honeywell Camera

    Science.gov (United States)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  19. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology will require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films, etc. However, the consumer would also like to see real-world stereoscopic images: pictures of family, holiday snaps, etc. Such scenery would have wide ranges of depth to accommodate and would also need to cope with moving objects, such as cars and, in particular, other people. Thus, consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper covers an analysis of existing stereoscopic camera designs and shows that they can be categorized into four different types, each with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper goes on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  20. PAU camera: detectors characterization

    Science.gov (United States)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide-field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels, each with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K. K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response must be characterized and optimized for use in PAUCam. This work is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is performed by means of an OG (output gate) scan, maximizing the CTE (charge transfer efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain, the linearity versus light stimulus, the full-well capacity, and the cosmetic defects; and measurements of the read-out noise, the dark current, the stability versus temperature, and the light remanence.
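    The photon transfer curve test exploits the fact that shot-noise-limited variance grows linearly with mean signal, with a slope equal to the inverse of the gain in e-/ADU; a minimal sketch of gain estimation from synthetic PTC points (the values in the demo are illustrative):

```python
def ptc_gain(means, variances):
    """Estimate CCD gain (e-/ADU) from photon transfer curve points:
    shot-noise variance grows linearly with mean signal, slope = 1/gain."""
    n = len(means)
    mean_x = sum(means) / n
    mean_y = sum(variances) / n
    # Ordinary least-squares slope of variance vs. mean.
    slope = (sum((m - mean_x) * (v - mean_y) for m, v in zip(means, variances))
             / sum((m - mean_x) ** 2 for m in means))
    return 1.0 / slope

# Synthetic data for a gain of 2 e-/ADU: variance = mean / 2.
means = [100.0, 500.0, 1000.0, 5000.0]
variances = [m / 2.0 for m in means]
assert abs(ptc_gain(means, variances) - 2.0) < 1e-9
```

    In practice the fit is restricted to the shot-noise-dominated region, below full well and above the read-noise floor.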

  1. Optimization of the performance of a pixellated germanium Compton camera

    OpenAIRE

    Ghoggali, W.

    2015-01-01

    A planar HPGe Compton camera for nuclear medicine applications, containing 177 pixels of 4 × 4 mm2 of which 25 are at the back detector, is being used to image point sources of Cs137, line sources, and clinical-like distributed sources. Experimental results are obtained to study the effects of energy resolution, position sensitivity, and reconstruction algorithms on camera images. Preamplified pulses are digitized for pulse shape analysis using gamma ray tracking GRT4s data acquisiti...

  2. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    DEFF Research Database (Denmark)

    Kristoffersen, Miklas Strøm; Dueholm, Jacob Velling; Gade, Rikke

    2016-01-01

    and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm...... for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows...

  3. Time-of-flight cameras principles, methods and applications

    CERN Document Server

    Hansard, Miles; Choi, Ouk; Horaud, Radu

    2012-01-01

    Time-of-flight (TOF) cameras provide a depth value at each pixel, from which the 3D structure of the scene can be estimated. This new type of active sensor makes it possible to go beyond traditional 2D image processing, directly to depth-based and 3D scene processing. Many computer vision and graphics applications can benefit from TOF data, including 3D reconstruction, activity and gesture recognition, motion capture, and face detection. It is already possible to use multiple TOF cameras in order to increase the scene coverage, and to combine the depth data with images from several colour cameras.
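    Depth-based processing of TOF data typically begins by back-projecting the per-pixel depth values into a 3D point cloud; a minimal sketch assuming a pinhole intrinsic model (the fx, fy, cx, cy values in the demo are hypothetical, not from any particular sensor):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (list of rows, metres) to 3D
    camera-frame points using pinhole intrinsics (focal lengths fx, fy
    and principal point cx, cy in pixel units)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # a zero depth marks an invalid pixel
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A 2x2 depth map with the principal point at (0.5, 0.5) and unit focals.
pts = depth_to_points([[1.0, 1.0], [0.0, 2.0]], 1.0, 1.0, 0.5, 0.5)
assert pts[0] == (-0.5, -0.5, 1.0)
assert len(pts) == 3  # the zero-depth pixel is skipped
```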

  4. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  5. Stereo Calibration and Rectification for Omnidirectional Multi-camera Systems

    Directory of Open Access Journals (Sweden)

    Yanchang Wang

    2012-10-01

    Full Text Available Stereo vision has been studied for decades as a fundamental problem in the field of computer vision. In recent years, computer vision and image processing with a large field of view, especially using omnidirectional vision and panoramic images, has been receiving increasing attention. An important problem for stereo vision is calibration. Although various kinds of calibration methods for omnidirectional cameras are proposed, most of them are limited to calibrate catadioptric cameras or fish‐eye cameras and cannot be applied directly to multi‐camera systems. In this work, we propose an easy calibration method with closed‐form initialization and iterative optimization for omnidirectional multi‐camera systems. The method only requires image pairs of the 2D target plane in a few different views. A method based on the spherical camera model is also proposed for rectifying omnidirectional stereo pairs. Using real data captured by Ladybug3, we carry out some experiments, including stereo calibration, rectification and 3D reconstruction. Statistical analyses and comparisons of the experimental results are also presented. As the experimental results show, the calibration results are precise and the effect of rectification is promising.

  6. Aircraft path planning for optimal imaging using dynamic cost functions

    Science.gov (United States)

    Christie, Gordon; Chaudhry, Haseeb; Kochersberger, Kevin

    2015-05-01

    Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" for how these aircraft can best be utilized. The aircraft are becoming smaller and more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus of unmanned systems development. This paper addresses aerial vehicle path planning to provide a realistic optimal path for acquiring imagery for structure-from-motion (SfM) reconstructions and for performing radiation surveys. This method allows SfM reconstructions to be carried out accurately and with minimal flight time so that they can be executed efficiently. We assume that 3D point cloud data are available prior to the flight. A discrete set of scan lines is proposed for the given area and scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach to image-based terrain mapping, which efficiently performs a 3D reconstruction of a large area without the use of GPS data.
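    The time-efficient ordering of scored scan lines can be illustrated with a greedy nearest-endpoint heuristic; this is a simplified sketch under assumed 2D endpoints, not the paper's dynamics-aware planner:

```python
import math

def order_scan_lines(lines):
    """Greedy nearest-endpoint ordering of scan lines, each given as a
    pair of 2D endpoints. Starting from the first line, repeatedly fly
    to the closest endpoint of an unvisited line, reversing a line if
    it is entered from its far end."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    remaining = list(lines[1:])
    path = [lines[0]]
    pos = lines[0][1]  # current position: the end of the first line
    while remaining:
        best = min(remaining,
                   key=lambda ln: min(dist(pos, ln[0]), dist(pos, ln[1])))
        remaining.remove(best)
        if dist(pos, best[1]) < dist(pos, best[0]):
            best = (best[1], best[0])  # enter from the nearer end
        path.append(best)
        pos = best[1]
    return path

# Three parallel scan lines; greedy ordering yields a boustrophedon sweep.
lines = [((0, 0), (10, 0)), ((0, 1), (10, 1)), ((0, 2), (10, 2))]
planned = order_scan_lines(lines)
assert planned[1] == ((10, 1), (0, 1))  # second line flown in reverse
```

    A full planner would replace the Euclidean distance with a dynamics-aware transition cost, as the paper does.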

  7. Porcelain three-dimensional shape reconstruction and its color reconstruction

    Science.gov (United States)

    Yu, Xiaoyang; Wu, Haibin; Yang, Xue; Yu, Shuang; Wang, Beiyi; Chen, Deyun

    2013-01-01

    In this paper, structured light three-dimensional measurement technology was used to reconstruct the porcelain shape, and furthermore the porcelain color was reconstructed, realizing an accurate reconstruction of both the shape and the color of the porcelain. A diagram of our shape measurement installation is given. Because the porcelain surface has complex coloring and is highly reflective, binary Gray code encoding is used to reduce the influence of the porcelain surface. A color camera was employed to obtain the color of the porcelain surface. Then, the comprehensive reconstruction of shape and color was realized in the Java3D runtime environment. In the reconstruction process, a space point-by-point coloration method is proposed and implemented. Our coloration method ensures pixel correspondence accuracy in both the shape and color aspects. The porcelain surface shape and color reconstruction experiments completed with the proposed method and our installation show that the depth range is 860-980 mm, the relative error of the shape measurement is less than 0.1%, and the reconstructed color of the porcelain surface is realistic, refined and subtle, with the same visual effect as the measured surface.
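
    The binary Gray code used in structured-light pattern encoding can be sketched briefly. This is a generic illustration of the code itself, not the authors' projector patterns: adjacent code words differ in exactly one bit, so a one-stripe decoding error displaces the measurement by only one quantization level.

```python
# Binary (reflected) Gray code: encode and decode an n-bit index.
# Adjacent indices map to code words differing in exactly one bit.

def gray_encode(n: int) -> int:
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [gray_encode(i) for i in range(8)]
```

    In a structured-light system, each bit plane of the code becomes one projected stripe pattern, and the decoded index identifies the projector column hitting a camera pixel.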

  8. Stochastic reconstruction of sandstones

    Science.gov (United States)

    Manwart; Torquato; Hilfer

    2000-07-01

    A simulated annealing algorithm is employed to generate stochastic models of a Berea sandstone and a Fontainebleau sandstone, each with a prescribed two-point probability function, lineal-path function, and "pore size" distribution function. We find that the temperature decrease of the annealing has to be rather quick to yield isotropic and percolating configurations. A comparison of simple morphological quantities indicates good agreement between the reconstructions and the original sandstones. The mean survival time of a random walker in the pore space is also reproduced with good accuracy. However, a more detailed investigation by means of local porosity theory shows that there may be significant differences in geometrical connectivity between the reconstructed and the experimental samples.
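
    The annealing scheme can be sketched in miniature. This is a toy 1D version, not the paper's 3D implementation: pixel swaps preserve the porosity while the two-point probability function S2(r) of the evolving medium is driven toward a prescribed target, and the fast geometric cooling mirrors the finding that the temperature has to decrease rather quickly.

```python
import math
import random

# Toy 1D stochastic reconstruction by simulated annealing (illustrative
# sketch; the real algorithm also matches lineal-path and pore-size
# functions in 3D). Swapping two pixels preserves porosity exactly.

def s2(img, r_max):
    """Two-point probability: P(pixels a lag r apart are both pore)."""
    n = len(img)
    return [sum(img[i] * img[(i + r) % n] for i in range(n)) / n
            for r in range(r_max + 1)]

def energy(img, target, r_max):
    return sum((a - b) ** 2 for a, b in zip(s2(img, r_max), target))

def anneal(img, target, r_max, steps=2000, t0=0.05, cool=0.995):
    random.seed(0)                                # deterministic demo
    img, t = list(img), t0
    e = energy(img, target, r_max)
    for _ in range(steps):
        i, j = random.randrange(len(img)), random.randrange(len(img))
        img[i], img[j] = img[j], img[i]           # porosity-preserving swap
        e_new = energy(img, target, r_max)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new                             # accept the swap
        else:
            img[i], img[j] = img[j], img[i]       # reject: undo the swap
        t *= cool                                 # quick geometric cooling
    return img, e

ref = [1, 1, 1, 0, 0, 0] * 8                      # synthetic reference medium
target = s2(ref, 6)
start = ref[:]
random.Random(1).shuffle(start)                   # scrambled initial guess
recon, e_final = anneal(start, target, 6)
```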

  9. Auto-preview camera orientation for environment perception on a mobile robot

    Science.gov (United States)

    Radovnikovich, Micho; Vempaty, Pavan K.; Cheok, Ka C.

    2010-01-01

    Using wide-angle or omnidirectional camera lenses to increase a mobile robot's field of view introduces nonlinearity in the image due to the 'fish-eye' effect. This complicates distance perception and increases image processing overhead. Using multiple cameras avoids the fish-eye complications, but requires more electrical and processing power to interface them to a computer. By controlling the orientation of a single camera, both of these disadvantages are minimized while still allowing the robot to preview a wider area. In addition, controlling the orientation allows the robot to optimize its environment perception by looking only where the most useful information can be discovered. In this paper, a technique is presented that creates a two-dimensional map of objects of interest surrounding a mobile robot equipped with a panning camera on a telescoping shaft. Before attempting to negotiate a difficult path planning situation, the robot takes snapshots at different camera heights and pan angles and then produces a single map of the surrounding area. Distance perception is performed by making calibration measurements of the camera and applying coordinate transformations to project the camera's findings into the vehicle's coordinate frame. To test the system, obstacles and lines were placed to form a chicane. Several snapshots were taken with different camera orientations, and the information from each was stitched together to yield a very useful map of the surrounding area for the robot to use to plan a path through the chicane.
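
    The coordinate-transformation step can be sketched as follows. This is a hedged illustration of the idea, not the paper's calibration: a point expressed in the camera frame is rotated by the pan angle and shifted by the telescoping-shaft height to land in the vehicle frame; the flat-mounting geometry and the numbers are assumed.

```python
import math

# Map a camera-frame observation into the vehicle frame using the pan
# angle and mast height (illustrative; real calibration would also include
# tilt and lateral mounting offsets).

def camera_to_vehicle(p_cam, pan_rad, mast_height):
    """Map a camera-frame point (x, y, z) into the vehicle frame."""
    x, y, z = p_cam
    c, s = math.cos(pan_rad), math.sin(pan_rad)
    return (c * x - s * y, s * x + c * y, z + mast_height)

# A point 1 m ahead of a camera panned 90 degrees, with the mast at 1.5 m.
p_vehicle = camera_to_vehicle((1.0, 0.0, 0.0), math.pi / 2, 1.5)
```

    Stitching the per-snapshot maps then amounts to applying each snapshot's transform before merging the detections.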

  10. Online coupled camera pose estimation and dense reconstruction from video

    Science.gov (United States)

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update.
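
    The consistency test at the heart of this abstract can be illustrated in miniature. This sketch is an assumption-laden stand-in: a bare pinhole model with the model points already expressed in the camera frame, and made-up focal length and tolerance values; the patented method additionally searches over poses and correspondence subsets.

```python
# Accept a candidate set of image-to-model correspondences when projecting
# each model point reproduces the observed feature coordinate within a
# tolerance (toy pinhole model, identity pose).

def project(point3d, f=500.0):
    """Pinhole projection of a camera-frame point onto the image plane."""
    x, y, z = point3d
    return (f * x / z, f * y / z)

def consistent(matches, f=500.0, tol=2.0):
    """matches: list of ((u, v) observed, (x, y, z) model point) pairs."""
    return all(abs(project(m, f)[0] - u) <= tol
               and abs(project(m, f)[1] - v) <= tol
               for (u, v), m in matches)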

  11. Dynamic Human Body Modeling Using a Single RGB Camera

    Directory of Open Access Journals (Sweden)

    Haiyu Zhu

    2016-03-01

    Full Text Available In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  12. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  13. Camera artifacts in IUE spectra

    Science.gov (United States)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a median image of the sky background for each IUE camera.

  14. Radiation camera motion correction system

    Science.gov (United States)

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  15. Corporation Deadlock Solution: Predicament Analysis and Path Reconstruction

    Institute of Scientific and Technical Information of China (English)

    李瑞缘

    2015-01-01

    China's current Company Law creatively added provisions for the judicial dissolution of deadlocked companies, opening a channel of judicial relief for resolving company deadlock. However, mandatory judicial dissolution, as a single-path mode of resolution, cannot cope with the complex deadlock cases seen in practice or respond to them effectively. Because company autonomy inevitably has negative externalities, the legitimacy of judicial intervention to resolve company deadlock is self-evident, but the boundaries of that intervention must be clearly defined. Improvement of the company deadlock relief mechanism should emphasize prevention through the articles of association, better legislation, and a broadened path of judicial relief, treating each case on its merits; alternative measures such as compulsory share transfer, mandatory division of the company, and the appointment of a new receiver for the enterprise should be given priority, and judicial dissolution should remain the last resort.

  16. Path Integrals and Hamiltonians

    Science.gov (United States)

    Baaquie, Belal E.

    2014-03-01

    1. Synopsis; Part I. Fundamental Principles: 2. The mathematical structure of quantum mechanics; 3. Operators; 4. The Feynman path integral; 5. Hamiltonian mechanics; 6. Path integral quantization; Part II. Stochastic Processes: 7. Stochastic systems; Part III. Discrete Degrees of Freedom: 8. Ising model; 9. Ising model: magnetic field; 10. Fermions; Part IV. Quadratic Path Integrals: 11. Simple harmonic oscillators; 12. Gaussian path integrals; Part V. Action with Acceleration: 13. Acceleration Lagrangian; 14. Pseudo-Hermitian Euclidean Hamiltonian; 15. Non-Hermitian Hamiltonian: Jordan blocks; 16. The quartic potential: instantons; 17. Compact degrees of freedom; Index.

  17. Path Problems in Networks

    CERN Document Server

    Baras, John

    2010-01-01

    The algebraic path problem is a generalization of the shortest path problem in graphs. Various instances of this abstract problem have appeared in the literature, and similar solutions have been independently discovered and rediscovered. The repeated appearance of a problem is evidence of its relevance. This book aims to help current and future researchers add this powerful tool to their arsenal, so that they can easily identify and use it in their own work. Path problems in networks can be conceptually divided into two parts: A distillation of the extensive theory behind the algebraic path problem

  18. Coherent infrared imaging camera (CIRIC)

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  19. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    known as ‘the poetics of cinema.’ The dissertation embraces two branches of research within this perspective: stylistics and historical poetics (stylistic history). The dissertation takes on three questions in relation to camera movement and is accordingly divided into three major sections. The first...... section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... to illustrate how the functions may mesh in individual camera movements six concrete examples are analyzed. The analyses illustrate how the taxonomy presented can substantiate analysis and interpretation of film style. More generally, the dissertation - and particularly these in-depth analyses - illustrates how...

  20. Blob-enhanced reconstruction technique

    Science.gov (United States)

    Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso

    2016-09-01

    A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy, is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first-guess intensity distribution is produced with a standard algebraic method, then the distribution is rebuilt as a sum of Gaussian blobs, based on the location, intensity and size of agglomerates of light intensity surrounding local maxima. The blob substitution regularizes the particle shape, allowing a reduction of the particle discretization errors and of their elongation in the depth direction. The performance of the blob-enhanced reconstruction technique (BERT) is assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes have been tested, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as a predictor for the following iterations). The results confirm the enhancement in the velocity measurement accuracy, demonstrating a reduction of the bias error due to ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blob distributions as a predictor further improves the convergence of the reconstruction algorithm, with the improvement being more considerable when substituting the blobs more than once during the process. The BERT process is also applied to multi-resolution (MR) CSMART reconstructions, simultaneously achieving remarkable improvements in the flow field measurements and benefiting from the reduction in computational time due to the MR approach.
Finally, BERT is also tested on experimental data, obtaining an increase of the
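
    A much-simplified 1D sketch of the blob substitution step helps fix the idea: local maxima of a first-guess intensity field supply blob locations and intensities, and the field is rebuilt as a sum of Gaussians. The fixed `sigma` here is an assumed stand-in for the per-agglomerate size estimation described in the abstract.

```python
import math

# Rebuild a 1D first-guess intensity field as a sum of Gaussian blobs
# centred on its local maxima (toy analogue of the BERT substitution).

def local_maxima(field):
    return [i for i in range(1, len(field) - 1)
            if field[i] > field[i - 1] and field[i] >= field[i + 1]]

def blob_rebuild(field, sigma=1.0):
    peaks = local_maxima(field)
    return [sum(field[p] * math.exp(-((i - p) ** 2) / (2 * sigma ** 2))
                for p in peaks)
            for i in range(len(field))]

guess = [0.0, 0.2, 1.0, 0.3, 0.0, 0.1, 0.8, 0.2, 0.0]
rebuilt = blob_rebuild(guess)
```

    In the 3D technique the same regularization shrinks elongated agglomerates along the depth direction, which is where ghost-particle bias is worst.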

  1. Multi-Dimensional Path Queries

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    1998-01-01

    We present the path-relationship model that supports multi-dimensional data modeling and querying. A path-relationship database is composed of sets of paths and sets of relationships. A path is a sequence of related elements (atoms, paths, and sets of paths). A relationship is a binary path...... to create nested path structures. We present an SQL-like query language that is based on path expressions and we show how to use it to express multi-dimensional path queries that are suited for advanced data analysis in decision support environments like data warehousing environments...

  2. The Reconstruction of Identity: The Path of Improving the Rural Preschool Teachers' Social Status

    Institute of Scientific and Technical Information of China (English)

    单文顶; 袁爱玲

    2015-01-01

    Our survey found that rural preschool teachers in Guangdong province have a relatively low social status, reflected mainly in low income, low social prestige, a non-professional public image, and a lack of professional autonomy. Indeed, preschool teachers' low social status has had a major impact on preschool education, producing an unstable teaching staff, teachers leaving or changing occupation, a failure to attract outstanding graduates, and lower education quality. Reconstructing preschool teachers' identity is a necessary step toward improving their social status. Based on the theory of social stratification, we propose reconstructing preschool teachers' identity along three dimensions: the preschool teacher as a natural person, as a person of knowledge, and as a professional. The three dimensions are inseparable parts of a whole, and together they build up the social status of rural preschool teachers.

  3. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Full Text Available Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of the issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the stability of the system calibration, the proposed methods are simulation-based. Experimental results are shown in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  4. Camera sensitivity study

    Science.gov (United States)

    Schlueter, Jonathan; Murphey, Yi L.; Miller, John W. V.; Shridhar, Malayappan; Luo, Yun; Khairallah, Farid

    2004-12-01

    As the cost/performance ratio of vision systems improves with time, new classes of applications become feasible. One such area, automotive applications, is currently being investigated. Applications include occupant detection, collision avoidance and lane tracking. Interest in occupant detection has been spurred by federal automotive safety rules in response to injuries and fatalities caused by deployment of occupant-side air bags. In principle, a vision system could control airbag deployment to prevent this type of mishap. Employing vision technology here, however, presents a variety of challenges, which include controlling costs, the inability to control illumination, developing and training a reliable classification system, and loss of performance due to production variations arising from manufacturing tolerances and customer options. This paper describes the measures that have been developed to evaluate the sensitivity of an occupant detection system to these types of variations. Two procedures are described for evaluating how sensitive the classifier is to camera variations. The first procedure is based on classification accuracy, while the second evaluates feature differences.

  5. Proportional counter radiation camera

    Science.gov (United States)

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon-emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel-strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays, which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  6. OPERA goes on camera

    CERN Multimedia

    2007-01-01

    OPERA, the experiment which uses the neutrino beam of CERN’s CNGS facility, has delivered its first neutrino "photos". The core of the detector has been commissioned and has produced images of events resulting from neutrino collisions. The reconstruction of the core (a few cubic millimetres!) of a neutrino interaction at OPERA. The neutrino arriving from the left of the image has interacted with the lead of a brick, producing various particles identifiable by their tracks visible in the emulsion.The snapshot is tiny but it was greeted with enthusiasm by the physicists of OPERA. On 2 October, for the first time, the experiment at the Gran Sasso Laboratory in Italy "photographed" an event produced by the beam of neutrinos sent from CERN, 732 kilometres away. One of the 60,000 photosensitive bricks already installed at the heart of the experiment had produced its first particle track. The commissioning of the OPERA experiment began la...

  7. Penile reconstruction

    Institute of Scientific and Technical Information of China (English)

    Giulio Garaffa; Salvatore Sansalone; David J Ralph

    2013-01-01

    During the most recent years, a variety of new techniques of penile reconstruction have been described in the literature. This paper focuses on the most recent advances in male genital reconstruction after trauma, excision of benign and malignant disease, in gender reassignment surgery and in aphallia, with emphasis on surgical technique, cosmetic and functional outcome.

  8. 3D Reconstruction Technique for Tomographic PIV

    Institute of Scientific and Technical Information of China (English)

    姜楠; 包全; 杨绍琼

    2015-01-01

    Tomographic particle image velocimetry (Tomo-PIV) is a state-of-the-art experimental technique based on a method of optical tomography to achieve three-dimensional (3D) reconstruction for three-dimensional three-component (3D-3C) flow velocity measurements. 3D reconstruction for Tomo-PIV is carried out herein. Meanwhile, a simplified tomographic reconstruction model is applied, which reduces a 3D volume light intensity field with 2D projection images to a 2D Tomo-slice plane with 1D projecting lines, i.e., simplifying the 3D reconstruction into a problem of 2D Tomo-slice plane reconstruction. Two of the most well-known algebraic reconstruction techniques, the algebraic reconstruction technique (ART) and the multiplicative algebraic reconstruction technique (MART), are compared as well. The principles of the two reconstruction algorithms are discussed in detail and exercised on a series of simulation images, yielding reconstruction images that show the different behavior of the ART and MART algorithms, whose advantages and disadvantages are then discussed. Further discussion is devoted to standard particle image reconstruction once the background noise of the initial particle images has been removed. The results show that the particle image reconstruction is greatly improved, and the MART algorithm performs much better than ART. Furthermore, computational analyses of two parameters (the particle density and the number of cameras) are performed to study their effects on the reconstruction. Lastly, the 3D volume particle field is reconstructed using the improved algorithm based on the simplified 3D tomographic reconstruction model, which proves that the algorithm simplification is feasible and can be applied to the reconstruction of a 3D volume particle field in a Tomo-PIV system.
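
    The ART update at the core of this family of algorithms can be sketched in a few lines. This is a generic Kaczmarz-style illustration on a toy 2x2 "volume" with an assumed ray geometry, not the paper's Tomo-PIV implementation.

```python
# ART/Kaczmarz sketch: the unknown intensity vector x is corrected row by
# row so that each projection equation a_i . x = b_i is satisfied in turn.

def art(a_rows, b, n, lam=1.0, sweeps=200):
    x = [0.0] * n
    for _ in range(sweeps):
        for a, bi in zip(a_rows, b):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm = sum(ai * ai for ai in a)
            if norm == 0.0:
                continue
            c = lam * (bi - dot) / norm           # relaxed residual
            x = [xi + c * ai for ai, xi in zip(a, x)]
    return x

# Toy geometry: 2x2 image [[1, 2], [3, 4]] flattened to (x0, x1, x2, x3);
# the "rays" are the two row sums, the two column sums, and the main
# diagonal, which together determine the image uniquely.
rays = [
    [1, 1, 0, 0], [0, 0, 1, 1],   # row sums
    [1, 0, 1, 0], [0, 1, 0, 1],   # column sums
    [1, 0, 0, 1],                 # main diagonal
]
b = [3.0, 7.0, 4.0, 6.0, 5.0]
x = art(rays, b, 4)
```

    MART differs in applying a multiplicative rather than additive correction, which keeps intensities non-negative, one reason it behaves better for particle fields.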

  9. A multi-criteria approach to camera motion design for volume data animation.

    Science.gov (United States)

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
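
    The multi-criteria scoring idea can be illustrated with a tiny greedy sketch. The paper's solver is dynamic and force-directed; this stand-in only shows how weighted per-criterion terms combine when ranking candidate camera positions, and the two criteria (closeness to the volume of interest, smoothness relative to the previous position) are illustrative assumptions.

```python
# Rank candidate camera positions by a weighted sum of criteria
# (toy multi-criteria scorer, not the paper's force-directed solver).

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def score(candidate, prev, weights, criteria):
    return sum(w * c(candidate, prev) for w, c in zip(weights, criteria))

def next_position(candidates, prev, weights, criteria):
    return max(candidates, key=lambda p: score(p, prev, weights, criteria))

volume_center = (0.0, 0.0, 0.0)
closeness = lambda p, prev: -dist(p, volume_center)   # favour nearby views
smoothness = lambda p, prev: -dist(p, prev)           # favour small moves
candidates = [(1, 0, 0), (5, 0, 0), (1, 4, 0)]
best = next_position(candidates, (2, 0, 0), [1.0, 0.5], [closeness, smoothness])
```

    Exposing the weights to the user is what lets them see how each criterion impacts path generation, as the abstract describes.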

  10. Unique Path Partitions

    DEFF Research Database (Denmark)

    Bessenrodt, Christine; Olsson, Jørn Børling; Sellers, James A.

    2013-01-01

    We give a complete classification of the unique path partitions and study congruence properties of the function which enumerates such partitions.

  11. Path dependence and creation

    DEFF Research Database (Denmark)

    Garud, Raghu; Karnøe, Peter

    the place of agency in these theories that take history so seriously. In the end, they are as interested in path creation and destruction as they are in path dependence. This book comprises both theoretical and empirical writing. It shows relatively well-known industries such as the automobile

  12. An Assignment Scheme to Control Multiple Pan/Tilt Cameras for 3D Video

    Directory of Open Access Journals (Sweden)

    Sofiane Yous

    2007-02-01

    Full Text Available This paper presents an assignment scheme to control multiple Pan/Tilt (PT) cameras for 3D video of a moving object. The system combines static wide field of view (FOV) cameras and active PT cameras with narrow FOV within a networked platform. We consider the general case in which the active cameras have such high resolution that they can capture only partial views of the object. The major issue is the automatic assignment of each active camera to an appropriate part of the object in order to get high-resolution images of the whole object. We propose an assignment scheme based on the analysis of a coarse 3D shape produced in a preprocessing step from the wide-FOV images. For each high-resolution camera, we evaluate the visibility toward the different parts of the shape, corresponding to different orientations of the camera and with respect to its FOV. We then assign each camera to one orientation in order to get high visibility of the whole object. The continuously captured images are saved to be used offline in the reconstruction of the object. As a temporal extension of this scheme, we involve, in addition to the visibility analysis, the last camera orientation as an additional constraint. This allows smooth and optimized camera movements.
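
    The visibility-based assignment step can be sketched greedily. This is a toy stand-in for the paper's scheme: given a visibility score for every (camera, part) pair derived from the coarse 3D shape, each camera is assigned in turn, preferring parts not yet covered. The scores below are illustrative, not measured values.

```python
# Greedy camera-to-part assignment from a visibility score table
# (toy sketch; the real scheme also folds in FOV and temporal constraints).

def assign(visibility):
    """visibility[camera][part] -> score; returns {camera: part}."""
    covered, assignment = set(), {}
    for camera in sorted(visibility):
        scores = visibility[camera]
        # Prefer an uncovered part; break ties by visibility score.
        part = max(scores, key=lambda p: (p not in covered, scores[p]))
        assignment[camera] = part
        covered.add(part)
    return assignment

visibility = {
    "cam1": {"head": 0.9, "torso": 0.8},
    "cam2": {"head": 0.7, "torso": 0.2},
}
plan = assign(visibility)
```

    Here cam2 accepts its weaker torso view because the head is already covered, which is exactly the whole-object coverage objective.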

  13. Women's Creation of Camera Phone Culture

    Directory of Open Access Journals (Sweden)

    Dong-Hoo Lee

    2005-01-01

    Full Text Available A major aspect of the relationship between women and the media is the extent to which the new media environment is shaping how women live and perceive the world. It is necessary to understand, in a concrete way, how the new media environment is articulated to our gendered culture, how the symbolic or physical forms of the new media condition women’s experiences, and the degree to which a ‘post-gendered re-codification’ can be realized within a new media environment. This paper intends to provide an ethnographic case study of women’s experiences with camera phones, examining the extent to which these experiences recreate or reconstruct women’s subjectivity or identity. By taking a close look at the ways in which women utilize and appropriate the camera phone in their daily lives, it focuses not only on women’s cultural practices in making meanings but also on their possible effect in the deconstruction of gendered techno-culture.

  14. Foreground extraction for moving RGBD cameras

    Science.gov (United States)

    Junejo, Imran N.; Ahmed, Naveed

    2017-02-01

    In this paper, we propose a simple method to perform foreground extraction for a moving RGBD camera. These cameras have now been available for quite some time, and their popularity is primarily due to their low cost and ease of availability. Although the field of foreground extraction or background subtraction has been explored by computer vision researchers for a long time, depth-based subtraction is relatively new and has not yet been extensively addressed. Most current methods make heavy use of geometric reconstruction, making the solutions quite restrictive. In this paper, we make novel use of RGB and RGBD data: from the RGB frame, we extract corner features (FAST) and then represent these features with the histogram of oriented gradients (HoG) descriptor. We train a non-linear SVM on these descriptors. During the test phase, we make use of the fact that the foreground object has a distinct depth ordering with respect to the rest of the scene. That is, we use the positively classified FAST features on the test frame to initiate region growing to obtain an accurate segmentation of the foreground object from just the RGBD data. We demonstrate the proposed method on synthetic datasets, with encouraging quantitative and qualitative results.
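
    The depth-driven region-growing step can be sketched on toy data. This is a minimal illustration of the idea, not the authors' pipeline: starting from seed pixels (standing in for the positively classified features), grow over 4-connected neighbours whose depth differs by less than a tolerance, so the foreground's distinct depth ordering separates it from the scene. The depth values and tolerance are made up.

```python
from collections import deque

# BFS region growing over a depth map from seed pixels (toy sketch).

def grow(depth, seeds, tol=0.1):
    h, w = len(depth), len(depth[0])
    mask = [[False] * w for _ in range(h)]
    queue = deque(seeds)
    for r, c in seeds:
        mask[r][c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr][nc]
                    and abs(depth[nr][nc] - depth[r][c]) < tol):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

depth = [
    [1.0, 1.0, 5.0],   # near foreground object (depth ~1) in the corner,
    [1.0, 1.0, 5.0],   # far background (depth ~5) elsewhere
    [5.0, 5.0, 5.0],
]
mask = grow(depth, [(0, 0)])
```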

  15. Process simulation in digital camera system

    Science.gov (United States)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear, shift-invariant and axial; the light propagation is orthogonal to the system. We use a spectral image processing algorithm to simulate the radiometric properties of a digital camera. The algorithm takes into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the point spread functions of the optical components: a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy, blurred image by blending differently exposed images to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then come the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
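Two of the color processing blocks mentioned above, white balancing and gamma correction, can be illustrated with a minimal sketch. Gray-world balancing is one common choice; the paper does not specify its exact algorithm, and the test image below is our own toy data:

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world white balance: scale each channel so its mean
    matches the global mean over all channels."""
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means
    return np.clip(img * gain, 0.0, 1.0)

def gamma_encode(img, gamma=2.2):
    """Simple display gamma correction on linear [0, 1] data."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

rng = np.random.default_rng(0)
raw = rng.random((8, 8, 3)) * np.array([1.0, 0.8, 0.6])  # image with a color cast
balanced = gray_world_wb(raw)
out = gamma_encode(balanced)
```

After balancing, all three channel means coincide, which is precisely the gray-world assumption.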

  16. Hi-G electronic gated camera for precision trajectory analysis

    Science.gov (United States)

    Snyder, Donald R.; Payne, Scott; Keller, Ed; Longo, Salvatore; Caudle, Dennis E.; Walker, Dennis C.; Sartor, Mark A.; Keeler, Joe E.; Kerr, David A.; Fail, R. Wallace; Gannon, Jim; Carrol, Ernie; Jamison, Todd A.

    1997-12-01

    trajectory, timing, and advanced sensor development. This system will be used for ground tracking data reduction in support of small air vehicle and munition testing. It will provide a means of integrating the imagery and telemetry data from the item with ground-based photographic support. The technique we have designed will exploit off-the-shelf software and analysis components. A differential GPS survey instrument will establish a photogrammetric calibration grid throughout the range and reference targets along the flight path. Images from the on-board sensor will be used to calibrate the ortho-rectification model in the analysis software. The projectile images will be transmitted and recorded on several tape recorders to ensure complete capture of each video field. The images will be combined with a non-linear video editor into a time-correlated record, and each correlated video field will be written to video disk. The files will be converted to DMA-compatible format and then analyzed to determine the projectile's altitude, attitude and position in space. The resulting data file will be used to create a photomosaic of the ground the projectile flew over and the targets it saw. The data will then be transformed to a trajectory file and used to generate a graphic overlay that merges digital photo data of the range with the actual images captured. The plan is to superimpose the flight path of the projectile, the path of the weapon's aimpoint, and annotation of each internal sequence event. With tools used to produce state-of-the-art computer graphics, we think it will now be possible to reconstruct the test event from the viewpoint of the warhead, the target, and a 'God's-Eye' view looking over the shoulder of the projectile.

  17. Free path groupoid grading on Leavitt path algebras

    OpenAIRE

    Goncalves, Daniel; Yoneda, Gabriela

    2015-01-01

    In this work we realize Leavitt path algebras as partial skew groupoid rings. This yields a free path groupoid grading on Leavitt path algebras. Using this grading, we characterize free path groupoid graded isomorphisms of Leavitt path algebras that preserve generators.

  18. Vision Sensors and Cameras

    Science.gov (United States)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12.800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.

  19. Toward Long Distance, Sub-diffraction Imaging Using Coherent Camera Arrays

    CERN Document Server

    Holloway, Jason; Sharma, Manoj Kumar; Matsuda, Nathan; Horstmeyer, Roarke; Cossairt, Oliver; Veeraraghavan, Ashok

    2015-01-01

    In this work, we propose using camera arrays coupled with coherent illumination as an effective method of improving spatial resolution in long distance images by a factor of ten and beyond. Recent advances in ptychography have demonstrated that one can image beyond the diffraction limit of the objective lens in a microscope. We demonstrate a similar imaging system to image beyond the diffraction limit in long range imaging. We emulate a camera array with a single camera attached to an X-Y translation stage. We show that an appropriate phase retrieval based reconstruction algorithm can be used to effectively recover the lost high resolution details from the multiple low resolution acquired images. We analyze the effects of noise, required degree of image overlap, and the effect of increasing synthetic aperture size on the reconstructed image quality. We show that coherent camera arrays have the potential to greatly improve imaging performance. Our simulations show resolution gains of 10x and more are achievable...

  20. Path Creation, Path Dependence and Breaking Away from the Path

    DEFF Research Database (Denmark)

    Wang, Jens Erik; Hedman, Jonas; Tuunainen, Virpi Kristiina

    2016-01-01

    The explanation of how and why firms succeed or fail is a recurrent research challenge. This is particularly important in the context of technological innovations. We focus on the role of historical events and decisions in explaining such success and failure. Using a case study of Nokia, we develop...... the importance of intermediate outcomes, which in the case of Nokia was the importance of software ecosystems and adaptable mobile devices. Furthermore, we show how the layers of path dependence mutually reinforce each other and become stronger....

  1. Corporate governance structure: reconstruction path selection for the relationship between the university and external actors. A case study based on G College

    Institute of Scientific and Technical Information of China (English)

    袁勇; 许良葵; 程辛荣

    2014-01-01

    The “National Long-term Education Reform and Development Plan (2010-2020)” calls for building a new relationship between government, schools and society. The key to reconstructing the relationship between universities, government and society is to resolve both excessive government intervention in schools and the shortage of community participation in school education. G College has explored a corporate governance structure: besides improving its internal governance structure, it has built an external governance structure that provides a consultative institutional arrangement and a communication platform linking the university, government and society. This is the reconstruction path selected for the relationship between the university and external actors.

  2. Zero-Slack, Noncritical Paths

    Science.gov (United States)

    Simons, Jacob V., Jr.

    2017-01-01

    The critical path method/program evaluation and review technique method of project scheduling is based on the importance of managing a project's critical path(s). Although a critical path is the longest path through a network, its location in large projects is facilitated by the computation of activity slack. However, logical fallacies in…

  3. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    Science.gov (United States)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

    Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (global shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets have been acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.
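The rolling shutter effect the paper exploits can be modelled by giving every image row its own exposure time. The following sketch (our own toy numbers: a 30 ms frame readout over 1000 rows, 8 m/s lateral motion, an invented intrinsic matrix) shows how a translating camera shifts the imaged column relative to a global shutter:

```python
import numpy as np

def project(K, X):
    """Pinhole projection of a 3D point in camera coordinates."""
    x = K @ X
    return x[:2] / x[2]

def rolling_shutter_pixel(K, X, cam_vel, readout_per_row, n_rows=1000, iters=10):
    """Pixel at which a static point X is imaged when row r is exposed at
    t = r * readout_per_row and the camera translates at cam_vel.
    Fixed-point iteration: project, read off the row, update the time."""
    t = 0.0
    for _ in range(iters):
        # camera moving at +v makes the point move at -v in the camera frame
        u, v = project(K, X - cam_vel * t)
        t = float(np.clip(v, 0, n_rows - 1)) * readout_per_row
    return u, v

K = np.array([[800.0, 0.0, 500.0], [0.0, 800.0, 500.0], [0.0, 0.0, 1.0]])
X = np.array([1.0, 1.0, 10.0])        # static point 10 m in front of the camera
vel = np.array([8.0, 0.0, 0.0])       # 8 m/s lateral drone speed
u_gs, v_gs = project(K, X)            # global shutter projection
u_rs, v_rs = rolling_shutter_pixel(K, X, vel, readout_per_row=30e-3 / 1000)
```

With these numbers the point lands on row 580, which is read ~17 ms into the frame, displacing the column by roughly 11 pixels; this per-row displacement is exactly what allows the drone speed to be estimated from the images.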

  4. Ligament reconstruction.

    Science.gov (United States)

    Glickel, Steven Z; Gupta, Salil

    2006-05-01

    Volar ligament reconstruction is an effective technique for treating symptomatic laxity of the CMC joint of the thumb. The laxity may be a manifestation of generalized ligament laxity, post-traumatic, or metabolic (Ehlers-Danlos). The reconstruction reduces the shear forces on the joint that contribute to the development and persistence of inflammation. Although there have been only a few reports of the results of volar ligament reconstruction, the use of the procedure to treat Stage I and Stage II disease consistently gives good to excellent results. More advanced stages of disease are best treated by trapeziectomy, with or without ligament reconstruction.

  5. Status of the FACT camera

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Quirin [ETH Zurich, Institute for Particle Physics, 8093 Zurich (Switzerland); Collaboration: FACT-Collaboration

    2011-07-01

    The First G-APD Cherenkov Telescope (FACT) project develops a novel camera type for very high energy gamma-ray astronomy. A total of 1440 Geiger-mode avalanche photodiodes (G-APD) are used for light detection, each accompanied by a solid light concentrator. All electronics for analog signal processing, digitization and triggering are fully integrated into the camera body. The event data are sent via Ethernet to the counting house. In order to compensate for gain variations of the G-APDs, an online feedback system analyzing calibration light pulses is employed. Once the construction and commissioning of the camera are finished, it will be transported to La Palma, Canary Islands, and mounted on the refurbished HEGRA CT3 telescope structure. In this talk the architecture and status of the FACT camera are presented.

  6. An Inexpensive Digital Infrared Camera

    Science.gov (United States)

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  7. Depth Cameras on UAVs: a First Approach

    Science.gov (United States)

    Deris, A.; Trigonis, I.; Aravanis, A.; Stathopoulou, E. K.

    2017-02-01

    Accurate depth information retrieval of a scene is a field under investigation in the research areas of photogrammetry, computer vision and robotics. Various technologies, active as well as passive, are used to serve this purpose, such as laser scanning, photogrammetry and depth sensors, with the latter being a promising innovative approach for fast and accurate 3D object reconstruction using a broad variety of measuring principles including stereo vision, infrared light or laser beams. In this study we investigate the use of the newly designed Stereolabs ZED depth camera, based on passive stereo depth calculation, mounted on an Unmanned Aerial Vehicle with an ad-hoc setup specially designed for outdoor scene applications. Towards this direction, the results of its depth calculations and the scene reconstruction generated by Simultaneous Localization and Mapping (SLAM) algorithms are compared and evaluated based on qualitative and quantitative criteria with respect to the ones derived by a typical Structure from Motion (SfM) and Multiple View Stereo (MVS) pipeline for a challenging cultural heritage application.

  8. High resolution measurements with silicon drift detectors for Compton camera applications

    OpenAIRE

    Çonka Nurdan, Tuba

    2006-01-01

    The accurate and rapid location of the radionuclide distribution in radioactively labeled tissue or organs is the goal of nuclear medicine. The Compton camera, in principle, can improve the spatial resolution and efficiency with respect to today's PET and SPECT techniques. Since it is necessary to reconstruct a full scattering event in the Compton camera, the detector technology is very demanding. Useful detectors have not been available in the past. However, a new detector type, the Silicon Drift Detector...

  9. Fast and Practical Head Tracking in Brain Imaging with Time-of-Flight Camera

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Jensen, Rasmus Ramsbøl

    2013-01-01

    This paper investigates the potential use of Time-of-Flight (TOF) cameras for motion correction in medical brain scans. TOF cameras have previously been used for tracking purposes, but recent progress in TOF technology has made it relevant for high speed optical tracking in high resolution medical... of expensive triangulation and surface reconstruction. Tracking experiments with a motion controlled head phantom were performed with a translational tracking error below 2 mm and a rotational tracking error below 0.5°.
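Rigid head-pose estimation from two 3D point sets of the kind a TOF camera delivers is commonly solved in closed form with the Kabsch/Procrustes algorithm. A minimal sketch follows; the synthetic 0.5° rotation and 2 mm translation echo the reported error bounds, but the estimator itself is our own illustration, not necessarily the paper's method:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    via SVD of the centred cross-covariance matrix."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(1)
P = rng.random((50, 3))                      # surface points on the "head"
theta = np.deg2rad(0.5)                      # a 0.5 degree head rotation
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.002, 0.0, 0.0])         # 2 mm translation
Q = P @ R_true.T + t_true                    # the moved point set
R_est, t_est = kabsch(P, Q)
```

With noise-free correspondences the pose is recovered exactly; real TOF data adds sensor noise, which this least-squares formulation averages out over the point set.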

  10. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades, multimedia and in particular imaging devices (camcorders, tablets, mobile phones, etc.) have spread dramatically. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends in the consumer camera market and technology is given, with some details about the recent past (from the Digital Still Camera up to today) and forthcoming key issues.

  11. Breast Reconstruction

    Science.gov (United States)

    Federal law requires most insurance plans to cover the cost of breast reconstruction.

  12. Climate Reconstructions

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Paleoclimatology Program archives reconstructions of past climatic conditions derived from paleoclimate proxies, in addition to the Program's large holdings...

  13. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Finch, S.M.; McMakin, A.H. (comps.)

    1992-02-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  14. Noise evaluation of Compton camera imaging for proton therapy

    CERN Document Server

    Ortega, P G; Cerutti, F; Ferrari, A; Gillam, J E; Lacasta, C; Llosá, G; Oliver, J F; Sala, P R; Solevi, P; Rafecas, M

    2015-01-01

    Compton cameras emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction...
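The opening angle of the Compton cone follows directly from Compton kinematics: cos θ = 1 − mₑc²(1/E′ − 1/E₀), with E′ = E₀ − E₁ the scattered photon energy and E₁ the energy deposited in the scatterer. A small sketch (the 4.44 MeV example energy is our own illustration, typical of a carbon de-excitation prompt gamma):

```python
import numpy as np

M_E_C2 = 0.511  # electron rest energy, MeV

def compton_cone_angle(e_deposit, e_total):
    """Compton cone half-opening angle (degrees) from the deposited
    energy E1 and the total incoming photon energy E0, both in MeV:
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E0), with E' = E0 - E1."""
    e_scattered = e_total - e_deposit
    cos_t = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_total)
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# e.g. a 4.44 MeV prompt gamma depositing 1.0 MeV in the first scatterer
angle = compton_cone_angle(1.0, 4.44)
```

The formula also shows why spectral reconstruction is needed: if E₀ is unknown (the photon escapes without absorption), the cone angle cannot be fixed and E₀ must be treated as a variable.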

  15. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    Science.gov (United States)

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels.

  16. 3D precision measurements of meter sized surfaces using low cost illumination and camera techniques

    Science.gov (United States)

    Ekberg, Peter; Daemi, Bita; Mattsson, Lars

    2017-04-01

    Using dedicated stereo camera systems and structured light is a well-known method for measuring the 3D shape of large surfaces. However, the problem is not trivial when high accuracy, in the range of a few tens of microns, is needed, and many error sources need to be handled carefully in order to obtain high quality results. In this study, we present a measurement method based on low-cost camera and illumination solutions combined with high-precision image analysis and a new approach to camera calibration and 3D reconstruction. The setup consists of two ordinary digital cameras and a Gobo projector as a structured light source. A matrix of dots is projected onto the target area, and the two cameras capture images of the projected pattern on the object. The images are processed by advanced subpixel resolution algorithms prior to the application of the 3D reconstruction technique. The strength of the method lies in a different approach to calibration, 3D reconstruction, and high-precision image analysis algorithms. Using a 10 mm pitch pattern of light dots, the method is capable of reconstructing the 3D shape of surfaces. The precision (1σ repeatability) in the measurements is achieved at a cost of ~2% of that of available advanced measurement techniques. The expanded uncertainty (95% confidence level) is estimated to be 83 µm, with the largest uncertainty contribution coming from the absolute length of the metal ruler used as reference.
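Once each projected dot is matched between the two cameras, reconstruction reduces to triangulation. A linear (DLT) triangulation sketch with made-up intrinsics and a 0.3 m baseline follows; this is a textbook formulation, not the paper's own calibration/reconstruction approach:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel
    observations x1, x2 and 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector of A (homogeneous point)
    return X[:3] / X[3]

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 480.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                 # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])  # 0.3 m baseline
X_true = np.array([0.05, -0.02, 1.5])    # a dot on the surface, 1.5 m away
X_est = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
```

With noise-free pixel coordinates the point is recovered exactly; in practice the subpixel dot-centre accuracy mentioned in the abstract is what drives the final tens-of-microns precision.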

  17. Tortuous path chemical preconcentrator

    Science.gov (United States)

    Manginell, Ronald P.; Lewis, Patrick R.; Adkins, Douglas R.; Wheeler, David R.; Simonson, Robert J.

    2010-09-21

    A non-planar, tortuous path chemical preconcentrator has a high internal surface area having a heatable sorptive coating that can be used to selectively collect and concentrate one or more chemical species of interest from a fluid stream that can be rapidly released as a concentrated plug into an analytical or microanalytical chain for separation and detection. The non-planar chemical preconcentrator comprises a sorptive support structure having a tortuous flow path. The tortuosity provides repeated twists, turns, and bends to the flow, thereby increasing the interfacial contact between sample fluid stream and the sorptive material. The tortuous path also provides more opportunities for desorption and readsorption of volatile species. Further, the thermal efficiency of the tortuous path chemical preconcentrator is comparable or superior to the prior non-planar chemical preconcentrator. Finally, the tortuosity can be varied in different directions to optimize flow rates during the adsorption and desorption phases of operation of the preconcentrator.

  18. Career Path Descriptions

    CERN Document Server

    Charkiewicz, A

    2000-01-01

    Before the Career Path system, jobs were classified according to grades with general statutory definitions, guided by the "Job Catalogue" which defined 6 evaluation criteria with example illustrations in the form of "typical" job descriptions. Career Paths were given concise statutory definitions necessitating a method of description and evaluation adapted to their new wider-band salary concept. Evaluations were derived from the same 6 criteria but the typical descriptions became unusable. In 1999, a sub-group of the Standing Concertation Committee proposed a new guide for describing Career Paths, adapted to their wider career concept by expanding the 6 evaluation criteria into 9. For each criterion several levels were established tracing the expected evolution of job level profiles and personal competencies over their longer salary ranges. While providing more transparency to supervisors and staff, the Guide's official use would be by services responsible for vacancy notices, Career Path evaluations and rela...

  19. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot

    Science.gov (United States)

    Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye

    2017-01-01

    The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was dug from the main lobe cutting image and used to change the relative region of the main lobe cutting image within a 100×100 pixel region. The position that had the largest correlation coefficient between the side lobe cutting image and the main lobe cutting image when a circle was dug was identified as the best matching point. Finally, the least squares method was used to fit the center of the side lobe schlieren small ball, and the error was less than 1 pixel. The experimental results show that this method enables the accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction methods based on manual splicing, this method is less sensitive to the efficiency of focal-spot reconstruction and thus offers better experimental precision. PMID:28207758
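The matching step above, sliding one cut image over the other and keeping the position with the largest correlation coefficient, can be sketched as a brute-force normalized cross-correlation (our own toy data; the paper's self-correlation template matching may differ in detail):

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation: return the (row, col) of the
    best match and the correlation coefficient there."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            if denom == 0.0:
                continue
            score = (wz * t).sum() / denom
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(2)
img = rng.random((40, 40))            # stand-in for the main-lobe cut image
tpl = img[12:20, 17:25].copy()        # stand-in for the side-lobe patch
loc, score = ncc_match(img, tpl)
```

Because the template is an exact crop, the correlation coefficient peaks at 1 at the true offset; on real data the peak is broader, which is why the paper refines the centre with least-squares fitting.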

  20. Trajectory Generation and Path Planning for Autonomous Aerobots

    Science.gov (United States)

    Sharma, Shivanjli; Kulczycki, Eric A.; Elfes, Alberto

    2007-01-01

    This paper presents global path planning algorithms for the Titan aerobot based on user defined waypoints in 2D and 3D space. The algorithms were implemented using information obtained through a planner user interface. The trajectory planning algorithms were designed to accurately represent the aerobot's characteristics, such as minimum turning radius. Additionally, trajectory planning techniques were implemented to allow for surveying of a planar area based solely on camera fields of view, airship altitude, and the location of the planar area's perimeter. The developed paths allow for planar navigation and three-dimensional path planning. These calculated trajectories are optimized to produce the shortest possible path while still remaining within realistic bounds of airship dynamics.

  1. Path Optimization Using APSO

    Directory of Open Access Journals (Sweden)

    Deepak Goyal

    2013-07-01

    This paper addresses the malicious node detection and path optimization problem for wireless sensor networks. Malicious node detection in a neighborhood is needed because such a node may cause incorrect decisions or energy depletion. In this paper APSO (a combination of Artificial Bee Colony and Particle Swarm Optimization) is used to choose an optimized path. Through this improved version we overcome the local-optimum drawback that arises with the plain PSO approach.
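The particle swarm half of APSO can be sketched with a minimal global-best PSO; the cost function below, which trades path length against proximity to a "malicious" node, and all hyperparameters are our own illustrative choices, not the paper's formulation:

```python
import numpy as np

def pso(cost, dim, n=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([cost(p) for p in x])
        improved = fx < pcost
        pbest[improved], pcost[improved] = x[improved], fx[improved]
        g = pbest[pcost.argmin()].copy()
    return g, float(pcost.min())

# route src -> waypoint -> dst, penalizing waypoints near a "malicious" node
src, dst, bad = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([2.0, 0.0])
def cost(wp):
    length = np.linalg.norm(wp - src) + np.linalg.norm(dst - wp)
    penalty = 10.0 * max(0.0, 1.0 - np.linalg.norm(wp - bad))
    return length + penalty
wp_best, c_best = pso(cost, dim=2)
```

The swarm settles on a waypoint that detours just outside the penalized radius around the malicious node, illustrating the path-selection trade-off the paper optimizes.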

  2. Paths to nursing leadership.

    Science.gov (United States)

    Bondas, Terese

    2006-07-01

    The aim was to explore why nurses enter nursing leadership and apply for a management position in health care. The study is part of a research programme in nursing leadership and evidence-based care. Nursing has not invested enough in the development of nursing leadership for the development of patient care. There is scarce research on nurses' motives and reasons for committing themselves to a career in nursing leadership. A strategic sample of 68 Finnish nurse leaders completed a semistructured questionnaire. Analytic induction was applied in an attempt to generate a theory. A theory, Paths to Nursing Leadership, is proposed for further research. Four different paths were found according to variations between the nurse leaders' education, primary commitment and situational factors. They are called the Path of Ideals, the Path of Chance, the Career Path and the Temporary Path. Situational factors and role models of good but also bad nursing leadership besides motivational and educational factors have played a significant role when Finnish nurses have entered nursing leadership. The educational requirements for nurse leaders and recruitment to nursing management positions need serious attention in order to develop a competent nursing leadership.

  3. The determination of the intrinsic and extrinsic parameters of virtual camera based on OpenGL

    Science.gov (United States)

    Li, Suqi; Zhang, Guangjun; Wei, Zhenzhong

    2006-11-01

    OpenGL is the international standard for 3D graphics, and 3D image generation in OpenGL is similar to image formation in a camera. This paper focuses on the application of OpenGL to computer vision, where the OpenGL 3D image is regarded as a virtual camera image. Firstly, the imaging mechanism of OpenGL is analyzed in view of the perspective projection transformation of a computer vision camera. Then, the relationship between the intrinsic and extrinsic parameters of the camera and the function parameters in OpenGL is analyzed, and the transformation formulas are deduced; thereby a computer vision simulation is realized. The comparison between actual CCD camera images and virtual camera images (with the parameters of the actual camera set equal to the virtual camera's), together with the experimental results of a stereo vision 3D reconstruction simulation, verifies the effectiveness of the method by which the intrinsic and extrinsic parameters of a virtual camera based on OpenGL are determined.
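One direction of such a relationship, recovering pinhole intrinsics from a gluPerspective-style projection, can be sketched as follows. The 640×480 viewport and 45° vertical field of view are our own example values, and we ignore the sign flip between OpenGL's upward y axis and the usual downward image y axis:

```python
import numpy as np

def intrinsics_from_glu_perspective(fovy_deg, width, height):
    """Pinhole intrinsic matrix K equivalent to
    gluPerspective(fovy, aspect, near, far) rendered into a
    width x height viewport, assuming square pixels (aspect = w/h)."""
    fy = (height / 2.0) / np.tan(np.radians(fovy_deg) / 2.0)
    fx = fy                       # square pixels when aspect == width/height
    cx, cy = width / 2.0, height / 2.0
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

K = intrinsics_from_glu_perspective(45.0, 640, 480)
```

The extrinsic side is analogous: the OpenGL model-view matrix plays the role of the [R | t] world-to-camera transform of computer vision.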

  4. An industrial light-field camera applied for 3D velocity measurements in a slot jet

    Science.gov (United States)

    Seredkin, A. V.; Shestakov, M. V.; Tokarev, M. P.

    2016-10-01

    Modern light-field cameras have found application in different areas like photography, surveillance and quality control in industry. A number of studies have reported relatively low spatial resolution of the 3D profiles of registered objects along the optical axis of the camera. This article describes a method for 3D velocity measurements in fluid flows using an industrial light-field camera and an alternative reconstruction algorithm based on a statistical approach. This method is more accurate than triangulation when applied to tracking small registered objects, such as tracer particles, in images. The technique was used to measure 3D velocity fields in a turbulent slot jet.

  5. Proton computed tomography images with algebraic reconstruction

    Science.gov (United States)

    Bruzzi, M.; Civinini, C.; Scaringella, M.; Bonanno, D.; Brianzi, M.; Carpinelli, M.; Cirrone, G. A. P.; Cuttone, G.; Presti, D. Lo; Maccioni, G.; Pallotta, S.; Randazzo, N.; Romano, F.; Sipala, V.; Talamonti, C.; Vanzi, E.

    2017-02-01

A prototype proton Computed Tomography (pCT) system for hadron-therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Reconstructed tomography images with density resolutions (r.m.s.) down to 1% and good spatial resolutions demonstrate the potential of pCT in hadron-therapy.

  6. SUB-CAMERA CALIBRATION OF A PENTA-CAMERA

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-03-01

Full Text Available Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but radial symmetric distortions remain for the inclined cameras, with a size exceeding 5 µm, even though they were described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular-affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors …

  7. Traditional gamma cameras are preferred.

    Science.gov (United States)

    DePuey, E Gordon

    2016-08-01

    Although the new solid-state dedicated cardiac cameras provide excellent spatial and energy resolution and allow for markedly reduced SPECT acquisition times and/or injected radiopharmaceutical activity, they have some distinct disadvantages compared to traditional sodium iodide SPECT cameras. They are expensive. Attenuation correction is not available. Cardio-focused collimation, advantageous to increase depth-dependent resolution and myocardial count density, accentuates diaphragmatic attenuation and scatter from subdiaphragmatic structures. Although supplemental prone imaging is therefore routinely advised, many patients cannot tolerate it. Moreover, very large patients cannot be accommodated in the solid-state camera gantries. Since data are acquired simultaneously with an arc of solid-state detectors around the chest, no temporally dependent "rotating" projection images are obtained. Therefore, patient motion can be neither detected nor corrected. In contrast, traditional sodium iodide SPECT cameras provide rotating projection images to allow technologists and physicians to detect and correct patient motion and to accurately detect the position of soft tissue attenuators and to anticipate associated artifacts. Very large patients are easily accommodated. Low-dose x-ray attenuation correction is widely available. Also, relatively inexpensive low-count density software is provided by many vendors, allowing shorter SPECT acquisition times and reduced injected activity approaching that achievable with solid-state cameras.

  8. The effect of varying path properties in path steering tasks

    NARCIS (Netherlands)

    Liu, L.; Liere, R. van

    2010-01-01

Path steering is a primitive 3D interaction task that requires the user to navigate through a path of a given length and width. In a previous paper, we conducted controlled experiments in which users operated a pen input device to steer a cursor through a 3D path subject to fixed path properties.

  9. A testbed for wide-field, high-resolution, gigapixel-class cameras.

    Science.gov (United States)

    Kittle, David S; Marks, Daniel L; Son, Hui S; Kim, Jungsang; Brady, David J

    2013-05-01

    The high resolution and wide field of view (FOV) of the AWARE (Advanced Wide FOV Architectures for Image Reconstruction and Exploitation) gigapixel class cameras present new challenges in calibration, mechanical testing, and optical performance evaluation. The AWARE system integrates an array of micro-cameras in a multiscale design to achieve gigapixel sampling at video rates. Alignment and optical testing of the micro-cameras is vital in compositing engines, which require pixel-level accurate mappings over the entire array of cameras. A testbed has been developed to automatically calibrate and measure the optical performance of the entire camera array. This testbed utilizes translation and rotation stages to project a ray into any micro-camera of the AWARE system. A spatial light modulator is projected through a telescope to form an arbitrary object space pattern at infinity. This collimated source is then reflected by an elevation stage mirror for pointing through the aperture of the objective into the micro-optics and eventually the detector of the micro-camera. Different targets can be projected with the spatial light modulator for measuring the modulation transfer function (MTF) of the system, fiducials in the overlap regions for registration and compositing, distortion mapping, illumination profiles, thermal stability, and focus calibration. The mathematics of the testbed mechanics are derived for finding the positions of the stages to achieve a particular incident angle into the camera, along with calibration steps for alignment of the camera and testbed coordinate axes. Measurement results for the AWARE-2 gigapixel camera are presented for MTF, focus calibration, illumination profile, fiducial mapping across the micro-camera for registration and distortion correction, thermal stability, and alignment of the camera on the testbed.

  10. KALI Camera: mid-infrared camera for the Keck Interferometer Nuller

    Science.gov (United States)

    Creech-Eakman, Michelle J.; Moore, James D.; Palmer, Dean L.; Serabyn, Eugene

    2003-03-01

We present a brief overview of the KALI Camera, the mid-infrared camera for the Keck Interferometer Nulling Project, built at the Jet Propulsion Laboratory. The instrument utilizes mainly transmissive optics in four identical beam paths to spatially and spectrally filter, polarize, spectrally disperse and image the incoming 7-14 micron light from the four outputs of the Keck Nulling Beam Combiner onto a custom Boeing/DRS High Flux 128 × 128 BIB array. The electronics use a combination of JPL and Wallace Instruments boards to interface the array readout with the existing real-time control system of the Keck Interferometer. The cryogenic dewar, built by IR Laboratories, uses liquid nitrogen and liquid helium to cool the optics and the array, and includes six externally motorized mechanisms for aperture and pinhole control, focus, and optical component selection. The instrument will be assembled and tested through the summer of 2002, and is planned to be deployed as part of the Keck Interferometer Nulling experiment in 2003.

  11. System Architecture of the Dark Energy Survey Camera Readout Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Theresa; /FERMILAB; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; /Barcelona, IFAE; Chappa, Steve; /Fermilab; de Vicente, Juan; /Madrid, CIEMAT; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; /Fermilab; Martinez, Gustavo; /Madrid, CIEMAT; Moore, Todd; /Madrid, CIEMAT /Fermilab /Illinois U., Urbana /Fermilab

    2010-05-27

The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4M telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K × 4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2K × 2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  12. Hologram synthesis of three-dimensional real objects using portable integral imaging camera.

    Science.gov (United States)

    Lee, Sung-Keun; Hong, Sung-In; Kim, Yong-Soo; Lim, Hong-Gi; Jo, Na-Young; Park, Jae-Hyeung

    2013-10-01

    We propose a portable hologram capture system based on integral imaging. An integral imaging camera with an integrated micro lens array captures spatio-angular light ray distribution of the three-dimensional scene under incoherent illumination. The captured light ray distribution is then processed to synthesize corresponding hologram. Experimental results show that the synthesized hologram is optically reconstructed successfully, demonstrating accommodation and motion parallax of the reconstructed three-dimensional scene.

  13. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  14. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measure.
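The least-squares baseline that the paper improves upon can be sketched in a few lines: given camera RGB responses and measured XYZ values for a set of patches, solve for the 3 × 3 matrix minimizing the squared error. The patch data and matrix below are hypothetical; the paper replaces this criterion with perceptual (ΔE, S-CIELAB, CID) errors searched via spherical sampling.

```python
import numpy as np

# Hypothetical training data: camera RGB responses for five colour
# patches (rows = patches) and a made-up ground-truth mapping.
rgb = np.array([[0.2, 0.1, 0.05],
                [0.6, 0.5, 0.40],
                [0.1, 0.3, 0.70],
                [0.9, 0.8, 0.75],
                [0.3, 0.6, 0.20]])
M_true = np.array([[0.49, 0.31, 0.20],
                   [0.18, 0.81, 0.01],
                   [0.00, 0.01, 0.99]])
xyz = rgb @ M_true.T   # simulated XYZ measurements

# Classic least-squares characterization: find M with xyz ≈ rgb @ M.T.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = M.T
print(np.allclose(M, M_true))  # exact here because the data are noise-free
```

With noisy measurements the recovered matrix is only a least-squares estimate, which is precisely where a perceptual error criterion can do better.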

  15. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is operating successfully at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  16. Shortest Paths in Microseconds

    CERN Document Server

    Agarwal, Rachit; Godfrey, P Brighten; Zhao, Ben Y

    2013-01-01

    Computing shortest paths is a fundamental primitive for several social network applications including socially-sensitive ranking, location-aware search, social auctions and social network privacy. Since these applications compute paths in response to a user query, the goal is to minimize latency while maintaining feasible memory requirements. We present ASAP, a system that achieves this goal by exploiting the structure of social networks. ASAP preprocesses a given network to compute and store a partial shortest path tree (PSPT) for each node. The PSPTs have the property that for any two nodes, each edge along the shortest path is with high probability contained in the PSPT of at least one of the nodes. We show that the structure of social networks enable the PSPT of each node to be an extremely small fraction of the entire network; hence, PSPTs can be stored efficiently and each shortest path can be computed extremely quickly. For a real network with 5 million nodes and 69 million edges, ASAP computes a short...
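The PSPT idea above can be sketched with a toy version: precompute, for every node, the distances to its k nearest nodes (a stand-in for a partial shortest path tree), then answer a query by meeting in a node common to both partial trees. Names and the graph below are illustrative; in general this meet-in-the-middle answer is only an upper bound, and it is ASAP's PSPT construction that makes it correct with high probability.

```python
import heapq
from collections import defaultdict

def dijkstra_k_nearest(graph, src, k):
    """Distances from src to its k nearest nodes (a PSPT stand-in)."""
    dist, heap = {}, [(0, src)]
    while heap and len(dist) < k:
        d, u = heapq.heappop(heap)
        if u in dist:
            continue
        dist[u] = d
        for v, w in graph[u]:
            if v not in dist:
                heapq.heappush(heap, (d + w, v))
    return dist

def query(trees, s, t):
    """Shortest s-t distance via a node common to both partial trees."""
    common = trees[s].keys() & trees[t].keys()
    return min(trees[s][v] + trees[t][v] for v in common)

# Toy weighted undirected graph; k is chosen so the trees overlap.
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (0, 4, 5), (4, 3, 1)]
graph = defaultdict(list)
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

trees = {n: dijkstra_k_nearest(graph, n, k=4) for n in list(graph)}
print(query(trees, 0, 3))  # shortest path 0-1-2-3 with total weight 4
```

The precomputation is where all the heavy lifting happens; queries reduce to one set intersection and a linear scan.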

  17. Faster Replacement Paths

    CERN Document Server

    Williams, Virginia Vassilevska

    2010-01-01

    The replacement paths problem for directed graphs is to find for given nodes s and t and every edge e on the shortest path between them, the shortest path between s and t which avoids e. For unweighted directed graphs on n vertices, the best known algorithm runtime was \\tilde{O}(n^{2.5}) by Roditty and Zwick. For graphs with integer weights in {-M,...,M}, Weimann and Yuster recently showed that one can use fast matrix multiplication and solve the problem in O(Mn^{2.584}) time, a runtime which would be O(Mn^{2.33}) if the exponent \\omega of matrix multiplication is 2. We improve both of these algorithms. Our new algorithm also relies on fast matrix multiplication and runs in O(M n^{\\omega} polylog(n)) time if \\omega>2 and O(n^{2+\\eps}) for any \\eps>0 if \\omega=2. Our result shows that, at least for small integer weights, the replacement paths problem in directed graphs may be easier than the related all pairs shortest paths problem in directed graphs, as the current best runtime for the latter is \\Omega(n^{2.5...
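For concreteness, the naive baseline that these algorithms improve upon reruns Dijkstra once per shortest-path edge, with that edge removed. The graph below is a hypothetical illustration; the paper's actual algorithm uses fast matrix multiplication, not repeated Dijkstra.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, s, t, banned=None):
    """Shortest s-t distance and path, optionally avoiding one edge."""
    dist, prev, heap = {s: 0}, {}, [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        if u == t:
            break
        for v, w in adj[u]:
            if (u, v) == banned:
                continue
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if t not in dist:
        return float('inf'), []
    path, u = [t], t
    while u != s:
        u = prev[u]
        path.append(u)
    return dist[t], path[::-1]

# Toy directed graph: shortest s-t path is s-a-t with weight 2.
edges = [('s', 'a', 1), ('a', 't', 1), ('s', 'b', 2), ('b', 't', 2)]
adj = defaultdict(list)
for u, v, w in edges:
    adj[u].append((v, w))

d, path = dijkstra(adj, 's', 't')
for u, v in zip(path, path[1:]):          # one rerun per shortest-path edge
    rd, _ = dijkstra(adj, 's', 't', banned=(u, v))
    print(f"avoiding ({u},{v}): {rd}")    # both detours cost 4 via s-b-t
```

This baseline costs one Dijkstra run per edge of the shortest path, which is what makes the subquadratic-in-n matrix-multiplication approach interesting.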

  18. Nonlinear Reconstruction

    CERN Document Server

    Zhu, Hong-Ming; Pen, Ue-Li; Chen, Xuelei; Yu, Hao-Ran

    2016-01-01

    We present a direct approach to non-parametrically reconstruct the linear density field from an observed non-linear map. We solve for the unique displacement potential consistent with the non-linear density and positive definite coordinate transformation using a multigrid algorithm. We show that we recover the linear initial conditions up to $k\\sim 1\\ h/\\mathrm{Mpc}$ with minimal computational cost. This reconstruction approach generalizes the linear displacement theory to fully non-linear fields, potentially substantially expanding the BAO and RSD information content of dense large scale structure surveys, including for example SDSS main sample and 21cm intensity mapping.

  19. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  20. Multi-spectral camera development

    CSIR Research Space (South Africa)

    Holloway, M

    2012-10-01

Full Text Available Multi-Spectral Camera Development, 4th Biennial Conference, presented by Mark Holloway, 10 October 2012 (CSIR). The presentation covers a fused red, green and blue image, near-infrared (IR) imaging, and applications of the multi-spectral camera.

  1. Smoothing of Piecewise Linear Paths

    Directory of Open Access Journals (Sweden)

    Michel Waringo

    2008-11-01

Full Text Available We present an anytime-capable fast deterministic greedy algorithm for smoothing piecewise linear paths consisting of connected linear segments. With this method, path points with only a small influence on path geometry (i.e., aligned or nearly aligned points) are successively removed. Due to the removal of less important path points, the computational and memory requirements of the paths are reduced and traversing the path is accelerated. Our algorithm can be used in many different applications, e.g. sweeping, path finding, programming-by-demonstration in a virtual environment, or 6D CNC milling. The algorithm handles points with positional and orientational coordinates of arbitrary dimension.
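A 2D-only sketch of this greedy removal, assuming "influence" is measured as a point's distance to the chord joining its neighbours (the paper's error measure and arbitrary-dimension handling are more general):

```python
import math

def point_segment_dist(p, a, b):
    """Perpendicular distance of p from segment a-b (2D)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def smooth(path, tol):
    """Greedily delete the interior point with the smallest influence
    (distance to the chord of its neighbours) while it stays below tol."""
    path = list(path)
    while len(path) > 2:
        errs = [(point_segment_dist(path[i], path[i - 1], path[i + 1]), i)
                for i in range(1, len(path) - 1)]
        err, i = min(errs)
        if err > tol:
            break
        del path[i]
    return path

# A noisy, nearly straight run followed by a sharp corner.
zigzag = [(0, 0), (1, 0.01), (2, 0), (3, 0.02), (4, 0), (4, 3)]
print(smooth(zigzag, tol=0.05))  # → [(0, 0), (4, 0), (4, 3)]
```

Removing points in order of increasing influence is what gives the method its anytime character: stopping early still yields a valid, partially smoothed path.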

  2. Angular Sensitivity of Gated Micro-Channel Plate Framing Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Landen, O L; Lobban, A; Tutt, T; Bell, P M; Costa, R; Ze, F

    2000-07-24

Gated, microchannel-plate-based (MCP) framing cameras have been deployed worldwide for 0.2 - 9 keV x-ray imaging and spectroscopy of transient plasma phenomena. For a variety of spectroscopic and imaging applications, the angular sensitivity of MCPs must be known for correctly interpreting the data. We present systematic measurements of angular sensitivity at discrete relevant photon energies and arbitrary MCP gain. The results can be accurately predicted by using a simple 2D approximation to the 3D MCP geometry and by averaging over all possible photon ray paths.

  3. Fuzzy Visual Path Following by a Mobile Robot

    Science.gov (United States)

    Hamissi, A.; Bazoula, A.

    2008-06-01

We present in this work a variant of a visual navigation method developed for path following by a nonholonomic mobile robot moving in an environment free of obstacles. Only an embedded CCD camera is used for perception. The integration of perception and action leads us to develop, first, a method for extracting the useful information from each acquired image and, second, a control approach using fuzzy logic.

  4. TRACING EFFICIENT PATH USING WEB PATH TRACING

    Directory of Open Access Journals (Sweden)

    L.K. Joshila Grace

    2014-01-01

Full Text Available In today's fast-moving society, people prefer purchasing goods online to spending time shopping in person, and many resources have emerged for the online buying and selling of materials. Efficient and attractive web sites are the best way to sell goods to people. To know whether a web site is reaching the minds of its customers, web developers periodically perform a high-speed analysis. This work helps web site developers identify the weaker and stronger sections of their web site. Parameters such as frequency and utility are used for quantitative and qualitative analysis, respectively. In addition, downloads, bookmarks and the likes/dislikes of the particular web site are also considered. A new web path trace tree structure is implemented, and a mathematical formulation is used to predict the efficient patterns followed by the web site's visitors.

  5. Euclidean shortest paths

    CERN Document Server

    Li, Fajie

    2011-01-01

This unique text/reference reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. Discussing each concept and algorithm in depth, the book includes mathematical proofs for many of the given statements. Topics and features: provides theoretical and programming exercises at the end of each chapter; presents a thorough introduction to shortest paths in Euclidean geometry, and the class of algorithms called rubberband algorithms; discusses algorithms for calculating exact or approximate ESPs …

  6. Radiative Transport Based Flame Volume Reconstruction from Videos.

    Science.gov (United States)

    Shen, Liang; Zhu, Dengming; Nadeem, Saad; Wang, Zhaoqi; Kaufman, Arie E

    2017-06-06

    We introduce a novel approach for flame volume reconstruction from videos using inexpensive charge-coupled device (CCD) consumer cameras. The approach includes an economical data capture technique using inexpensive CCD cameras. Leveraging the smear feature of the CCD chip, we present a technique for synchronizing CCD cameras while capturing flame videos from different views. Our reconstruction is based on the radiative transport equation which enables complex phenomena such as emission, extinction, and scattering to be used in the rendering process. Both the color intensity and temperature reconstructions are implemented using the CUDA parallel computing framework, which provides real-time performance and allows visualization of reconstruction results after every iteration. We present the results of our approach using real captured data and physically-based simulated data. Finally, we also compare our approach against the other state-of-the-art flame volume reconstruction methods and demonstrate the efficacy and efficiency of our approach in four different applications: (1) rendering of reconstructed flames in virtual environments, (2) rendering of reconstructed flames in augmented reality, (3) flame stylization, and (4) reconstruction of other semitransparent phenomena.

  7. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    Science.gov (United States)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent to a small, lightweight, and rugged image sensing component. Available modules permit the bus-structured Formatter to be

  8. ACL reconstruction - discharge

    Science.gov (United States)

    Anterior cruciate ligament reconstruction - discharge; ACL reconstruction - discharge ... had surgery to reconstruct your anterior cruciate ligament (ACL). The surgeon drilled holes in the bones of ...

  9. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    Directory of Open Access Journals (Sweden)

    Miklas S. Kristoffersen

    2016-01-01

    Full Text Available The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences.
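A much-simplified stand-in for the clustering step described above: group reconstructed 3D points by ground-plane proximity and count each cluster as one pedestrian. The point cloud, the greedy single-link clustering, and the radius are all illustrative; the paper additionally tracks clusters over time to handle occlusions.

```python
def cluster_ground_points(points, radius=0.5):
    """Group reconstructed 3D points (x, y = ground plane, z = height)
    by ground-plane proximity; each cluster counts as one pedestrian."""
    clusters = []
    for x, y, z in points:
        for c in clusters:
            if any((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
                   for cx, cy, _ in c):
                c.append((x, y, z))
                break
        else:
            clusters.append([(x, y, z)])
    return clusters

# Hypothetical point cloud: two people standing ~1.2 m apart plus a
# lone walker further away (z values are head heights in metres).
cloud = [(0.0, 0.0, 1.7), (0.1, 0.1, 1.5), (1.2, 0.0, 1.6),
         (1.3, 0.1, 1.8), (5.0, 5.0, 1.7)]
print(len(cluster_ground_points(cloud)))  # counts 3 pedestrians
```

Greedy single-link clustering like this can chain nearby people together; that is exactly the occlusion-handling problem the stereo setup and tracking are meant to address.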

  10. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  11. OSIRIS camera barrel optomechanical design

    Science.gov (United States)

    Farah, Alejandro; Tejada, Carlos; Gonzalez, Jesus; Cobos, Francisco J.; Sanchez, Beatriz; Fuentes, Javier; Ruiz, Elfego

    2004-09-01

A Camera Barrel, located in the OSIRIS imager/spectrograph for the Gran Telescopio Canarias (GTC), is described in this article. The barrel design has been developed by the Institute for Astronomy of the University of Mexico (IA-UNAM), in collaboration with the Institute for Astrophysics of Canarias (IAC), Spain. The barrel is being manufactured by the Engineering Center for Industrial Development (CIDESI) at Queretaro, Mexico. The Camera Barrel includes a set of eight lenses (three doublets and two singlets), with their respective supports and cells, as well as two subsystems: the Focusing Unit, a mechanism that modifies the relative position of the first doublet; and the Passive Displacement Unit (PDU), which uses the third doublet as a thermal compensator to maintain the camera focal length and image quality when the ambient temperature changes. This article includes a brief description of the scientific instrument; describes the design criteria related to performance justification; and summarizes the specifications related to misalignment errors and generated stresses. The Camera Barrel components are described, and analytical calculations, FEA simulations and error budgets are also included.

  12. Reconstruction Algorithms in Undersampled AFM Imaging

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen

    2016-01-01

This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length and thereby the s…

  13. Linearisation of RGB camera responses for quantitative image analysis of visible and UV photography: a comparison of two techniques.

    Directory of Open Access Journals (Sweden)

    Jair E Garcia

    Full Text Available Linear camera responses are required for recovering the total amount of incident irradiance, quantitative image analysis, spectral reconstruction from camera responses and characterisation of spectral sensitivity curves. Two commercially-available digital cameras equipped with Bayer filter arrays and sensitive to visible and near-UV radiation were characterised using biexponential and Bézier curves. Both methods successfully fitted the entire characteristic curve of the tested devices, allowing for an accurate recovery of linear camera responses, particularly those corresponding to the middle of the exposure range. Nevertheless the two methods differ in the nature of the required input parameters and the uncertainty associated with the recovered linear camera responses obtained at the extreme ends of the exposure range. Here we demonstrate the use of both methods for retrieving information about scene irradiance, describing and quantifying the uncertainty involved in the estimation of linear camera responses.
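A minimal sketch of the linearisation idea, assuming a simple gamma-style characteristic curve rather than the biexponential or Bézier models the paper fits: characterise the curve from calibration data, then invert it to recover linear responses. All data here are simulated.

```python
import numpy as np

# Simulated characteristic curve: pixel value vs. exposure, following a
# gamma-style nonlinearity (a stand-in for the paper's models).
exposure = np.linspace(0.0, 1.0, 50)
gamma = 2.2
pixel = exposure ** (1.0 / gamma)

# Characterise the curve by fitting log(pixel) = (1/gamma) * log(exposure);
# the recovered inverse then linearises raw camera responses.
mask = exposure > 0
slope = np.polyfit(np.log(exposure[mask]), np.log(pixel[mask]), 1)[0]
linearised = pixel ** (1.0 / slope)

print(round(1.0 / slope, 2))             # recovered gamma, ≈ 2.2 here
print(np.allclose(linearised, exposure)) # responses are linear again
```

With real sensor data the fit is noisy, particularly at the extreme ends of the exposure range, which is precisely the uncertainty the paper quantifies for its two fitting methods.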

  14. Three-dimensional temperature field measurement of flame using a single light field camera.

    Science.gov (United States)

    Sun, Jun; Xu, Chuanlong; Zhang, Biao; Hossain, Md Moinul; Wang, Shimin; Qi, Hong; Tan, Heping

    2016-01-25

    Compared with a conventional camera, a light field camera has the advantage of simultaneously recording the direction and intensity of each ray projected onto the CCD (charge-coupled device) sensor. In this paper, a novel method is proposed for reconstructing the three-dimensional (3-D) temperature field of a flame based on a single light field camera. A radiative imaging model of a single light field camera is also developed for the flame. In this model, the principal ray represents the beam projected onto a pixel of the CCD sensor. The radiation direction of the ray from the flame outside the camera is obtained according to the thin lens equation of geometrical optics. The intensities of the principal rays recorded by the pixels on the CCD sensor are modeled mathematically based on the radiative transfer equation. The temperature distribution of the flame is then reconstructed by solving the mathematical model using the least-squares QR-factorization algorithm (LSQR). Numerical simulations and experiments are carried out to investigate the validity of the proposed method. The results presented in this study show that the proposed method is capable of reconstructing the 3-D temperature field of a flame.
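The final inversion step, solving the discretised radiative model for temperatures, can be sketched with SciPy's LSQR on a synthetic system (the sparse ray-weight matrix here is random, standing in for the real radiative-transfer kernel; sizes and values are assumptions):

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

# Toy forward model: each CCD pixel records a weighted sum of emissions
# from voxels along its principal ray. The weight matrix A is synthetic,
# not a real radiative-transfer kernel.
rng = np.random.default_rng(2)
n_voxels, n_rays = 200, 600
A = sprandom(n_rays, n_voxels, density=0.05, random_state=2, format="csr")

t_true = 1000.0 + 500.0 * rng.random(n_voxels)   # "temperature" field, K
b = A @ t_true                                   # simulated pixel readings

# Recover the field with damped least squares on the sparse system.
t_est = lsqr(A, b, damp=1e-6, atol=1e-10, btol=1e-10, iter_lim=5000)[0]
rel_err = float(np.linalg.norm(t_est - t_true) / np.linalg.norm(t_true))
```

With a well-conditioned, noise-free system LSQR recovers the field almost exactly; the practical difficulty lies in building an accurate ray-weight matrix from the camera model.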

  15. A Compton camera for spectroscopic imaging from 100 keV to 1 MeV

    Science.gov (United States)

    Earnhart, Jonathan Raby Dewitt

    The objective of this work is to investigate Compton camera technology for spectroscopic imaging of gamma rays in the 100 keV to 1 MeV range. An efficient, specific-purpose Monte Carlo code was developed to investigate the image formation process in Compton cameras. The code is based on a pathway sampling technique with extensive use of variance reduction techniques. The code includes detailed Compton scattering physics, including incoherent scattering functions, Doppler broadening, and multiple scattering. Experiments were performed with two different camera configurations for a scene containing a 75Se source and a 137Cs source. The first camera was based on a fixed silicon detector in the front plane and a CdZnTe detector mounted in the stage. The second camera configuration was based on two CdZnTe detectors. Both systems were able to reconstruct images of 75Se, using the 265 keV line, and 137Cs, using the 662 keV line. Only the silicon-CdZnTe camera was able to resolve the low-intensity 400 keV line of 75Se. Neither camera was able to reconstruct the 75Se source location using the 136 keV line. The energy resolution of the silicon-CdZnTe camera system was 4% at 662 keV. This camera reproduced the location of the 137Cs source by event-circle image reconstruction with angular resolutions of 10° for a source on the camera axis and 14° for a source 30° off axis. Typical detector pair efficiencies were measured as 3 × 10⁻¹¹ at 662 keV. The dual CdZnTe camera had an energy resolution of 3.2% at 662 keV. This camera reproduced the location of the 137Cs source by event-circle image reconstruction with angular resolutions of 8° for a source on the camera axis and 12° for a source 20° off axis. Typical detector pair efficiencies were measured as 7 × 10⁻¹¹ at 662 keV. Of the two prototype camera configurations tested, the silicon-CdZnTe configuration had superior imaging characteristics. This configuration is less sensitive to effects caused by source decay cascades and random
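Event-circle (cone) reconstruction starts from standard Compton kinematics: the cone half-angle follows from the energy deposited in the scatter detector. A small sketch of that relation (textbook kinematics, not the authors' Monte Carlo code; the example energies are illustrative):

```python
import math

MEC2 = 510.999  # electron rest energy in keV

def compton_cone_angle(e_total_kev, e1_kev):
    """Half-angle (degrees) of the Compton cone for a photon of total
    energy e_total_kev that deposits e1_kev in the scatter detector.
    Uses cos(theta) = 1 - mec2 * (1/E' - 1/E) with E' = E - E1."""
    e_scattered = e_total_kev - e1_kev
    cos_theta = 1.0 - MEC2 * (1.0 / e_scattered - 1.0 / e_total_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# A 662 keV (137Cs) photon depositing 200 keV in the front detector
# scatters onto a cone of half-angle of roughly 48 degrees.
angle = compton_cone_angle(662.0, 200.0)
```

Energy pairs that violate the kinematic bound are exactly the misidentified events (wrong interaction order, escapes) that degrade angular resolution.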

  16. An Unplanned Path

    Science.gov (United States)

    McGarvey, Lynn M.; Sterenberg, Gladys Y.; Long, Julie S.

    2013-01-01

    The authors elucidate what they saw as three important challenges to overcome along the path to becoming elementary school mathematics teacher leaders: marginal interest in math, low self-confidence, and teaching in isolation. To illustrate how these challenges were mitigated, they focus on the stories of two elementary school teachers--Laura and…

  17. Reparametrizations of Continuous Paths

    DEFF Research Database (Denmark)

    Fahrenberg, Uli; Raussen, Martin

    2007-01-01

    compare it to the distributive lattice of countable subsets of the unit interval. The results obtained are used to analyse the space of traces in a topological space, i.e., the space of continuous paths up to reparametrization equivalence. This space is shown to be homeomorphic to the space of regular...

  18. MEASURING PATH DEPENDENCY

    Directory of Open Access Journals (Sweden)

    Peter Juhasz

    2017-03-01

    Full Text Available While risk management has gained popularity during the last decades, even some of the basic risk types are still far out of focus. One of these is path dependency, which refers to the uncertainty of how we reach a certain level of total performance over time. While decision makers are careful in assessing what their position will look like at the end of certain periods, little attention is given to how they will get there through the period. The uncertainty of how a process will develop across a shorter period of time is often “eliminated” by simply choosing a longer planning time interval, which makes path dependency one of the most often overlooked business risk types. After reviewing the origin of the problem we propose and compare seven risk measures to assess path dependency. Traditional risk measures like the standard deviation of sub-period cash flows fail to capture this risk type. We conclude that in most cases considering the distribution of the expected cash flow effect caused by path dependency may offer the best method, but we may need to use several measures at the same time to include all the optimisation limits of the given firm.
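Two candidate measures can be sketched on simulated cash-flow paths: the sub-period standard deviation the abstract says is insufficient, plus a simple interim-shortfall measure (the cash-flow model and the shortfall definition are illustrative assumptions, not the paper's seven measures):

```python
import numpy as np

rng = np.random.default_rng(3)

def path_dependency_measures(n_paths=10000, n_periods=12, mu=100.0, sigma=40.0):
    """Toy illustration (assumed model): sub-period cash flows are noisy
    around mu; total performance is their sum, while path dependency shows
    up in how that total accumulates period by period."""
    cf = rng.normal(mu, sigma, size=(n_paths, n_periods))
    cumulative = np.cumsum(cf, axis=1)

    # Measure 1: standard deviation of sub-period cash flows (per path).
    subperiod_sd = cf.std(axis=1)

    # Measure 2: worst interim shortfall of the cumulative cash flow versus
    # a straight-line path to the same terminal value.
    terminal = cumulative[:, -1:]
    straight = terminal * (np.arange(1, n_periods + 1) / n_periods)
    worst_shortfall = (straight - cumulative).max(axis=1)
    return subperiod_sd, worst_shortfall

sd, shortfall = path_dependency_measures()
```

Two paths with identical terminal values can differ sharply in the shortfall measure, which is exactly the risk that terminal-only metrics miss.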

  19. Hyperspectral imaging using a color camera and its application for pathogen detection

    Science.gov (United States)

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  20. Kinect v2 and RGB Stereo Cameras Integration for Depth Map Enhancement

    Science.gov (United States)

    Ravanelli, R.; Nascetti, A.; Crespi, M.

    2016-06-01

    Today range cameras are widespread low-cost sensors based on two different principles of operation: we can distinguish between Structured Light (SL) range cameras (Kinect v1, Structure Sensor, ...) and Time Of Flight (ToF) range cameras (Kinect v2, ...). Both types are easy-to-use 3D scanners, able to reconstruct dense point clouds at high frame rate. However the depth maps obtained are often noisy and not accurate enough, therefore it is generally essential to improve their quality. Standard RGB cameras can be a valuable solution to solve such issue. The aim of this paper is therefore to evaluate the integration feasibility of these two different 3D modelling techniques, characterized by complementary features and based on standard low-cost sensors. For this purpose, a 3D model of a DUPLO™ bricks construction was reconstructed both with the Kinect v2 range camera and by processing one stereo pair acquired with a Canon Eos 1200D DSLR camera. The scale of the photogrammetric model was retrieved from the coordinates measured by Kinect v2. The preliminary results are encouraging and show that the foreseen integration could lead to a higher metric accuracy and a greater level of completeness with respect to that obtained by using the two techniques separately.
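The scale-retrieval step, fixing the arbitrary scale of the photogrammetric model from a distance measured by the range camera, reduces to a single similarity factor. A minimal sketch (function name, points and distances are illustrative, not the paper's data):

```python
import numpy as np

def apply_metric_scale(model_points, p_a, p_b, metric_distance):
    """Scale an arbitrarily-scaled photogrammetric point cloud to metric
    units, given two model points whose true separation (e.g. measured by
    Kinect v2) is known."""
    model_distance = np.linalg.norm(np.asarray(p_a) - np.asarray(p_b))
    scale = metric_distance / model_distance
    return np.asarray(model_points) * scale

# Stereo model in arbitrary units; the two reference points are 2.0 model
# units apart, but the range camera says they are 0.30 m apart.
cloud = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 1.0, 4.0]])
scaled = apply_metric_scale(cloud, cloud[0], cloud[1], 0.30)
```

In practice several point pairs would be averaged, since the Kinect coordinates themselves carry noise.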

  1. Breast Reconstruction Alternatives

    Science.gov (United States)

    ... Breast Reconstruction Surgery Breast Cancer Breast Reconstruction Surgery Breast Reconstruction Alternatives Some women who have had a ... chest. What if I choose not to get breast reconstruction? Some women decide not to have any ...

  2. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure and deploy ad-hoc solutions based on the curren...

  3. Reconstructing the temporal progression of HIV-1 immune response pathways

    Science.gov (United States)

    Jain, Siddhartha; Arrais, Joel; Venkatachari, Narasimhan J.; Ayyavoo, Velpandi; Bar-Joseph, Ziv

    2016-01-01

    Motivation: Most methods for reconstructing response networks from high throughput data generate static models which cannot distinguish between early and late response stages. Results: We present TimePath, a new method that integrates time series and static datasets to reconstruct dynamic models of host response to stimulus. TimePath uses an Integer Programming formulation to select a subset of pathways that, together, explain the observed dynamic responses. Applying TimePath to study human response to HIV-1 led to accurate reconstruction of several known regulatory and signaling pathways and to novel mechanistic insights. We experimentally validated several of TimePath's predictions, highlighting the usefulness of temporal models. Availability and Implementation: Data, Supplementary text and the TimePath software are available from http://sb.cs.cmu.edu/timepath Contact: zivbj@cs.cmu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307624

  4. Maxillary reconstruction

    Directory of Open Access Journals (Sweden)

    Brown James

    2007-12-01

    Full Text Available This article aims to discuss the various defects that occur with maxillectomy, with a full review of the literature and discussion of the advantages and disadvantages of the various techniques described. Reconstruction of the maxilla can be relatively simple for the standard low maxillectomy that does not involve the orbital floor (Class 2). In this situation the structure of the face is less damaged and there are multiple reconstructive options for the restoration of the maxilla and dental alveolus. If the maxillectomy includes the orbit (Class 4), then problems involving the eye (enophthalmos, orbital dystopia, ectropion and diplopia) are avoided, which simplifies the reconstruction. Most controversy is associated with the maxillectomy that involves the orbital floor and dental alveolus (Class 3). A case is made for the use of the iliac crest with internal oblique as an ideal option, but there are other methods which may provide a similar result. A multidisciplinary approach to these patients is emphasised, which should include a prosthodontist with special expertise for these defects.

  5. The ITER Radial Neutron Camera Detection System

    Science.gov (United States)

    Marocco, D.; Belli, F.; Bonheure, G.; Esposito, B.; Kaschuck, Y.; Petrizzi, L.; Riva, M.

    2008-03-01

    A multichannel neutron detection system (Radial Neutron Camera, RNC) will be installed on the ITER equatorial port plug 1 for total neutron source strength, neutron emissivity/ion temperature profiles and nt/nd ratio measurements [1]. The system is composed of two fan-shaped collimating structures: an ex-vessel structure, looking at the plasma core, containing three sets of 12 collimators (each set lying on a different toroidal plane), and an in-vessel structure, containing 9 collimators, for plasma edge coverage. The RNC detecting system will work in a harsh environment (neutron flux up to 10⁸-10⁹ n/cm²·s, magnetic field >0.5 T for in-vessel detectors), should provide both counting and spectrometric information, and should be flexible enough to cover the high neutron flux dynamic range expected during the different ITER operation phases. ENEA has been involved in several activities related to RNC design and optimization [2,3]. In the present paper the up-to-date design and the neutron emissivity reconstruction capabilities of the RNC will be described. Different options for detectors suitable for spectrometry and counting (e.g. scintillators and diamonds), focusing on the implications in terms of overall RNC performance, will be discussed. The increase of the RNC capabilities offered by the use of new digital data acquisition systems will also be addressed.

  6. 4π FOV compact Compton camera for nuclear material investigations

    Science.gov (United States)

    Lee, Wonho; Lee, Taewoong

    2011-10-01

    A compact Compton camera with a 4π field of view (FOV) was manufactured using design parameters optimized with the effective choice of gamma-ray interaction order determined from a Monte Carlo simulation. The camera consisted of six CsI(Na) planar scintillators with a pixelized structure coupled to position-sensitive photomultiplier tubes (H8500) consisting of multiple anodes connected to custom-made circuits. The size of the scintillator and each pixel was 4.4×4.4×0.5 and 0.2×0.2×0.5 cm, respectively. The total size of each detection module was only 5×5×6 cm and the distance between the detector modules was approximately 10 cm to maximize the camera performance, as calculated by the simulation. Therefore, the camera is quite portable for examining nuclear materials in areas such as harbors or nuclear power plants. The non-uniformity of the multi-anode PMTs was corrected using a novel readout circuit. Amplitude information of the signals from the electronics attached to the scintillator-coupled multi-anode PMTs was collected using a data acquisition board (cDAQ-9178), and the timing information was sent to an FPGA (SPARTAN3E). The FPGA picked the rising edges of the timing signals and compared the edges of the signals from the six detection modules to select coincident signals from Compton pairs only. The output of the FPGA triggered the DAQ board to send the effective Compton events to a computer. The Compton image was reconstructed, and the performance of the 4π FOV compact camera was examined.
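The coincidence-selection logic described for the FPGA, comparing rising-edge times across modules and keeping only hits inside a window, can be sketched in software (a toy stand-in for the hardware; the 50 ns window and the event format are assumptions):

```python
def coincident_pairs(timestamps, window_ns=50.0):
    """Return index pairs of hits from *different* detector modules whose
    rising-edge times fall within the coincidence window.
    `timestamps` is a list of (time_ns, module_id) tuples."""
    order = sorted(range(len(timestamps)), key=lambda i: timestamps[i][0])
    pairs = []
    for a in range(len(order)):
        i = order[a]
        for b in range(a + 1, len(order)):
            j = order[b]
            if timestamps[j][0] - timestamps[i][0] > window_ns:
                break  # later hits are even further away in time
            if timestamps[i][1] != timestamps[j][1]:
                pairs.append((i, j))
    return pairs

# Hits in modules 0 and 3 coincide, as do the hits in modules 2 and 5;
# the two module-1 hits are close in time but share a module.
events = [(0.0, 0), (20.0, 3), (500.0, 1), (510.0, 1), (900.0, 2), (930.0, 5)]
print(coincident_pairs(events))  # → [(0, 1), (4, 5)]
```

The real FPGA does the equivalent comparison online, so that only Compton pair candidates trigger the DAQ readout.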

  7. Performance evaluation of MACACO: a multilayer Compton camera

    Science.gov (United States)

    Muñoz, Enrique; Barrio, John; Etxebeste, Ane; Ortega, Pablo G.; Lacasta, Carlos; Oliver, Josep F.; Solaz, Carles; Llosá, Gabriela

    2017-09-01

    Compton imaging devices have been proposed and studied for a wide range of applications. We have developed a Compton camera prototype which can be operated with two or three detector layers based on monolithic lanthanum bromide (LaBr₃) crystals coupled to silicon photomultipliers (SiPMs), to be used for proton range verification in hadron therapy. In this work, we present the results obtained with our prototype in laboratory tests with radioactive sources and in simulation studies. Images of ²²Na and ⁸⁸Y radioactive sources have been successfully reconstructed. The full width at half maximum of the reconstructed images is below 4 mm for a ²²Na source at a distance of 5 cm.

  9. AUTOMATIC CAMERA ORIENTATION AND STRUCTURE RECOVERY WITH SAMANTHA

    Directory of Open Access Journals (Sweden)

    R. Gherardi

    2012-09-01

    Full Text Available SAMANTHA is a software package capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom-up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, comparing them with laser scans serving as ground truth.

  10. Efficient Unbiased Rendering using Enlightened Local Path Sampling

    DEFF Research Database (Denmark)

    Kristensen, Anders Wang

    . The downside to using these algorithms is that they can be slow to converge. Due to the nature of Monte Carlo methods, the results are random variables subject to variance. This manifests itself as noise in the images, which can only be reduced by generating more samples. The reason these methods are slow...... is because of a lack of effective methods of importance sampling. Most global illumination algorithms are based on local path sampling, which is essentially a recipe for constructing random walks. Using this procedure paths are built based on information given explicitly as part of the scene description......, such as the location of the light sources or cameras, or the reflection models at each point. In this work we explore new methods of importance sampling paths. Our idea is to analyze the scene before rendering and compute various statistics that we use to improve importance sampling. The first of these are adjoint...

  11. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their uninterrupted technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
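The accuracy check described above, reconstructing a rigid bar with two markers at a known distance and averaging the distance error, can be sketched as follows (synthetic positions; the 250 mm bar length is an assumption, not the paper's bar):

```python
import numpy as np

def reconstruction_accuracy(marker_a, marker_b, known_distance_mm):
    """Mean absolute error between reconstructed inter-marker distances
    and the rigid bar's known marker separation, over all frames."""
    d = np.linalg.norm(np.asarray(marker_a) - np.asarray(marker_b), axis=1)
    return float(np.mean(np.abs(d - known_distance_mm))), d

# Synthetic reconstructed positions (mm) of both bar markers over 4 frames;
# every frame overestimates the nominal 250 mm separation by 0.5 mm.
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
              [0.0, 5.0, 0.0], [3.0, 0.0, 4.0]])
b = a + np.array([250.5, 0.0, 0.0])
err, dists = reconstruction_accuracy(a, b, 250.0)
```

The same statistic, computed over many bar poses spread through the working volume, is what the sub-2.5 mm figures in the abstract summarize.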

  12. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  14. Noise evaluation of Compton camera imaging for proton therapy

    Science.gov (United States)

    Ortega, P. G.; Torres-Espallardo, I.; Cerutti, F.; Ferrari, A.; Gillam, J. E.; Lacasta, C.; Llosá, G.; Oliver, J. F.; Sala, P. R.; Solevi, P.; Rafecas, M.

    2015-02-01

    Compton cameras emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high-intensity beams can produce particle accumulation in the camera, which leads to an increase in random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented. The complete chain of

  16. An optical metasurface planar camera

    CERN Document Server

    Arbabi, Amir; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are 2D arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optical design by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked on top of each other and are integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here, we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has an f-number of 0.9, an angle-of-view larger than 60°×60°, and operates at 850 nm wavelength with large transmission. The camera exhibits high image quality, which indicates the potential of this technology to produce a paradigm shift in future designs of imaging systems for microscopy, photograp...

  17. Combustion pinhole-camera system

    Science.gov (United States)

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  18. Cosmological Feynman Paths

    CERN Document Server

    Chew, Geoffrey F

    2008-01-01

    Arrowed-time divergence-free rules of cosmological quantum dynamics are formulated through stepped Feynman paths across macroscopic slices of Milne spacetime. Slice boundaries house totally-relativistic rays representing elementary entities--preons. Total relativity and the associated preon Fock space, despite distinction from special relativity (which lacks time arrow), are based on the Lorentz group. Each path is a set of cubic vertices connected by straight, directed and stepped arcs that carry inertial, electromagnetic and gravitational action. The action of an arc step comprises increments each bounded by Planck's constant. Action from extremely-distant sources is determined by the universe mean energy density. Identifying the arc-step energy that determines inertial action with that determining gravitational action establishes both arc-step length and universe density. Special relativity is accurate for physics at laboratory spacetime scales far below that of Hubble and far above that of Planck.

  19. Nonadiabatic transition path sampling

    Science.gov (United States)

    Sherman, M. C.; Corcelli, S. A.

    2016-07-01

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

  20. Mirrored Light Field Video Camera Adapter

    OpenAIRE

    Tsai, Dorian; Dansereau, Donald G.; Martin, Steve; Corke, Peter

    2016-01-01

    This paper proposes the design of a custom mirror-based light field camera adapter that is cheap, simple in construction, and accessible. Mirrors of different shape and orientation reflect the scene into an upwards-facing camera to create an array of virtual cameras with overlapping field of view at specified depths, and deliver video frame rate light fields. We describe the design, construction, decoding and calibration processes of our mirror-based light field camera adapter in preparation ...

  1. Automated Placement of Multiple Stereo Cameras

    OpenAIRE

    Malik, Rahul; Bajcsy, Peter

    2008-01-01

    This paper presents a simulation framework for multiple stereo camera placement. Multiple stereo camera systems are becoming increasingly popular these days. Applications of multiple stereo camera systems, such as tele-immersive systems, enable cloning of dynamic scenes in real-time and delivering 3D information from multiple geographic locations to everyone for viewing in virtual (immersive) 3D spaces. In order to make such multi stereo camera systems ubiquitous, sol...

  2. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
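Pinhole optimization trades geometric blur against diffraction. A common rule of thumb, complementary to the graphic technique above (this is not Scott's transfer-function method), picks the diameter as d = c·sqrt(λ·f), where the constant c varies between roughly 1.5 and 2 depending on the optimization criterion:

```python
import math

def optimal_pinhole_diameter(focal_length_mm, wavelength_nm=550.0, c=1.9):
    """Rule-of-thumb pinhole diameter (mm) balancing geometric blur
    against diffraction: d = c * sqrt(lambda * f). The value c = 1.9 is
    the constant usually attributed to Rayleigh; treat it as an
    assumption, not a universal optimum."""
    wavelength_mm = wavelength_nm * 1e-6
    return c * math.sqrt(wavelength_mm * focal_length_mm)

# 100 mm focal length, green light: a pinhole of roughly 0.45 mm.
d = optimal_pinhole_diameter(100.0)
```

A graphic transfer-function analysis like the one in the paper refines this single-number estimate by showing how contrast varies with spatial frequency around the optimum.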

  3. Imaging multi-energy gamma-ray fields with a Compton scatter camera

    Science.gov (United States)

    Martin, J. B.; Dogan, N.; Gormley, J. E.; Knoll, G. F.; O'Donnell, M.; Wehe, D. K.

    1994-08-01

    Multi-energy gamma-ray fields have been imaged with a ring Compton scatter camera (RCC). The RCC is intended for industrial applications, where there is a need to image multiple gamma-ray lines from spatially extended sources. To our knowledge, the ability of a Compton scatter camera to perform this task had not previously been demonstrated. Gamma rays with different incident energies are distinguished based on the total energy deposited in the camera elements. For multiple gamma-ray lines, separate images are generated for each line energy. Random coincidences and other interfering interactions have been investigated. Camera response has been characterized for energies from 0.511 to 2.75 MeV. Different gamma-ray lines from extended sources have been measured and images reconstructed using both direct and iterative algorithms.
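The energy-based separation of gamma-ray lines described above can be sketched as a simple windowing step on the total deposited energy. The line energies match those mentioned in the abstract, but the window width is an illustrative assumption, not the RCC's actual setting:

```python
def assign_line_energy(deposits_mev, line_energies=(0.511, 1.275, 2.75), window_mev=0.05):
    """Sum the energy deposited in the camera elements and match the total
    against known gamma-ray line energies. Totals that match no line
    (random coincidences, partial depositions) are rejected."""
    total = sum(deposits_mev)
    for line in line_energies:
        if abs(total - line) <= window_mev:
            return line
    return None

# Accepted events are then sorted into one image set per line energy
# before reconstruction.
```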

  4. Implementation of spatial touch system using time-of-flight camera

    Institute of Scientific and Technical Information of China (English)

    AHN Yang-Keun; PARK Young-Choong; CHOI Kwang-Soon; PARK Woo-Choo; SEO Hae-Moon; JUNG Kwang-Mo

    2009-01-01

Recently developed depth-sensing video cameras based on the time-of-flight principle provide precise per-pixel range data in addition to color video. Such cameras will find application in robotics and in vision-based human-computer interaction scenarios such as games and gesture input systems. Time-of-flight range cameras are becoming increasingly available, and they promise to make 3D reconstruction of scenes easier by avoiding the practical issues of 3D imaging techniques based on triangulation or disparity estimation. We present a spatial touch system that uses a depth-sensing camera to enable touch interaction with spatial objects, detail its implementation, and speculate on how this technology will enable new spatial interactions.

  5. Automatic tracking sensor camera system

    Science.gov (United States)

    Tsuda, Takao; Kato, Daiichiro; Ishikawa, Akio; Inoue, Seiki

    2001-04-01

We are developing a sensor camera system for automatically tracking and determining the positions of subjects moving in three dimensions. The system is intended to operate even within areas as large as soccer fields. It measures the 3D coordinates of the object while driving the pan and tilt movements of the camera heads and the degree of zoom of the lenses. Its principal feature is that it automatically zooms in as the object moves farther away and zooms out as the object moves closer, keeping the object at a fixed size in the image. This feature makes stable detection by image processing possible. We plan to use the system to detect the position of the ball during a soccer game. In this paper, we describe the configuration of the automatic tracking sensor camera system under development. We then give an analysis of the movements of the ball within images of games, the results of experiments on the image processing method used to detect the ball, and the results of further experiments verifying the accuracy of an experimental system. These results show that the system is sufficiently accurate at obtaining positions in three dimensions.

  6. Unassisted 3D camera calibration

    Science.gov (United States)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration (on arbitrary scenes). In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm's performance is evaluated in terms of the remaining vertical disparity compared to the maximum tolerable vertical disparity.
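The frame-rejection step can be sketched as a vertical-disparity check over matched keypoints. The thresholds and the median statistic below are illustrative assumptions, not the paper's actual criteria:

```python
from statistics import median

def frame_usable(matches, max_vdisp_px=5.0, min_matches=8):
    """matches: list of ((xl, yl), (xr, yr)) keypoint pairs between the
    left and right frames. Reject frames with too few matches (poor
    keypoint constellation) or with a median vertical disparity above
    the tolerable margin (likely erroneous matches or miscalibration)."""
    if len(matches) < min_matches:
        return False
    vdisp = [abs(left[1] - right[1]) for left, right in matches]
    return median(vdisp) <= max_vdisp_px

# Synthetic example: a well-aligned pair and a vertically shifted one.
good = [((x, 100.0), (x - 12.0, 100.4)) for x in range(10)]
bad = [((x, 100.0), (x - 12.0, 112.0)) for x in range(10)]
```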

7. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

A fundus camera for ophthalmology is a high-definition device that must combine low-light illumination of the human retina, high resolution at the retina, and reflection-free images [1]. Those constraints make its optical design very sophisticated, and the most difficult requirements to satisfy are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment makes the sophisticated optical design useless. In this work we developed a totally coaxial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD-plus-capture-lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  8. PATHS groundwater hydrologic model

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.W.; Schur, J.A.

    1980-04-01

A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model and the preliminary evaluation capability prepared for WISAP, including the enhancements made as a result of the authors' experience using the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS, written for users who have little or no experience with computers. Appendix C is for the programmer; it contains information on how input parameters are passed between programs in the system, along with program listings and a test case listing. Appendix D is a definition of terms.

  9. Error control in the set-up of stereo camera systems for 3d animal tracking

    Science.gov (United States)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
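One of the dominant effects such an analysis must capture, depth uncertainty growing with distance, follows from the standard stereo relation z = f·b/d, whose first-order error propagation gives δz ≈ z²·δd/(f·b). This sketch uses that generic relation, not the authors' full error model:

```python
def depth_uncertainty_m(depth_m, baseline_m, focal_px, disparity_err_px):
    """First-order propagation of a disparity matching error through the
    stereo depth equation z = f*b/d: dz = z**2 * dd / (f * b).
    Uncertainty grows quadratically with distance from the cameras."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# e.g. a bird 10 m away, 0.5 m baseline, f = 1000 px, 0.5 px match error
err = depth_uncertainty_m(10.0, 0.5, 1000.0, 0.5)  # 0.1 m
```

Doubling the distance quadruples the error, which is why the camera set-up (baseline, focal length) and calibration precision dominate trajectory accuracy.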

  10. 3D point cloud registration based on the assistant camera and Harris-SIFT

    Science.gov (United States)

    Zhang, Yue; Yu, HongYang

    2016-07-01

3D (three-dimensional) point cloud registration is a hot topic in the field of 3D reconstruction, but most registration methods are neither real-time nor effective. This paper proposes a point cloud registration method for 3D reconstruction based on Harris-SIFT and an assistant camera. The assistant camera is used to pinpoint the mobile 3D reconstruction device. The feature points of images are detected using the Harris operator, the main orientation of each feature point is calculated, and finally the feature point descriptors are generated after rotating the coordinates of the descriptors relative to the feature points' main orientations. Experimental results demonstrate the effectiveness of the proposed method.

  11. Image Based Camera Localization: an Overview

    OpenAIRE

    Wu, Yihong

    2016-01-01

Recently, virtual reality, augmented reality, robotics, self-driving cars, and related fields have attracted much attention from industry, and image-based camera localization is a key task in all of them. An overview of the topic is therefore timely. In this paper, an overview of image-based camera localization is presented. It will be useful to researchers as well as engineers.

  12. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  13. 21 CFR 892.1110 - Positron camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  14. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENT OF GENERAL POLICY OR INTERPRETATION AND... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the...

  15. Strategic options towards an affordable high-performance infrared camera

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

The promise of infrared (IR) imaging attaining the low cost of CMOS sensors has been hampered, despite well-documented advantages, by the inability to achieve the cost reductions necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512 pixel uncooled InGaAs system with high sensitivity, low noise, high frame rate (500 frames per second (FPS) at full resolution), and low power consumption. This camera supports market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. Among the strategic options presented are new sensor manufacturing technologies that scale favorably towards automation, readout electronics compatible with multiple focal plane arrays, and dense or ultra-small pixel pitch devices.

  16. Path planning in dynamic environments

    NARCIS (Netherlands)

    Berg, J.P. van den

    2007-01-01

    Path planning plays an important role in various fields of application, such as CAD design, computer games and virtual environments, molecular biology, and robotics. In its most general form, the path planning problem is formulated as finding a collision-free path for a moving entity between a start

  17. Afghanistan Reconstruction

    Institute of Scientific and Technical Information of China (English)

    Fu Xiaoqiang

    2006-01-01

The Karzai regime has made some progress in post-war reconstruction over the past four and a half years. However, the Taliban's destruction and the drug economy still have serious impacts on the security and stability of Afghanistan, and settling these two problems has become crucial to the country's future. Moreover, the Karzai regime has yet to handle a series of difficult issues concerning the central government's authority, military and police building, and foreign relations.

  18. Mini gamma camera, camera system and method of use

    Science.gov (United States)

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

A gamma camera comprising, essentially and in order from the front outer (gamma ray impinging) surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position-sensitive, high-resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high-resolution, position-sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer, where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
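The center-of-gravity step mentioned above can be sketched as an intensity-weighted centroid over the reconstructed image. This is the generic centroid formula, not the patent's specific algorithm:

```python
def center_of_gravity(image):
    """image: 2D list of non-negative counts. Returns the (row, col)
    intensity-weighted centroid, i.e. the center of gravity of the
    activity distribution in the image."""
    total = sum(sum(row) for row in image)
    row_cg = sum(i * sum(row) for i, row in enumerate(image)) / total
    col_cg = sum(j * val for row in image for j, val in enumerate(row)) / total
    return row_cg, col_cg

# A single hot spot at (row 1, col 2) yields exactly that centroid.
```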

  19. Tree STEM Reconstruction Using Vertical Fisheye Images: a Preliminary Study

    Science.gov (United States)

    Berveglieri, A.; Tommaselli, A. M. G.

    2016-06-01

    A preliminary study was conducted to assess a tree stem reconstruction technique with panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral view are rectified to the same scale in the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed by using the lateral virtual images generated from the vertical fisheye images and with the advantage of using fewer images and taken from one single station.

  20. TREE STEM RECONSTRUCTION USING VERTICAL FISHEYE IMAGES: A PRELIMINARY STUDY

    Directory of Open Access Journals (Sweden)

    A. Berveglieri

    2016-06-01

Full Text Available A preliminary study was conducted to assess a tree stem reconstruction technique with panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral view are rectified to the same scale in the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed by using the lateral virtual images generated from the vertical fisheye images and with the advantage of using fewer images and taken from one single station.

  1. Spectrometry with consumer-quality CMOS cameras.

    Science.gov (United States)

    Scheeline, Alexander

    2015-01-01

    Many modern spectrometric instruments use diode arrays, charge-coupled arrays, or CMOS cameras for detection and measurement. As portable or point-of-use instruments are desirable, one would expect that instruments using the cameras in cellular telephones and tablet computers would be the basis of numerous instruments. However, no mass market for such devices has yet developed. The difficulties in using megapixel CMOS cameras for scientific measurements are discussed, and promising avenues for instrument development reviewed. Inexpensive alternatives to use of the built-in camera are also mentioned, as the long-term question is whether it is better to overcome the constraints of CMOS cameras or to bypass them.
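The core readout step any such camera-based spectrometer needs, collapsing a 2D frame into a 1D spectrum under a wavelength calibration, can be sketched as follows. The linear calibration constants are hypothetical placeholders, not values from the article:

```python
def frame_to_spectrum(frame, wl_at_px0_nm=400.0, nm_per_px=0.5):
    """frame: 2D list of pixel intensities with the dispersion axis
    running along columns. Average down each column (binning along the
    slit axis) and map column index to wavelength with an assumed
    linear dispersion calibration."""
    n_cols = len(frame[0])
    spectrum = [sum(row[j] for row in frame) / len(frame) for j in range(n_cols)]
    wavelengths = [wl_at_px0_nm + nm_per_px * j for j in range(n_cols)]
    return wavelengths, spectrum
```

In practice a phone-camera spectrometer also has to undo the sensor's nonlinear tone curve and color filter array before this step, which is one of the difficulties the article discusses.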

  2. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
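The intrinsic parameters being recovered can be illustrated with the pinhole projection model they parameterize. This is the standard textbook model, not the specific Matlab routine used in the paper, and the numeric values are arbitrary:

```python
def project(fx, fy, cx, cy, X, Y, Z):
    """Pinhole projection with intrinsic parameters: focal lengths
    (fx, fy) in pixels and principal point (cx, cy). A 3D point in
    camera coordinates maps to pixel (u, v). Calibration is the inverse
    problem: recovering fx, fy, cx, cy (and distortion) from known
    2D-3D correspondences."""
    return fx * X / Z + cx, fy * Y / Z + cy

# A point on the optical axis lands exactly on the principal point.
u, v = project(800.0, 800.0, 320.0, 240.0, 0.0, 0.0, 2.0)  # (320.0, 240.0)
```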

  3. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    OpenAIRE

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P. T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short conf...

  4. Automatic calibration method for plenoptic camera

    Science.gov (United States)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without the prior knowledge of camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method by the raw data of Lytro. The experiments show that our method has higher intelligence than the methods published before.

  5. Task analysis of laparoscopic camera control schemes.

    Science.gov (United States)

    Ellis, R Darin; Munaco, Anthony J; Reisner, Luke A; Klein, Michael D; Composto, Anthony M; Pandya, Abhilash K; King, Brady W

    2016-12-01

Minimally invasive surgeries rely on laparoscopic camera views to guide the procedure. Traditionally, an expert surgical assistant operates the camera. In some cases, a robotic system is used to help position the camera, but the surgeon is required to direct all movements of the system. Some prior research has focused on developing automated robotic camera control systems, but that work has been limited to rudimentary control schemes due to a lack of understanding of how the camera should be moved for different surgical tasks. This research used task analysis with a sample of eight expert surgeons to discover and document several salient methods of camera control and their related task contexts. Desired camera placements and behaviours were established for two common surgical subtasks (suturing and knot tying). The results can be used to develop better robotic control algorithms that will be more responsive to surgeons' needs. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Characterization of the Series 1000 Camera System

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network addressable scientific grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electron read noise at a 1MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and performance characterization is reported.

  7. A high-resolution SWIR camera via compressed sensing

    Science.gov (United States)

    McMackin, Lenore; Herman, Matthew A.; Chatterjee, Bill; Weldon, Matt

    2012-06-01

    Images from a novel shortwave infrared (SWIR, 900 nm to 1.7 μm) camera system are presented. Custom electronics and software are combined with a digital micromirror device (DMD) and a single-element sensor; the latter are commercial off-the-shelf devices, which together create a lower-cost imaging system than is otherwise available in this wavelength regime. A compressive sensing (CS) encoding schema is applied to the DMD to modulate the light that has entered the camera. This modulated light is directed to a single-element sensor and an ensemble of measurements is collected. With the data ensemble and knowledge of the CS encoding, images are computationally reconstructed. The hardware and software combination makes it possible to create images with the resolution of the DMD while employing a substantially lower-cost sensor subsystem than would otherwise be required by the use of traditional focal plane arrays (FPAs). In addition to the basic camera architecture, we also discuss a technique that uses the adaptive functionality of the DMD to search and identify regions of interest. We demonstrate adaptive CS in solar exclusion experiments where bright pixels, which would otherwise reduce dynamic range in the images, are automatically removed.
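The measure-then-reconstruct loop can be sketched with a toy single-pixel model: binary DMD-style patterns produce one inner-product measurement each, and a sparse scene is recovered by orthogonal matching pursuit (OMP). This is a generic compressive-sensing illustration with made-up dimensions, not the actual encoding schema or solver used in the camera described above:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 64, 32, 3                      # scene pixels, measurements, sparsity
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.5, 2.0]   # sparse "scene"

# DMD-style on/off patterns, shifted to +/-0.5 so columns are incoherent
A = rng.integers(0, 2, size=(m, n)).astype(float) - 0.5
y = A @ x_true                            # single-element sensor readings

def omp(A, y, k):
    """Greedy sparse recovery: pick the column most correlated with the
    residual, refit on the selected support by least squares, repeat."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
```

The adaptive variant mentioned in the abstract corresponds to changing the rows of `A` on the fly, e.g. masking out DMD mirrors aimed at saturating regions.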

  8. Lexicographic Path Induction

    DEFF Research Database (Denmark)

    Schürmann, Carsten; Sarnat, Jeffrey

    2009-01-01

    Programming languages theory is full of problems that reduce to proving the consistency of a logic, such as the normalization of typed lambda-calculi, the decidability of equality in type theory, equivalence testing of traces in security, etc. Although the principle of transfinite induction...... an induction principle that combines the comfort of structural induction with the expressive strength of transfinite induction. Using lexicographic path induction, we give a consistency proof of Martin-Löf’s intuitionistic theory of inductive definitions. The consistency of Heyting arithmetic follows directly...

  9. JAVA PathFinder

    Science.gov (United States)

    Mehhtz, Peter

    2005-01-01

JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.

  10. Rocket Flight Path

    Directory of Open Access Journals (Sweden)

    Jamie Waters

    2014-09-01

Full Text Available This project uses Newton's second law of motion, Euler's method, basic physics, and basic calculus to model the flight path of a rocket. From this, one can find the height and velocity at any point from launch to the maximum altitude, or apogee. This can then be compared to the actual values to see if the method of estimation is plausible. The rocket used for this project is modeled after Bullistic-1, which was launched by the Society of Aeronautics and Rocketry at the University of South Florida.
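The Euler-method estimate described above can be sketched for the coast phase after engine burnout, ignoring drag. The burnout velocity below is an illustrative number, not Bullistic-1's actual value:

```python
def apogee_euler(v0_mps, dt=0.001, g=9.81):
    """Integrate dh/dt = v, dv/dt = -g with explicit Euler steps from
    burnout velocity v0 until the vertical velocity reaches zero
    (apogee). Drag-free coast phase only."""
    h, v = 0.0, v0_mps
    while v > 0.0:
        h += v * dt
        v -= g * dt
    return h

# Compare with the closed form v0**2 / (2*g) to check plausibility:
est = apogee_euler(50.0)   # close to the analytic 127.42 m
```

Shrinking `dt` drives the Euler estimate toward the analytic value, which is exactly the comparison against "actual values" the project performs.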

  11. Path Through the Wheat

    Directory of Open Access Journals (Sweden)

    David Middleton

    2005-01-01

    Full Text Available The hillside’s tidal waves of yellow-green Break downward into full-grown stalks of wheat In which a peasant, shouldering his hoe Passes along a snaking narrow path -- A teeming place through which his hard thighs press And where his head just barely stays above The swaying grain, drunken in abundance, Farm buildings almost floating on the swells Beyond which sea gulls gliding white in air Fly down on out of sight to salty fields, Taking the channel fish off Normandy, A surfeit fit for Eden i...

  12. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera to an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; it is used to extract information about obstacles after calibration and ground testing, and is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to mark traversable areas and obstacles in a grid of cells of suitable size. The camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more optimal camera mounting angle is adopted by analysing the camera's performance discrepancies, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface; this mounting angle is recommended to be half the vertical field of view (FoV) of the PMD camera. A series of still and moving tests is conducted on the AGV to verify correct sensor operation, which shows that the postulated application of the ToF camera in the AGV is not straightforward. To stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are then implemented in a real-time experiment.
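The collision-free path over the occupancy grid can be sketched with a breadth-first search, which finds a shortest 4-connected sequence of cells. This is a generic stand-in for whichever graph search algorithm the authors actually used:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """grid: 2D list, 0 = traversable cell, 1 = obstacle. Returns a
    shortest 4-connected sequence of (row, col) cells from start to
    goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    if goal not in prev:
        return None
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]
```

In the paper's pipeline, cells would be marked as obstacles from the PMD depth data before the search runs.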

  13. On camera-based smoke and gas leakage detection

    Energy Technology Data Exchange (ETDEWEB)

    Nyboe, Hans Olav

    1999-07-01

Gas detectors are found in almost every part of industry and in many homes as well. An offshore oil or gas platform may host several hundred gas detectors. The ability of common point and open-path gas detectors to detect leakages depends on their location relative to the location of a gas cloud. This thesis describes the development of a passive volume gas detector, that is, one that will detect a leakage anywhere in the monitored area. After considering several detection techniques, it was decided to use an ordinary monochrome camera as the sensor. Because a gas leakage may perturb the index of refraction, parts of the background appear displaced from their true positions, and it is necessary to develop algorithms that can deal with small differences between images. The thesis develops two such algorithms. Many image regions can be defined, and several feature values can be computed for each region; the feature values depend on the pattern in the image regions. The classes studied in this work are: reference, gas, smoke and human activity. Tests show that observations belonging to these classes can be classified with fairly high accuracy. The features in the feature set were chosen and developed for this particular application; basically, they measure the magnitude of pixel differences, the size of detected phenomena, and image distortion. Interesting results from many experiments are presented. Most importantly, the experiments show that apparent motion caused by a gas leakage or heat convection can be detected by means of a monochrome camera. Small leakages of methane can be detected at a range of about four metres. Other gases, such as butane, whose densities differ more from that of air than methane's does, can be detected further from the camera. Gas leakages large enough to cause condensation have been detected at a camera distance of 20 metres. 59 refs., 42 figs., 13 tabs.
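The pixel-difference features described above can be sketched as a simple frame-differencing score. The threshold is an illustrative assumption, not a value from the thesis, and the real system computes several such features per image region rather than one global score:

```python
def changed_fraction(prev_frame, curr_frame, pixel_thresh=10):
    """Fraction of pixels whose grey-level change between two frames
    exceeds pixel_thresh. Apparent motion from a refracting gas plume
    raises this score above the background (reference) level."""
    n_pix = sum(len(row) for row in prev_frame)
    changed = sum(
        1
        for prow, crow in zip(prev_frame, curr_frame)
        for p, c in zip(prow, crow)
        if abs(c - p) > pixel_thresh
    )
    return changed / n_pix
```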

  14. Cryogenic mechanism for ISO camera

    Science.gov (United States)

    Luciano, G.

    1987-12-01

    The Infrared Space Observatory (ISO) camera configuration, architecture, materials, tribology, motorization, and development status are outlined. The operating temperature is 2 to 3 K, at 2.5 to 18 microns. Selected material is a titanium alloy, with MoS2/TiC lubrication. A stepping motor drives the ball-bearing mounted wheels to which the optical elements are fixed. Model test results are satisfactory, and also confirm the validity of the test facilities, particularly for vibration tests at 4K.

  15. The Flutter Shutter Camera Simulator

    Directory of Open Access Journals (Sweden)

    Yohann Tendero

    2012-10-01

    Full Text Available The proposed method simulates an embedded flutter shutter camera implemented either analogically or numerically, and computes its performance. The goal of the flutter shutter is to make motion blur invertible, by a "fluttering" shutter that opens and closes on a well chosen sequence of time intervals. In the simulations the motion is assumed uniform, and the user can choose its velocity. Several types of flutter shutter codes are tested and evaluated: the original ones considered by the inventors, the classic motion blur, and finally several analog or numerical optimal codes proposed recently. In all cases the exact SNR of the deconvolved result is also computed.

  16. Image-Based Navigation for the SnowEater Robot Using a Low-Resolution USB Camera

    Directory of Open Access Journals (Sweden)

    Ernesto Rivas

    2015-04-01

    Full Text Available This paper reports on a navigation method for the snow-removal robot called SnowEater. The robot is designed to work autonomously within small areas (around 30 m² or less) following line segment paths. The line segment paths are laid out so that as much snow as possible can be cleared from an area. Navigation is accomplished by using an onboard low-resolution USB camera and a small marker located in the area to be cleared. Low-resolution cameras allow only limited localization and introduce significant errors. However, these errors can be overcome by an efficient navigation algorithm that exploits the merits of these cameras. For stable, robust autonomous snow removal using this limited information, the most reliable data are selected and the travel paths are controlled. The navigation paths are a set of radially arranged line segments emanating from a marker placed in the area to be cleared, in a spot where it is not covered by snow. With this method, using a low-resolution camera (640 × 480 pixels) and a small marker (100 × 100 mm), the robot covered the testing area following line segments. For a reference angle of 4.5° between line paths, the average results are 4° for motion on a hard floor and 4.8° for motion on compacted snow. The main contribution of this study is the design of a path-following control algorithm capable of absorbing the errors generated by a low-cost camera.

  17. Design of a Compton camera for 3D prompt-{gamma} imaging during ion beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Roellinghoff, F., E-mail: roelling@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Richard, M.-H., E-mail: mrichard@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Chevallier, M.; Constanzo, J.; Dauvergne, D. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Freud, N. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Henriquet, P.; Le Foulher, F. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Letang, J.M. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Montarou, G. [LPC, CNRS/IN2P3, Clermont-F. University (France); Ray, C.; Testa, E.; Testa, M. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Walenta, A.H. [Uni-Siegen, FB Physik, Emmy-Noether Campus, D-57068 Siegen (Germany)

    2011-08-21

    We investigate, by means of Geant4 simulations, a real-time method to control the position of the Bragg peak during ion therapy, based on a Compton camera in combination with a beam tagging device (hodoscope) in order to detect the prompt gamma emitted during nuclear fragmentation. The proposed set-up consists of a stack of 2 mm thick silicon strip detectors and a LYSO absorber detector. The {gamma} emission points are reconstructed analytically by intersecting the ion trajectories given by the beam hodoscope and the Compton cones given by the camera. The camera response to a polychromatic point source in air is analyzed with regard to both spatial resolution and detection efficiency. Various geometrical configurations of the camera have been tested. In the proposed configuration, for a typical polychromatic photon point source, the spatial resolution of the camera is about 8.3 mm FWHM and the detection efficiency 2.5x10{sup -4} (reconstructable photons/emitted photons in 4{pi}). Finally, the clinical applicability of our system is considered and possible starting points for further developments of a prototype are discussed.
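
    The analytical reconstruction step, intersecting an ion trajectory from the hodoscope with a Compton cone from the camera, reduces to solving a quadratic along the beam line. A geometric sketch (not the authors' Geant4 code), assuming the cone apex, unit axis and half-angle are already known from the camera measurement:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cone_line_intersections(apex, axis, cos_theta, p0, d):
    """Parameters t at which the line p0 + t*d meets the (double) cone
    with the given apex, unit axis and half-angle cos(theta).

    A point x lies on the cone iff dot(x - apex, axis) = |x - apex|*cos(theta);
    substituting x = p0 + t*d and squaring gives a quadratic in t.
    Returns 0, 1 or 2 real solutions (both nappes are included; a real
    reconstruction keeps only the physical one)."""
    v0 = [p - a for p, a in zip(p0, apex)]
    c2 = cos_theta * cos_theta
    A = dot(d, axis) ** 2 - c2 * dot(d, d)
    B = 2 * (dot(v0, axis) * dot(d, axis) - c2 * dot(v0, d))
    C = dot(v0, axis) ** 2 - c2 * dot(v0, v0)
    if abs(A) < 1e-12:                   # line parallel to a cone generator
        return [] if abs(B) < 1e-12 else [-C / B]
    disc = B * B - 4 * A * C
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted([(-B - r) / (2 * A), (-B + r) / (2 * A)])
```

In the prompt-gamma setting the line is the ion trajectory given by the hodoscope and the cone comes from the Compton kinematics in the silicon stack and LYSO absorber; the intersection points are candidate emission points.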

  18. Integration of robust filters and phase unwrapping algorithms for image reconstruction of objects containing height discontinuities.

    Science.gov (United States)

    Weng, Jing-Feng; Lo, Yu-Lung

    2012-05-07

    For 3D objects with height discontinuities, the image reconstruction performance of interferometric systems is adversely affected by the presence of noise in the wrapped phase map. Various schemes have been proposed for detecting residual noise, speckle noise and noise at the lateral surfaces of the discontinuities. However, in most schemes, some noisy pixels are missed and noise detection errors occur. Accordingly, this paper proposes two robust filters (designated as Filters A and B, respectively) for improving the performance of the phase unwrapping process for objects with height discontinuities. Filter A comprises a noise and phase jump detection scheme and an adaptive median filter, while Filter B replaces the detected noise with the median phase value of an N × N mask centered on the noisy pixel. Filter A enables most of the noise and detection errors in the wrapped phase map to be removed. Filter B then detects and corrects any remaining noise or detection errors during the phase unwrapping process. Three reconstruction paths are proposed, Path I, Path II and Path III. Path I combines the path-dependent MACY algorithm with Filters A and B, while Paths II and III combine the path-independent cellular automata (CA) algorithm with Filters A and B. In Path II, the CA algorithm operates on the whole wrapped phase map, while in Path III, the CA algorithm operates on multiple sub-maps of the wrapped phase map. The simulation and experimental results confirm that the three reconstruction paths provide a robust and precise reconstruction performance given appropriate values of the parameters used in the detection scheme and filters, respectively. However, the CA algorithm used in Paths II and III is relatively inefficient in identifying the most suitable unwrapping paths. Thus, of the three paths, Path I yields the lowest runtime.
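
    Filter B's replacement rule, substituting each detected noisy pixel with the median phase of an N × N mask centred on it, can be sketched in a few lines. Excluding other flagged pixels from the window is an assumption of this sketch:

```python
from statistics import median

def replace_noisy_pixels(phase, noise_mask, n=3):
    """Replace each flagged pixel with the median phase of the n x n
    window centred on it, using only valid (unflagged, in-bounds)
    neighbours.  The input map is left untouched."""
    h, w = len(phase), len(phase[0])
    r = n // 2
    out = [row[:] for row in phase]
    for y in range(h):
        for x in range(w):
            if not noise_mask[y][x]:
                continue
            window = [phase[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))
                      if not noise_mask[j][i]]
            if window:
                out[y][x] = median(window)
    return out
```

The noise mask would come from a detection scheme such as the residual/phase-jump detector of Filter A; here it is simply taken as given.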

  19. Preliminary LSF and MTF determination for the stereo camera of the BepiColombo mission

    Science.gov (United States)

    Simioni, Emanuele; Da Deppo, Vania; Naletto, Giampiero; Borrelli, Donato; Dami, Michele; Ficai Veltroni, Iacopo; Tommasi, Leonardo; Cremonese, Gabriele

    2014-08-01

    In the context of a stereo camera, measuring the image quality makes it possible to define the accuracy of the 3D reconstruction. In fact, depending on the precision of the camera position data, on the kind of reconstruction algorithm, and on the adopted camera model, it determines the vertical accuracy of the reconstructed terrain model. The aim of this work is to describe the results and the method implemented to determine the Line Spread Function (LSF) of the Stereoscopic Channel (STC) of the SIMBIOSYS imaging system for the BepiColombo mission. BepiColombo is cornerstone mission no. 5 of the European Space Agency, dedicated to the exploration of the innermost planet of the Solar System, Mercury, and it is expected to be launched in 2016. STC is a double push-frame single-detector camera composed of two identical sub-channels looking at ±21° with respect to the nadir direction. STC has been designed so as to have many optical elements common to both sub-channels. The image focal plane is also common to the sub-channels, which permits the use of a single detector for the acquisition of the two images, i.e. one for each viewing direction. Considering the novelty of the design, conceived to sustain a harsh environment and to be as compact as possible, the STC unit is very complex. To obtain the most accurate 3D reconstruction of the Mercury surface, a camera model as precise as possible is needed, and an ad hoc calibration set-up has been designed to calibrate the instrument both from the usual geometrical and radiometrical points of view and, more specifically, for the instrument's stereo capability. In this context, the LSF estimation was performed with a new method applying a particular oversampling approach for the curve fitting, to determine first the transfer function of the entire calibration system and then the optical properties of the single instrument.
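
    Once an LSF has been sampled (possibly on an oversampled grid, as in the method above), the corresponding MTF is the normalised magnitude of its Fourier transform. A minimal sketch, with a synthetic Gaussian LSF standing in for measured data:

```python
import cmath
import math

def mtf_from_lsf(lsf):
    """MTF as the DC-normalised magnitude of the discrete Fourier
    transform of a sampled line spread function (non-negative freqs)."""
    n = len(lsf)
    mtf = []
    for k in range(n // 2 + 1):
        s = sum(v * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, v in enumerate(lsf))
        mtf.append(abs(s))
    dc = mtf[0]
    return [m / dc for m in mtf]

# Synthetic Gaussian LSF (illustrative only; a real measurement would
# come from the oversampled curve fitting described above).
sigma, n = 2.0, 32
lsf = [math.exp(-((t - n / 2) ** 2) / (2 * sigma * sigma)) for t in range(n)]
```

For a Gaussian LSF the MTF is itself Gaussian, so the computed curve should start at 1 at zero frequency and fall off monotonically.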

  20. Computational cameras: convergence of optics and processing.

    Science.gov (United States)

    Zhou, Changyin; Nayar, Shree K

    2011-12-01

    A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.
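
    The light field view of a computational camera, in which each design is a projection of a high-dimensional light field onto a 2D sensor, can be illustrated with the simplest such projection: integrating out the angular dimensions, as a conventional lens does. A toy sketch with a discretised 4D light field (the indexing convention L[u][v][y][x] is an assumption of this sketch):

```python
def render_image(light_field):
    """Project a 4D light field L[u][v][y][x] to a 2D image by averaging
    over the angular (aperture) dimensions u, v -- the mapping performed
    by a conventional lens.  Other computational designs correspond to
    different (coded) projections of the same light field."""
    nu, nv = len(light_field), len(light_field[0])
    ny, nx = len(light_field[0][0]), len(light_field[0][0][0])
    img = [[0.0] * nx for _ in range(ny)]
    for u in range(nu):
        for v in range(nv):
            for y in range(ny):
                for x in range(nx):
                    img[y][x] += light_field[u][v][y][x]
    scale = 1.0 / (nu * nv)
    return [[p * scale for p in row] for row in img]
```

Pupil-plane or sensor-side coding would replace the uniform average with a weighted or shuffled projection, which is exactly what the survey's light field formulation makes explicit.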

  1. Light field panorama by a plenoptic camera

    Science.gov (United States)

    Xue, Zhou; Baboulaz, Loic; Prandoni, Paolo; Vetterli, Martin

    2013-03-01

    The consumer-grade plenoptic camera Lytro draws a lot of interest from both the academic and industrial worlds. However, its low resolution in both the spatial and angular domains prevents it from being used for fine and detailed light field acquisition. This paper proposes to use a plenoptic camera as an image scanner and to perform light field stitching to increase the size of the acquired light field data. We consider a simplified plenoptic camera model comprising a pinhole camera moving behind a thin lens. Based on this model, we describe how to perform light field acquisition and stitching under two different scenarios: by camera translation, or by camera translation and rotation. In both cases, we assume the camera motion to be known. In the case of camera translation, we show how the acquired light fields should be resampled to increase the spatial range and ultimately obtain a wider field of view. In the case of camera translation and rotation, the camera motion is calculated such that the light fields can be directly stitched and extended in the angular domain. Simulation results verify our approach and demonstrate the potential of the motion model for further light field applications such as registration and super-resolution.

  2. A Unifying Theory for Camera Calibration.

    Science.gov (United States)

    Ramalingam, SriKumar; Sturm, Peter

    2016-07-19

    This paper proposes a unified theory for calibrating a wide variety of camera models such as pinhole, fisheye, catadioptric, and multi-camera networks. We model any camera as a set of image pixels and their associated camera rays in space. Every pixel measures the light traveling along a (half-) ray in 3-space associated with that pixel. By this definition, calibration simply refers to the computation of the mapping between pixels and the associated 3D rays. Such a mapping can be computed using images of calibration grids, which are objects with known 3D geometry, taken from unknown positions. This general camera model makes it possible to represent non-central cameras; we also consider two special subclasses, namely central and axial cameras. In a central camera, all rays intersect in a single point, whereas the rays are completely arbitrary in a non-central one. Axial cameras are an intermediate case: the camera rays intersect a single line. In this work, we show the theory for calibrating central, axial and non-central models using calibration grids, which can be either three-dimensional or planar.

  3. Calibration for 3D imaging with a single-pixel camera

    Science.gov (United States)

    Gribben, Jeremy; Boate, Alan R.; Boukerche, Azzedine

    2017-02-01

    Traditional methods for calibrating structured light 3D imaging systems often suffer from various sources of error. By enabling our projector to both project images as well as capture them using the same optical path, we turn our DMD based projector into a dual-purpose projector and single-pixel camera (SPC). A coarse-to-fine SPC scanning technique based on coded apertures was developed to detect calibration target points with sub-pixel accuracy. Our new calibration approach shows improved depth measurement accuracy when used in structured light 3D imaging by reducing cumulative errors caused by multiple imaging paths.

  4. The Zwicky Transient Facility Camera

    Science.gov (United States)

    Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.

    2016-08-01

    The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with a large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.

  5. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  6. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  7. Photogrammetric Reconstruction with Bayesian Information

    Science.gov (United States)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Pirotti, F.; Vettore, A.

    2016-06-01

    Nowadays photogrammetry and laser scanning methods are the most widespread surveying techniques. Laser scanning methods usually allow more accurate results to be obtained than photogrammetry, but their use has some issues, e.g. related to the high cost of the instrumentation and the typical need for highly qualified personnel to acquire experimental data in the field. In contrast, photogrammetric reconstruction can be achieved by means of low-cost devices and by persons without specific training. Furthermore, the recent diffusion of smart devices (e.g. smartphones) embedded with imaging and positioning sensors (i.e. a standard camera, GNSS receiver, and inertial measurement unit) is opening the possibility of integrating more information into the photogrammetric reconstruction procedure, in order to increase its computational efficiency, robustness and accuracy. In accordance with the above observations, this paper examines and validates new possibilities for the integration of information provided by the inertial measurement unit (IMU) into the photogrammetric reconstruction procedure and, more specifically, into the procedure for solving the feature matching and bundle adjustment problems.

  8. Kinect Fusion improvement using depth camera calibration

    Science.gov (United States)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    3D scene modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner and producing meshed polygonal models of a static scene just by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation in order to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera's interior and exterior orientation parameters, the Fusion libraries are corrected and a new reconstruction software is created to produce more accurate models.
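
    The depth-correction idea, calibrating the sensor's systematic depth error against reference measurements and then correcting new data, can be illustrated with an ordinary least-squares linear model. The linear form and the sample values are assumptions of this sketch, not the authors' calibration model:

```python
def fit_depth_correction(measured, reference):
    """Least-squares fit of reference ≈ a * measured + b over paired
    depth samples (e.g. sensor readings vs. a surveyed target distance).
    Returns the correction coefficients (a, b)."""
    n = len(measured)
    sx = sum(measured)
    sy = sum(reference)
    sxx = sum(m * m for m in measured)
    sxy = sum(m * r for m, r in zip(measured, reference))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def correct_depth(d, a, b):
    """Apply the fitted correction to a raw depth reading."""
    return a * d + b
```

A per-pixel or higher-order model would follow the same pattern; the point is simply that corrected depths feed into the reconstruction instead of raw sensor output.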

  9. Volumetric Diffuse Optical Tomography for Small Animals Using a CCD-Camera-Based Imaging System

    Directory of Open Access Journals (Sweden)

    Zi-Jing Lin

    2012-01-01

    Full Text Available We report the feasibility of three-dimensional (3D) volumetric diffuse optical tomography for small animal imaging by using a CCD-camera-based imaging system with a newly developed depth compensation algorithm (DCA). Our computer simulations and laboratory phantom studies have demonstrated that the combination of a CCD camera and the DCA can significantly improve the accuracy of depth localization and lead to the reconstruction of 3D volumetric images. This approach may be of great interest for noninvasive 3D localization of an anomaly hidden in tissue, such as a tumor or a stroke lesion, in preclinical small animal models.

  10. Breast Reconstruction with Implants

    Science.gov (United States)

    Breast reconstruction is a surgical procedure that restores shape to ... treat or prevent breast cancer. One type of breast reconstruction uses breast implants — silicone devices filled with silicone ...

  11. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on base poses trained a priori. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.

  12. Simple method of modelling of digital holograms registering and their optical reconstruction

    Science.gov (United States)

    Evtikhiev, N. N.; Cheremkhin, P. A.; Krasnov, V. V.; Kurbatova, E. A.; Molodtsov, D. Yu; Porshneva, L. A.; Rodin, V. G.

    2016-08-01

    A technique for modeling the recording of digital holograms and the optical reconstruction of images from these holograms is described. The method takes into account the characteristics of the object, of the digital camera's photosensor and of the spatial light modulator used for displaying the digital holograms. Using the technique, equipment can be chosen for experiments so as to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted.

  13. Path Integrals in Quantum Physics

    CERN Document Server

    Rosenfelder, R

    2012-01-01

    These lectures aim at giving graduate students an introduction to and a working knowledge of path integral methods in a wide variety of fields in physics. Consequently, the lecture notes are organized in three main parts dealing with non-relativistic quantum mechanics, many-body physics and field theory. In the first part the basic concepts of path integrals are developed in the usual heuristic, non-mathematical way followed by standard examples and special applications including numerical evaluation of (euclidean) path integrals by Monte-Carlo methods with a program for the anharmonic oscillator. The second part deals with the application of path integrals in statistical mechanics and many-body problems treating the polaron problem, dissipative quantum systems, path integrals over ordinary and Grassmannian coherent states and perturbation theory for both bosons and fermions. Again a simple Fortran program is included for illustrating the use of strong-coupling methods. Finally, in the third part path integra...
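
    The numerical evaluation of euclidean path integrals by Monte-Carlo methods mentioned in the first part can be illustrated with a Metropolis sampler; this sketch uses a harmonic rather than anharmonic oscillator (unit mass and frequency) so the result can be checked against the known continuum value ⟨x²⟩ = 1/(2ω) = 0.5. Lattice size, spacing and step width are illustrative choices:

```python
import math
import random

def harmonic_pimc(n_slices=16, a=0.5, n_sweeps=2000, step=1.0, seed=42):
    """Metropolis sampling of the euclidean path integral for a unit-mass,
    unit-frequency harmonic oscillator on a periodic time lattice with
    spacing a.  Returns the Monte-Carlo estimate of <x^2>."""
    rng = random.Random(seed)
    x = [0.0] * n_slices

    def local_action(i, xi):
        # Kinetic terms coupling slice i to its neighbours + potential.
        xp = x[(i + 1) % n_slices]
        xm = x[(i - 1) % n_slices]
        return ((xi - xp) ** 2 / (2 * a)
                + (xi - xm) ** 2 / (2 * a)
                + a * xi * xi / 2)

    x2_sum, samples = 0.0, 0
    for sweep in range(n_sweeps):
        for i in range(n_slices):
            xi_new = x[i] + rng.uniform(-step, step)
            dS = local_action(i, xi_new) - local_action(i, x[i])
            if dS < 0 or rng.random() < math.exp(-dS):
                x[i] = xi_new
        if sweep > n_sweeps // 2:        # crude thermalisation cut
            x2_sum += sum(v * v for v in x) / n_slices
            samples += 1
    return x2_sum / samples
```

Adding a quartic term to `local_action` turns this into the anharmonic oscillator treated in the lectures; the Metropolis machinery is unchanged.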

  14. Shortest Paths and Vehicle Routing

    DEFF Research Database (Denmark)

    Petersen, Bjørn

    This thesis presents how to parallelize a shortest path labeling algorithm. It is shown how to handle Chvátal-Gomory rank-1 cuts in a column generation context. A Branch-and-Cut algorithm is given for the Elementary Shortest Paths Problem with Capacity Constraint. A reformulation of the Vehicle Routing Problem based on partial paths is presented. Finally, a practical application of finding shortest paths in the telecommunication industry is shown.
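
    The labeling idea behind algorithms for shortest paths with a capacity constraint can be sketched with (cost, load) labels and dominance pruning. This is a generic illustration, not the thesis's parallel algorithm; in particular, elementarity is not enforced here:

```python
def constrained_shortest_path(graph, src, dst, capacity):
    """Labeling algorithm for a cheapest path whose accumulated load
    stays within `capacity`.  graph[u] = [(v, cost, load), ...].
    Labels (cost, load) are extended along arcs; a label is kept only
    if no stored label at the same node dominates it (<= in both
    cost and load).  Returns the minimum feasible cost, or None."""
    labels = {src: [(0, 0)]}
    frontier = [(src, 0, 0)]
    while frontier:
        u, cost, load = frontier.pop()
        for v, c, l in graph.get(u, []):
            nc, nl = cost + c, load + l
            if nl > capacity:
                continue                              # resource infeasible
            if any(oc <= nc and ol <= nl for oc, ol in labels.get(v, [])):
                continue                              # dominated
            kept = [(oc, ol) for oc, ol in labels.get(v, [])
                    if not (nc <= oc and nl <= ol)]   # drop labels we dominate
            kept.append((nc, nl))
            labels[v] = kept
            frontier.append((v, nc, nl))
    if dst in labels:
        return min(c for c, _ in labels[dst])
    return None
```

In a column-generation setting this pricing subproblem is solved repeatedly, which is why both parallelization and strong dominance rules matter.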

  15. Path integrals for awkward actions

    CERN Document Server

    Amdahl, David

    2016-01-01

    Time derivatives of scalar fields occur quadratically in textbook actions. A simple Legendre transformation turns the lagrangian into a hamiltonian that is quadratic in the momenta. The path integral over the momenta is gaussian. Mean values of operators are euclidean path integrals of their classical counterparts with positive weight functions. Monte Carlo simulations can estimate such mean values. This familiar framework falls apart when the time derivatives do not occur quadratically. The Legendre transformation becomes difficult or so intractable that one can't find the hamiltonian. Even if one finds the hamiltonian, it usually is so complicated that one can't path-integrate over the momenta and get a euclidean path integral with a positive weight function. Monte Carlo simulations don't work when the weight function assumes negative or complex values. This paper solves both problems. It shows how to make path integrals without knowing the hamiltonian. It also shows how to estimate complex path integrals b...

  16. MAGIC-II Camera Slow Control Software

    CERN Document Server

    Steinke, B; Tridon, D Borla

    2009-01-01

    The Imaging Atmospheric Cherenkov Telescope MAGIC I has recently been extended to a stereoscopic system by adding a second 17 m telescope, MAGIC-II. One of the major improvements of the second telescope is its camera. The Camera Control Program is embedded in the telescope control software as an independent subsystem. It is effective software for monitoring and controlling the camera values and their settings, and is written in the visual programming language LabVIEW. The two main parts, the Central Variables File, which stores all information on the pixel and other camera parameters, and the Comm Control Routine, which controls changes in possible settings, provide reliable operation. A safety routine protects the camera from misuse by accidental commands, from bad weather conditions and from hardware errors by automatic reactions.

  17. Entanglement by Path Identity

    CERN Document Server

    Krenn, Mario; Lahiri, Mayukh; Zeilinger, Anton

    2016-01-01

    Quantum entanglement is one of the most prominent features of quantum mechanics and forms the basis of quantum information technologies. Here we present a novel method for the creation of quantum entanglement in multipartite and high-dimensional photonic systems, exploiting an idea introduced by the group of Leonard Mandel 25 years ago. The two ingredients are 1) superposition of photon pairs with different origins and 2) aligning photon paths such that they emerge from the same output mode. We explain examples for the creation of various classes of multiphoton entanglement encoded in polarization as well as in high-dimensional Hilbert spaces -- starting only from separable (non-entangled) photon pairs. For two photons, we show how arbitrary high-dimensional entanglement can be created. Interestingly, a common source for two-photon polarization entanglement is found as a special case. We discovered the technique by analyzing the output of a computer algorithm designing quantum experiments, and generalized it ...

  18. innovation path exploration

    Directory of Open Access Journals (Sweden)

    Li Jian

    2016-01-01

    Full Text Available The world has entered the information age; all kinds of information technologies, such as cloud technology and big data technology, are developing rapidly, and the "Internet plus" model has appeared. The main purpose of "Internet plus" is to provide an opportunity for the further development of the enterprise by combining its technology, business and other factors. For enterprises, grasping the impact of "Internet plus" on the market economy will undoubtedly pave the way for their future development. This paper studies the innovation path of enterprise management in the "Internet plus" era and puts forward some opinions and suggestions.

  19. Propagators and path integrals

    Energy Technology Data Exchange (ETDEWEB)

    Holten, J.W. van

    1995-08-22

    Path-integral expressions for one-particle propagators in scalar and fermionic field theories are derived, for arbitrary mass. This establishes a direct connection between field theory and specific classical point-particle models. The role of world-line reparametrization invariance of the classical action and the implementation of the corresponding BRST-symmetry in the quantum theory are discussed. The presence of classical world-line supersymmetry is shown to lead to an unwanted doubling of states for massive spin-1/2 particles. The origin of this phenomenon is traced to a 'hidden' topological fermionic excitation. A different formulation of the pseudo-classical mechanics using a bosonic representation of {gamma}{sub 5} is shown to remove these extra states at the expense of losing manifest supersymmetry. (orig.).

  20. Reconstructive Urology

    Directory of Open Access Journals (Sweden)

    Fikret Fatih Önol

    2014-11-01

    Full Text Available In the treatment of urethral stricture, buccal mucosa graft (BMG) reconstruction is applied with different patch techniques. In this recently preferred approach, the BMG is anastomosed as a “ventral onlay” in bulbar urethral strictures, where the spongiosum provides support and nourishment for the graft, and as a “dorsal inlay” in the pendulous urethra, where the spongiosum is thinner. Cordon et al. compared conventional BMG “onlay” urethroplasty with “pseudo-spongioplasty”, which relies on periurethral vascular tissues closed over the graft for its nourishment. In the repair of anterior urethral segments where spongiosal supportive tissue is insufficient, this method is defined as mobilizing the surrounding dartos and Buck’s fascia and joining them over the BMG patch. Between 2007 and 2012, 56 patients treated with conventional “ventral onlay” BMG urethroplasty and 46 patients treated with “pseudo-spongioplasty” were reported to have similar success rates (80% vs. 84% at an average 3.5-year follow-up). While 74% of the pseudo-spongioplasty patients had disease of the distal urethra (pendulous, bulbopendulous), 82% of the conventional onlay urethroplasty patients had proximal (bulbar) strictures. Stricture length in the pseudo-spongioplasty group was also significantly longer (mean 5.8 cm vs. 4.7 cm, p=0.028). This study by Cordon et al. shows that, in conditions where conventional spongioplasty is not possible, periurethral vascular tissues are adequate to nourish the BMG. Although it is an important technique that brings a new point of view to today’s practice, data on complications that may appear after pseudo-spongioplasty for long distal strictures (e.g. appearance of urethral diverticulum) were not reported. Along with this, we think that providing an opportunity to patch directly

  1. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    Science.gov (United States)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    The indoor Gothic apse provides a complex environment for virtualization using imaging techniques, due to its lighting conditions and architecture. Light entering through large windows, in combination with the shape of the apse, makes it difficult to find proper conditions for photographic capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens, commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs under conditions good enough for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate further deliverables without extra time spent in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, since this is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  2. ONLY IMAGE BASED FOR THE 3D METRIC SURVEY OF GOTHIC STRUCTURES BY USING FRAME CAMERAS AND PANORAMIC CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Pérez Ramos

    2016-06-01

    Full Text Available The indoor Gothic apse provides a complex environment for virtualization using imaging techniques, due to its lighting conditions and architecture. Light entering through large windows, in combination with the shape of the apse, makes it difficult to find proper conditions for photographic capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens, commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs under conditions good enough for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate further deliverables without extra time spent in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, since this is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  3. MapMaker and PathTracer for tracking carbon in genome-scale metabolic models.

    Science.gov (United States)

    Tervo, Christopher J; Reed, Jennifer L

    2016-05-01

    Constraint-based reconstruction and analysis (COBRA) modeling results can be difficult to interpret given the large numbers of reactions in genome-scale models. While paths in metabolic networks can be found, existing methods are not easily combined with constraint-based approaches. To address this limitation, two tools (MapMaker and PathTracer) were developed to find paths (including cycles) between metabolites, where each step transfers carbon from reactant to product. MapMaker predicts carbon transfer maps (CTMs) between metabolites using only information on molecular formulae and reaction stoichiometry, effectively determining which reactants and products share carbon atoms. MapMaker correctly assigned CTMs for over 97% of the 2,251 reactions in an Escherichia coli metabolic model (iJO1366). Using CTMs as inputs, PathTracer finds paths between two metabolites. PathTracer was applied to iJO1366 to investigate the importance of using CTMs and COBRA constraints when enumerating paths, to find active and high flux paths in flux balance analysis (FBA) solutions, to identify paths for putrescine utilization, and to elucidate a potential CO2 fixation pathway in E. coli. These results illustrate how MapMaker and PathTracer can be used in combination with constraint-based models to identify feasible, active, and high flux paths between metabolites.
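As a hedged illustration of the kind of search PathTracer performs (not the published implementation), a breadth-first enumeration of paths over a carbon-transfer map might look like the sketch below; the `carbon_edges` structure and the toy metabolite names are assumptions for illustration.

```python
from collections import deque

def find_paths(carbon_edges, source, target, max_len=6):
    """Enumerate simple metabolite paths with BFS, following only
    edges where a reaction passes carbon from reactant to product.
    carbon_edges: dict mapping a metabolite to the set of metabolites
    reachable in one carbon-transferring reaction step."""
    paths = []
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        last = path[-1]
        if last == target:
            paths.append(path)
            continue
        if len(path) >= max_len:
            continue  # bound the search depth
        for nxt in carbon_edges.get(last, ()):
            if nxt not in path:  # keep paths simple (no revisits)
                queue.append(path + [nxt])
    return paths

# Toy carbon-transfer map (hypothetical, not from iJO1366)
edges = {
    "glucose": {"g6p"},
    "g6p": {"f6p", "6pg"},
    "f6p": {"fbp"},
    "6pg": {"ru5p"},
    "fbp": {"pyruvate"},
    "ru5p": {"f6p"},
}
print(find_paths(edges, "glucose", "pyruvate"))
# -> [['glucose', 'g6p', 'f6p', 'fbp', 'pyruvate']]
```

In the real tool the carbon-transfer maps come from MapMaker and the paths are further filtered by COBRA flux constraints; this sketch covers only the graph-search step.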

  4. Path integral in Snyder space

    Energy Technology Data Exchange (ETDEWEB)

    Mignemi, S., E-mail: smignemi@unica.it [Dipartimento di Matematica e Informatica, Università di Cagliari, Viale Merello 92, 09123 Cagliari (Italy); INFN, Sezione di Cagliari, Cittadella Universitaria, 09042 Monserrato (Italy); Štrajn, R. [Dipartimento di Matematica e Informatica, Università di Cagliari, Viale Merello 92, 09123 Cagliari (Italy); INFN, Sezione di Cagliari, Cittadella Universitaria, 09042 Monserrato (Italy)

    2016-04-29

    The definition of path integrals in one- and two-dimensional Snyder space is discussed in detail both in the traditional setting and in the first-order formalism of Faddeev and Jackiw. - Highlights: • The definition of the path integral in Snyder space is discussed using phase space methods. • The same result is obtained in the first-order formalism of Faddeev and Jackiw. • The path integral formulation of the two-dimensional Snyder harmonic oscillator is outlined.

  5. Path indexing for term retrieval

    OpenAIRE

    1992-01-01

    Different methods for term retrieval in deduction systems have been introduced in the literature. This report reviews the three indexing techniques discrimination indexing, path indexing, and abstraction tree indexing. A formal approach to path indexing is presented, and algorithms as well as data structures of an existing implementation are discussed. Finally, experiments show that our implementation outperforms the implementation of path indexing in the OTTER theorem prover.
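The core idea of path indexing can be sketched in a few lines: every term is decomposed into (path, symbol) pairs, and retrieval intersects the candidate sets for each non-variable position of the query. The term encoding below is an illustrative assumption, and, as in real path indexing, the result is a superset of the truly unifiable terms.

```python
def paths(term, prefix=()):
    """Yield (path, symbol) pairs for a term given as nested tuples,
    e.g. ("f", ("g", ("a",)), ("x",)). A symbol starting with '?'
    denotes a variable."""
    head = term[0]
    yield (prefix, head)
    for i, arg in enumerate(term[1:], start=1):
        yield from paths(arg, prefix + ((head, i),))

def build_index(terms):
    """Map each (path, symbol) pair to the set of term ids having it."""
    index = {}
    for tid, term in enumerate(terms):
        for p, sym in paths(term):
            index.setdefault((p, sym), set()).add(tid)
    return index

def retrieve(index, query, universe):
    """Candidate terms for unification with the query: intersect the
    indexed sets for every non-variable position of the query."""
    cands = set(universe)
    for p, sym in paths(query):
        if sym.startswith("?"):
            continue  # variables match anything
        cands &= index.get((p, sym), set())
    return cands

terms = [("f", ("a",), ("b",)), ("f", ("g", ("a",)), ("b",)), ("h", ("a",))]
idx = build_index(terms)
# Query f(?x, b): both f-terms survive, h(a) is filtered out
print(sorted(retrieve(idx, ("f", ("?x",), ("b",)), range(len(terms)))))
# -> [0, 1]
```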

  6. Development of biostereometric experiments. [stereometric camera system

    Science.gov (United States)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  7. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects, we present four selected applications and use them to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  8. Measuring SO2 ship emissions with an ultra-violet imaging camera

    Science.gov (United States)

    Prata, A. J.

    2013-11-01

    Over the last few years fast-sampling ultra-violet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical fluxes ~1-10 kg s-1) and natural sources (e.g. volcanoes; typical fluxes ~10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and fluxes. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and fluxes of SO2 (typical fluxes ~0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the fluxes and path concentrations can be retrieved in real-time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, measuring emissions from more than 10 different container and cargo ships. In all cases SO2 path concentrations could be estimated and fluxes determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast sampling (>10 Hz) from a single camera. Typical accuracies ranged from 10-30% in path concentration and 10-40% in flux estimation. Despite the ease of use and ability to determine SO2 fluxes from the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that currently the technology needs further development to serve as a method to monitor ship emissions for regulatory purposes.
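The flux retrieval described above reduces to integrating SO2 column densities along a transect perpendicular to the plume and multiplying by the plume transport speed. The sketch below assumes idealized units and a hypothetical transect; it is not the authors' processing chain.

```python
def so2_flux(path_concentrations, pixel_width_m, plume_speed_ms):
    """Estimate an SO2 mass flux (kg/s) from a UV-camera transect.
    path_concentrations: SO2 column densities (kg/m^2) sampled along
    a line perpendicular to the plume axis, one value per pixel.
    pixel_width_m: ground-projected size of one pixel (m).
    plume_speed_ms: plume transport speed (m/s), from camera-derived
    plume tracking or independent wind data."""
    integrated = sum(path_concentrations) * pixel_width_m  # kg/m
    return integrated * plume_speed_ms                     # kg/s

# Hypothetical transect across a small ship plume
cols = [0.0, 0.002, 0.005, 0.008, 0.005, 0.002, 0.0]  # kg/m^2
print(round(so2_flux(cols, 0.5, 5.0), 3))  # -> 0.055 (kg/s)
```

The result falls in the ~0.01-0.1 kg s-1 range the abstract quotes for ship plumes; in practice the column densities themselves come from calibrated UV absorption imagery.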

  9. An introduction to critical paths.

    Science.gov (United States)

    Coffey, Richard J; Richards, Janet S; Remmert, Carl S; LeRoy, Sarah S; Schoville, Rhonda R; Baldwin, Phyllis J

    2005-01-01

    A critical path defines the optimal sequencing and timing of interventions by physicians, nurses, and other staff for a particular diagnosis or procedure. Critical paths are developed through collaborative efforts of physicians, nurses, pharmacists, and others to improve the quality and value of patient care. They are designed to minimize delays and resource utilization and to maximize quality of care. Critical paths have been shown to reduce variation in the care provided, facilitate expected outcomes, reduce delays, reduce length of stay, and improve cost-effectiveness. The approach and goals of critical paths are consistent with those of total quality management (TQM) and can be an important part of an organization's TQM process.

  10. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer-grade digital cameras, and have concluded that consumer-grade digital cameras are expected to become useful photogrammetric devices for various close-range application fields. On the other hand, mobile phone cameras with 10-megapixel sensors have appeared on the market in Japan. In these circumstances, we face the epoch-making question of whether mobile phone cameras can take the place of consumer-grade digital cameras in close-range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close-range photogrammetry, this paper presents a comparative evaluation between mobile phone cameras and consumer-grade digital cameras with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer-grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close-range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer-grade digital cameras and to develop the market in digital photogrammetric fields.

  11. Omnidirectional Underwater Camera Design and Calibration

    Directory of Open Access Journals (Sweden)

    Josep Bosch

    2015-03-01

    Full Text Available This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.

  12. Intelligent thermal imaging camera with network interface

    Science.gov (United States)

    Sielewicz, Krzysztof M.; Kasprowicz, Grzegorz; Poźniak, Krzysztof T.; Romaniuk, R. S.

    2011-10-01

    In recent years, a significant increase in the usage of thermal imaging cameras can be observed in both the public and commercial sectors, due to the lower cost and expanding availability of uncooled microbolometer infrared radiation detectors. Devices present on the market vary in their parameters and output interfaces. However, all these thermographic cameras are only a source of an image, which is then analyzed in an external image processing unit. There is no possibility to run the user's dedicated image processing algorithms on the thermal imaging camera itself. This paper presents a concept of realization, architecture and hardware implementation of an "Intelligent thermal imaging camera with network interface" utilizing modern technologies, standards and approaches in one single device.

  13. Omnidirectional underwater camera design and calibration.

    Science.gov (United States)

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-03-12

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.
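To illustrate the ray-tracing idea behind the FOV simulator, here is a minimal sketch of Snell's-law refraction at a flat housing port; the vector form is standard, but the flat-port geometry and index values are illustrative assumptions, not the authors' implementation.

```python
import math

def refract(d, normal, n1, n2):
    """Refract a unit direction d at an interface with unit normal
    (pointing back toward the incoming ray), via the vector form of
    Snell's law. n1, n2 are refractive indices. Returns the refracted
    unit direction, or None on total internal reflection."""
    cos_i = -sum(a * b for a, b in zip(d, normal))
    ratio = n1 / n2
    sin_t2 = ratio * ratio * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    return tuple(ratio * di + (ratio * cos_i - cos_t) * ni
                 for di, ni in zip(d, normal))

# Air-to-water ray hitting a flat port at 30 degrees from the normal
d = (math.sin(math.radians(30)), 0.0, -math.cos(math.radians(30)))
out = refract(d, (0.0, 0.0, 1.0), 1.0, 1.33)
print(round(math.degrees(math.asin(out[0])), 2))  # -> 22.08 (degrees)
```

Tracing many such rays through the port for each pixel is what lets a simulator predict the effective underwater field of view of a candidate housing geometry.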

  14. Framework for Evaluating Camera Opinions

    Directory of Open Access Journals (Sweden)

    K.M. Subramanian

    2015-03-01

    Full Text Available Opinion mining plays a most important role in text mining applications such as brand and product positioning, customer relationship management, consumer attitude detection and market research. These applications lead to a new generation of companies and products meant for online market perception, online content monitoring and reputation management. Expansion of the web inspires users to contribute and express opinions via blogs, videos and social networking sites. Such platforms provide valuable information for the analysis of sentiment pertaining to a product or service. This study investigates the performance of various feature extraction methods and classification algorithms for opinion mining. Opinions expressed on the Amazon website for cameras were collected and used for evaluation. Features are extracted from the opinions using Term Frequency-Inverse Document Frequency (TF-IDF). Feature transformation is achieved through Principal Component Analysis (PCA) and kernel PCA. Naïve Bayes, K-Nearest Neighbor, and Classification and Regression Trees (CART) algorithms then classify the extracted features.
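The TF-IDF feature extraction step can be sketched in plain Python; the toy review corpus and the particular log-idf weighting variant are illustrative assumptions, not the study's exact setup.

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency-inverse document frequency for a small corpus.
    docs: list of token lists. Returns one {term: weight} dict per
    document, using tf = count/len(doc) and idf = log(N/df)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency counts each doc once
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out

# Hypothetical camera-review snippets
reviews = [
    "great camera sharp lens".split(),
    "battery poor camera heavy".split(),
    "sharp photos great value".split(),
]
w = tfidf(reviews)
print(round(w[0]["sharp"], 3))  # -> 0.101 ("sharp" appears in 2 of 3 docs)
```

The resulting weight vectors are what would then be fed to PCA/kernel PCA and the downstream classifiers.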

  15. Gesture recognition on smart cameras

    Science.gov (United States)

    Dziri, Aziz; Chevobbe, Stephane; Darouich, Mehdi

    2013-02-01

    Gesture recognition is a feature of human-machine interaction that allows more natural interaction without the use of complex devices. For this reason, several methods of gesture recognition have been developed in recent years. However, most real-time methods are designed to operate on a personal computer with large computing resources and memory. In this paper, we analyze relevant methods found in the literature in order to investigate the ability of a smart camera to execute gesture recognition algorithms. We elaborate two hand gesture recognition pipelines. The first method is based on invariant moments extraction and the second on fingertip detection. The hand detection method used for both pipelines is based on skin color segmentation. The results obtained show that the un-optimized versions of the invariant moments method and the fingertip detection method can reach 10 fps on an embedded processor and use about 200 kB of memory.
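The skin-colour segmentation step shared by both pipelines can be sketched with a classic rule-based RGB test (in the style of Peer et al.); the thresholds and the toy frame below are illustrative assumptions, not the paper's tuned segmenter.

```python
def is_skin(r, g, b):
    """Classic rule-based RGB skin test, a common baseline for the
    skin-colour segmentation step in hand detection."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def segment(image):
    """Binary skin mask for an image given as rows of (r, g, b)."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

# Toy 2x2 frame: two skin-like pixels, one blue, one dark
frame = [[(220, 170, 130), (40, 60, 200)],
         [(200, 140, 110), (10, 10, 10)]]
print(segment(frame))  # -> [[1, 0], [1, 0]]
```

On a smart camera the resulting mask would feed either the invariant-moments descriptor or the fingertip detector.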

  16. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.

  17. Illumination box and camera system

    Science.gov (United States)

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  18. LROC - Lunar Reconnaissance Orbiter Camera

    Science.gov (United States)

    Robinson, M. S.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A.; Malin, M. C.; Ravine, M. A.; Thomas, P. C.; Turtle, E. P.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera (WAC) to provide images at a scale of 100 m per pixel in five visible wavelength bands (415, 566, 604, 643, and 689 nm) and 400 m per pixel in two ultraviolet bands (321 nm and 360 nm) from the nominal 50 km orbit. Early operations were designed to test the performance of the cameras under all nominal operating conditions and provided a baseline for future calibrations. Test sequences included off-nadir slews to image stars and the Earth, 90° yaw sequences to collect flat field calibration data, night imaging for background characterization, and systematic mapping to test performance. LRO initially was placed into a terminator orbit resulting in images acquired under low signal conditions. Over the next three months the incidence angle at the spacecraft’s equator crossing gradually decreased towards high noon, providing a range of illumination conditions. Several hundred south polar images were collected in support of impact site selection for the LCROSS mission; details can be seen in many of the shadows. Commissioning phase images not only proved the instruments’ overall performance was nominal, but also that many geologic features of the lunar surface are well preserved at the meter-scale. Of particular note is the variety of impact-induced morphologies preserved in a near pristine state in and around kilometer-scale and larger young Copernican age impact craters that include: abundant evidence of impact melt of a variety of rheological properties, including coherent flows with surface textures and planimetric properties reflecting supersolidus (e.g., liquid melt) emplacement, blocks delicately perched on

  19. HRSC: High resolution stereo camera

    Science.gov (United States)

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  20. Validation of a 2D multispectral camera: application to dermatology/cosmetology on a population covering five skin phototypes

    Science.gov (United States)

    Jolivot, Romuald; Nugroho, Hermawan; Vabres, Pierre; Ahmad Fadzil, M. H.; Marzani, Franck

    2011-07-01

    This paper presents the validation of a new multispectral camera specifically developed for dermatological applications, based on healthy participants of five different Skin PhotoTypes (SPT). The multispectral system provides images of the skin reflectance at different spectral bands, coupled with a neural-network-based algorithm that reconstructs a hyperspectral cube of cutaneous data from a multispectral image. The flexibility of the neural-network-based algorithm allows reconstruction over different wavelength ranges. The hyperspectral cube provides both high spectral and high spatial information. The study population involves 150 healthy participants, classified by skin phototype according to the Fitzpatrick scale; the population covers five of the six types. Each participant is acquired at three body locations: two skin areas exposed to the sun (hand, face) and one area not exposed to the sun (lower back), and each is reconstructed at three different wavelength ranges. The validation is performed by comparing data acquired with a commercial spectrophotometer against the reconstructed spectrum obtained by averaging the hyperspectral cube. The comparison is calculated between 430 and 740 nm due to the limits of the spectrophotometer used. The results reveal that the multispectral camera is able to reconstruct the hyperspectral cube with a goodness-of-fit coefficient greater than 0.997, averaged over all SPTs, for each location. The study reveals that the multispectral camera provides an accurate reconstruction of the hyperspectral cube, which can be used for the analysis of skin reflectance spectra.

  1. A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF camera to an AGV suits autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost. It is used to extract information about obstacles after calibration and ground testing, and is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles by representing them on a grid of suitable cell size. These camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more suitable camera mounting angle is determined by analysing the camera’s performance discrepancies, such as pixel detection, detection rate, maximum perceived distance, and infrared (IR) scattering with respect to the ground surface; this mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests conducted on the AGV to verify correct sensor operation shows that the postulated application of the ToF camera in the AGV is not straightforward. To stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are then implemented in a real-time experiment.
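The grid/cell path planning described above can be sketched with a textbook A* search over an occupancy grid; the abstract does not name the exact graph search algorithm used, so this is an illustrative stand-in with a toy map.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 1 = obstacle) with
    4-connectivity and a Manhattan-distance heuristic. Returns the
    sequence of cells from start to goal, or None if unreachable."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None

# Toy map: an obstacle wall forces a detour around the right side
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)
```

In the paper's setting, the obstacle cells would be populated from PMD depth data converted into the workspace grid map.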

  2. Stereoscopic reconstruction of 3D PIV data in T-junction with circular profile

    Directory of Open Access Journals (Sweden)

    Jašíková D.

    2013-04-01

    Full Text Available In this paper, an experimental study of flow in a T-junction using the 3D PIV method is presented. The motion of seeding particles was recorded by a pair of suitably located cameras in precisely defined cross sections of the junction. From this information, a three-dimensional model of the flow in different sections of the junction was reconstructed. The reconstruction relies on the projection matrices of the cameras, which are obtained from the positions of objects in the scene and their projected positions in the image plane. Standard 3D PIV reconstruction was rejected because of optical distortion in the T-junction.
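
Reconstruction from two cameras' projection matrices, as described above, reduces per particle to linear triangulation. The sketch below is the standard direct-linear-transform (DLT) formulation, not necessarily the authors' exact pipeline; the matrices and points in the usage example are synthetic.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed by two cameras.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
    # Each view contributes two rows of the homogeneous system A @ X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]
```

With noisy particle images one would triangulate in a least-squares sense over all rows, which is exactly what the SVD already provides.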

  3. Compact multispectral continuous zoom camera for color and SWIR vision with integrated laser range finder

    Science.gov (United States)

    Hübner, M.; Gerken, M.; Achtner, B.; Kraus, M.; Münzberg, M.

    2014-06-01

    In an electro-optical sensor suite for long-range surveillance tasks, the optics for the visible (450-700 nm) and SWIR (900-1700 nm) spectral ranges are combined with the receiver optics of an integrated laser range finder (LRF). The incoming signal from the observed scene and the returned laser pulse are collected within the common entrance aperture of the optics. The common front part of the optics is a lens design with broadband correction over the 450-1700 nm wavelength range. The visible spectrum is split off by a dichroic beam splitter and focused on an HDTV CMOS camera. The returned laser pulse is spatially separated from the scene signal by a special prism and focused on the laser receiver diode of the integrated LRF. The achromatic lens design has a zoom factor of 14 and F#2.6 in the visible path. In the SWIR path the F-number is adapted to the corresponding chip dimensions. The alignment of the LRF with respect to the SWIR camera line of sight can be controlled by adjustable integrated wedges. The two images in the visible and SWIR spectral ranges match in focus and field of view (FOV) over the full zoom range between 2° and 22° HFOV. The SWIR camera has a resolution of 640×512 pixels; the HDTV camera provides a resolution of 1920×1080. The design and performance parameters of the multispectral sensor suite are discussed.

  4. MISR FIRSTLOOK radiometric camera-by-camera Cloud Mask V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the FIRSTLOOK Radiometric camera-by-camera Cloud Mask (RCCM) dataset produced using ancillary inputs (RCCT) from the previous time period. It is...

  5. Hanford Environmental Dose Reconstruction Project monthly report

    Energy Technology Data Exchange (ETDEWEB)

    Finch, S.M. (comp.)

    1991-10-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks, which correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, and food habits; environmental pathways and dose estimates.

  6. Hanford Environmental Dose Reconstruction Project. Monthly report

    Energy Technology Data Exchange (ETDEWEB)

    Finch, S.M.; McMakin, A.H. (comps.)

    1992-02-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  7. Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse

    Institute of Scientific and Technical Information of China (English)

    Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou

    2016-01-01

    In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.

  8. Trajectory association across multiple airborne cameras.

    Science.gov (United States)

    Sheikh, Yaser Ajmal; Shah, Mubarak

    2008-02-01

    A camera mounted on an aerial vehicle provides an excellent means for monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras, to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a likelihood function for evaluating a hypothesized association between observations in multiple cameras that is geometrically motivated. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g. that transitive closure is maintained between more than two cameras. To ensure such coherency we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal assignment of association. Using the proposed error function, canonical trajectories of each object and optimal estimates of inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models, and through simulation quantitative performance is also reported.
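
For the two-camera case, the assignment step that maximises the likelihood can be illustrated by brute force over permutations. This toy sketch assumes a precomputed log-likelihood matrix with hypothetical values; the paper itself solves an approximation to k-dimensional matching, and practical systems would use e.g. the Hungarian algorithm rather than enumeration.

```python
import itertools

def best_association(log_likelihood):
    """Brute-force the assignment of tracks in camera A to tracks in camera B
    that maximises total log-likelihood. log_likelihood[i][j] is the (assumed
    precomputed) geometric log-likelihood that track i and track j are the
    same object."""
    n = len(log_likelihood)
    best, best_ll = None, float("-inf")
    for perm in itertools.permutations(range(n)):
        ll = sum(log_likelihood[i][perm[i]] for i in range(n))
        if ll > best_ll:
            best, best_ll = perm, ll
    return best, best_ll
```

Enumeration is O(n!) and only sensible for a handful of tracks; it is shown here purely to make the objective concrete.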

  9. Brief Examination of the Origin and Path of the Rural Reconstruction Movement During the Period of the Republic of China: Based on the Experimental Study of Dingxian County

    Institute of Scientific and Technical Information of China (English)

    秦海根

    2014-01-01

    The Rural Reconstruction Movement of the Republican period was a social movement aimed at the comprehensive transformation and construction of the countryside of its day. It arose against a background of social contradictions, conflicts and hardships created by internal strife and foreign threats, and of the special importance the village then held in the social structure. Among the various rural construction activities, the most representative was the rural reconstruction experiment organized by Yan Yangchu in Dingxian County, Hebei Province. In outline: in the early phase, the Mass Education Association took the lead, carrying out pilot research activities centred on the "four great educations" in the Dingxian study area; after the association began cooperating with the county government, the "four great educations" were extended to the whole county and gradually evolved into large-scale construction and transformation activities centred on the "six great constructions" and county-government reform. Examining and analysing these construction paths, contents and methods offers valuable lessons for addressing today's issues of agriculture, rural areas and farmers, and for promoting urbanization to drive a new round of economic growth in China.

  10. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many ca

  11. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation, as required in the method, is much more maneuverable, compared with some strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems.
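
The decomposition step — recovering the unknown intrinsics K from the infinite homography and a known reference camera — can be sketched as follows. This is the textbook factorisation H∞ ∝ K·R·K_ref⁻¹ rather than the authors' exact linear method; the matrices in the example are synthetic.

```python
import numpy as np

def intrinsics_from_infinite_homography(H_inf, K_ref):
    """Recover intrinsics K of the uncalibrated camera from the infinite
    homography H_inf mapping the reference image to the uncalibrated image.
    Since H_inf ~ K @ R @ inv(K_ref) with R a rotation, M = H_inf @ K_ref is
    proportional to K @ R, so M @ M.T is proportional to K @ K.T, which
    factors uniquely into an upper-triangular K with positive diagonal."""
    M = H_inf @ K_ref
    S = M @ M.T
    S /= S[2, 2]                        # remove the unknown scale (K[2,2] = 1)
    J = np.fliplr(np.eye(3))            # anti-diagonal flip matrix
    L = np.linalg.cholesky(J @ S @ J)   # lower-triangular Cholesky factor
    K = J @ L @ J                       # flip back: upper-triangular intrinsics
    return K / K[2, 2]
```

The flip trick converts the lower-triangular Cholesky factor into the upper-triangular calibration matrix, since (J L J)(J L J)ᵀ = J L Lᵀ J.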

  12. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial relationsh

  13. Chromatic roots and hamiltonian paths

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2000-01-01

    We present a new connection between colorings and hamiltonian paths: if the chromatic polynomial of a graph has a noninteger root less than or equal to t(n) = 2/3 + (1/3)∛(26 + 6√33) + (1/3)∛(26 − 6√33) = 1.29559..., then the graph has no hamiltonian path. This result...
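
The stated constant can be checked numerically; note that the second radicand, 26 − 6√33, is negative, so a sign-aware real cube root is needed:

```python
import math

# Real cube root that handles a negative radicand.
def cbrt(v):
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

# t = 2/3 + (1/3)*cbrt(26 + 6*sqrt(33)) + (1/3)*cbrt(26 - 6*sqrt(33))
t = 2.0 / 3.0 + cbrt(26 + 6 * math.sqrt(33)) / 3.0 + cbrt(26 - 6 * math.sqrt(33)) / 3.0
```

Evaluating the expression reproduces the threshold quoted in the abstract, t ≈ 1.29559.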

  14. Two Generations of Path Dependence

    DEFF Research Database (Denmark)

    Madsen, Mogens Ove

    Even if there is no fully articulated and generally accepted theory of Path Dependence, it has eagerly been taken up across a wide range of social sciences, primarily coming from economics. Path Dependence is most of all a metaphor that offers reason to believe that some political, social...

  15. Multiple View Reconstruction of Calibrated Images using Singular Value Decomposition

    CERN Document Server

    Chaudhury, Ayan; Manna, Sumita; Mukherjee, Subhadeep; Chakrabarti, Amlan

    2010-01-01

    Calibration in a multi-camera network has been widely studied for many years, starting from the early days of photogrammetry. Many authors have presented calibration algorithms with their relative advantages and disadvantages. In a stereo vision system, multiple view reconstruction is a challenging task, yet the total computational procedure has not previously been presented in detail. In this work, we deal with the problem that, when a world coordinate point is fixed in space, the image coordinates of that 3D point vary with camera position and orientation. From a computer vision standpoint, this situation is undesirable: the system has to be designed so that the image coordinate of the world coordinate point is fixed irrespective of the position and orientation of the cameras. We have done this in an elegant fashion. First, camera parameters are calculated in the local coordinate system. Then, we use global coordinate data to transfer all local coordinate d...

  16. Sustainable Energy Path

    Directory of Open Access Journals (Sweden)

    Hiromi Yamamoto

    2005-12-01

    Full Text Available The use of fossil fuels causes not only resource exhaustion but also environmental problems such as global warming. The purposes of this study are to evaluate paths toward sustainable energy systems and the role of each renewable. To this end, the authors developed a global land-use and energy model that represents future global energy supply systems under cost minimization. Using the model, the authors conducted a simulation of the C30R scenario, a strict CO2-emission-limit scenario that reduces CO2 emissions by 30% relative to a "Kyoto Protocol forever" scenario, and obtained the following results. In the C30R scenario, bioenergy will supply 33% of all primary energy consumption, whereas wind and photovoltaics will supply only 1.8% and 1.4% of all primary energy consumption, respectively, because of the limits of power-grid stability. The results imply that strict limits on CO2 emissions are not sufficient to achieve fully renewable energy systems. To use wind and photovoltaics as major energy resources, we need not only to reduce plant costs but also to develop unconventional renewable technologies.

  17. Single Image Camera Calibration in Close Range Photogrammetry for Solder Joint Analysis

    Science.gov (United States)

    Heinemann, D.; Knabner, S.; Baumgarten, D.

    2016-06-01

    Printed circuit boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct function of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close-range photogrammetry allows determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured which allows for single-image camera calibration.

  18. A BASIC CAMERA UNIT FOR MEDICAL PHOTOGRAPHY.

    Science.gov (United States)

    SMIALOWSKI, A; CURRIE, D J

    1964-08-22

    A camera unit suitable for most medical photographic purposes is described. The unit comprises a single-lens reflex camera, an electronic flash unit and supplementary lenses. Simple instructions for use of this basic unit are presented. The unit is entirely suitable for taking fine-quality photographs of most medical subjects by persons who have had little photographic training.

  19. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  20. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  1. Solid State Replacement of Rotating Mirror Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. Rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the "In-situ Storage Image Sensor" or "ISIS" by Prof. Goji Etoh, has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. Although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  2. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovations on the airborne camera. This book is the first ever written on the topic and describes all components of a digital airborne camera ranging from the object to be imaged to the mass memory device.

  3. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

    In this article, we analyze a range of different types of cameras for their use in measurements. We verify a general model of a charge-coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for sever

  4. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  5. New camera tube improves ultrasonic inspection system

    Science.gov (United States)

    Berger, H.; Collis, W. J.; Jacobs, J. E.

    1968-01-01

    Electron multiplier, incorporated into the camera tube of an ultrasonic imaging system, improves resolution, effectively shields low level circuits, and provides a high level signal input to the television camera. It is effective for inspection of metallic materials for bonds, voids, and homogeneity.

  6. Thermal Cameras in School Laboratory Activities

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal cameras offer real-time visual access to otherwise invisible thermal phenomena, which are conceptually demanding for learners during traditional teaching. We present three studies of students' conduction of laboratory activities that employ thermal cameras to teach challenging thermal concepts in grades 4, 7 and 10-12. Visualization of…

  7. Optimal Camera Placement for Motion Capture Systems.

    Science.gov (United States)

    Rahimian, Pooya; Kearney, Joseph K

    2017-03-01

    Optical motion capture is based on estimating the three-dimensional positions of markers by triangulation from multiple cameras. Successful performance depends on points being visible from at least two cameras and on the accuracy of the triangulation. Triangulation accuracy is strongly related to the positions and orientations of the cameras. Thus, the configuration of the camera network has a critical impact on performance. A poor camera configuration may result in a low quality three-dimensional (3D) estimation and consequently low quality of tracking. This paper introduces and compares two methods for camera placement. The first method is based on a metric that computes target point visibility in the presence of dynamic occlusion from cameras with "good" views. The second method is based on the distribution of views of target points. Efficient algorithms, based on simulated annealing, are introduced for estimating the optimal configuration of cameras for the two metrics and a given distribution of target points. The accuracy and robustness of the algorithms are evaluated through both simulation and empirical measurement. Implementations of the two methods are available for download as tools for the community.
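
As a toy illustration of the simulated-annealing search, the sketch below places cameras on a circle around a single target and scores configurations only by how close pairwise ray separations are to 90 degrees (the angle that triangulates best). The paper's metrics additionally model visibility and dynamic occlusion; all names, the cooling schedule and the parameters here are illustrative assumptions.

```python
import math
import random

def pair_score(angles):
    """Penalty for camera bearing angles around one target: each pair is
    penalised by how far its ray separation is from 90 degrees (0 is ideal)."""
    s = 0.0
    for i in range(len(angles)):
        for j in range(i + 1, len(angles)):
            d = abs(angles[i] - angles[j]) % (2 * math.pi)
            d = min(d, 2 * math.pi - d)        # smallest angle between rays
            s -= abs(d - math.pi / 2)
    return s

def anneal_cameras(n, steps=8000, temp0=1.0, seed=1):
    """Simulated annealing over camera bearings with a linear cooling schedule."""
    rng = random.Random(seed)
    cfg = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    cur = pair_score(cfg)
    best, best_s = list(cfg), cur
    for k in range(steps):
        temp = max(temp0 * (1 - k / steps), 1e-6)
        i = rng.randrange(n)
        cand = list(cfg)
        cand[i] = (cand[i] + rng.gauss(0, 0.3)) % (2 * math.pi)
        s = pair_score(cand)
        # Accept improvements always, worsenings with Boltzmann probability.
        if s > cur or rng.random() < math.exp((s - cur) / temp):
            cfg, cur = cand, s
            if cur > best_s:
                best, best_s = list(cfg), cur
    return best, best_s
```

A realistic objective would replace `pair_score` with the visibility or view-distribution metrics described in the paper; the annealing loop itself is unchanged.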

  8. AIM: Ames Imaging Module Spacecraft Camera

    Science.gov (United States)

    Thompson, Sarah

    2015-01-01

    The AIM camera is a small, lightweight, low power, low cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of $1M plus spacecraft cameras, it does so on a fraction of the mass, power and cost budget.

  9. Creating and Using a Camera Obscura

    Science.gov (United States)

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material.…

  10. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  12. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  13. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system for making stereo photographs or videos, based on just two mirrors that split the image field, was built in 1989 and recently adapted to a digital camera setup.

  14. Laser Dazzling of Focal Plane Array Cameras

    NARCIS (Netherlands)

    Schleijpen, H.M.A.; Dimmeler, A.; Eberle, B; Heuvel, J.C. van den; Mieremet, A.L.; Bekman, H.H.P.T.; Mellier, B.

    2007-01-01

    Laser countermeasures against infrared focal plane array cameras aim to saturate the full camera image. In this paper we will discuss the results of dazzling experiments performed with MWIR lasers. In the “low energy” pulse regime we observe an increasing saturated area with increasing power. The si

  15. Helical path separation for guided wave tomography

    Energy Technology Data Exchange (ETDEWEB)

    Huthwaite, P.; Seher, M. [Department of Mechanical Engineering, Imperial College, London, SW7 2AZ (United Kingdom)

    2015-03-31

    The pipe wall loss caused by corrosion can be quantified across an area by transmitting guided Lamb waves through the region and measuring the resulting signals. Typically, the dispersive relationship of these waves, which makes the wave velocity a function of thickness, is exploited so that the wall thickness can be determined from a velocity reconstruction. The accuracy and quality of this reconstruction are commonly limited by the angle of view available from the transducer arrays. These arrays are often attached as a pair of ring arrays on either side of the inspected region, and due to the cyclic nature of the pipe, waves are able to travel along an infinite number of helical paths between any two transducers. The first arrivals can be separated relatively easily by time gating, but using just these components strongly restricts the angle of view. To improve the viewing angle, it is necessary to separate the wavepackets. This paper outlines a separation approach: initially the waves are backpropagated to their source to align the different signals, then a filtering technique is applied to select the desired components. The technique is applied to experimental data and demonstrated to robustly separate the signals.
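
The infinite family of helical paths follows from unwrapping the pipe wall into a plane: a wave leaving one transducer can reach another with a transverse offset shifted by any integer multiple of the circumference. A small sketch (function names and the truncation order are assumptions) computes the candidate path lengths whose distinct arrival times make time gating of the first arrival possible:

```python
import math

def helical_path_lengths(axial_dist, circ_offset, circumference, max_order=3):
    """Unwrap the pipe wall into a plane: helical order n adds n*circumference
    to the transverse offset between the two transducers."""
    lengths = [math.hypot(axial_dist, circ_offset + n * circumference)
               for n in range(-max_order, max_order + 1)]
    return sorted(lengths)

def arrival_times(lengths, group_velocity):
    """Convert path lengths to arrival times for time-gating the first arrival."""
    return [length / group_velocity for length in lengths]
```

Higher-order helices arrive progressively later, which is why the first arrival is easy to gate while the later, overlapping wavepackets need the separation approach described above.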

  16. 3D Beam Reconstruction by Fluorescence Imaging

    CERN Document Server

    Radwell, Neal; Franke-Arnold, Sonja

    2013-01-01

    We present a technique for mapping the complete 3D spatial intensity profile of a laser beam from its fluorescence in an atomic vapour. We propagate shaped light through a rubidium vapour cell and record the resonant scattering from the side. From a single measurement we obtain a camera-limited resolution of 200 x 200 transverse points and 659 longitudinal points. In contrast to invasive methods in which the camera is placed in the beam path, our method is capable of measuring patterns formed by counterpropagating laser beams. It has high resolution in all three dimensions, is fast and can be completely automated. The technique has applications in areas that require complex beam shapes, such as optical tweezers, atom trapping and pattern formation.

  17. Calibration of the Multi-camera Registration System for Visual Navigation Benchmarking

    Directory of Open Access Journals (Sweden)

    Adam Schmidt

    2014-06-01

    Full Text Available This paper presents the complete calibration procedure of a multi-camera system for mobile robot motion registration. Optimization-based, purely visual methods are proposed for estimating the relative poses of the motion registration system's cameras, as well as the relative poses of the cameras and the markers placed on the mobile robot. The introduced methods were applied to the calibration of the system and the quality of the obtained results was evaluated. The results compare favourably with state-of-the-art solutions, allowing the use of the considered motion registration system for accurate reconstruction of the mobile robot trajectory and for registering new datasets suitable for benchmarking indoor, visual-based navigation algorithms.

  18. AXUV bolometer and Lyman-α camera systems on the TCV tokamak

    Science.gov (United States)

    Degeling, A. W.; Weisen, H.; Zabolotsky, A.; Duval, B. P.; Pitts, R. A.; Wischmeier, M.; Lavanchy, P.; Marmillod, Ph.; Pochon, G.

    2004-10-01

    A set of seven twin slit cameras, each containing two 20-element linear absolute extreme ultraviolet photodiode arrays, has been installed on the Tokamak à Configuration Variable. One array in each camera will operate as a bolometer and the second as a Lyman-alpha (Lα) emission monitor for estimating the recycled neutral flux. The camera configuration was optimized by simulations of tomographic reconstructions of the expected Lα emission. The diagnostic will provide spatial and temporal resolution (10 μs) of the radiated power and the Lα emission that is considerably higher than previously achieved. This optimism is justified by extensive experience with prototype systems, which include first measurements of Lα light from the divertor.

  19. Perspective Intensity Images for Co-Registration of Terrestrial Laser Scanner and Digital Camera

    Science.gov (United States)

    Liang, Yubin; Qiu, Yan; Cui, Tiejun

    2016-06-01

    Co-registration of a terrestrial laser scanner and a digital camera has been an important research topic, since both point clouds and digital images can be used to reconstruct visually appealing and measurable models of the scanned objects. This paper presents an approach for co-registration of a terrestrial laser scanner and a digital camera. A perspective intensity image of the point cloud is first generated using the collinearity equation. Corner points are then extracted from the generated perspective intensity image and the camera image. The fundamental matrix F is estimated using several interactively selected tie points and is used to obtain more matches with RANSAC. The 3D coordinates of all matched tie points are directly obtained or estimated using the least squares method. The robustness and effectiveness of the presented methodology are demonstrated by the experimental results. The methods presented in this work may also be used for automatic registration of terrestrial laser scanning point clouds.
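
The fundamental-matrix step can be illustrated with a minimal normalised 8-point solver, the estimator typically run on each RANSAC sample. This is a generic sketch, not the authors' implementation; the correspondences in the test are synthetic.

```python
import numpy as np

def _normalise(pts):
    """Hartley normalisation: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return ph, T

def fundamental_8point(x1, x2):
    """Normalised 8-point estimate of F from N >= 8 correspondences.
    x1, x2: (N, 2) arrays of image coordinates; returns F with x2' F x1 = 0."""
    p1, T1 = _normalise(x1)
    p2, T2 = _normalise(x2)
    # One row per correspondence of the linear system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)       # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1              # undo the normalisation
```

Inside a RANSAC loop, this solver is applied to random 8-point samples and the epipolar residual |x2ᵀFx1| (or a geometric distance) scores the inliers.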

  20. Hard paths, soft paths or no paths? Cross-cultural perceptions of water solutions

    Science.gov (United States)

    Wutich, A.; White, A. C.; White, D. D.; Larson, K. L.; Brewis, A.; Roberts, C.

    2014-01-01

    In this study, we examine how development status and water scarcity shape people's perceptions of "hard path" and "soft path" water solutions. Based on ethnographic research conducted in four semi-rural/peri-urban sites (in Bolivia, Fiji, New Zealand, and the US), we use content analysis to conduct statistical and thematic comparisons of interview data. Our results indicate clear differences associated with development status and, to a lesser extent, water scarcity. People in the two less developed sites were more likely to suggest hard path solutions, less likely to suggest soft path solutions, and more likely to see no path to solutions than people in the more developed sites. Thematically, people in the two less developed sites envisioned solutions that involve small-scale water infrastructure and decentralized, community-based solutions, while people in the more developed sites envisioned solutions that involve large-scale infrastructure and centralized, regulatory water solutions. People in the two water-scarce sites were less likely to suggest soft path solutions and more likely to see no path to solutions (but no more likely to suggest hard path solutions) than people in the water-rich sites. Thematically, people in the two water-rich sites seemed to perceive a wider array of unrealized potential soft path solutions than those in the water-scarce sites. On balance, our findings are encouraging in that they indicate that people are receptive to soft path solutions in a range of sites, even those with limited financial or water resources. Our research points to the need for more studies that investigate the social feasibility of soft path water solutions, particularly in sites with significant financial and natural resource constraints.

  1. Identification of hadronic {tau} decays using the {tau} lepton flight path and reconstruction and identification of jets with a low transverse energy at intermediate luminosities with an application to the search for the Higgs boson in vector boson fusion with the ATLAS experiment at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Ruwiedel, Christoph

    2010-06-15

    Three studies of different components of the object reconstruction with the ATLAS experiment using simulated data are presented. In each study, a method for the improvement of the reconstruction is developed and the improvements are evaluated. The reconstruction of observables sensitive to the {tau} lepton lifetime and their use for the identification of hadronic {tau} decays are the subject of the first study. In the second study, a method for the identification of jets from the signal interaction in the presence of additional minimum bias interactions is developed and applied to the central jet veto in the vector boson fusion Higgs analysis. In the third study, the effects of additional minimum bias interactions close in time to the triggered event on the cluster formation and the determination of the jet energy are evaluated and the cluster formation is modified in a way such that the bias of the reconstructed jet energy is minimized. (orig.)

  2. On the absolute calibration of SO2 cameras

    Science.gov (United States)

    Lübcke, Peter; Bobrowski, Nicole; Illing, Sebastian; Kern, Christoph; Alvarez Nieves, Jose Manuel; Vogel, Leif; Zielcke, Johannes; Delgados Granados, Hugo; Platt, Ulrich

    2013-01-01

    Sulphur dioxide emission rate measurements are an important tool for volcanic monitoring and eruption risk assessment. The SO2 camera technique remotely measures volcanic emissions by analysing the ultraviolet absorption of SO2 in a narrow spectral window between 300 and 320 nm, using solar radiation scattered in the atmosphere. The SO2 absorption is selectively detected by mounting band-pass interference filters in front of a two-dimensional, UV-sensitive CCD detector. One important prerequisite for correct SO2 emission rate measurements that can be compared with other measurement techniques is a correct calibration: the measured optical density must be converted to the desired SO2 column density (CD). The conversion factor is most commonly determined by inserting quartz cells (cuvettes) with known amounts of SO2 into the light path. Another calibration method uses an additional narrow field-of-view Differential Optical Absorption Spectroscopy system (NFOV-DOAS), which measures the column density simultaneously in a small area of the camera's field-of-view. This procedure combines the very good spatial and temporal resolution of the SO2 camera technique with the more accurate column densities obtainable from DOAS measurements.
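The cell-based calibration described above reduces to fitting a single conversion factor k between measured optical density tau and known column density (tau ≈ k · CD), then inverting it for every camera pixel. A sketch with invented cell values; the numbers below are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical calibration-cell column densities (molecules/cm^2) and the
# apparent optical densities the camera measured for each cell.
cell_cd  = np.array([0.0, 4.0e17, 9.4e17, 1.8e18])
cell_tau = np.array([0.0, 0.052, 0.121, 0.232])

# Least-squares slope through the origin: tau = k * CD.
k = (cell_tau @ cell_cd) / (cell_cd @ cell_cd)

def tau_to_cd(tau):
    """Convert a measured optical density to an SO2 column density."""
    return tau / k
```

A DOAS-based calibration would instead regress the camera's optical densities against the column densities retrieved by the spectrometer in its narrow field of view.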

  3. Camera Inspection Arm for Boiling Water Reactors - 13330

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Scott; Rood, Marc [S.A. Technology, 3985 S. Lincoln Ave, Loveland, CO 80537 (United States)

    2013-07-01

    Boiling Water Reactor (BWR) outage maintenance tasks can be time-consuming and hazardous. Reactor facilities are continuously looking for quicker, safer, and more effective methods of performing routine inspection during these outages. In 2011, S.A. Technology (SAT) was approached by Energy Northwest to provide a remote system capable of increasing efficiencies related to Reactor Pressure Vessel (RPV) internal inspection activities. The specific intent of the system discussed was to inspect recirculation jet pumps in a manner that did not require manual tooling, and could be performed independently of other ongoing inspection activities. In 2012, SAT developed a compact, remote, camera inspection arm to create a safer, more efficient outage environment. This arm incorporates a compact and lightweight design along with the innovative use of bi-stable composite tubes to provide a six-degree of freedom inspection tool capable of reducing dose uptake, reducing crew size, and reducing the overall critical path for jet pump inspections. The prototype camera inspection arm unit is scheduled for final testing in early 2013 in preparation for the Columbia Generating Station refueling outage in the spring of 2013. (authors)

  4. STREAK CAMERA MEASUREMENTS OF THE APS PC GUN DRIVE LASER

    Energy Technology Data Exchange (ETDEWEB)

    Dooling, J. C.; Lumpkin, A. H.

    2017-06-25

    We report recent pulse-duration measurements of the APS PC Gun drive laser at both second harmonic and fourth harmonic wavelengths. The drive laser is a Nd:glass-based chirped-pulse amplifier (CPA) operating at an IR wavelength of 1053 nm, twice frequency-doubled to obtain UV output for the gun. A Hamamatsu C5680 streak camera and an M5675 synchroscan unit are used for these measurements; the synchroscan unit is tuned to 119 MHz, the 24th subharmonic of the linac S-band operating frequency. Calibration is accomplished both electronically and optically. The electronic calibration utilizes a programmable delay line in the 119 MHz rf path. The optical delay uses an etalon with known spacing between its reflecting surfaces, coated for the visible (second-harmonic) wavelength. The IR pulse duration is monitored with an autocorrelator. Fitting the projected streak camera image profiles with Gaussians, UV rms pulse durations are found to vary from 2.1 ps to 3.5 ps as the IR varies from 2.2 ps to 5.2 ps.
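For a clean Gaussian profile, the rms duration obtained from a Gaussian fit equals the profile's second-moment width, which can be sketched directly (numpy only; the 2.5 ps sigma used in the check below is an arbitrary test value, not a measured one):

```python
import numpy as np

def rms_duration(t, profile):
    """rms width of a streak-camera time profile via its second moment;
    for a clean Gaussian this equals the fitted Gaussian sigma."""
    w = profile - profile.min()          # crude baseline removal
    mean = (t * w).sum() / w.sum()       # first moment (centroid)
    return np.sqrt(((t - mean) ** 2 * w).sum() / w.sum())
```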

  5. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

    Science.gov (United States)

    Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir

    2016-06-01

    This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
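Once FPFH descriptors propose correspondences, each Iterative Closest Point iteration recovers the rigid transform in closed form (the Kabsch/Procrustes step). A minimal numpy sketch of that inner step, not the authors' implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst,
    given matched Nx3 point sets (the closed-form step run inside
    each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Full ICP alternates this step with nearest-neighbour re-association of points until the correspondences stop changing.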

  6. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  7. Flow visualization by mobile phone cameras

    Science.gov (United States)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also tools and applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) sensors to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility to make use of this development and the widespread availability of these cameras for velocity measurements in industrial or technical applications and in fluid dynamics education at high schools and universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
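Per interrogation window, a simplistic PIV system reduces to locating the peak of the cross-correlation between the same window in two consecutive frames. An integer-pixel sketch (numpy FFT only; a real PIV code adds sub-pixel peak interpolation and overlapping windows):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of window B relative to window A,
    found at the peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular peak indices back to signed shifts.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx
```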

  8. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose a novel approach to virtual camera control, which builds upon camera control and player modelling to provide the user with an adaptive point-of-view. To achieve this goal, we propose a methodology to model the player's preferences on virtual camera movements and we employ the resulting models to tailor the viewpoint movements to the player type and her game-play style. Ultimately, the methodology is applied to a 3D platform game and is evaluated through a controlled experiment; the results suggest that the resulting adaptive cinematographic experience is favoured by some player types and it can generate...

  9. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views are not overlapped. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM) with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.

  10. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...

  11. 3D kinematic measurement of human movement using low cost fish-eye cameras

    Science.gov (United States)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. As the cameras use fisheye lenses, they cannot be modelled well by a pinhole camera model, which makes it difficult to estimate the depth information. In this work, to restore the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.
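The marker-extraction step can be approximated by a simple block-mean adaptive threshold; the block size and offset below are arbitrary illustration values, not the paper's parameters:

```python
import numpy as np

def adaptive_threshold(img, block=16, offset=10):
    """Binary marker mask: a pixel is foreground when it exceeds the
    mean of its local block by `offset` grey levels (a simple stand-in
    for the adaptive thresholding step described in the paper)."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile > tile.mean() + offset
    return out
```

Because the threshold tracks the local mean, bright markers stand out even under the uneven outdoor illumination the abstract mentions.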

  12. True three-dimensional camera

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2013-01-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by short photo-conducting lightguides at each pixel. In the eye the rods and cones are the fiber-like lightguides. The device uses ambient light that is only coherent in spherical shell-shaped light packets of thickness of one coherence length. Modern semiconductor technology permits the construction of lightguides shorter than a coherence length of ambient light. Each of the frequency components of the broad band light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel. Light frequency components in the packet arriving at a pixel through a convex lens add constructively only if the light comes from the object point in focus at this pixel. The light in packets from all other object points cancels. Thus the pixel receives light from one object point only. The lightguide has contacts along its length. The lightguide charge carriers are generated by the light patterns. These light patterns, and thus the photocurrent, shift in response to the phase of the input signal. Thus, the photocurrent is a function of the distance from the pixel to its object point. Applications include autonomous vehicle navigation and robotic vision. Another application is a crude teleportation system consisting of a camera and a three-dimensional printer at a remote location.

  13. Cloud Computing with Context Cameras

    CERN Document Server

    Pickles, A J

    2013-01-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every 2 minutes through BVriz filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of 0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-comp...

  14. NIR Camera/spectrograph: TEQUILA

    Science.gov (United States)

    Ruiz, E.; Sohn, E.; Cruz-Gonzalez, I.; Salas, L.; Parraga, A.; Torres, R.; Perez, M.; Cobos, F.; Tejada, C.; Iriarte, A.

    1998-11-01

    We describe the configuration and operation modes of the IR camera/spectrograph called TEQUILA, based on a 1024x1024 HgCdTe FPA (HAWAII). The optical system will allow three possible modes of operation: direct imaging, low and medium resolution spectroscopy, and polarimetry. The basic system is designed to consist of the following: 1) an LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder; 2) control and readout electronics based on DSP modules linked to a workstation through fiber optics; 3) an optomechanical assembly cooled to -30 °C that provides efficient operation of the instrument in its various modes; 4) a control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provisions to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. The final image acquisition and control of the whole instrument is carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope of the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican Infrared-Optical Telescope (TIM).

  15. Optimal UAV Path Planning for Tracking a Moving Ground Vehicle with a Gimbaled Camera

    Science.gov (United States)

    2014-03-27

    [Garbled front-matter extract; recoverable details only: the testbed airframe is a Sig Rascal 110, and the autopilot is the open-source, Arduino-based APM 2.5, programmed in a C-based language within the Arduino IDE, providing the baseline for flight testing autonomous UAV convoy tracking.]

  16. Optical design of the comet Shoemaker-Levy speckle camera

    Energy Technology Data Exchange (ETDEWEB)

    Bissinger, H. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    An optical design is presented in which the Lick 3 meter telescope and a bare CCD speckle camera system were used to image the collision sites of the Shoemaker-Levy 9 comet with the planet Jupiter. The brief overview includes the optical constraints and system layout. The choice of a Risley prism combination to compensate for the time-dependent atmospheric chromatic changes is described. Plate scale and signal-to-noise ratio curves resulting from imaging reference stars are compared with theory. Comparisons are made between uncorrected and reconstructed images of Jupiter's impact sites. The results confirm that speckle imaging techniques can be used over an extended time period to provide a method to image large extended objects.

  17. Metric for evaluation of filter efficiency in spectral cameras.

    Science.gov (United States)

    Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani

    2016-11-10

    Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one, for a perfect, and zero, for the worst possible set of filters. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
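Vora's MOG is commonly stated as the normalized overlap between the column space of the filter set and a target space, computed with orthogonal projectors. Assuming that form (the paper's MMOG generalizes it), a numpy sketch:

```python
import numpy as np

def projector(M):
    # Orthogonal projector onto the column space of M (full column rank).
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

def mog(filters, target):
    """Vora-style Measure of Goodness: overlap of the filter set's column
    space with the target space; 1 means identical span, 0 orthogonal."""
    Pa, Pb = projector(filters), projector(target)
    return np.trace(Pa @ Pb) / np.linalg.matrix_rank(target)
```

Columns of `filters` would hold the spectral sensitivities of the candidate filter set, and `target` a basis for the reflectance space to be reconstructed.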

  18. Prototype of a single probe Compton camera for laparoscopic surgery

    Science.gov (United States)

    Koyama, A.; Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Sakuma, I.

    2017-02-01

    Image-guided surgery (IGS) is performed using a real-time surgery navigation system with three-dimensional (3D) position tracking of surgical tools. IGS is fast becoming an important technology for high-precision laparoscopic surgeries, in which the field of view is limited. In particular, recent developments in intraoperative imaging using radioactive biomarkers may enable advanced IGS for supporting malignant tumor removal surgery. In this light, we develop a novel intraoperative probe with a Compton camera and a position tracking system for performing real-time radiation-guided surgery. A prototype probe consisting of Ce:Gd3Al2Ga3O12 (GAGG) crystals and silicon photomultipliers was fabricated, and its reconstruction algorithm was optimized to enable real-time position tracking. The results demonstrated the visualization capability of the radiation source with ARM ≈ 22.1° and the effectiveness of the proposed system.
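Compton-camera reconstruction rests on the kinematic cone relation cos(theta) = 1 - m_e c^2 (1/E_scattered - 1/E_total): each event constrains the source to a cone around the scatter axis. A sketch with energies in keV; the test energies are illustrative:

```python
import math

ME_C2 = 511.0  # electron rest mass energy, keV

def compton_cone_angle(e_electron, e_total):
    """Opening angle (degrees) of the Compton cone, reconstructed from
    the measured recoil-electron energy and the known total energy of
    the incident gamma-ray (both in keV)."""
    e_scattered = e_total - e_electron          # scattered photon energy
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_total)
    return math.degrees(math.acos(cos_theta))
```

Intersecting the cones from many events (in the probe's tracked coordinate frame) localizes the radioactive biomarker.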

  19. Light field camera self-calibration and registration

    Science.gov (United States)

    Ji, Zhe; Zhang, Chunping; Wang, Qing

    2016-10-01

    The multi-view light fields (MVLF) provide new solutions to existing problems in monocular light fields, such as the limited field of view. However, as key steps in MVLF, calibration and registration have been little studied. In this paper, we propose a method to calibrate the camera and register different LFs at the same time without a checkerboard, which we call the self-calibrating method. We model the LF structure as a 5-parameter two-parallel-plane (2PP) model, then represent the associations between rays and reconstructed points as a 3D projective transformation. With the constraints of ray-ray correspondences in different LFs, the parameters can be solved with a linear initialization and a nonlinear refinement. The results in real scenes and the 3D point cloud registration error of MVLF on simulated data verify the high performance of the proposed model.

  20. 14 CFR 23.57 - Takeoff path.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Takeoff path. 23.57 Section 23.57... path. For each commuter category airplane, the takeoff path is as follows: (a) The takeoff path extends... completed; and (1) The takeoff path must be based on the procedures prescribed in § 23.45; (2) The...

  1. Computational Techniques in Radio Neutrino Event Reconstruction

    Science.gov (United States)

    Beydler, M.; ARA Collaboration

    2016-03-01

    The Askaryan Radio Array (ARA) is a high-energy cosmic neutrino detector constructed with stations of radio antennas buried in the ice at the South Pole. Event reconstruction relies on the analysis of the arrival times of the transient radio signals generated by neutrinos interacting within a few kilometers of the detector. Because of its depth dependence, the index of refraction in the ice complicates the interferometric directional reconstruction of possible neutrino events. Currently, there is an ongoing endeavor to enhance the programs used for the time-consuming computations of the curved paths of the transient wave signals in the ice as well as the interferometric beamforming. We have implemented a fast, multi-dimensional spline table lookup of the wave arrival times in order to enable raytrace-based directional reconstructions. Additionally, we have applied parallel computing across multiple Graphics Processing Units (GPUs) in order to perform the beamforming calculations quickly.
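The arrival-time table described above replaces per-event ray tracing through the depth-dependent index of refraction. A bilinear (rather than spline) lookup shows the structure; the linear test function below only verifies the interpolation and is not a real ice model:

```python
import numpy as np

def make_table(f, rs, zs):
    """Precompute arrival times on an (r, z) grid; in ARA's case f would
    be the slow ray-tracing solver through the ice profile."""
    return np.array([[f(r, z) for z in zs] for r in rs])

def lookup(table, rs, zs, r, z):
    """Bilinear interpolation into the precomputed table."""
    i = np.clip(np.searchsorted(rs, r) - 1, 0, len(rs) - 2)
    j = np.clip(np.searchsorted(zs, z) - 1, 0, len(zs) - 2)
    fr = (r - rs[i]) / (rs[i + 1] - rs[i])
    fz = (z - zs[j]) / (zs[j + 1] - zs[j])
    return ((1 - fr) * (1 - fz) * table[i, j]
            + fr * (1 - fz) * table[i + 1, j]
            + (1 - fr) * fz * table[i, j + 1]
            + fr * fz * table[i + 1, j + 1])
```

Higher-order spline tables, as in the abstract, trade a little setup cost for smoother interpolated times; the lookup structure is the same.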

  2. Techniques in Iterative Proton CT Image Reconstruction

    CERN Document Server

    Penfold, Scott

    2015-01-01

    This is a review paper on some of the physics, modeling, and iterative algorithms in proton computed tomography (pCT) image reconstruction. The primary challenge in pCT image reconstruction lies in the degraded spatial resolution resulting from multiple Coulomb scattering within the imaged object. Analytical models such as the most likely path (MLP) have been proposed to predict the scattered trajectory from measurements of individual proton location and direction before and after the object. Iterative algorithms provide a flexible tool with which to incorporate these models into image reconstruction. The modeling leads to a large and sparse linear system of equations that can efficiently be solved by projection-method-based iterative algorithms. Such algorithms perform projections of the iterates onto the hyperplanes that are represented by the linear equations of the system. They perform these projections in possibly various algorithmic structures, such as block-iterative projections (BIP), string-averaging...
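The projection methods described, which project the iterate onto the hyperplane of one linear equation at a time, are exemplified by the classical Kaczmarz (ART) sweep; a generic sketch rather than a pCT implementation:

```python
import numpy as np

def kaczmarz(A, b, sweeps=50, x0=None, relax=1.0):
    """Kaczmarz (ART) iteration for A x = b: each inner step projects
    the current iterate onto the hyperplane a_i . x = b_i defined by
    one row of the system; `relax` is the usual relaxation parameter."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float)
    for _ in range(sweeps):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x
```

In pCT the rows of A encode the MLP of each proton through the voxel grid, and block-iterative or string-averaging variants reorganize these same projections for parallel hardware.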

  3. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis

    Directory of Open Access Journals (Sweden)

    Affan Shaukat

    2016-11-01

    In recent decades, terrain modelling and reconstruction techniques have increased research interest in precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation.

  4. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis.

    Science.gov (United States)

    Shaukat, Affan; Blacker, Peter C; Spiteri, Conrad; Gao, Yang

    2016-11-20

    In recent decades, terrain modelling and reconstruction techniques have increased research interest in precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is in relation to autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation.

  5. Gamma-ray imaging with a large micro-TPC and a scintillation camera

    Energy Technology Data Exchange (ETDEWEB)

    Hattori, K. [Graduate School of Science, Department of Physics, Kyoto University Kitashirakawa, Sakyo, Kyoto 606-8502 (Japan)], E-mail: hattori@cr.scphys.kyoto-u.ac.jp; Kabuki, S.; Kubo, H.; Kurosawa, S.; Miuchi, K. [Graduate School of Science, Department of Physics, Kyoto University Kitashirakawa, Sakyo, Kyoto 606-8502 (Japan); Nagayoshi, T. [Advanced Research Institute for Science and Engineering, Waseda University, 17 Kikui-cho, Shinjuku 162-0044, Tokyo (Japan); Nishimura, H.; Okada, Y. [Graduate School of Science, Department of Physics, Kyoto University Kitashirakawa, Sakyo, Kyoto 606-8502 (Japan); Orito, R. [Graduate School of Science and Technology, Department of Physics, Kobe University, 1-1 Rokkoudai, Nada, Kobe 657-8501 (Japan); Sekiya, H.; Takada, A. [Graduate School of Science, Department of Physics, Kyoto University Kitashirakawa, Sakyo, Kyoto 606-8502 (Japan); Takeda, A. [Kamioka Observatory, ICRR, University of Tokyo, 456 Higashi-mozumi, Hida-shi, Gifu 505-1205 (Japan); Tanimori, T.; Ueno, K. [Graduate School of Science, Department of Physics, Kyoto University Kitashirakawa, Sakyo, Kyoto 606-8502 (Japan)

    2007-10-21

    We report on the development of a large Compton camera with full reconstruction of the Compton process, based on a prototype. This camera consists of two kinds of detectors. One is a gaseous time projection chamber (micro-TPC) for measuring the energy and track of the Compton recoil electron. The micro-TPC is based on a μ-PIC and a GEM, which are micro-pattern gas detectors (MPGDs). The size of the micro-TPC was 10 cm × 10 cm × 8 cm in the prototype, and we have enlarged it to 23 cm × 28 cm × 15 cm. The other detector is a NaI(Tl) Anger camera for measuring the scattered gamma ray. With this information, we can completely reconstruct a Compton event and determine the direction of the incident gamma ray, event by event. We succeeded in reconstructing events from incident 662 keV gamma rays. The measured angular resolutions of the angular resolution measure (ARM) and the scatter plane deviation (SPD) were 9.3° and 158° (FWHM), respectively.

  6. Geodesic flows on path spaces

    Institute of Scientific and Technical Information of China (English)

    XIANG; Kainan

    2001-01-01

    [1] Cruzeiro, A. B., Malliavin, P., Renormalized differential geometry on path spaces: Structural equation, curvature, J. Funct. Anal., 1996, 139: 119-181.[2] Stroock, D. W., Some thoughts about Riemannian structures on path spaces, preprint, 1996.[3] Driver, B., A Cameron-Martin type quasi-invariance theorem for Brownian motion on a compact manifold, J. Funct. Anal., 1992, 109: 272-376.[4] Enchev, O., Stroock, D. W., Towards a Riemannian geometry on the path space over a Riemannian manifold, J. Funct. Anal., 1995, 134: 392-416.[5] Hsu, E., Quasi-invariance of the Wiener measure on the path space over a compact Riemannian manifold, J. Funct. Anal., 1995, 134: 417-450.[6] Lyons, T. J., Qian, Z. M., A class of vector fields on path space, J. Funct. Anal., 1997, 145: 205-223.[7] Li, X. D., Existence and uniqueness of geodesics on path spaces, J. Funct. Anal., to be published.[8] Driver, B., Towards calculus and geometry on path spaces, in Proc. Symp. Pure and Appl. Math. 57 (ed. Cranston, M., Pinsky, M.), Cornell: AMS, 1993, 1995.

  7. (Almost) Featureless Stereo: Calibration and Dense 3D Reconstruction Using Whole Image Operations

    Science.gov (United States)

    Smelyanskiy, V. N.; Morris, R. D.; Maluf, D. A.; Cheeseman, P.

    2001-01-01

    The conventional approach to shape from stereo is via feature extraction and correspondences. This yields estimates of the camera parameters and a typically sparse estimate of the surface. Given a set of calibrated images, a dense surface reconstruction is possible by minimizing the error between the observed image and the image rendered from the estimated surface, with respect to the surface model parameters. Given an uncalibrated image and an estimated surface, the camera parameters can be estimated by minimizing the error between the observed and rendered images as a function of the camera parameters. We use a very small, dense set of matched features to provide camera parameter estimates for the initial dense surface estimate. We then re-estimate the camera parameters as described above, and then re-estimate the surface. This process is iterated. While it cannot be proven to converge, we have found that around three iterations yield excellent surface and camera parameter estimates.
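    The alternating camera/surface re-estimation described above is, in essence, coordinate descent on a joint error function: hold one set of unknowns fixed, minimize over the other, and iterate. A minimal, purely illustrative sketch, with toy scalar unknowns standing in for the camera parameters and the surface model, and a hypothetical quadratic `error` in place of the image-space rendering error:

    ```python
    def error(cam, surf):
        # stand-in for the error between observed and rendered images
        return (cam + surf - 3.0) ** 2 + (2.0 * cam - surf) ** 2

    def refine(cam=0.0, surf=0.0, n_iter=25):
        # alternate the two closed-form single-variable minimisations,
        # mimicking "re-estimate the cameras, then re-estimate the surface"
        for _ in range(n_iter):
            cam = (surf + 3.0) / 5.0    # argmin over cam with surf fixed
            surf = (cam + 3.0) / 2.0    # argmin over surf with cam fixed
        return cam, surf
    ```

    Because the toy error is a positive-definite quadratic, the alternation contracts to the joint minimum (cam, surf) = (1, 2); in the real problem each half-step is itself an iterative image-error minimization, which is why convergence cannot be guaranteed.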

  8. Correction of spatially varying image and video motion blur using a hybrid camera.

    Science.gov (United States)

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.

  9. Iterative reconstruction of volumetric particle distribution

    Science.gov (United States)

    Wieneke, Bernhard

    2013-02-01

    For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), which records images from typically three to four viewing directions. For higher seeding densities with the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART), followed by cross-correlation of sub-volumes to compute instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, like MART, iteratively reconstructs the particle distribution by comparing the recorded images with the projections calculated from the particle distribution in the volume. But as in 3D-PTV, particles are represented by 3D positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory; these may differ for different positions in the volume and for each camera. Using synthetic data, it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy to Tomo-PIV. Finally, the method is validated with experimental data.
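    For contrast with the proposed particle-position approach, the voxel-based MART baseline mentioned above can be sketched in a few lines. This toy version is an assumption, not the paper's code: it reconstructs an n × n intensity grid from its row and column sums (two orthogonal one-pixel-wide "camera" projections with unit weights), applying the classic multiplicative update voxel ← voxel · (measured / projected)^μ ray by ray:

    ```python
    def mart(rows, cols, n_iter=50, mu=1.0):
        # rows: measured row sums, cols: measured column sums of an n x n grid
        n = len(rows)
        v = [[1.0] * n for _ in range(n)]       # uniform initial guess
        for _ in range(n_iter):
            for i in range(n):                  # rays of the first "camera"
                p = sum(v[i])                   # current projection of ray i
                for j in range(n):
                    v[i][j] *= (rows[i] / p) ** mu
            for j in range(n):                  # rays of the second "camera"
                p = sum(v[i][j] for i in range(n))
                for i in range(n):
                    v[i][j] *= (cols[j] / p) ** mu
        return v
    ```

    After a few sweeps the reconstructed grid reproduces both sets of measured ray sums; real Tomo-PIV uses many more rays per camera and per-voxel weights from the optical model.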

  10. Time synchronization of consumer cameras on Micro Aerial Vehicles

    Science.gov (United States)

    Rehak, M.; Skaloud, J.

    2017-01-01

    This article discusses the problem of time registration between navigation and imaging components on Micro Aerial Vehicles (MAVs). Accurate mapping with MAVs is gaining importance in applications such as corridor mapping, road and pipeline inspections, or mapping of large areas with homogeneous surface structure, e.g. forests or agricultural fields. Accurate aerial control therefore plays a major role in efficient reconstruction of the terrain and artifact-free orthophoto generation. A key prerequisite is correct time stamping of images in a global time frame, as the sensor exterior orientation changes rapidly and its determination by navigation sensors influences the mapping accuracy on the ground. The majority of MAVs are equipped with consumer-grade, non-metric cameras for which precise time registration with the navigation components is not trivial to realize and whose performance is not easy to assess. In this paper, we study the synchronization problem by implementing and evaluating spatio-temporal observation models of aerial control to estimate the residual delay of the imaging sensor. Such modeling is possible through the inclusion of additional velocity and angular rate observations in the adjustment, which moves the optimization problem from 3D to 4D. The benefit of this approach is verified on real mapping projects using a custom-built MAV and an off-the-shelf camera.

  11. Development of a stereo camera system for road surface assessment

    Science.gov (United States)

    Su, D.; Nagayama, T.; Irie, M.; Fujino, Y.

    2013-04-01

    In Japan, a large number of road structures built during the period of high economic growth have deteriorated due to heavy traffic and severe conditions, especially in the metropolitan area. In particular, the poor condition of bridge expansion joints, caused by frequent impacts from passing vehicles, significantly influences vehicle safety. In recent years, stereo vision has become a widely researched and implemented monitoring approach in the object recognition field. This paper introduces the development of a stereo camera system for road surface assessment. In this study, static photos taken by a calibrated stereo camera system are first used to reconstruct the three-dimensional coordinates of targets on the pavement. Subsequently, to align the coordinates obtained from different view meshes, a modified Iterative Closest Point method is proposed that provides appropriate initial conditions and uses an image correlation method. Several field tests have been carried out to evaluate the capabilities of this system. After successfully aligning all the measured coordinates, the system can provide not only accurate information on local deficiencies such as patching, cracks or potholes, but also the global fluctuation of the road surface over a long distance.
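    The core Iterative Closest Point loop that the modified method builds on can be sketched as follows. This is a deliberate simplification of the paper's 3D alignment: translation-only, brute-force nearest-neighbour matching in 2D, with made-up point data in place of the reconstructed pavement coordinates:

    ```python
    def icp_translation(src, dst, n_iter=10):
        # Translation-only ICP sketch: at each step, match each (shifted)
        # source point to its nearest destination point, then update the
        # translation by the mean residual of the matches.
        tx, ty = 0.0, 0.0
        for _ in range(n_iter):
            dxs, dys = [], []
            for (x, y) in src:
                px, py = x + tx, y + ty
                nx, ny = min(dst, key=lambda q: (q[0] - px) ** 2 + (q[1] - py) ** 2)
                dxs.append(nx - px)
                dys.append(ny - py)
            tx += sum(dxs) / len(dxs)
            ty += sum(dys) / len(dys)
        return tx, ty
    ```

    ICP of this kind only converges to the correct alignment when the initial guess puts most points near their true matches, which is exactly why the paper supplies appropriate initial conditions via image correlation before iterating.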

  12. Photon-efficient imaging with a single-photon camera

    Science.gov (United States)

    Shin, Dongeek; Xu, Feihu; Venkatraman, Dheera; Lussana, Rudi; Villa, Federica; Zappa, Franco; Goyal, Vivek K.; Wong, Franco N. C.; Shapiro, Jeffrey H.

    2016-06-01

    Reconstructing a scene's 3D structure and reflectivity accurately with an active imaging system operating in low-light-level conditions has wide-ranging applications, spanning biological imaging to remote sensing. Here we propose and experimentally demonstrate a depth and reflectivity imaging system with a single-photon camera that generates high-quality images from ~1 detected signal photon per pixel. Previous achievements of similar photon efficiency have been with conventional raster-scanning data collection using single-pixel photon counters capable of ~10-ps time tagging. In contrast, our camera's detector array requires highly parallelized time-to-digital conversions with photon time-tagging accuracy limited to ~ns. Thus, we develop an array-specific algorithm that converts coarsely time-binned photon detections to highly accurate scene depth and reflectivity by exploiting both the transverse smoothness and longitudinal sparsity of natural scenes. By overcoming the coarse time resolution of the array, our framework uniquely achieves high photon efficiency in a relatively short acquisition time.

  13. Simulating the functionality of a digital camera pipeline

    Science.gov (United States)

    Toadere, Florin

    2013-10-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal, color processing, and rendering. A spectral image processing algorithm is used to simulate the radiometric properties of a digital camera. In the algorithm, we take into consideration the spectral image, the transmittances of the light source, lenses, and filters, and the quantum efficiency of a complementary metal-oxide semiconductor (CMOS) image sensor. The optical part is characterized by a multiple convolution between the point spread functions of the different optical components, such as the Cooke triplet, the aperture, the light fall-off, and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, dynamic range, and analog-to-digital conversion. The reconstruction of the noisy, blurred image is performed by blending images with different light exposures in order to reduce the noise. The image is then filtered, deconvolved, and sharpened to eliminate the noise and blur. Next come the color processing and rendering blocks: interpolation, white balancing, color correction, conversion from the XYZ color space to the LAB color space and then into the RGB color space, color saturation, and contrast.
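    As a concrete example of one rendering-stage block, white balancing under the gray-world assumption can be sketched as follows. This is a common textbook method, not necessarily the algorithm the paper uses: each channel is rescaled so that its mean matches the overall mean brightness, on the assumption that the scene averages out to gray.

    ```python
    def gray_world_wb(pixels):
        # pixels: list of (r, g, b) tuples in linear intensity
        n = len(pixels)
        means = [sum(p[c] for p in pixels) / n for c in range(3)]
        gray = sum(means) / 3.0                 # target per-channel mean
        gains = [gray / m for m in means]       # per-channel correction gains
        return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]
    ```

    After correction the three channel means are equal, so a colour cast introduced by the illuminant's spectrum is removed; scenes dominated by a single strong colour violate the assumption and need more robust estimators.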

  14. Path integrals and quantum processes

    CERN Document Server

    Swanson, Marc S

    1992-01-01

    In a clearly written and systematic presentation, Path Integrals and Quantum Processes covers all concepts necessary to understand the path integral approach to calculating transition elements, partition functions, and source functionals. The book, which assumes only a familiarity with quantum mechanics, is ideal for use as a supplemental textbook in quantum mechanics and quantum field theory courses. Graduate and post-graduate students who are unfamiliar with the path integral will also benefit from this contemporary text. Exercise sets are interspersed throughout the text to facilitate self-study.

  15. Path Integration in Conical Space

    OpenAIRE

    Inomata, Akira; Junker, Georg

    2011-01-01

    Quantum mechanics in conical space is studied by the path integral method. It is shown that the curvature effect gives rise to an effective potential in the radial path integral. It is further shown that the radial path integral in conical space can be reduced to a form identical with that in flat space when the discrete angular momentum of each partial wave is replaced by a specific non-integral angular momentum. The effective potential is found proportional to the squared mean curvature of ...

  16. On the Reaction Path Hamiltonian

    Institute of Scientific and Technical Information of China (English)

    孙家钟; 李泽生

    1994-01-01

    A vector-fiber bundle structure of the reaction path Hamiltonian, introduced by Miller, Handy and Adams, is explored with respect to molecular vibrations orthogonal to the reaction path. The symmetry of the fiber bundle is characterized by the real orthogonal group O(3N−7) for a dynamical system with N atoms. Under the action of the group O(3N−7), the kinetic energy of the reaction path Hamiltonian is left invariant. Furthermore, the invariant behaviour of the Hamiltonian vector fields is investigated.

  17. Reconstruction of the electron diffusion region

    Science.gov (United States)

    Sonnerup, B. U. Ö.; Hasegawa, H.; Denton, R. E.; Nakamura, T. K. M.

    2016-05-01

    We discuss mathematical tools for the reconstruction of two-dimensional, time-independent magnetic field and flow in the electron diffusion region, at a site of antiparallel magnetic reconnection. The basic assumptions are that the ions are stationary and have constant density. The width of the reconnection layer is of the order of the electron gyroradius or the electron inertial length. Our model includes the axial electron pressure term in Ohm's law developed by M. Hesse and coworkers. We demonstrate the feasibility of doing reconstruction of electron magnetohydrodynamic (EMHD) structures for a simplified system with zero electron inertia. The code is benchmarked using an exact solution that has antiparallel unidirectional magnetic fields, plus out-of-plane quadrupolar Hall fields, as well as the expected slow electron inflow and rapid exit jets. The inertialess reconstruction is then applied to synthetic data from a 2-D, particle-in-cell, simulation of antiparallel reconnection. We find that the inertialess reconstruction of its electron diffusion region works reasonably well only when the spacecraft path passes close to the center of the reconnection site where the magnetic field is zero and the electron flow has a stagnation point. When the path is located farther away, the effects of electron inertia, and probably also deviations from the Hesse formula, cause the quality of the reconstruction to deteriorate. Electron inertia is included in the theoretical development presented here but requires a more complicated numerical reconstruction code. The development and testing of such a code is underway and will be presented separately.

  18. Near Real-Time Estimation of Super-Resolved Depth and All-In-Focus Images from a Plenoptic Camera Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    J. P. Lüke

    2010-01-01

    Full Text Available Depth range cameras are a promising solution for the 3DTV production chain. The generation of color images with their accompanying depth value simplifies the transmission bandwidth problem in 3DTV and yields a direct input for autostereoscopic displays. Recent developments in plenoptic video-cameras make it possible to introduce 3D cameras that operate similarly to traditional cameras. The use of plenoptic cameras for 3DTV has some benefits with respect to 3D capture systems based on dual stereo cameras since there is no need for geometric and color calibration or frame synchronization. This paper presents a method for simultaneously recovering depth and all-in-focus images from a plenoptic camera in near real time using graphics processing units (GPUs. Previous methods for 3D reconstruction using plenoptic images suffered from the drawback of low spatial resolution. A method that overcomes this deficiency is developed on parallel hardware to obtain near real-time 3D reconstruction with a final spatial resolution of 800×600 pixels. This resolution is suitable as an input to some autostereoscopic displays currently on the market and shows that real-time 3DTV based on plenoptic video-cameras is technologically feasible.

  19. Automatic camera tracking for remote manipulators

    Energy Technology Data Exchange (ETDEWEB)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators, using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 × 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors on the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables.
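    The geometry behind the PAN/TILT solution, together with the bang-bang deadband logic, can be sketched as follows. The frame conventions here are assumptions: the paper derives the angles via 4 × 4 coordinate transformation matrices rather than directly from a target vector expressed in the camera-mount frame.

    ```python
    import math

    DEADBAND_DEG = 2.0  # the ±2° deadband described in the abstract

    def pan_tilt(target):
        # target: (x, y, z) of the tracked point in the camera-mount frame,
        # with x forward, y left, z up (an assumed convention)
        x, y, z = target
        pan = math.degrees(math.atan2(y, x))                 # azimuth
        tilt = math.degrees(math.atan2(z, math.hypot(x, y))) # elevation
        return pan, tilt

    def command(current_deg, desired_deg):
        # bang-bang control: drive only when the error leaves the deadband,
        # avoiding the continuous small motions that induce "seasickness"
        err = desired_deg - current_deg
        if abs(err) <= DEADBAND_DEG:
            return 0                  # hold
        return 1 if err > 0 else -1   # drive at fixed rate toward target
    ```

    In the actual system the target position comes from the manipulator's joint sensors run through its forward kinematics, and the linear term of the controller scales the drive rate with the error before the on/off stage.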

  20. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is challenging, since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method for distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera that currently observes it. Thus, no central coordination is required, resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated by tracking persons on our campus.