WorldWideScience

Sample records for camera path reconstruction

  1. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems.
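    The temporal total variation smoothing mentioned above can be illustrated with a minimal sketch: a 1-D camera-path signal is smoothed by minimizing a data term plus a TV penalty on frame-to-frame differences. This is an editorial illustration rather than code from the paper; the Huber-smoothed penalty, the plain gradient descent and all parameter values are assumptions.

```python
import numpy as np

def smooth_camera_path(raw_path, lam=5.0, eps=1e-3, iters=2000, lr=0.05):
    """Smooth a 1-D camera-path signal by (approximately) minimizing
    0.5*||p - raw||^2 + lam * sum_t sqrt((p[t+1] - p[t])^2 + eps),
    i.e. a Huber-smoothed temporal total variation penalty."""
    p = raw_path.astype(float).copy()
    for _ in range(iters):
        diff = np.diff(p)                        # p[t+1] - p[t]
        w = diff / np.sqrt(diff ** 2 + eps)      # derivative of smoothed |diff|
        grad_tv = np.zeros_like(p)
        grad_tv[:-1] -= w                        # contribution w.r.t. p[t]
        grad_tv[1:] += w                         # contribution w.r.t. p[t+1]
        p -= lr * ((p - raw_path) + lam * grad_tv)
    return p

# Toy usage: a smooth intended pan corrupted by hand shake.
t = np.arange(200)
raw = 0.5 * t + 3.0 * np.random.randn(200)
stabilizing_warp = smooth_camera_path(raw) - raw   # per-frame correction to apply
```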

  2. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yunsu Bok

    2014-11-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  3. A filtered backprojection reconstruction algorithm for Compton camera

    Energy Technology Data Exchange (ETDEWEB)

    Lojacono, Xavier; Maxim, Voichita; Peyrin, Francoise; Prost, Remy [Lyon Univ., Villeurbanne (France). CNRS, Inserm, INSA-Lyon, CREATIS, UMR5220; Zoglauer, Andreas [California Univ., Berkeley, CA (United States). Space Sciences Lab.

    2011-07-01

    In this paper we present a filtered backprojection reconstruction algorithm for Compton camera detectors of particles. Compared to iterative methods, widely used for the reconstruction of images from Compton camera data, analytical methods are fast, easy to implement and avoid convergence issues. The method we propose is exact for an idealized Compton camera composed of two parallel plates of infinite dimension. We show that it copes well with the low number of detected photons simulated from a realistic device. Images reconstructed from both synthetic data and realistic ones obtained with Monte Carlo simulations demonstrate the efficiency of the algorithm. (orig.)

  4. Fast image reconstruction for Compton camera using stochastic origin ensemble approach.

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2011-01-01

    The Compton camera has been proposed as a potential imaging tool in astronomy, industry, homeland security, and medical diagnostics. Due to the inherent geometrical complexity of Compton camera data, image reconstruction of distributed sources can be ineffective and/or time-consuming when using standard techniques such as filtered backprojection or maximum likelihood-expectation maximization (ML-EM). In this article, the authors demonstrate a fast reconstruction of Compton camera data using a novel stochastic origin ensembles (SOE) approach based on Markov chains. During image reconstruction, the origins of the measured events are randomly assigned to locations on conical surfaces, which are the Compton camera analogs of lines of response in PET. Therefore, the image is defined as an ensemble of origin locations of all possible event origins. During the course of reconstruction, the origins of events are stochastically moved and the acceptance of the new event origin is determined by the predefined acceptance probability, which is proportional to the change in event density. For example, if the event density at the new location is higher than in the previous location, the new position is always accepted. After several iterations, the reconstructed distribution of origins converges to a quasistationary state which can be voxelized and displayed. Comparison with the list-mode ML-EM reveals that the postfiltered SOE algorithm has similar performance in terms of image quality while clearly outperforming ML-EM in relation to reconstruction time. In this study, the authors have implemented and tested a new image reconstruction algorithm for the Compton camera based on the stochastic origin ensembles with Markov chains. The algorithm uses list-mode data, is parallelizable, and can be used for any Compton camera geometry. The SOE algorithm clearly outperforms list-mode ML-EM for a simple Compton camera geometry in terms of reconstruction time. The difference in computational time
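    A highly simplified sketch of the stochastic origin ensembles idea described above, reduced to a 1-D problem: each event keeps one origin drawn from its allowed set (a stand-in for its conical surface), and proposed moves are accepted with a Metropolis-style probability driven by the ratio of event densities, so moves towards denser regions are always accepted. The interval-shaped "cones", bin counts and iteration counts are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_bins, n_iters = 2000, 50, 200

# Hypothetical list-mode data: each event may have originated anywhere in an
# interval of bins (a 1-D stand-in for the conical surface of a Compton event).
true = rng.normal(25, 4, n_events).astype(int).clip(0, n_bins - 1)
allowed = [np.clip(np.arange(t - 8, t + 9), 0, n_bins - 1) for t in true]

# Start each event at a random allowed origin and build the density histogram.
origin = np.array([rng.choice(a) for a in allowed])
density = np.bincount(origin, minlength=n_bins).astype(float)

for _ in range(n_iters):
    for i in range(n_events):
        old, new = origin[i], rng.choice(allowed[i])
        if new == old:
            continue
        # Acceptance driven by the event-density ratio: moves into denser
        # regions are always accepted, others with reduced probability.
        if rng.random() < min(1.0, (density[new] + 1.0) / density[old]):
            density[old] -= 1.0
            density[new] += 1.0
            origin[i] = new

image = density / density.sum()   # quasi-stationary distribution of origins
```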

  5. Direct cone beam SPECT reconstruction with camera tilt

    International Nuclear Information System (INIS)

    Jianying Li; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.; Zongjian Cao; Tsui, B.M.W.

    1993-01-01

    A filtered backprojection (FBP) algorithm is derived to perform cone beam (CB) single-photon emission computed tomography (SPECT) reconstruction with camera tilt using circular orbits. This algorithm reconstructs the tilted angle CB projection data directly by incorporating the tilt angle into it. When the tilt angle becomes zero, this algorithm reduces to that of Feldkamp. Experimentally acquired phantom studies using both a two-point source and the three-dimensional Hoffman brain phantom have been performed. The transaxial tilted cone beam brain images and profiles obtained using the new algorithm are compared with those without camera tilt. For those slices which have approximately the same distance from the detector in both tilt and non-tilt set-ups, the two transaxial reconstructions have similar profiles. The two-point source images reconstructed from this new algorithm and the tilted cone beam brain images are also compared with those reconstructed from the existing tilted cone beam algorithm. (author)

  6. Preparation and tomographic reconstruction of an arbitrary single-photon path qubit

    International Nuclear Information System (INIS)

    Baek, So-Young; Kim, Yoon-Ho

    2011-01-01

    We report methods for preparation and tomographic reconstruction of an arbitrary single-photon path qubit. The arbitrary single-photon path qubit is prepared losslessly by passing the heralded single-photon state from spontaneous parametric down-conversion through a variable beam splitter. Quantum state tomography of the single-photon path qubit is implemented by introducing path-projection measurements based on the first-order single-photon quantum interference. Using the state preparation and path-projection measurement methods for the single-photon path qubit, we demonstrate preparation and complete tomographic reconstruction of the single-photon path qubit with arbitrary purity.

  7. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both camera trajectory and 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots as well as online flight planning of unmanned aerial vehicles.

  8. Light Source Estimation with Analytical Path-tracing

    OpenAIRE

    Kasper, Mike; Keivan, Nima; Sibley, Gabe; Heckman, Christoffer

    2017-01-01

    We present a novel algorithm for light source estimation in scenes reconstructed with an RGB-D camera, based on an analytically-derived formulation of path-tracing. Our algorithm traces the reconstructed scene with a custom path-tracer and computes the analytical derivatives of the light transport equation from principles in optics. These derivatives are then used to perform gradient descent, minimizing the photometric error between one or more captured reference images and renders of our curre...

  9. Image reconstruction from limited angle Compton camera data

    International Nuclear Information System (INIS)

    Tomitani, T.; Hirasawa, M.

    2002-01-01

    The Compton camera is used for imaging the directional distribution of γ rays in a γ-ray telescope for astrophysics and for imaging radioisotope distributions in nuclear medicine without the need for collimators. The integration of γ rays on a cone is measured with the camera, so that some sort of inversion method is needed. Parra found an analytical inversion algorithm based on spherical harmonics expansion of projection data. His algorithm is applicable to the full set of projection data. In this paper, six possible reconstruction algorithms that allow image reconstruction from projections with a finite range of scattering angles are investigated. Four algorithms have instability problems and two others are practical. However, the variance of the reconstructed image diverges in these two cases, so that window functions are introduced with which the variance becomes finite at a cost of spatial resolution. These two algorithms are compared in terms of variance. The algorithm based on the inversion of the summed back-projection is superior to the algorithm based on the inversion of the summed projection. (author)

  10. Filtered backprojection proton CT reconstruction along most likely paths

    Energy Technology Data Exchange (ETDEWEB)

    Rit, Simon; Dedes, George; Freud, Nicolas; Sarrut, David; Letang, Jean Michel [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Universite Lyon 1, Centre Leon Berard, 69008 Lyon (France)

    2013-03-15

    Purpose: Proton CT (pCT) has the potential to accurately measure the electron density map of tissues at low doses but the spatial resolution is prohibitive if the curved paths of protons in matter are not accounted for. The authors propose to account for an estimate of the most likely path of protons in a filtered backprojection (FBP) reconstruction algorithm. Methods: The energy loss of protons is first binned in several proton radiographs at different distances to the proton source to exploit the depth-dependency of the estimate of the most likely path. This process is named the distance-driven binning. A voxel-specific backprojection is then used to select the adequate radiograph in the distance-driven binning in order to propagate in the pCT image the best achievable spatial resolution in proton radiographs. The improvement in spatial resolution is demonstrated using Monte Carlo simulations of resolution phantoms. Results: The spatial resolution in the distance-driven binning depended on the distance of the objects from the source and was optimal in the binned radiograph corresponding to that distance. The spatial resolution in the reconstructed pCT images decreased with the depth in the scanned object but it was always better than previous FBP algorithms assuming straight line paths. In a water cylinder with 20 cm diameter, the observed range of spatial resolutions was 0.7-1.6 mm compared to 1.0-2.4 mm at best with a straight line path assumption. The improvement was strongly enhanced in shorter 200° scans. Conclusions: Improved spatial resolution was obtained in pCT images with filtered backprojection reconstruction using most likely path estimates of protons. The improvement in spatial resolution combined with the practicality of FBP algorithms compared to iterative reconstruction algorithms makes this new algorithm a candidate of choice for clinical pCT.
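    The voxel-specific selection of a binned radiograph can be sketched in a few lines: for each voxel, the backprojection reads from the filtered radiograph whose binning distance is closest to the voxel's own distance from the proton source. This is only a schematic reading of the distance-driven binning described above; the geometry, bin spacing and the voxel-to-pixel mapping are hypothetical.

```python
import numpy as np

# Hypothetical distance-driven binning: filtered proton radiographs stored for
# a few source-to-plane distances (cm), each with n_u detector pixels.
bin_distances = np.array([20.0, 30.0, 40.0, 50.0])
n_u = 128
filtered_radiographs = np.zeros((len(bin_distances), n_u))  # filled elsewhere

def backproject_voxel(x, y, source_pos, pixel_coord):
    """Contribution of one projection to voxel (x, y): pick the radiograph
    whose binning distance best matches the voxel's distance from the source,
    then read the filtered value at the detector coordinate of the voxel."""
    d_voxel = np.hypot(x - source_pos[0], y - source_pos[1])
    k = np.argmin(np.abs(bin_distances - d_voxel))   # voxel-specific bin choice
    u = int(np.clip(pixel_coord(x, y), 0, n_u - 1))
    return filtered_radiographs[k, u]

# Toy call with a trivial (made-up) voxel-to-pixel mapping.
val = backproject_voxel(25.0, 3.0, source_pos=(0.0, 0.0),
                        pixel_coord=lambda x, y: 64 + 2.0 * y)
```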

  11. Improved electromagnetic tracking for catheter path reconstruction with application in high-dose-rate brachytherapy.

    Science.gov (United States)

    Lugez, Elodie; Sadjadi, Hossein; Joshi, Chandra P; Akl, Selim G; Fichtinger, Gabor

    2017-04-01

    Electromagnetic (EM) catheter tracking has recently been introduced in order to enable prompt and uncomplicated reconstruction of catheter paths in various clinical interventions. However, EM tracking is prone to measurement errors which can compromise the outcome of the procedure. Minimizing catheter tracking errors is therefore paramount to improve the path reconstruction accuracy. An extended Kalman filter (EKF) was employed to combine the nonlinear kinematic model of an EM sensor inside the catheter with both its position and orientation measurements. The formulation of the kinematic model was based on the nonholonomic motion constraints of the EM sensor inside the catheter. Experimental verification was carried out in a clinical HDR suite. Ten catheters were inserted with mean curvatures varying from 0 to [Formula: see text] in a phantom. A miniaturized Ascension (Burlington, Vermont, USA) trakSTAR EM sensor (model 55) was threaded within each catheter at various speeds ranging from 7.4 to [Formula: see text]. The nonholonomic EKF was applied to the tracking data in order to statistically improve the EM tracking accuracy. A sample reconstruction error was defined at each point as the Euclidean distance between the estimated EM measurement and its corresponding ground truth. A path reconstruction accuracy was defined as the root mean square of the sample reconstruction errors, while the path reconstruction precision was defined as the standard deviation of these sample reconstruction errors. The impacts of sensor velocity and path curvature on the nonholonomic EKF method were determined. Finally, the nonholonomic EKF catheter path reconstructions were compared with the reconstructions provided by the manufacturer's filters under default settings, namely the AC wide notch and the DC adaptive filter. With a path reconstruction accuracy of 1.9 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (2.4 mm) by 21% and the raw EM
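    The accuracy and precision figures quoted above follow directly from the stated definitions; a short sketch (with made-up sample data) of how they would be computed from estimated and ground-truth catheter points:

```python
import numpy as np

def path_reconstruction_metrics(estimated, ground_truth):
    """Sample errors are Euclidean distances between estimated EM measurements
    and ground truth; accuracy is their root mean square, precision their
    standard deviation, following the definitions given in the abstract."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    accuracy = np.sqrt(np.mean(errors ** 2))    # path reconstruction accuracy
    precision = np.std(errors)                  # path reconstruction precision
    return accuracy, precision

# Hypothetical 3-D samples (mm) along one catheter path.
gt = np.cumsum(np.full((100, 3), 0.5), axis=0)
est = gt + np.random.normal(0.0, 1.2, gt.shape)
print(path_reconstruction_metrics(est, gt))
```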

  12. Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery

    Directory of Open Access Journals (Sweden)

    Marzi Christian

    2017-09-01

    Future fully digital surgical visualization systems enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today's surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow replacing the eyepieces by cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene. Therefore, covered surfaces can result in a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras recording the object from different angles, additional information of the scene is acquired, allowing the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture enhanced 3D topography of a pseudo-surgical scene. This experimental setup would provide images for the reconstruction algorithms and generation of multiple observing stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature. Furthermore, the CMO principle allows a more compact design and a lower calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, which are recorded by one camera each. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, allowing images of six different stereo perspectives to be processed. In order to verify the setup, it is modelled in silico. It can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and provide information about the impact of additionally recorded perspectives on the enhancement of a reconstruction.

  13. Superficial vessel reconstruction with a multiview camera system

    Science.gov (United States)

    Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

    We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide the deformation methods to compensate for the brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three mono-complementary metal-oxide semiconductor cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering where the 3-D points are compared to surfaces obtained using the thin-plate spline with decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation in virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean square error and mean error) are ∼1 mm. PMID:26759814

  14. Noise in off-axis type holograms including reconstruction and CCD camera parameters

    Energy Technology Data Exchange (ETDEWEB)

    Voelkl, Edgar, E-mail: edgar.voelkl@fei.com [FEI Company, 5350 NE Dawson Creek Drive, Hillsboro, OR 97124-5793 (United States)

    2010-02-15

    Phase and amplitude images as contained in digital holograms are commonly extracted via a process called 'reconstruction'. Expressions for the expected noise in these images have been given in the past by several authors; however, the effect of the actual reconstruction process has not been fully appreciated. By starting with the quantum-mechanical intensity distribution of the off-axis type interference pattern, then building the digital hologram on an electron-by-electron basis while simultaneously reconstructing the phase/amplitude images and evaluating their noise levels, an expression is derived that consistently describes the noise in simulated and experimental phase/amplitude images and contains the reconstruction parameters. Because of the necessity to discretize the intensity distribution function, the digitization effects of an ideal CCD camera had to be included. Subsequently, this allowed a comparison between real and simulated holograms which then led to a comparison between the performance of an 'ideal' CCD camera versus a real device. It was concluded that significant improvement of the phase and amplitude noise may be obtained if CCD cameras were optimized for digitizing intensity distributions at low sampling rates.

  15. Optimization and verification of image reconstruction for a Compton camera towards application as an on-line monitor for particle therapy

    Science.gov (United States)

    Taya, T.; Kataoka, J.; Kishimoto, A.; Tagawa, L.; Mochizuki, S.; Toshito, T.; Kimura, M.; Nagao, Y.; Kurita, K.; Yamaguchi, M.; Kawachi, N.

    2017-07-01

    Particle therapy is an advanced cancer therapy that uses a feature known as the Bragg peak, in which particle beams suddenly lose their energy near the end of their range. The Bragg peak enables particle beams to damage tumors effectively. To achieve precise therapy, the demand for accurate and quantitative imaging of the beam irradiation region or dosage during therapy has increased. The most common method of particle range verification is imaging of annihilation gamma rays by positron emission tomography. Not only 511-keV gamma rays but also prompt gamma rays are generated during therapy; therefore, the Compton camera is expected to be used as an on-line monitor for particle therapy, as it can image these gamma rays in real time. Proton therapy, one of the most common particle therapies, uses a proton beam of approximately 200 MeV, which has a range of ~25 cm in water. As gamma rays are emitted along the path of the proton beam, quantitative evaluation of the reconstructed images of diffuse sources becomes crucial, but it is far from being fully developed for Compton camera imaging at present. In this study, we first quantitatively evaluated reconstructed Compton camera images of uniformly distributed diffuse sources, and then confirmed that our Compton camera obtained 3% (1σ) and 5% (1σ) uniformity for line and plane sources, respectively. Based on this quantitative study, we demonstrated on-line gamma imaging during proton irradiation. Through these studies, we show that the Compton camera is suitable for future use as an on-line monitor for particle therapy.

  16. Image-based path planning for automated virtual colonoscopy navigation

    Science.gov (United States)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening, by reconstructing three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, some time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. Camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning algorithm and rendering algorithm, our VC fly-through navigation system can still guarantee 30 FPS.
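    A toy sketch of the depth-image-driven camera update: render (or here, fake) a wide-angle depth image from the current position, treat the deepest region as the safe region, and step the virtual camera towards it. The landmark extraction in the paper is richer (separate safe and target regions); the depth render, ray directions and step size below are assumptions.

```python
import numpy as np

def next_camera_step(depth_image, ray_dirs, step_scale=0.3):
    """Move the virtual camera towards the deepest (safest) region of the
    lumen seen in the current depth render; returns the translation to apply
    and the new viewing direction."""
    idx = np.unravel_index(np.argmax(depth_image), depth_image.shape)
    view_dir = ray_dirs[idx] / np.linalg.norm(ray_dirs[idx])
    return step_scale * depth_image[idx] * view_dir, view_dir

# Hypothetical 64x64 fisheye depth render with per-pixel ray directions.
depth = np.random.rand(64, 64) * 50.0          # mm
rays = np.random.randn(64, 64, 3)
translation, view = next_camera_step(depth, rays)
```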

  17. A direct-view customer-oriented digital holographic camera

    Science.gov (United States)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  18. Image reconstruction methods for the PBX-M pinhole camera

    International Nuclear Information System (INIS)

    Holland, A.; Powell, E.T.; Fonck, R.J.

    1990-03-01

    This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs
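    The first (least-squares) method amounts to solving a linear system relating the discretised emission profile to the recorded pinhole image. Below is a minimal sketch with a made-up projection matrix; the maximum-entropy variant described above would add positivity and a default profile, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_prof = 200, 50
A = np.abs(rng.normal(size=(n_pix, n_prof)))        # stand-in projection operator
g_true = np.exp(-np.linspace(-2, 2, n_prof) ** 2)   # peaked emission profile
d = A @ g_true + rng.normal(0.0, 0.05, n_pix)       # noisy pinhole image data

# Simple least-squares fit of the profile to the data; fast and small, but
# with no positivity constraint or default profile, unlike maximum entropy.
g_ls, *_ = np.linalg.lstsq(A, d, rcond=None)
print(np.abs(g_ls - g_true).max())
```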

  19. Open-path, closed-path and reconstructed aerosol extinction at a rural site.

    Science.gov (United States)

    Gordon, Timothy D; Prenni, Anthony J; Renfro, James R; McClure, Ethan; Hicks, Bill; Onasch, Timothy B; Freedman, Andrew; McMeeking, Gavin R; Chen, Ping

    2018-04-09

    The Handix Scientific Open-Path Cavity Ringdown Spectrometer (OPCRDS) was deployed during summer 2016 in Great Smoky Mountains National Park (GRSM). Extinction coefficients from the relatively new OPCRDS and from a more well-established extinction instrument agreed to within 7%. Aerosol hygroscopic growth (f(RH)) was calculated from the ratio of ambient extinction measured by the OPCRDS to dry extinction measured by a closed-path extinction monitor (Aerodyne's Cavity Attenuated Phase Shift Particulate Matter Extinction Monitor, CAPS PMex). Derived hygroscopicity (RH 1995 at the same site and time of year, which is noteworthy given the decreasing trend for organics and sulfate in the eastern U.S. However, maximum f(RH) values in 1995 were less than half as large as those recorded in 2016-possibly due to nephelometer truncation losses in 1995. Two hygroscopicity parameterizations were investigated using high time resolution OPCRDS+CAPS PMex data, and the K ext model was more accurate than the γ model. Data from the two ambient optical instruments, the OPCRDS and the open-path nephelometer, generally agreed; however, significant discrepancies between ambient scattering and extinction were observed, apparently driven by a combination of hygroscopic growth effects, which tend to increase nephelometer truncation losses and decrease sensitivity to the wavelength difference between the two instruments as a function of particle size. There was not a statistically significant difference in the mean reconstructed extinction values obtained from the original and the revised IMPROVE (Interagency Monitoring of Protected Visual Environments) equations. On average IMPROVE reconstructed extinction was ~25% lower than extinction measured by the OPCRDS, which suggests that the IMPROVE equations and 24-hr aerosol data are moderately successful in estimating current haze levels at GRSM. However, this conclusion is limited by the coarse temporal resolution and the low dynamic range of
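    The hygroscopic growth factor f(RH) defined above is simply the ratio of ambient (open-path) to dry (closed-path) extinction, to which a single-parameter growth law can then be fitted. The sketch below uses made-up values, and the gamma form shown is a standard choice rather than necessarily the exact parameterisation used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def f_rh(ext_ambient, ext_dry):
    """Hygroscopic growth factor: ambient OPCRDS extinction divided by dry
    CAPS PMex extinction."""
    return ext_ambient / ext_dry

def gamma_model(rh_percent, gamma):
    """A common single-parameter growth parameterisation (assumed form)."""
    return (1.0 - rh_percent / 100.0) ** (-gamma)

# Made-up hourly values: extinction (Mm^-1) and relative humidity (%).
ext_ambient = np.array([63.0, 69.0, 81.0, 96.0, 123.0])
ext_dry = np.full(5, 60.0)
rh = np.array([40.0, 55.0, 70.0, 80.0, 88.0])

frh_obs = f_rh(ext_ambient, ext_dry)
gamma_fit, _ = curve_fit(gamma_model, rh, frh_obs, p0=[0.3])
print(gamma_fit)
```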

  20. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have achieved a large consensus for recreational purposes due to ongoing cost decrease, image resolution and frame rate increase, along with plug-and-play usability. Consequently, they have been recently considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of having a split volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging of simultaneous measurements acquired in both volumes. Characterizing and optimizing the instrumental errors of such a configuration makes the assessment of the instrumental errors of both volumes mandatory. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests were focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (deviation of the reconstructed distance between the two testing markers from the true distance) was less than 3 mm and the error related to the working volume diagonal was in the range of 1:2000 (3 × 1.3 × 1.5 m³) to 1:7000 (4.5 × 2.2 × 1.5 m³) in agreement with the

  1. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    Science.gov (United States)

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
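    For reference, the object-space reconstruction step of the standard 11-parameter DLT discussed above can be written as a small linear least-squares problem; refraction violates this linear model, which is why the residuals grow underwater. The sketch below is a generic textbook formulation, not the authors' localized-DLT or double-plane variants.

```python
import numpy as np

def dlt_reconstruct(dlt_params, image_points):
    """Object-space point from >= 2 calibrated cameras with the standard
    11-parameter DLT. dlt_params: one length-11 array L per camera;
    image_points: the matching (u, v) observation from each camera.
    Each camera contributes two linear equations in (X, Y, Z)."""
    rows, rhs = [], []
    for L, (u, v) in zip(dlt_params, image_points):
        rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        rhs.extend([u - L[3], v - L[7]])
    xyz, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return xyz
```

    Because underwater rays are bent at the interface, the linear DLT mapping only approximates the true imaging geometry; restricting calibration to the neighbourhood of the reconstructed point (the localized DLT mentioned above) is one way to reduce the resulting error.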

  2. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras

    Science.gov (United States)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction with a metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest for archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent use by users after delivery through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, digital low cost (off-the-shelf) cameras. Results of tests made on submerged objects with three cameras are presented: Canon Power Shot G12, Intova Sport HD and GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft Photoscan. Subsequently, 3D models were created and a comparison of the models derived from the use of the different cameras was performed. The different potentialities of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with sea bottom floor morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering an automatic and a semi-automatic approach.

  3. Iterative reconstruction of SiPM light response functions in a square-shaped compact gamma camera

    Science.gov (United States)

    Morozov, A.; Alves, F.; Marcos, J.; Martins, R.; Pereira, L.; Solovov, V.; Chepel, V.

    2017-05-01

    Compact gamma cameras with a square-shaped monolithic scintillator crystal and an array of silicon photomultipliers (SiPMs) are actively being developed for applications in areas such as small animal imaging, cancer diagnostics and radiotracer guided surgery. Statistical methods of position reconstruction, which are potentially superior to the traditional centroid method, require accurate knowledge of the spatial response of each photomultiplier. Using both Monte Carlo simulations and experimental data obtained with a camera prototype, we show that the spatial response of all photomultipliers (light response functions) can be parameterized with axially symmetric functions obtained iteratively from flood field irradiation data. The study was performed with a camera prototype equipped with a 30 × 30 × 2 mm³ LYSO crystal and an 8 × 8 array of SiPMs for 140 keV gamma rays. The simulations demonstrate that the images, reconstructed with the maximum likelihood method using the response obtained with the iterative approach, exhibit only minor distortions: the average difference between the reconstructed and the true positions in X and Y directions does not exceed 0.2 mm in the central area of 22 × 22 mm² and 0.4 mm at the periphery of the camera. A similar level of image distortions is shown experimentally with the camera prototype.
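    Once the light response functions (LRFs) are known, statistical position reconstruction reduces to finding the scintillation position whose predicted SiPM signals best explain the measured ones. The sketch below uses a made-up axially symmetric LRF, a Poisson log-likelihood and a brute-force grid search; the actual prototype calibration and search strategy will differ.

```python
import numpy as np

def lrf(r, a=1000.0, r0=6.0):
    """Made-up axially symmetric light response: expected SiPM signal versus
    distance r (mm) between the scintillation point and the SiPM centre."""
    return a / (1.0 + (r / r0) ** 2)

# 8 x 8 SiPM array covering a 30 x 30 mm^2 crystal (centre coordinates in mm).
sipm_xy = np.array([[x, y] for x in np.linspace(-13.1, 13.1, 8)
                           for y in np.linspace(-13.1, 13.1, 8)])

def reconstruct_xy(signals, step=0.25, half=15.0):
    """Maximum-likelihood grid search over candidate (x, y) positions using a
    Poisson model: maximise sum_i [s_i * log(mu_i) - mu_i]."""
    best, best_ll = None, -np.inf
    for x in np.arange(-half, half + step, step):
        for y in np.arange(-half, half + step, step):
            mu = lrf(np.linalg.norm(sipm_xy - [x, y], axis=1))
            ll = np.sum(signals * np.log(mu) - mu)
            if ll > best_ll:
                best, best_ll = (x, y), ll
    return best

# Toy event generated at (3, -5) mm and reconstructed back.
mu_true = lrf(np.linalg.norm(sipm_xy - [3.0, -5.0], axis=1))
print(reconstruct_xy(np.random.poisson(mu_true)))
```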

  4. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with ¹⁸F-fluoride suggest that 3D OSEM can improve image quality of a small animal PET camera.

  5. Reconstruction of data for an experiment using multi-gap spark chambers with six-camera optics

    International Nuclear Information System (INIS)

    Maybury, R.; Daley, H.M.

    1983-06-01

    A program has been developed to reconstruct spark positions in a pair of multi-gap optical spark chambers viewed by six cameras, which were used by a Rutherford Laboratory experiment. The procedure for correlating camera views to calculate spark positions is described. Calibration of the apparatus, and the application of time- and intensity-dependent corrections are discussed. (author)

  6. 3D RECONSTRUCTION OF AN UNDERWATER ARCHAEOLOGICAL SITE: COMPARISON BETWEEN LOW COST CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Capra

    2015-04-01

    The 3D reconstruction with a metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest for archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent use by users after delivery through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, digital low cost (off-the-shelf) cameras. Results of tests made on submerged objects with three cameras are presented: © Canon Power Shot G12, © Intova Sport HD and © GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with © Agisoft Photoscan. Subsequently, 3D models were created and a comparison of the models derived from the use of the different cameras was performed. The different potentialities of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with sea bottom floor morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering an automatic and a semi-automatic approach.

  7. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    Directory of Open Access Journals (Sweden)

    Yufu Qu

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.
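    One plausible reading of the "three principal component points" step is sketched below: the 2-D feature-point cloud of an image is summarised by its centroid and the two endpoints of its first principal axis, and the displacement of these three points between images gives a cheap measure of how related two images are. The exact construction in the paper may differ; everything here is an assumed simplification.

```python
import numpy as np

def principal_points(kpts_xy):
    """Summarise a set of 2-D feature-point coordinates by three points:
    the centroid and the two endpoints of the first principal axis."""
    mean = kpts_xy.mean(axis=0)
    centred = kpts_xy - mean
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    axis = vt[0] * (s[0] / np.sqrt(len(kpts_xy)))   # first axis scaled by std
    return np.vstack([mean, mean + axis, mean - axis])

def image_relation(kpts_a, kpts_b):
    """Cheap inter-image relationship measure: mean displacement of the three
    summary points, usable to decide whether an image is a new key image."""
    return np.linalg.norm(principal_points(kpts_a) - principal_points(kpts_b),
                          axis=1).mean()

# Toy usage: the second image simulates a 40-pixel horizontal camera shift.
a = np.random.rand(500, 2) * 1000.0
print(image_relation(a, a + np.array([40.0, 0.0])))
```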

  8. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    Science.gov (United States)

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  9. A Compton camera application for the GAMOS GEANT4-based framework

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J., E-mail: ljh@ns.ph.liv.ac.uk [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Arce, P. [Department of Basic Research, CIEMAT, Madrid (Spain); Judson, D.S.; Boston, A.J.; Boston, H.C.; Cresswell, J.R.; Dormand, J.; Jones, M.; Nolan, P.J.; Sampson, J.A.; Scraggs, D.P.; Sweeney, A. [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Lazarus, I.; Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom)

    2012-04-11

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.

  10. Reconstruction in PET cameras with irregular sampling and depth of interaction capability

    International Nuclear Information System (INIS)

    Virador, P.R.G.; Moses, W.W.; Huesman, R.H.

    1998-01-01

    The authors present 2D reconstruction algorithms for a rectangular PET camera capable of measuring depth of interaction (DOI). The camera geometry leads to irregular radial and angular sampling of the tomographic data. DOI information increases sampling density, allowing the use of evenly spaced quarter-crystal width radial bins with minimal interpolation of irregularly spaced data. In the regions where DOI does not increase sampling density (chords normal to crystal faces), fine radial sinogram binning leads to zero efficiency bins if uniform angular binning is used. These zero efficiency sinogram bins lead to streak artifacts if not corrected. To minimize these unnormalizable sinogram bins the authors use two angular binning schemes: Fixed Width and Natural Width. Fixed Width uses a fixed angular width except in the problem regions where appropriately chosen widths are applied. Natural Width uses angle widths which are derived from intrinsic detector sampling. Using a modified filtered-backprojection algorithm to accommodate these angular binning schemes, the authors reconstruct artifact free images with nearly isotropic and position independent spatial resolution. Results from Monte Carlo data indicate that they have nearly eliminated image degradation due to crystal penetration

  11. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building the cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase the construction safety, photogrammetric techniques are adopted to record images during the simulation, compute its transformation parameters for dynamic analysis and reconstruct the 4D animations. In this study, several Australis© coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation shows flexibility for dynamic motion analysis, which is easier and more efficient.
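    The first approach mentioned, a 3D conformal (seven-parameter similarity) transformation between two coordinate sets, can be estimated in closed form with an SVD. The sketch below is the standard Kabsch/Umeyama construction with a toy check, given as generic textbook material rather than the authors' exact implementation.

```python
import numpy as np

def conformal_3d(src, dst):
    """Estimate scale s, rotation R and translation t so that
    dst ≈ s * R @ src + t, from matched 3-D point sets."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check against a known transformation (1:20 scale, 90 degree yaw, offset).
rng = np.random.default_rng(3)
src = rng.random((20, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = 0.05 * src @ Rz.T + np.array([1.0, 2.0, 3.0])
s, R, t = conformal_3d(src, dst)
print(round(s, 4), np.allclose(R, Rz), np.round(t, 3))
```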

  12. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    Science.gov (United States)

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
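    The linear blend skinning step referred to above combines per-joint rigid transforms with per-vertex weights; a compact sketch of the deformation equation v' = Σ_j w_ij (R_j v_i + t_j), with made-up joints and weights:

```python
import numpy as np

def linear_blend_skinning(rest_vertices, weights, transforms):
    """Deform rest-pose vertices with linear blend skinning: each output
    vertex is the weight-blended result of every joint's 4x4 transform
    applied to the rest-pose vertex."""
    hom = np.hstack([rest_vertices, np.ones((len(rest_vertices), 1))])
    per_joint = np.einsum('jab,vb->jva', transforms, hom)[..., :3]
    return np.einsum('vj,jva->va', weights, per_joint)

# Toy example: 3 vertices skinned to 2 joints; joint 1 is lifted by one unit.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])   # rows sum to one
T0, T1 = np.eye(4), np.eye(4)
T1[:3, 3] = [0.0, 1.0, 0.0]
print(linear_blend_skinning(rest, w, np.stack([T0, T1])))
```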

  13. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost and the ability to view an environment the way it is in reality. This paper investigates the ability of stereo CCD cameras to reconstruct and present 3D environments and to perform geometric measurements within them. For this purpose, a rotating stereo panorama was established using two CCDs with a base-length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test-field and used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have appropriate geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 meters distance from the camera) can be achieved.

  14. Contribution to the reconstruction of scenes made of cylindrical and polyhedral objects from sequences of images obtained by a moving camera

    International Nuclear Information System (INIS)

    Viala, Marc

    1992-01-01

    Environment perception is an important process which enables a robot to perform actions in an unknown scene. Although many sensors exist to 'give sight', the camera seems to play a leading part. This thesis deals with the reconstruction of scenes made of cylindrical and polyhedral objects from sequences of images provided by a moving camera. Two methods are presented. Both are based on the evolution of apparent contours of objects in a sequence. The first approach has been developed considering that camera motion is known. Despite the good results obtained by this method, the specific conditions it requires make its use limited. In order to avoid an accurate evaluation of camera motion, we introduce another method that allows the object parameters and the camera positions to be estimated at the same time. In this approach, only a 'poor' knowledge of camera displacements is needed, supplied by the control system of the robotic platform in which the camera is embedded. An optimal integration of a priori information, as well as the dynamic feature of the state model to estimate, leads us to use the Kalman filter. Experiments conducted with synthetic and real images proved the reliability of these methods. A camera calibration set-up is also suggested to achieve the most accurate scene models resulting from the reconstruction processes. (author) [fr

  15. ITEM-QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    CERN Document Server

    Pauli, Josef; Anton, G

    2002-01-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation maximization problems based on quantum mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems like CT, PET, SPECT, NMR, Compton camera and tomosynthesis, as well as any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton camera.

  16. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  17. Recursive 3D-reconstruction of structured scenes using a moving camera - application to robotics

    International Nuclear Information System (INIS)

    Boukarri, Bachir

    1989-01-01

    This thesis is devoted to the perception of a structured environment, and proposes a new method which allows a 3D reconstruction of an interesting part of the world using a mobile camera. Our work is divided into three essential parts dedicated to the 2D-information aspect, the 3D-information aspect, and a validation of the method. In the first part, we present a method which produces a topologic and geometric image representation based on 'segment' and 'junction' features. Then, a 2D-matching method based on a hypothesis prediction and verification algorithm is proposed to match features extracted from two successive images. The second part deals with 3D reconstruction using a triangulation technique, and discusses our new method introducing an 'Estimation-Construction-Fusion' process. This ensures a complete and accurate 3D representation, and a permanent position estimation of the camera with respect to the model. The merging process allows refinement of the 3D representation using a powerful tool: a Kalman filter. In the last part, experimental results obtained from simulated and real images are reported to show the efficiency of the method. (author) [fr

  18. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  19. The image camera of the 17 m diameter air Cherenkov telescope MAGIC

    CERN Document Server

    Ostankov, A P

    2001-01-01

    The image camera of the 17 m diameter MAGIC telescope, an air Cherenkov telescope currently under construction to be installed at the Canary island La Palma, is described. The main goal of the experiment is to cover the unexplored energy window from approx 10 to approx 300 GeV in gamma-ray astrophysics. In its first phase with a classical PMT camera the MAGIC telescope is expected to reach an energy threshold of approx 30 GeV. The operational conditions, the special characteristics of the developed PMTs and their use with light concentrators, the fast signal transfer scheme using analog optical links, the trigger and DAQ organization as well as image reconstruction strategy are described. The different paths being explored towards future camera improvements, in particular the constraints in using silicon avalanche photodiodes and GaAsP hybrid photodetectors in air Cherenkov telescopes are discussed.

  20. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    Science.gov (United States)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 and 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.

  1. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    Directory of Open Access Journals (Sweden)

    K. Thoeni

    2014-06-01

    Full Text Available This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 and 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
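    Both records describe comparing each camera-derived point cloud against the TLS cloud taken as ground truth. A minimal cloud-to-cloud deviation sketch, assuming both clouds are already georeferenced in the same coordinate frame (the library choice and array names are assumptions, not the authors' CloudCompare workflow):

        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_to_cloud_deviation(reference_xyz, test_xyz):
            """Nearest-neighbour distance from every test point to the reference cloud.
            reference_xyz : (N, 3) TLS point cloud taken as ground truth
            test_xyz      : (M, 3) multi-view photogrammetric point cloud
            Returns per-point deviations, usable to colour the test cloud.
            """
            tree = cKDTree(reference_xyz)
            distances, _ = tree.query(test_xyz, k=1)
            return distances

        # Example summary statistics for an accuracy assessment:
        # deviations = cloud_to_cloud_deviation(tls_points, camera_points)
        # print(deviations.mean(), np.sqrt((deviations ** 2).mean()))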

  2. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
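    The record does not reproduce the update equation. As a generic illustration of a list-mode EM-style update (a sketch of the family LM-OSEM belongs to, not the VIP implementation), assuming a precomputed sparse system-matrix row per detected event:

        import numpy as np

        def lm_mlem_iteration(image, event_rows, sensitivity):
            """One list-mode MLEM update (generic sketch).
            image       : (V,) current voxel-intensity estimate
            event_rows  : list of (indices, values) sparse system-matrix rows,
                          one per detected Compton event
            sensitivity : (V,) voxel sensitivity image
            """
            correction = np.zeros_like(image)
            for idx, val in event_rows:
                forward = np.dot(val, image[idx])      # expected contribution of this event
                if forward > 0:
                    correction[idx] += val / forward   # back-project the ratio
            return image * correction / np.maximum(sensitivity, 1e-12)

    An ordered-subset (OSEM) variant would apply this update repeatedly over subsets of the event list instead of the full list at once.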

  3. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super resolution (SR) for single-image and video reconstruction, a super resolution camera model is proposed to address the problem that the resolution of images obtained by traditional cameras is comparatively low. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of continuous low resolution (LR) images can be obtained and stored instantaneously, reflecting both the randomness of the displacements and the real-time nature of the storage. The low resolution image sequences carry different redundant information and some particular prior information, thus it is possible to restore a super resolution image faithfully and effectively. The sampling method is used to derive the super resolution reconstruction principle and to analyze, in theory, the achievable improvement in resolution. A learning-based super resolution algorithm is used to reconstruct a single image, and a variational Bayesian algorithm is simulated to reconstruct the low resolution images with random displacements, modelling the unknown high resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining higher-resolution images at currently available hardware levels.

  4. Path integration guided with a quality map for shape reconstruction in the fringe reflection technique

    Science.gov (United States)

    Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu

    2018-04-01

    A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of the gradient data locally, and it functions as a guideline for the integration path. The presented method can be employed in wavefront estimation from its slopes over a generally shaped surface, with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by irregular shapes of the surface under test. The performance of QMPI is discussed through simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be easily implemented with no iteration, compared to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision requirements.
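    The abstract does not give the algorithmic details of QMPI. The following is a generic quality-guided integration sketch, in which slopes are integrated outward from a seed pixel, always growing the reconstructed region through the highest-quality candidate pixel first (an illustration of the general idea, not the published QMPI method):

        import heapq
        import numpy as np

        def quality_guided_integration(sx, sy, quality, seed=(0, 0)):
            """Integrate slope fields sx (d/dx) and sy (d/dy) into a height map,
            visiting pixels in order of decreasing quality. Unit pixel spacing assumed.
            """
            h, w = quality.shape
            height = np.zeros((h, w))
            done = np.zeros((h, w), bool)
            done[seed] = True
            heap = []

            def push_neighbours(i, j):
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and not done[ni, nj]:
                        heapq.heappush(heap, (-quality[ni, nj], (ni, nj), (i, j)))

            push_neighbours(*seed)
            while heap:
                _, (i, j), (pi, pj) = heapq.heappop(heap)
                if done[i, j]:
                    continue
                # Step the height from the already-integrated neighbour (pi, pj)
                if i != pi:   # vertical step
                    step = (i - pi) * 0.5 * (sy[i, j] + sy[pi, pj])
                else:         # horizontal step
                    step = (j - pj) * 0.5 * (sx[i, j] + sx[pi, pj])
                height[i, j] = height[pi, pj] + step
                done[i, j] = True
                push_neighbours(i, j)
            return height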

  5. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parametric equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parametric equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method has simple operation and good flexibility, especially for onsite multiple cameras without a common field of view.

  6. Reconstructing the landing trajectory of the CE-3 lunar probe by using images from the landing camera

    International Nuclear Information System (INIS)

    Liu Jian-Jun; Yan Wei; Li Chun-Lai; Tan Xu; Ren Xin; Mu Ling-Li

    2014-01-01

    An accurate determination of the landing trajectory of Chang'e-3 (CE-3) is significant for verifying orbital control strategy, optimizing orbital planning, accurately determining the landing site of CE-3 and analyzing the geological background of the landing site. Due to complexities involved in the landing process, there are some differences between the planned trajectory and the actual trajectory of CE-3. The landing camera on CE-3 recorded a sequence of the landing process with a frequency of 10 frames per second. These images recorded by the landing camera and high-resolution images of the lunar surface are utilized to calculate the position of the probe, so as to reconstruct its precise trajectory. This paper proposes using the method of trajectory reconstruction by Single Image Space Resection to make a detailed study of the hovering stage at a height of 100 m above the lunar surface. Analysis of the data shows that the closer CE-3 came to the lunar surface, the higher the spatial resolution of the acquired images became, and the more accurately the horizontal and vertical position of CE-3 could be determined. The horizontal and vertical accuracies were 7.09 m and 4.27 m, respectively, during the hovering stage at a height of 100.02 m. The reconstructed trajectory can reflect the change in CE-3's position during the powered descent process. A slight movement of CE-3 during the hovering stage is also clearly demonstrated. These results will provide a basis for analysis of the orbit control strategy, and will be conducive to adjustment and optimization of the orbit control strategy in follow-up missions.

  7. Summer Student Project Report. Parallelization of the path reconstruction algorithm for the inner detector of the ATLAS experiment.

    CERN Document Server

    Maldonado Puente, Bryan Patricio

    2014-01-01

    The inner detector of the ATLAS experiment has two types of silicon detectors used for tracking: the Pixel Detector and the SCT (semiconductor tracker). Once a proton-proton collision occurs, the resulting particles pass through these detectors and are recorded as hits on the detector surfaces. A medium to high energy particle passes through seven different surfaces of the two detectors, leaving seven hits, while lower energy particles can leave many more hits as they circle through the detector. For a typical event under the expected operational conditions, there are 30,000 hits on average recorded by the sensors. Only high energy particles are of interest for physics analysis and are taken into account for the path reconstruction; thus, a filtering process helps to discard the low energy particles produced in the collision. The following report presents a solution for increasing the speed of the filtering process in the path reconstruction algorithm.
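    The report itself does not include code. As a generic illustration of parallelizing a per-candidate selection step (the threshold, data layout and use of Python's multiprocessing are assumptions for illustration, not the ATLAS implementation):

        from multiprocessing import Pool

        # Hypothetical selection: keep only track candidates above a momentum threshold,
        # as a stand-in for the real filtering criteria.
        PT_THRESHOLD_GEV = 1.0

        def keep_track(track):
            return track["pt"] >= PT_THRESHOLD_GEV

        def filter_tracks_parallel(tracks, workers=4):
            """Apply the selection to every candidate in parallel."""
            with Pool(workers) as pool:
                keep_flags = pool.map(keep_track, tracks)
            return [t for t, keep in zip(tracks, keep_flags) if keep]

        # Example:
        # print(filter_tracks_parallel([{"pt": 0.4}, {"pt": 2.5}, {"pt": 7.1}]))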

  8. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    Science.gov (United States)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.

  9. Reconstruction of recycling flux from synthetic camera images, evaluated for the Wendelstein 7-X startup limiter

    Science.gov (United States)

    Frerichs, H.; Effenberg, F.; Feng, Y.; Schmitz, O.; Stephey, L.; Reiter, D.; Börner, P.; The W7-X Team

    2017-12-01

    The interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be guided by modeling with the EMC3-EIRENE code. A versatile synthetic diagnostic module, initially developed for the generation of synthetic camera images, has been extended for the evaluation of the inverse problem in which the observable photon flux is related back to the originating particle flux (recycling). An application of this synthetic diagnostic to the startup phase (inboard) limiter in Wendelstein 7-X (W7-X) is presented, and reconstruction of the recycling flux from synthetic camera observations is evaluated.

  10. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  11. A multi-criteria approach to camera motion design for volume data animation.

    Science.gov (United States)

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  12. Holographic interferometry using a digital photo-camera

    International Nuclear Information System (INIS)

    Sekanina, H.; Hledik, S.

    2001-01-01

    The possibilities of running digital holographic interferometry using commonly available compact digital zoom photo-cameras are studied. A recently developed holographic setup, suitable especially for digital photo-cameras equipped with a non-detachable objective lens, is used. The method described enables a simple and straightforward way of both recording and reconstructing digital holographic interferograms. The feasibility of the new method is verified by digital reconstruction of the acquired interferograms, using a numerical code based on the fast Fourier transform. Experimental results obtained are presented and discussed. (authors)
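    The record mentions only that reconstruction uses an FFT-based numerical code. As a generic illustration, here is a single-FFT Fresnel reconstruction sketch (a common approach in digital holography; the parameter names and the assumption of a uniformly sampled sensor grid are mine, not details from the paper):

        import numpy as np

        def fresnel_reconstruct(hologram, wavelength, pixel_pitch, distance):
            """Single-FFT Fresnel reconstruction of a digital hologram.
            hologram : (ny, nx) recorded intensity pattern
            wavelength, pixel_pitch, distance : metres
            Returns the reconstructed intensity in the image plane.
            """
            ny, nx = hologram.shape
            k = 2 * np.pi / wavelength
            y = (np.arange(ny) - ny / 2) * pixel_pitch
            x = (np.arange(nx) - nx / 2) * pixel_pitch
            X, Y = np.meshgrid(x, y)
            # Quadratic phase factor of the Fresnel approximation
            chirp = np.exp(1j * k / (2 * distance) * (X ** 2 + Y ** 2))
            field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
            return np.abs(field)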

  13. Structured light optical microscopy for three-dimensional reconstruction of technical surfaces

    Science.gov (United States)

    Kettel, Johannes; Reinecke, Holger; Müller, Claas

    2016-04-01

    In microsystems technology, quality control of micro-structured surfaces with different surface properties is playing an ever more important role. The process of quality control incorporates three-dimensional (3D) reconstruction of specularly and diffusely reflecting technical surfaces. Due to the demand for high measurement accuracy and data acquisition rates, structured light optical microscopy has become a valuable solution to this problem, providing high vertical and lateral resolution. However, 3D reconstruction of specularly reflecting technical surfaces still remains a challenge for optical measurement principles. In this paper we present a measurement principle based on structured light optical microscopy which enables 3D reconstruction of specularly and diffusely reflecting technical surfaces. It is realized using the two light paths of a stereo microscope equipped with different magnification levels. The right optical path of the stereo microscope is used to project structured light onto the object surface. The left optical path is used to capture the structured illuminated object surface with a camera. Structured light patterns are generated by a Digital Light Processing (DLP) device in combination with a high power Light Emitting Diode (LED). The structured light patterns are realized as a matrix of discrete light spots to illuminate defined areas on the object surface. The introduced measurement principle is based on multiple, parallel-processed point measurements. Analysis of the measured Point Spread Function (PSF) by pattern recognition and model fitting algorithms enables the precise calculation of 3D coordinates. Using exemplary technical surfaces we demonstrate the successful application of our measurement principle.

  14. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods

  15. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A; Fraga, F A F; Fraga, M M F R; Margato, L M S; Pereira, L [LIP-Coimbra and Departamento de Física, Universidade de Coimbra, Rua Larga, Coimbra (Portugal); Defendi, I; Jurkovic, M [Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), TUM, Lichtenbergstr. 1, Garching (Germany); Engels, R; Kemmerling, G [Zentralinstitut für Elektronik, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, Jülich (Germany); Gongadze, A; Guerard, B; Manzin, G; Niko, H; Peyaud, A; Piscitelli, F [Institut Laue Langevin, 6 Rue Jules Horowitz, Grenoble (France); Petrillo, C; Sacchetti, F [Istituto Nazionale per la Fisica della Materia, Unità di Perugia, Via A. Pascoli, Perugia (Italy); Raspino, D; Rhodes, N J; Schooneveld, E M, E-mail: andrei@coimbra.lip.pt [Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot (United Kingdom); others, and

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19-PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/.
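    Of the three reconstruction options listed, the Center-of-Gravity (Anger) estimate is simple enough to sketch; the following is a generic version assuming the sensor positions and signal amplitudes are already available as arrays (not the ANTS code itself):

        import numpy as np

        def center_of_gravity(signals, sensor_positions):
            """Anger-type centroid position estimate from PMT/SiPM signals.
            signals          : (N,) measured amplitudes
            sensor_positions : (N, 2) sensor x, y positions
            Returns the event (x, y) and the summed signal as an energy proxy.
            """
            total = signals.sum()
            xy = (signals[:, None] * sensor_positions).sum(axis=0) / total
            return xy, total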

  16. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.

  17. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
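    The abstract describes, per detector event, intersecting a cone with each image plane and thresholding a distance-like solution matrix. A simplified angular-distance sketch of that idea (not the published algorithm, which solves the exact cone-plane intersection):

        import numpy as np

        def backproject_cone_slice(apex, axis, compton_angle, grid_xy, z, tolerance):
            """Binary back-projection of one Compton cone onto one image slice.
            apex          : (3,) cone vertex (first interaction position)
            axis          : (3,) unit cone axis (scattered-photon direction)
            compton_angle : scattering angle in radians
            grid_xy       : (H, W, 2) x, y coordinates of the slice pixels
            z             : height of the image slice
            tolerance     : angular tolerance in radians (plays the role of the threshold)
            """
            pts = np.dstack([grid_xy, np.full(grid_xy.shape[:2], z)]) - apex
            norms = np.linalg.norm(pts, axis=2)
            cos_angle = (pts @ axis) / np.maximum(norms, 1e-12)
            angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
            # Keep pixels whose direction from the apex lies close to the cone surface
            return np.abs(angle - compton_angle) < tolerance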

  18. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    Main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low level, high resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  19. A space-time tomography algorithm for the five-camera soft X-ray diagnostic at RTP

    Energy Technology Data Exchange (ETDEWEB)

    Lyadina, E.S.; Tanzi, C.P.; Cruz, D.F. da; Donne, A.J.H. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands)

    1993-12-31

    A five-camera soft x-ray diagnostic with 80 detector channels has been installed on the RTP tokamak with the object of studying MHD processes with a relatively high poloidal mode number (m=4). Numerical tomographic reconstruction algorithms used to reconstruct the plasma emissivity profile are constrained by the characteristics of the system. In particular, high poloidal harmonics, which can be resolved due to the high number of cameras, can be strongly distorted by stochastic and systematic errors. Furthermore, small uncertainties in the relative position of the cameras in a multiple camera system can lead to strong artefacts in the reconstruction. (author) 6 refs., 4 figs.

  20. 3D reconstruction based on light field images

    Science.gov (United States)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure from motion (SFM) algorithm is then used on the registered sub-aperture images to reconstruct the three-dimensional scene. A 3D sparse point cloud is obtained in the end. The method shows that 3D reconstruction can be achieved with only two light field camera captures, rather than the dozen or more captures required by traditional cameras. This can effectively address the time-consuming and laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
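    As an illustration of the SIFT-based registration step between two sub-aperture images, here is a hedged OpenCV sketch (function and variable names are mine; the original pipeline and its SfM stage are not reproduced):

        import cv2

        def match_subaperture_features(img1, img2, ratio=0.75):
            """SIFT feature matching between two sub-aperture images.
            Requires an OpenCV build with SIFT available (OpenCV >= 4.4).
            Returns matched point lists suitable for essential-matrix / SfM estimation.
            """
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher()
            raw = matcher.knnMatch(des1, des2, k=2)
            # Lowe's ratio test keeps only distinctive matches
            good = [m for m, n in raw if m.distance < ratio * n.distance]
            pts1 = [kp1[m.queryIdx].pt for m in good]
            pts2 = [kp2[m.trainIdx].pt for m in good]
            return pts1, pts2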

  1. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  2. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    suggested in to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are more models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Work suggested to generate models by randomized sampling as in RANSAC but to use soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike in, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to the 98.5 % contamination of mismatches with comparable effort as simple RANSAC does for the contamination by 84 %. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In we have introduced a technique for measuring the size of camera translation relatively to the observed scene which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric, e.g. the ground plane, constraint such as does for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed and an

  3. Unified framework for recognition, localization and mapping using wearable cameras.

    Science.gov (United States)

    Vázquez-Martín, Ricardo; Bandera, Antonio

    2012-08-01

    Monocular approaches to simultaneous localization and mapping (SLAM) have recently addressed with success the challenging problem of the fast computation of dense reconstructions from a single, moving camera. While these approaches initially relied on the detection of a reduced set of interest points to estimate the camera position and the map, they are currently able to reconstruct dense maps from a handheld camera while the camera coordinates are simultaneously computed. However, these maps of 3-dimensional points usually remain meaningless, that is, with no memorable items and without providing a way of encoding spatial relationships between objects and paths. In humans and mobile robotics, landmarks play a key role in the internalization of a spatial representation of an environment. They are memorable cues that can serve to define a region of the space or the location of other objects. In a topological representation of the space, landmarks can be identified and located according to their structural, perceptive or semantic significance and distinctiveness. On the other hand, landmarks may be difficult to locate in a metric representation of the space. Restricted to the domain of visual landmarks, this work describes an approach where the map resulting from a point-based, monocular SLAM is annotated with the semantic information provided by a set of distinguished landmarks. Both features are obtained from the image. Hence, they can be linked by associating to each landmark all those point-based features that are superimposed on the landmark in a given image (key-frame). Visual landmarks are obtained by means of an object-based, bottom-up attention mechanism, which extracts from the image a set of proto-objects. These proto-objects may not always be associated with natural objects, but they will typically constitute significant parts of the scene objects and can be appropriately annotated with semantic information. Moreover, they will be

  4. Realistic camera noise modeling with application to improved HDR synthesis

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Aelterman, Jan; Pižurica, Aleksandra; Philips, Wilfried

    2012-12-01

    Due to the ongoing miniaturization of digital camera sensors and the steady increase of the "number of megapixels", individual sensor elements of the camera become more sensitive to noise, deteriorating the final image quality. To work around this problem, sophisticated processing algorithms in the devices can help to maximally exploit knowledge of the sensor characteristics (e.g., in terms of noise) and offer a better image reconstruction. Although a lot of research focuses on rather simplistic noise models, such as stationary additive white Gaussian noise, only limited attention has gone to more realistic digital camera noise models. In this article, we first present a digital camera noise model that takes several processing steps in the camera into account, such as sensor signal amplification, clipping and post-processing. We then apply this noise model to the reconstruction problem of high dynamic range (HDR) images from a small set of low dynamic range (LDR) exposures of a static scene. In the literature, HDR reconstruction is mostly performed by computing a weighted average, in which the weights are directly related to the observed pixel intensities of the LDR image. In this work, we derive a Bayesian probabilistic formulation of a weighting function that is near-optimal in the MSE sense (or SNR sense) of the reconstructed HDR image, by assuming exponentially distributed irradiance values. We define the weighting function as the probability that the observed pixel intensity is approximately unbiased. The weighting function can be directly computed based on the noise model parameters, which gives rise to different symmetric and asymmetric shapes when electronic noise or photon noise is dominant. We also explain how to deal with the case that some of the noise model parameters are unknown and explain how the camera response function can be estimated using the presented noise model. Finally, experimental results are provided to support our findings.
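    The paper's weighting function is derived from its specific noise model, which is not reproduced in the record; the sketch below only illustrates the generic weighted-average HDR synthesis that such a weight plugs into (array names and the example weight are assumptions):

        import numpy as np

        def synthesize_hdr(ldr_images, exposure_times, weight_fn):
            """Weighted-average HDR synthesis from aligned, linearised LDR exposures.
            ldr_images     : list of (H, W) images of the same static scene
            exposure_times : matching list of exposure times in seconds
            weight_fn      : weight as a function of the observed pixel intensity
            """
            num = np.zeros_like(ldr_images[0], dtype=float)
            den = np.zeros_like(num)
            for img, t in zip(ldr_images, exposure_times):
                w = weight_fn(img)
                num += w * img / t          # irradiance estimate from this exposure
                den += w
            return num / np.maximum(den, 1e-12)

        # Illustrative weight that de-emphasises under- and over-exposed pixels
        # (an assumption, not the paper's noise-model-based weighting):
        # hat_weight = lambda x: np.clip(1.0 - np.abs(2.0 * x / 255.0 - 1.0), 0.0, None)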

  5. Reduction in camera-specific variability in [{sup 123}I]FP-CIT SPECT outcome measures by image reconstruction optimized for multisite settings: impact on age-dependence of the specific binding ratio in the ENC-DAT database of healthy controls

    Energy Technology Data Exchange (ETDEWEB)

    Buchert, Ralph; Lange, Catharina [Charite - Universitaetsmedizin Berlin, Department of Nuclear Medicine, Berlin (Germany); Kluge, Andreas; Bronzel, Marcus [ABX-CRO advanced pharmaceutical services Forschungsgesellschaft m.b.H., Dresden (Germany); Tossici-Bolt, Livia [University Hospital Southampton NHS Foundation Trust, Department of Medical Physics, Southampton (United Kingdom); Dickson, John [University College London Hospital NHS Foundation Trust, Institute of Nuclear Medicine, London (United Kingdom); Asenbaum, Susanne [Medical University of Vienna, Department of Nuclear Medicine, Vienna (Austria); Booij, Jan [University of Amsterdam, Department of Nuclear Medicine, Academic Medical Centre, Amsterdam (Netherlands); Kapucu, L. Oezlem Atay [Gazi University, Department of Nuclear Medicine, Faculty of Medicine, Ankara (Turkey); Svarer, Claus [Rigshospitalet and University of Copenhagen, Neurobiology Research Unit, Copenhagen (Denmark); Koulibaly, Pierre-Malick [University of Nice-Sophia Antipolis, Nuclear Medicine Department, Centre Antoine Lacassagne, Nice (France); Nobili, Flavio [University of Genoa, Department of Neuroscience (DINOGMI), Clinical Neurology Unit, Genoa (Italy); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Sabri, Osama [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Sera, Terez [University of Szeged, Department of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Tatsch, Klaus [Municipal Hospital of Karlsruhe Inc, Department of Nuclear Medicine, Karlsruhe (Germany); Borght, Thierry vander [CHU Namur, IREC, Nuclear Medicine Division, Universite catholique de Louvain, Yvoir (Belgium); Laere, Koen van [University Hospital and K.U. Leuven, Nuclear Medicine, Leuven (Belgium); Varrone, Andrea [Karolinska University Hospital, Department of Clinical Neuroscience, Centre for Psychiatry Research, Karolinska Institutet, Stockholm (Sweden); Iida, Hidehiro [National Cerebral and Cardiovascular Center - Research Institute, Osaka (Japan)

    2016-07-15

    Quantitative estimates of dopamine transporter availability, determined with [{sup 123}I]FP-CIT SPECT, depend on the SPECT equipment, including both hardware and (reconstruction) software, which limits their use in multicentre research and clinical routine. This study tested a dedicated reconstruction algorithm for its ability to reduce camera-specific intersubject variability in [{sup 123}I]FP-CIT SPECT. The secondary aim was to evaluate binding in whole brain (excluding striatum) as a reference for quantitative analysis. Of 73 healthy subjects from the European Normal Control Database of [{sup 123}I]FP-CIT recruited at six centres, 70 aged between 20 and 82 years were included. SPECT images were reconstructed using the QSPECT software package which provides fully automated detection of the outer contour of the head, camera-specific correction for scatter and septal penetration by transmission-dependent convolution subtraction, iterative OSEM reconstruction including attenuation correction, and camera-specific ''to kBq/ml'' calibration. LINK and HERMES reconstruction were used for head-to-head comparison. The specific striatal [{sup 123}I]FP-CIT binding ratio (SBR) was computed using the Southampton method with binding in the whole brain, occipital cortex or cerebellum as the reference. The correlation between SBR and age was used as the primary quality measure. The fraction of SBR variability explained by age was highest (1) with QSPECT, independently of the reference region, and (2) with whole brain as the reference, independently of the reconstruction algorithm. QSPECT reconstruction appears to be useful for reduction of camera-specific intersubject variability of [{sup 123}I]FP-CIT SPECT in multisite and single-site multicamera settings. Whole brain excluding striatal binding as the reference provides more stable quantitative estimates than occipital or cerebellar binding. (orig.)

  6. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object replication problem reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical derivation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  7. CCD camera system for use with a streamer chamber

    International Nuclear Information System (INIS)

    Angius, S.A.; Au, R.; Crawley, G.C.; Djalali, C.; Fox, R.; Maier, M.; Ogilvie, C.A.; Molen, A. van der; Westfall, G.D.; Tickle, R.S.

    1988-01-01

    A system based on three charge-coupled-device (CCD) cameras is described here. It has been used to acquire images from a streamer chamber and consists of three identical subsystems, one for each camera. Each subsystem contains an optical lens, CCD camera head, camera controller, an interface between the CCD and a microprocessor, and a link to a minicomputer for data recording and on-line analysis. Image analysis techniques have been developed to enhance the quality of the particle tracks. Some steps have been made to automatically identify tracks and reconstruct the event. (orig.)

  8. Omnidirectional sparse visual path following with occlusion-robust feature tracking

    OpenAIRE

    Goedemé, Toon; Tuytelaars, Tinne; Van Gool, Luc; Vanacker, Gerolf; Nuttin, Marnix

    2005-01-01

    Goedemé T., Tuytelaars T., Van Gool L., Vanacker G., Nuttin M., ''Omnidirectional sparse visual path following with occlusion-robust feature tracking'', Proceedings 6th workshop on omnidirectional vision, camera networks and non-classical cameras, 8 pp., October 21, 2005, Beijing, China.

  9. Reconstruction of 3D PIV data in complicated experimental arrangements

    Directory of Open Access Journals (Sweden)

    Pavlík David

    2017-01-01

    Full Text Available In this paper a three-dimensional reconstruction of the flow field behind a flat plate representing a wing is presented. The reconstruction is always performed for a pair of 2D vector maps obtained by 3D PIV with two cameras which record the measurement area from different locations. Three-dimensional reconstruction can be obtained in various ways. This paper summarizes two: reconstruction based on known correspondences, and reconstruction based on knowledge of the intrinsic and extrinsic parameters of the cameras. These methods can be used in cases when it is impossible to use a calibration pattern or when reconstruction by commercial software fails.
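    For the variant based on known intrinsic and extrinsic camera parameters, the standard building block is linear (DLT) triangulation from two projection matrices; a minimal sketch, assuming pinhole cameras and pixel correspondences already established (not the authors' implementation):

        import numpy as np

        def triangulate_dlt(P1, P2, x1, x2):
            """Linear triangulation of one 3D point from two calibrated views.
            P1, P2 : (3, 4) camera projection matrices (intrinsics and extrinsics)
            x1, x2 : (2,) corresponding image points in pixels
            """
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]    # inhomogeneous 3D point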

  10. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    International Nuclear Information System (INIS)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A

    2011-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.

  11. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Energy Technology Data Exchange (ETDEWEB)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A, E-mail: tome@humonc.wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705 (United States)

    2011-02-07

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.

  12. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Science.gov (United States)

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472
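    The three records above describe the cubic spline path (CSP) only in words. A minimal Hermite-spline sketch of that idea, built from the measured entry/exit positions and directions (the tangent scaling by the chord length is an assumption, not necessarily the papers' exact parameterisation):

        import numpy as np

        def cubic_spline_path(p_in, d_in, p_out, d_out, n=100):
            """Cubic (Hermite) spline estimate of a proton path through the object.
            p_in, p_out : (3,) entry and exit positions
            d_in, d_out : (3,) entry and exit unit direction vectors
            Returns (n, 3) sampled points along the estimated path.
            """
            length = np.linalg.norm(p_out - p_in)        # crude tangent scaling
            t = np.linspace(0.0, 1.0, n)[:, None]
            h00 = 2 * t ** 3 - 3 * t ** 2 + 1
            h10 = t ** 3 - 2 * t ** 2 + t
            h01 = -2 * t ** 3 + 3 * t ** 2
            h11 = t ** 3 - t ** 2
            return (h00 * p_in + h10 * length * d_in
                    + h01 * p_out + h11 * length * d_out)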

  13. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  14. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices of specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  15. Vision-based path following using the 1D trifocal tensor

    CSIR Research Space (South Africa)

    Sabatta, D

    2013-05-01

    Full Text Available In this paper we present a vision-based path following algorithm for a non-holonomic wheeled platform capable of keeping the vehicle on a desired path using only a single camera. The algorithm is suitable for teach and replay or leader...

  16. Distance weighting for improved tomographic reconstructions

    International Nuclear Information System (INIS)

    Koeppe, R.A.; Holden, J.E.

    1984-01-01

    An improved method for the reconstruction of emission computed axial tomography images has been developed. The method is a modification of filtered back-projection, where the back-projected values are weighted to reflect the loss of information with distance from the camera which is inherent in gamma camera imaging. This information loss is a result of loss of spatial resolution with distance, attenuation, and scatter. The weighting scheme can best be described by considering the contributions of any two opposing views to the reconstruction image pixels. The weight applied to the projections of one view is set equal to the relative amount of the original activity that was initially received in that projection, assuming a uniform attenuating medium. This yields a weighting value which is a function of distance into the image, with a value of one for pixels near the camera, a value of 0.5 at the image center, and a value of zero on the opposite side. Tomographic reconstructions produced with this method show improved spatial resolution when compared to conventional 360° reconstructions. The improvement is in the tangential direction, where simulations have indicated a FWHM improvement of 1 to 1.5 millimeters. The resolution in the radial direction is essentially the same for both methods. Visual inspection of the reconstructed images shows improved resolution and contrast.
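    The record states that each view's weight equals the fraction of activity it would have received under uniform attenuation, giving values of about one near the camera, 0.5 at the centre and about zero on the far side. A small sketch of that weighting for a pair of opposing views (the attenuation coefficient and the exact normalisation are assumptions):

        import numpy as np

        def opposing_view_weights(depth, total_depth, mu=0.15):
            """Relative back-projection weights for two opposing views.
            depth       : depth of the pixel from the near camera along the chord (cm)
            total_depth : total chord length through the object (cm)
            mu          : uniform linear attenuation coefficient (1/cm)
            Returns (w_near, w_far); the two always sum to one.
            """
            a_near = np.exp(-mu * depth)                  # activity fraction seen by near view
            a_far = np.exp(-mu * (total_depth - depth))   # activity fraction seen by far view
            w_near = a_near / (a_near + a_far)
            return w_near, 1.0 - w_near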

  17. Aircraft path planning for optimal imaging using dynamic cost functions

    Science.gov (United States)

    Christie, Gordon; Chaudhry, Haseeb; Kochersberger, Kevin

    2015-05-01

    Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" for how these aircraft can best be utilized. The aircraft are becoming smaller and more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus in unmanned systems development. This paper addresses a means of aerial vehicle path planning to provide a realistic optimal path for acquiring imagery for structure from motion (SfM) reconstructions and for performing radiation surveys. This method allows SfM reconstructions to be obtained accurately and with minimal flight time, so that the reconstructions can be executed efficiently. An assumption is made that 3D point cloud data are available prior to the flight. A discrete set of scan lines is proposed for the given area and scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach for image-based terrain mapping, which is able to efficiently perform a 3D reconstruction of a large area without the use of GPS data.

  18. Depth profile measurement with lenslet images of the plenoptic camera

    Science.gov (United States)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, without the need to align and decode the plenoptic image. Then, a linear depth calibration based on the optical structure of the plenoptic camera is applied for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
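
    The linear depth calibration step can be pictured as fitting a straight-line mapping from the relative depth produced by refocusing to metric distance. The sketch below does exactly that with an ordinary least-squares fit; the calibration pairs and the fitted coefficients are made-up numbers, not values from the paper.

```python
import numpy as np

# Hypothetical calibration data: relative depth values from the lenslet-image
# refocusing step, and the corresponding known target distances (mm).
virtual_depth = np.array([1.8, 2.1, 2.6, 3.0, 3.7, 4.2])
true_depth_mm = np.array([120.0, 140.0, 175.0, 200.0, 250.0, 285.0])

# Linear depth calibration: true_depth ~ a * virtual_depth + b (least-squares fit)
A = np.vstack([virtual_depth, np.ones_like(virtual_depth)]).T
(a, b), *_ = np.linalg.lstsq(A, true_depth_mm, rcond=None)

def calibrated_depth(v):
    """Map a refocusing-based depth value to a metric depth using the fitted line."""
    return a * v + b

print(f"depth(v=3.3) ~ {calibrated_depth(3.3):.1f} mm")
```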

  19. A SPECT demonstrator—revival of a gamma camera

    Science.gov (United States)

    Valastyán, I.; Kerek, A.; Molnár, J.; Novák, D.; Végh, J.; Emri, M.; Trón, L.

    2006-07-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only part of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied.

  20. A SPECT demonstrator-revival of a gamma camera

    International Nuclear Information System (INIS)

    Valastyan, I.; Kerek, A.; Molnar, J.; Novak, D.; Vegh, J.; Emri, M.; Tron, L.

    2006-01-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only part of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied

  1. Cerebral imaging using 68Ga DTPA and the U.C.S.F. multiwire proportional chamber positron camera

    International Nuclear Information System (INIS)

    Hattner, R.S.; Lim, C.B.; Swann, S.J.; Kaufman, L.; Perez-Mendez, V.; Chu, D.; Huberty, J.P.; Price, D.C.; Wilson, C.B.

    1975-12-01

    A multiwire proportional chamber positron camera consisting of four 48 × 48 cm² detectors linked to a small digital computer has been designed, constructed, and characterized. Initial clinical application to brain imaging using 68Ga DTPA in 10 patients with brain tumors is described. Tomographic image reconstruction is accomplished by an algorithm determining the intersection of the annihilation photon paths with planes of interest. Final image processing utilizes uniformity correction, simple thresholding, and smoothing. The positron brain images were compared to conventional scintillation brain scans and x-ray computerized axial tomograms (CAT) in each case. The positron studies have shown significant mitigation of confusing superficial activity resulting from craniotomy in comparison to conventional brain scans. Central necrosis of lesions observed in the positron images, but not in the conventional scans, has been confirmed by CAT. Modifications of the camera are being implemented to improve image quality, and these changes, combined with the tomography inherent in the positron scans, are anticipated to result in images superior in information content to conventional brain scans.
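
    As a purely geometric illustration of intersecting an annihilation photon path with a plane of interest, the following sketch intersects the line through two detected hit positions with a chosen transaxial plane; the hit coordinates and plane placement are invented and do not describe the actual four-detector geometry.

```python
import numpy as np

def line_plane_intersection(p1, p2, z_plane):
    """Intersect the annihilation photon path (line through two detected hit
    positions p1 and p2) with the plane z = z_plane.
    Returns None if the line is parallel to the plane."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    if abs(d[2]) < 1e-12:
        return None
    t = (z_plane - p1[2]) / d[2]
    return p1 + t * d

# Toy event: hits on two opposing detector planes at z = -240 mm and z = +240 mm.
hit_a = (-30.0, 12.0, -240.0)
hit_b = (42.0, -8.0, 240.0)
print(line_plane_intersection(hit_a, hit_b, z_plane=0.0))
```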

  2. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Full Text Available Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  3. Face Liveness Detection Using a Light Field Camera

    Directory of Open Access Journals (Sweden)

    Sooyeon Kim

    2014-11-01

    Full Text Available A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized, from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending against face spoofing attacks, such as printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from a conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves at least 94.78% accuracy or up to 99.36% accuracy under different types of spoofing attacks.

  4. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building fine 3D models from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, the existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture existing in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  5. The Compton Camera - medical imaging with higher sensitivity Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    The Compton Camera reconstructs the origin of Compton-scattered X-rays using electronic collimation with Silicon pad detectors instead of the heavy conventional lead collimators in Anger cameras, reaching up to 200 times better sensitivity and a factor of two improvement in resolution. Possible applications are in cancer diagnosis, neurology, neurobiology, and cardiology.

  6. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows all computational trouble spots to be identified beforehand, and reliable and accurate optimization methods to be designed. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.

  7. Dual-camera design for coded aperture snapshot spectral imaging.

    Science.gov (United States)

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.

  8. Longitudinal and transverse digital image reconstruction with a tomographic scanner

    International Nuclear Information System (INIS)

    Pickens, D.R.; Price, R.R.; Erickson, J.J.; Patton, J.A.; Partain, C.L.; Rollo, F.D.

    1981-01-01

    A Siemens Gammasonics PHO/CON-192 Multiplane Imager is interfaced to a digital computer for the purpose of performing tomographic reconstructions from the data collected during a single scan. Data from the two moving gamma cameras, as well as camera position information, are sent to the computer by an interface designed in the authors' laboratory. Backprojection reconstruction is implemented by the computer. Longitudinal images in whole-body format as well as smaller formats are reconstructed for up to six planes simultaneously from the list mode data. Transverse reconstructions are demonstrated for 201Tl myocardial scans. Post-reconstruction deconvolution processing to remove the blur artifact (characteristic of focal plane tomography) is applied to a multiplane phantom. Digital acquisition of data and reconstruction of images are practical, and can extend the usefulness of the machine when compared with the film output. (author)

  9. Light field reconstruction robust to signal dependent noise

    Science.gov (United States)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. Firstly, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capture and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.

  10. Applying image quality in cell phone cameras: lens distortion

    Science.gov (United States)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes, up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/modeling cannot be used in this case.

  11. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other is called the working camera. The tracking camera is used for tracking the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the movability of the system guarantees appropriate baselines to supply more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, namely that the variability of the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  12. Versatility of the CFR algorithm for limited angle reconstruction

    International Nuclear Information System (INIS)

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-01-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  13. Ectomography - a tomographic method for gamma camera imaging

    International Nuclear Information System (INIS)

    Dale, S.; Edholm, P.E.; Hellstroem, L.G.; Larsson, S.

    1985-01-01

    In computerised gamma camera imaging the projections are readily obtained in digital form, and the number of picture elements may be relatively few. This condition makes emission techniques suitable for ectomography - a tomographic technique for directly visualising arbitrary sections of the human body. The camera rotates around the patient to acquire different projections in a way similar to SPECT. This method differs from SPECT, however, in that the camera is placed at an angle to the rotational axis, and receives two-dimensional, rather than one-dimensional, projections. Images of body sections are reconstructed by digital filtration and combination of the acquired projections. The main advantages of ectomography - a high and uniform resolution, a low and uniform attenuation and a high signal-to-noise ratio - are obtained when imaging sections close and parallel to a body surface. The filtration eliminates signals representing details outside the section and gives the section a certain thickness. Ectomographic transverse images of a line source and of a human brain have been reconstructed. Details within the sections are correctly visualised and details outside are effectively eliminated. For comparison, the same sections have been imaged with SPECT. (author)

  14. Three-Dimensional Reconstruction Optical System Using Shadows Triangulation

    Science.gov (United States)

    Barba, J. Leiner; Vargas, Q. Lorena; Torres, M. Cesar; Mattos, V. Lorenzo

    2008-04-01

    In this work, a three-dimensional reconstruction system is developed using the Shades3D tool of the Matlab® programming language and low-cost materials, such as a webcam, a stick, a weak structured lighting system composed of a desk lamp, and an observation plane on which the object is located. The reconstruction is obtained through a triangulation process that is executed after acquiring a sequence of images of the scene with a shadow projected on the object; additionally, an image filtering process is applied to retain only the part of the scene that will be reconstructed. Beforehand, it is necessary to perform a calibration process to determine the internal geometric and optical characteristics of the camera (intrinsic parameters) and the 3D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters). The lamp and the stick are used to produce a shadow which scans the object; in this technique, it is not necessary to know the position of the light source; instead, the triangulation is obtained using the shadow plane produced by the intersection between the stick and the illumination pattern. The webcam captures all images with the shadow scanning the object, and the Shades3D tool processes all the information, taking into account the captured images and the calibration parameters. The technique is also evaluated in the reconstruction of parts of the human body and its application to the detection of external abnormalities and the fabrication of prostheses or implants.
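
    The core triangulation step can be sketched as intersecting the camera ray through a shadow pixel with the shadow plane estimated for that frame. The snippet below shows this ray-plane intersection under a simple pinhole model; the intrinsic matrix, the shadow-plane parameters and the pixel coordinate are all hypothetical placeholders.

```python
import numpy as np

def pixel_ray(u, v, K):
    """Direction of the camera ray through pixel (u, v), in camera coordinates."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a camera ray with the shadow plane given in point-normal form."""
    denom = plane_normal @ direction
    if abs(denom) < 1e-12:
        return None
    t = plane_normal @ (plane_point - origin) / denom
    return origin + t * direction

# Hypothetical intrinsics from the calibration step and a shadow plane estimated
# from the shadow edge observed on the reference (observation) plane.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_point = np.array([0.0, 0.0, 600.0])    # a point known to lie on the shadow plane
plane_normal = np.array([0.3, 0.0, -1.0])    # its (unnormalised) normal

ray = pixel_ray(350, 260, K)
point_3d = ray_plane_intersection(np.zeros(3), ray, plane_point, plane_normal)
print(point_3d)
```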

  15. Semantically Documenting Virtual Reconstruction: Building a Path to Knowledge Provenance

    Science.gov (United States)

    Bruseker, G.; Guillem, A.; Carboni, N.

    2015-08-01

    The outcomes of virtual reconstructions of archaeological monuments are not just images for aesthetic consumption but rather present a scholarly argument and decision making process. They are based on complex chains of reasoning grounded in primary and secondary evidence that enable a historically probable whole to be reconstructed from the partial remains left in the archaeological record. This paper will explore the possibilities for documenting and storing in an information system the phases of the reasoning, decision and procedures that a modeler, with the support of an archaeologist, uses during the virtual reconstruction process and how they can be linked to the reconstruction output. The goal is to present a documentation model such that the foundations of evidence for the reconstructed elements, and the reasoning around them, are made not only explicit and interrogable but also can be updated, extended and reused by other researchers in future work. Using as a case-study the reconstruction of a kitchen in a Roman domus in Grand, we will examine the necessary documentation requirements, and the capacity to express it using semantic technologies. For our study we adopt the CIDOC-CRM ontological model, and its extensions CRMinf, CRMBa and CRMgeo as a starting point for modelling the arguments and relations.

  16. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    Science.gov (United States)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was carried out. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images of the two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.

  17. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusion, where an object may be in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  18. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, yielding higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
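
    A minimal sketch of such an OpenCV calibration flow for one camera is given below; the checkerboard dimensions, square size and image paths are assumptions rather than values from the paper. For a binocular rig, the per-camera intrinsics obtained this way would typically be passed on to cv2.stereoCalibrate to estimate the relative pose between the two cameras.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard geometry: 8x6 inner corners, 25 mm squares (not from the paper).
pattern = (8, 6)
square = 25.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in sorted(glob.glob("calib/left_*.png")):   # hypothetical image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix, distortion coefficients (radial + tangential) and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS error:", rms)
```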

  19. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  20. Contribution to the tracking and the 3D reconstruction of scenes composed of toric objects from image sequences acquired by a moving camera; Contribution au suivi et a la reconstruction de scenes constituees d'objets toriques a partir de sequences d'images acquises par une camera mobile

    Energy Technology Data Exchange (ETDEWEB)

    Naudet, S

    1997-01-31

    The three-dimensional perception of the environment is often necessary for a robot to correctly perform its tasks. One solution, based on dynamic vision, consists in analysing time-varying monocular images to estimate the spatial geometry of the scene. This thesis deals with the reconstruction of tori by dynamic vision. Though this object class is restrictive, it makes it possible to tackle the problem of reconstructing the bent pipes usually encountered in industrial environments. The proposed method is based on the evolution of the apparent contours of objects in the sequence. Using the expression of the torus limb boundaries, it is possible to recursively estimate the object's three-dimensional parameters by minimising the error between the predicted projected contours and the image contours. This process, which is performed by a Kalman filter, does not need precise knowledge of the camera displacement or any matching of the two limbs belonging to the same object. To complete this work, temporal tracking of objects which deals with occlusion situations is proposed. The approach consists in modelling and interpreting the apparent motion of objects in the successive images. The motion interpretation, based on a simplified representation of the scene, allows pertinent three-dimensional information to be recovered, which is used to manage occlusion situations. Experiments on synthetic and real images prove the validity of the tracking and reconstruction processes. (author) 127 refs.

  1. Resolution recovery for Compton camera using origin ensemble algorithm.

    Science.gov (United States)

    Andreyev, A; Celler, A; Ozsahin, I; Sitek, A

    2016-08-01

    Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions

  2. Resolution recovery for Compton camera using origin ensemble algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Andreyev, A. [Philips Healthcare, Highland Heights, Ohio 44143 (United States); Celler, A. [Medical Imaging Research Group, University of British Columbia and Vancouver Coastal Health Research Institute, Vancouver, BC V5Z 1M9 (Canada); Ozsahin, I.; Sitek, A., E-mail: sarkadiu@gmail.com [Gordon Center for Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 and Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2016-08-15

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image

  3. Resolution recovery for Compton camera using origin ensemble algorithm

    International Nuclear Information System (INIS)

    Andreyev, A.; Celler, A.; Ozsahin, I.; Sitek, A.

    2016-01-01

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image

  4. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up for quantitative three-dimensional (3D) motion analysis in the study of sports gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make the ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting in the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
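
    The reported accuracy check can be sketched as triangulating each bar marker from the two calibrated views and comparing the reconstructed inter-marker distance with the known bar length. The snippet below uses a plain linear (DLT) triangulation; the projection matrices and pixel coordinates are synthetic stand-ins, not data from the study.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker from the two calibrated views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical calibrated projection matrices (mm): two cameras 300 mm apart.
K = np.array([[900.0, 0.0, 640.0], [0.0, 900.0, 360.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-300.0], [0.0], [0.0]])])

# Pixel coordinates of the two bar markers in one synchronized frame pair (made up,
# but consistent with markers ~248 mm apart at roughly 1 m from the cameras).
a1, a2 = np.array([730.0, 405.0]), np.array([460.0, 405.0])
b1, b2 = np.array([953.2, 405.0]), np.array([683.2, 405.0])

A3d, B3d = triangulate(P1, P2, a1, a2), triangulate(P1, P2, b1, b2)
print("reconstructed inter-marker distance [mm]:",
      round(float(np.linalg.norm(A3d - B3d)), 1))
```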

  5. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Directory of Open Access Journals (Sweden)

    Gustavo R D Bernardina

    Full Text Available Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up for quantitative three-dimensional (3D) motion analysis in the study of sports gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make the ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting in the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  6. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
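
    The flavour of the binary quadratic formulation can be sketched as follows: one binary variable per candidate camera pose, a linear coverage term plus a pairwise term rewarding pairs of cameras that view the same important location from different directions, under a simple budget constraint. The snippet solves a tiny instance by brute force purely for illustration; the scores are random and this is not the optimization strategy proposed in the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 8          # candidate camera poses (tiny, so brute force is feasible)
budget = 3     # maximum number of cameras to place

# Hypothetical scores: c[i] = coverage value of candidate i,
# Q[i, j] = bonus when i and j both view an important location from different directions.
c = rng.random(n)
Q = rng.random((n, n))
Q = 0.5 * (Q + Q.T)
np.fill_diagonal(Q, 0.0)

def objective(x):
    """Binary quadratic objective: linear coverage + pairwise multi-view bonus."""
    x = np.asarray(x, float)
    return c @ x + x @ Q @ x

best_val, best_set = -np.inf, None
for subset in itertools.combinations(range(n), budget):
    x = np.zeros(n)
    x[list(subset)] = 1.0
    val = objective(x)
    if val > best_val:
        best_val, best_set = val, subset
print("selected candidate poses:", best_set, "objective:", round(best_val, 3))
```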

  7. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman. Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  8. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    Science.gov (United States)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  9. Design of a Compton camera for 3D prompt-γ imaging during ion beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Roellinghoff, F., E-mail: roelling@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Richard, M.-H., E-mail: mrichard@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Chevallier, M.; Constanzo, J.; Dauvergne, D. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Freud, N. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Henriquet, P.; Le Foulher, F. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Letang, J.M. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Montarou, G. [LPC, CNRS/IN2P3, Clermont-F. University (France); Ray, C.; Testa, E.; Testa, M. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Walenta, A.H. [Uni-Siegen, FB Physik, Emmy-Noether Campus, D-57068 Siegen (Germany)

    2011-08-21

    We investigate, by means of Geant4 simulations, a real-time method to control the position of the Bragg peak during ion therapy, based on a Compton camera in combination with a beam tagging device (hodoscope) in order to detect the prompt gamma emitted during nuclear fragmentation. The proposed set-up consists of a stack of 2 mm thick silicon strip detectors and a LYSO absorber detector. The γ emission points are reconstructed analytically by intersecting the ion trajectories given by the beam hodoscope and the Compton cones given by the camera. The camera response to a polychromatic point source in air is analyzed with regard to both spatial resolution and detection efficiency. Various geometrical configurations of the camera have been tested. In the proposed configuration, for a typical polychromatic photon point source, the spatial resolution of the camera is about 8.3 mm FWHM and the detection efficiency 2.5×10⁻⁴ (reconstructable photons/emitted photons in 4π). Finally, the clinical applicability of our system is considered and possible starting points for further developments of a prototype are discussed.
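
    The analytical reconstruction step can be illustrated as a cone-line intersection: the Compton cone (apex at the first interaction, axis along the reversed scattered-photon direction, half-angle from Compton kinematics) is intersected with the ion trajectory supplied by the hodoscope. The sketch below solves the resulting quadratic; all hit positions, energies and the beam line are invented numbers, not values from the paper.

```python
import numpy as np

MEC2 = 0.511  # electron rest energy [MeV]

def compton_angle(e_deposit, e_total):
    """Scattering angle from Compton kinematics (energies in MeV)."""
    cos_theta = 1.0 - MEC2 * (1.0 / (e_total - e_deposit) - 1.0 / e_total)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def cone_line_intersections(apex, axis, theta, line_point, line_dir):
    """Intersect the Compton cone (apex, axis, half-angle theta) with the ion
    trajectory given as a point and a direction. Returns zero, one or two points."""
    a = axis / np.linalg.norm(axis)
    d = line_dir / np.linalg.norm(line_dir)
    w = line_point - apex
    c2 = np.cos(theta) ** 2
    # ((w + t d) . a)^2 = cos^2(theta) |w + t d|^2  ->  quadratic A t^2 + B t + C = 0
    A = (d @ a) ** 2 - c2
    B = 2.0 * ((d @ a) * (w @ a) - c2 * (d @ w))
    C = (w @ a) ** 2 - c2 * (w @ w)
    disc = B * B - 4.0 * A * C
    if abs(A) < 1e-12 or disc < 0:
        return []
    roots = [(-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A)]
    return [line_point + t * d for t in roots]

# Hypothetical event: first hit in a silicon plane, second hit in the LYSO absorber,
# a 4.4 MeV prompt gamma depositing 2.5 MeV in the scatterer; beam along the y axis.
hit1, hit2 = np.array([0.0, 0.0, 200.0]), np.array([30.0, 10.0, 260.0])
theta = compton_angle(e_deposit=2.5, e_total=4.4)
candidates = cone_line_intersections(hit1, hit1 - hit2, theta,
                                     line_point=np.array([0.0, 0.0, 0.0]),
                                     line_dir=np.array([0.0, 1.0, 0.0]))
for p in candidates:
    print(np.round(p, 1))
```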

  10. Image Reconstruction. Chapter 13

    Energy Technology Data Exchange (ETDEWEB)

    Nuyts, J. [Department of Nuclear Medicine and Medical Imaging Research Center, Katholieke Universiteit Leuven, Leuven (Belgium); Matej, S. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA (United States)

    2014-12-15

    This chapter discusses how 2‑D or 3‑D images of tracer distribution can be reconstructed from a series of so-called projection images acquired with a gamma camera or a positron emission tomography (PET) system [13.1]. This is often called an ‘inverse problem’. The reconstruction is the inverse of the acquisition. The reconstruction is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the ‘forward’ direction, i.e. making software to simulate the acquisition. There are basically two approaches to image reconstruction: analytical reconstruction and iterative reconstruction. The analytical approach is based on mathematical inversion, yielding efficient, non-iterative reconstruction algorithms. In the iterative approach, the reconstruction problem is reduced to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative instead of mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.
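
    As a concrete illustration of the iterative family, the following sketch implements a plain MLEM update with a generic system matrix; the matrix and measurements are a random toy example, not a model of any particular gamma camera or PET geometry.

```python
import numpy as np

def mlem(system_matrix, measured, n_iter=50):
    """Maximum-likelihood EM reconstruction for emission data:
    x <- x / (A^T 1) * A^T (y / (A x)), with A of shape (n_bins, n_voxels)."""
    A, y = np.asarray(system_matrix, float), np.asarray(measured, float)
    x = np.ones(A.shape[1])                 # flat initial image
    sens = A.sum(axis=0)                    # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12             # avoid division by zero
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x

# Toy example: random non-negative system matrix and a known 4-voxel "image".
rng = np.random.default_rng(1)
A = rng.random((20, 4))
x_true = np.array([0.0, 5.0, 1.0, 3.0])
y = A @ x_true                              # noiseless projections for illustration
print(np.round(mlem(A, y, n_iter=500), 2))
```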

  11. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  12. D-SPECT, a semiconductor camera: Technical aspects and clinical applications

    International Nuclear Information System (INIS)

    Merlin, C.; Bertrand, S.; Kelly, A.; Veyre, A.; Mestas, D.; Cachin, F.; Motreff, P.; Levesque, S.; Cachin, F.; Askienazy, S.

    2010-01-01

    Clinical practice in nuclear medicine has changed considerably in the last decade, particularly with the arrival of PET/CT and SPECT/CT. New semiconductor cameras could represent the next evolution in nuclear medicine practice. Thanks to improved resolution and sensitivity, this technology allows fast acquisitions and high-contrast, high-resolution images obtained with low injected activity. The dedicated cardiology D-SPECT camera (Spectrum Dynamics, Israel) is based on semiconductor technology and provides an original system for collimation and image reconstruction. We describe here our clinical experience in using the D-SPECT, with a preliminary study comparing the D-SPECT and a conventional gamma camera. (authors)

  13. Camera calibration based on the back projection process

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
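
    A simplified version of the back-projection idea can be sketched as follows: observed image points are back-projected onto the known calibration plane (Z = 0) and the 3D residuals against the ideal board coordinates are minimized with a non-linear least-squares solver. The sketch assumes a distortion-free pinhole model and uses synthetic data, so it illustrates the principle rather than the authors' full method.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, board_pts):
    """Forward pinhole projection (no distortion), used here only to simulate data.
    params = [fx, fy, cx, cy, rx, ry, rz, tx, ty, tz]."""
    fx, fy, cx, cy = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()
    t = params[7:10]
    pc = (R @ board_pts.T).T + t
    return np.c_[fx * pc[:, 0] / pc[:, 2] + cx, fy * pc[:, 1] / pc[:, 2] + cy]

def backproject_to_board(params, image_pts):
    """Back-project pixels onto the Z = 0 board plane (the back projection process)."""
    fx, fy, cx, cy = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()
    t = params[7:10]
    centre = -R.T @ t                            # camera centre in world coordinates
    rays = np.c_[(image_pts[:, 0] - cx) / fx,
                 (image_pts[:, 1] - cy) / fy,
                 np.ones(len(image_pts))] @ R    # each row is R.T @ ray_cam
    s = -centre[2] / rays[:, 2]                  # scale that puts the ray on Z = 0
    return centre + s[:, None] * rays

def residuals(params, image_pts, board_pts):
    """3D back-projection errors on the board plane: the quantity being minimised."""
    return (backproject_to_board(params, image_pts) - board_pts).ravel()

# Hypothetical planar target (mm) and one synthetic, slightly noisy view of it.
board = np.array([[x, y, 0.0] for x in (0, 30, 60, 90) for y in (0, 30, 60, 90)])
true = np.array([820.0, 820.0, 320.0, 240.0, 0.10, -0.05, 0.02, -40.0, -40.0, 600.0])
image_pts = project(true, board) + np.random.default_rng(0).normal(0.0, 0.2, (16, 2))

init = true + np.array([25.0, -20.0, 5.0, -4.0, 0.02, 0.01, 0.0, 5.0, -5.0, 20.0])
fit = least_squares(residuals, init, args=(image_pts, board))
print("initial/refined back-projection RMS [mm]:",
      round(float(np.sqrt(np.mean(residuals(init, image_pts, board) ** 2))), 2),
      round(float(np.sqrt(np.mean(fit.fun ** 2))), 2))
```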

  14. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in the resolution and quality of images obtained by non-metric cameras allows them to be used in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital camcorders are characterized by instability and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. Calibration research was conducted using a non-metric camera, different calibration tests and various software packages. The first part of the paper contains a brief theoretical introduction, including basic definitions such as the construction of non-metric cameras and descriptions of different optical distortions. The second part of the paper describes the camera calibration process, with details of the calibration methods and models that have been used. The Sony NEX-5 camera calibration has been done using the following software: Image Master Calib, the Matlab Camera Calibrator application, and Agisoft Lens. For the study, 2D test fields have been used. As part of the research, a comparative analysis of the results has been carried out.

  15. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  16. Time-of-flight cameras principles, methods and applications

    CERN Document Server

    Hansard, Miles; Choi, Ouk; Horaud, Radu

    2012-01-01

    Time-of-flight (TOF) cameras provide a depth value at each pixel, from which the 3D structure of the scene can be estimated. This new type of active sensor makes it possible to go beyond traditional 2D image processing, directly to depth-based and 3D scene processing. Many computer vision and graphics applications can benefit from TOF data, including 3D reconstruction, activity and gesture recognition, motion capture and face detection. It is already possible to use multiple TOF cameras, in order to increase the scene coverage, and to combine the depth data with images from several colour came

  17. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    DEFF Research Database (Denmark)

    Kristoffersen, Miklas Strøm; Dueholm, Jacob Velling; Gade, Rikke

    2016-01-01

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm...

  18. Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum

    Directory of Open Access Journals (Sweden)

    Brahmastro Kresnaraman

    2016-04-01

    Full Text Available During the night or in poorly lit areas, thermal cameras are a better choice instead of normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA. The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations.
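    As an illustration of the approach above, the following Python sketch shows a CCA-based thermal-to-visible mapping for image patches using scikit-learn. The patch size, latent dimensionality and the simple least-squares decoder are assumptions made for the sketch; they are not the authors' implementation or parameters.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        # Placeholder training data: co-registered, flattened 16x16 patches
        # (patch size is an assumption) from the thermal and visible domains.
        rng = np.random.default_rng(0)
        thermal_train = rng.random((500, 256))
        visible_train = rng.random((500, 256))

        # Learn a latent space that maximally correlates the two spectra.
        cca = CCA(n_components=32)
        cca.fit(thermal_train, visible_train)
        t_lat, v_lat = cca.transform(thermal_train, visible_train)

        # Simple linear decoder from thermal latent codes to visible patches
        # (least squares; one of several possible reconstruction choices).
        decoder, *_ = np.linalg.lstsq(t_lat, visible_train, rcond=None)

        def thermal_to_visible(thermal_patch):
            """Reconstruct a visible-spectrum patch from a thermal patch."""
            latent = cca.transform(thermal_patch.reshape(1, -1))
            return latent @ decoder

    In the paper's two-step scheme a mapping of this kind is learned once for whole images and once for local patches; the sketch shows only the patch-level step.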

  19. Tomographic image reconstruction using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Paschalis, P.; Giokaris, N.D.; Karabarbounis, A.; Loudos, G.K.; Maintas, D.; Papanicolas, C.N.; Spanoudaki, V.; Tsoumpas, Ch.; Stiliaris, E.

    2004-01-01

    A new image reconstruction technique based on the use of an Artificial Neural Network (ANN) is presented. The most crucial factor in designing such a reconstruction system is the network architecture and the number of input projections needed to reconstruct the image. Although the training phase requires a large number of input samples and considerable CPU time, the trained network is characterized by simplicity and quick response. The performance of this ANN is tested using several image patterns. It is intended to be used together with a phantom rotating table and the γ-camera of IASA for SPECT image reconstruction

  20. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions. The robot is a device that can be reprogrammed according to user needs. Wireless networks for remote monitoring can be used to build a robot whose movement can be monitored against a blueprint, so that the path chosen by the robot can be tracked. This data is sent over a wireless network. For vision, the robot uses a high-resolution camera, which makes it easier for the operator to control the robot and observe the surrounding circumstances.

  1. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    Full Text Available The Scheimpflug camera offers a wide range of applications in the fields of typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because the depth of field of a Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet, conventional calibration methods are not applicable in this case because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, PCIM, and GNIM are basically equal, while the accuracy of GNIM is slightly lower compared with the other three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  2. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T [Teikyo University, Itabashi-ku, Tokyo (Japan); Haga, A; Saotome, N [University of Tokyo Hospital, Bunkyo-ku, Tokyo (Japan); Arai, N [Teikyo University Hospital, Itabashi-ku, Tokyo (Japan)

    2014-06-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique, which is known from its use in 'photo tourism' on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including the camera positions, to minimize the reprojection error using the bundle adjustment technique (non-linear optimization). As a final step, we performed a dense reconstruction and associated a color with each point using the PMVS library. Results: The surface data were well reconstructed on visual inspection. The human skin was reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI(25861128)

  3. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    International Nuclear Information System (INIS)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N

    2014-01-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique, which is known from its use in 'photo tourism' on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including the camera positions, to minimize the reprojection error using the bundle adjustment technique (non-linear optimization). As a final step, we performed a dense reconstruction and associated a color with each point using the PMVS library. Results: The surface data were well reconstructed on visual inspection. The human skin was reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI(25861128)
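    The feature-matching and fundamental-matrix step described in this record can be sketched with OpenCV as below. The image paths, the Lowe ratio of 0.75 and the RANSAC threshold are assumptions made for illustration; the bundle adjustment and PMVS dense reconstruction stages are not reproduced here.

        import cv2
        import numpy as np

        def pairwise_fundamental(path1, path2):
            """Match SIFT features between two views and estimate the fundamental
            matrix with the RANSAC-robustified eight-point algorithm."""
            img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
            img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)

            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)

            # Lowe ratio-test matching (0.75 threshold is an assumption).
            knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
            good = [m for m, n in knn if m.distance < 0.75 * n.distance]

            pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
            pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

            # Eight-point algorithm inside RANSAC; the mask flags inlier matches.
            F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
            inliers = mask.ravel() == 1
            return F, pts1[inliers], pts2[inliers]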

  4. TREE STEM RECONSTRUCTION USING VERTICAL FISHEYE IMAGES: A PRELIMINARY STUDY

    Directory of Open Access Journals (Sweden)

    A. Berveglieri

    2016-06-01

    Full Text Available A preliminary study was conducted to assess a tree stem reconstruction technique using panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral views are rectified to the same scale in the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed using the lateral virtual images generated from the vertical fisheye images, with the advantage of using fewer images, all taken from a single station.

  5. Remote removal of an obstruction from FFTF [Fast Flux Test Facility] in-service inspection camera track

    International Nuclear Information System (INIS)

    Gibbons, P.W.

    1990-11-01

    Remote techniques and special equipment were used to clear the path of a closed-circuit television camera system that travels on a monorail track around the reactor vessel support arm structure. A tangle of wire-wrapped instrumentation tubing had been inadvertently inserted through a dislocated guide-tube expansion joint and into the camera track area. An externally driven auger device, mounted on the track ahead of the camera to view the procedure, was used to retrieve the tubing. 6 figs

  6. Dynamic imaging with coincidence gamma camera

    International Nuclear Information System (INIS)

    Elhmassi, Ahmed

    2008-01-01

    In this paper we develop a technique to calculate dynamic parameters from data acquired using a gamma-camera PET (gc PET) system. Our method is based on an algorithm developed for dynamic SPECT, which processes all of the projection data simultaneously instead of reconstructing a series of static images individually. The algorithm was modified to account for the extra data that are obtained with gc PET (compared with SPECT). The method was tested using simulated projection data for both a SPECT and a gc PET geometry. These studies showed the ability of the code to reconstruct simulated data with a wide range of half-lives. The accuracy of the algorithm was measured in terms of the reconstructed half-life and initial activity for the simulated object. The reconstruction of gc PET data showed improvements in half-life and activity of 23% and 20%, respectively, compared to SPECT data (at 50 iterations). The gc PET algorithm was also tested using data from an experimental phantom and, finally, applied to a clinical dataset, where the algorithm was further modified to deal with the situation where the activity in certain pixels decreases and then increases during the acquisition. (author)

  7. Contribution to the tracking and the 3D reconstruction of scenes composed of torus from image sequences a acquired by a moving camera

    International Nuclear Information System (INIS)

    Naudet, S.

    1997-01-01

    The three-dimensional perception of the environment is often necessary for a robot to correctly perform its tasks. One solution, based on dynamic vision, consists in analysing time-varying monocular images to estimate the spatial geometry of the scene. This thesis deals with the reconstruction of tori by dynamic vision. Though this object class is restrictive, it makes it possible to tackle the reconstruction of the bent pipes usually encountered in industrial environments. The proposed method is based on the evolution of the apparent contours of objects in the sequence. Using the expression of the torus limb boundaries, it is possible to recursively estimate the object's three-dimensional parameters by minimising the error between the predicted projected contours and the image contours. This process, which is performed by a Kalman filter, needs neither precise knowledge of the camera displacement nor any matching of the two limbs belonging to the same object. To complete this work, temporal tracking of objects that deals with occlusion situations is proposed. The approach consists in modeling and interpreting the apparent motion of objects in the successive images. The motion interpretation, based on a simplified representation of the scene, allows pertinent three-dimensional information to be recovered, which is used to manage occlusion situations. Experiments on synthetic and real images prove the validity of the tracking and reconstruction processes. (author)

  8. Development of a SiPM Camera for a Schwarzschild-Couder Cherenkov Telescope for the Cherenkov Telescope Array

    CERN Document Server

    Otte, A N; Dickinson, H.; Funk, S.; Jogler, T.; Johnson, C.A.; Karn, P.; Meagher, K.; Naoya, H.; Nguyen, T.; Okumura, A.; Santander, M.; Sapozhnikov, L.; Stier, A.; Tajima, H.; Tibaldo, L.; Vandenbroucke, J.; Wakely, S.; Weinstein, A.; Williams, D.A.

    2015-01-01

    We present the development of a novel 11328 pixel silicon photomultiplier (SiPM) camera for use with a ground-based Cherenkov telescope with Schwarzschild-Couder optics as a possible medium-sized telescope for the Cherenkov Telescope Array (CTA). The finely pixelated camera samples air-shower images with more than twice the optical resolution of cameras that are used in current Cherenkov telescopes. Advantages of the higher resolution will be a better event reconstruction yielding improved background suppression and angular resolution of the reconstructed gamma-ray events, which is crucial in morphology studies of, for example, Galactic particle accelerators and the search for gamma-ray halos around extragalactic sources. Packing such a large number of pixels into an area of only half a square meter and having a fast readout directly attached to the back of the sensors is a challenging task. For the prototype camera development, SiPMs from Hamamatsu with through silicon via (TSV) technology are used. We give ...

  9. The effect of acquisition interval and spatial resolution on dynamic cardiac imaging with a stationary SPECT camera

    International Nuclear Information System (INIS)

    Roberts, J; Maddula, R; Clackdoyle, R; DiBella, E; Fu, Z

    2007-01-01

    The current SPECT scanning paradigm that acquires images by slow rotation of multiple detectors in body-contoured orbits around the patient is not suited to the rapid collection of tomographically complete data. During rapid image acquisition, mechanical and patient safety constraints limit the detector orbit to circular paths at increased distances from the patient, resulting in decreased spatial resolution. We consider a novel dynamic rotating slant-hole (DyRoSH) SPECT camera that can collect full tomographic data every 2 s, employing three stationary detectors mounted with slant-hole collimators that rotate at 30 rpm. Because the detectors are stationary, they can be placed much closer to the patient than is possible with conventional SPECT systems. We propose that the decoupling of the detector position from the mechanics of rapid image acquisition offers an additional degree of freedom which can be used to improve accuracy in measured kinetic parameter estimates. With simulations and list-mode reconstructions, we consider the effects of different acquisition intervals on dynamic cardiac imaging, comparing a conventional three detector SPECT system with the proposed DyRoSH SPECT system. Kinetic parameters of a two-compartment model of myocardial perfusion for technetium-99m-teboroxime were estimated. When compared to a conventional SPECT scanner for the same acquisition periods, the proposed DyRoSH system shows equivalent or reduced bias or standard deviation values for the kinetic parameter estimates. The DyRoSH camera with a 2 s acquisition period does not show any improvement compared to a DyRoSH camera with a 10 s acquisition period

  10. The effect of acquisition interval and spatial resolution on dynamic cardiac imaging with a stationary SPECT camera

    Science.gov (United States)

    Roberts, J.; Maddula, R.; Clackdoyle, R.; Di Bella, E.; Fu, Z.

    2007-08-01

    The current SPECT scanning paradigm that acquires images by slow rotation of multiple detectors in body-contoured orbits around the patient is not suited to the rapid collection of tomographically complete data. During rapid image acquisition, mechanical and patient safety constraints limit the detector orbit to circular paths at increased distances from the patient, resulting in decreased spatial resolution. We consider a novel dynamic rotating slant-hole (DyRoSH) SPECT camera that can collect full tomographic data every 2 s, employing three stationary detectors mounted with slant-hole collimators that rotate at 30 rpm. Because the detectors are stationary, they can be placed much closer to the patient than is possible with conventional SPECT systems. We propose that the decoupling of the detector position from the mechanics of rapid image acquisition offers an additional degree of freedom which can be used to improve accuracy in measured kinetic parameter estimates. With simulations and list-mode reconstructions, we consider the effects of different acquisition intervals on dynamic cardiac imaging, comparing a conventional three detector SPECT system with the proposed DyRoSH SPECT system. Kinetic parameters of a two-compartment model of myocardial perfusion for technetium-99m-teboroxime were estimated. When compared to a conventional SPECT scanner for the same acquisition periods, the proposed DyRoSH system shows equivalent or reduced bias or standard deviation values for the kinetic parameter estimates. The DyRoSH camera with a 2 s acquisition period does not show any improvement compared to a DyRoSH camera with a 10 s acquisition period.

  11. Optimisation of a dual head semiconductor Compton camera using Geant4

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool L697ZE (United Kingdom)], E-mail: ljh@ns.ph.liv.ac.uk; Boston, A.J.; Boston, H.C.; Cooper, R.J.; Cresswell, J.R.; Grint, A.N.; Nolan, P.J.; Oxley, D.C.; Scraggs, D.P. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool L697ZE (United Kingdom); Beveridge, T.; Gillam, J. [School of Physics and Materials Engineering, Monash University, Melbourne (Australia); Lazarus, I. [STFC Daresbury Laboratory, Warrington, Cheshire (United Kingdom)

    2009-06-01

    Conventional medical gamma-ray camera systems utilise mechanical collimation to provide information on the position of an incident gamma-ray photon. Systems that use electronic collimation utilising Compton image reconstruction techniques have the potential to offer huge improvements in sensitivity. Position-sensitive high-purity germanium (HPGe) detector systems are being evaluated as part of a single photon emission computed tomography (SPECT) Compton camera system. Data have been acquired from the orthogonally segmented planar SmartPET detectors, operated in Compton camera mode. The minimum gamma-ray energy which can be imaged by the current system in the Compton camera configuration is 244 keV, due to the 20 mm thickness of the first scatter detector, which causes large gamma-ray absorption. A simulation package for the optimisation of a new semiconductor Compton camera has been developed using the Geant4 toolkit. This paper shows results of a preliminary analysis of the validated Geant4 simulation at the SPECT gamma-ray energy of 141 keV.

  12. Application of 3D reconstruction system in diabetic foot ulcer injury assessment

    Science.gov (United States)

    Li, Jun; Jiang, Li; Li, Tianjian; Liang, Xiaoyao

    2018-04-01

    To deal with the considerable deviations of the transparency tracing and digital planimetry methods used in current clinical diabetic foot ulcer assessment, this paper proposes a 3D reconstruction system which can be used to obtain a foot model with a good-quality texture; injury assessment is then performed by measuring the reconstructed model. The system uses the Intel RealSense SR300 depth camera, which is based on infrared structured light, as the input device; the required data from different views are collected by moving the camera around the scanned object. The geometry model is reconstructed by fusing the collected data, then the mesh is subdivided to increase the number of mesh vertices and the color of each vertex is determined using a non-linear optimization; all colored vertices compose the surface texture of the reconstructed model. Experimental results indicate that the reconstructed model has millimeter-level geometric accuracy and a texture with few artifacts.

  13. A fast algorithm for computer aided collimation gamma camera (CACAO)

    Science.gov (United States)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.

    2000-08-01

    The computer aided collimation gamma camera (CACAO) is aimed at overcoming the resolution-sensitivity trade-off of the conventional parallel-hole collimator. It uses larger and longer holes, with an added linear movement during the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering and rotation is described. Examples of reconstructions are given. This work shows that a simple and fast algorithm, based on a diagonally dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.
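    The shift-and-sum step mentioned in the algorithm can be illustrated with a toy sketch; the data layout (a stack of 1D detector readouts, one per linear collimator position) and the whole-pixel shifts are assumptions, and the subsequent deconvolution, parabolic filtering and rotation steps are not shown.

        import numpy as np

        def shift_and_sum(frames, shifts):
            """Toy shift-and-sum: undo the known linear collimator displacement of
            each readout and average, so counts from one source location add up."""
            acc = np.zeros_like(frames[0], dtype=float)
            for frame, s in zip(frames, shifts):
                acc += np.roll(frame, -s)
            return acc / len(frames)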

  14. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    Full Text Available The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we therefore proposed a new method, called the Mask automatic detecting method, to improve the structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the “mask” in this paper, since the features on them should be masked out to avoid poor matches. After removing these feature points in our new method, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our comparison experiments with typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask method decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing the features inside the mask regions also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to reconstruct several copies of a building when there was only one target building.
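    The core idea of masking out vehicle and guardrail regions before feature matching can be sketched with OpenCV's detection mask, as below. How the mask regions are obtained (a detector, manual labels, etc.) is left open here, and the bounding-box representation is an assumption of the sketch.

        import cv2
        import numpy as np

        def masked_features(image, mask_boxes):
            """Detect SIFT features only outside the given (x, y, w, h) boxes,
            e.g. boxes covering vehicles and guardrails, so that unreliable
            matches on moving objects are avoided."""
            mask = np.full(image.shape[:2], 255, dtype=np.uint8)
            for (x, y, w, h) in mask_boxes:
                mask[y:y + h, x:x + w] = 0      # exclude masked pixels
            sift = cv2.SIFT_create()
            return sift.detectAndCompute(image, mask)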

  15. The effect of truncation on very small cardiac SPECT camera systems

    International Nuclear Information System (INIS)

    Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.

    2006-01-01

    Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projections of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has on increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is not worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small-FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in

  16. Collimator trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    Jaszczak, R.J.

    1977-01-01

    A collimator is provided for a scintillation camera system in which a detector precesses in an orbit about a patient. The collimator is designed to have high resolution and lower sensitivity with respect to radiation traveling in paths lying wholly within planes perpendicular to the cranial-caudal axis of the patient. The collimator has high sensitivity and lower resolution for radiation traveling in other planes. The variations in resolution and sensitivity are achieved by altering the length, spacing or thickness of the septa of the collimator

  17. Reflector construction by sound path curves - A method of manual reflector evaluation in the field

    International Nuclear Information System (INIS)

    Siciliano, F.; Heumuller, R.

    1985-01-01

    In order to describe the time-of-flight behavior of various reflectors we have set up models and derived from them analytical and graphical approaches to reflector reconstruction. In the course of this work, the maximum achievable accuracy and possible simplifications were investigated. The aim of the time-of-flight reconstruction method is to determine the points of a reflector on the basis of a sound path function (sound path as a function of the probe index position). This method can only be used on materials which are isotropic in terms of sound velocity, since the method relies on time of flight being converted into sound path. This paper deals only with two-dimensional reconstruction; in other words, all statements relate to the plane of incidence. The method is based on the fact that the geometrical locus of the points equidistant from a certain probe index position is a circle. If circles with radii equal to the associated sound paths are drawn for various search unit positions, the points of intersection of the circles are the desired reflector points

  18. Development of a Compton camera for prompt-gamma medical imaging

    Science.gov (United States)

    Aldawood, S.; Thirolf, P. G.; Miani, A.; Böhmer, M.; Dedes, G.; Gernhäuser, R.; Lang, C.; Liprandi, S.; Maier, L.; Marinšek, T.; Mayerhofer, M.; Schaart, D. R.; Lozano, I. Valencia; Parodi, K.

    2017-11-01

    A Compton camera-based detector system for photon detection from nuclear reactions induced by proton (or heavier ion) beams is under development at LMU Munich, targeting the online range verification of the particle beam in hadron therapy via prompt-gamma imaging. The detector is designed to be capable of reconstructing the photon source origin not only from the Compton scattering kinematics of the primary photon, but also of tracking the secondary Compton-scattered electrons, thus enabling a γ-source reconstruction also from incompletely absorbed photon events. The Compton camera consists of a monolithic LaBr3:Ce scintillation crystal, read out by a multi-anode PMT acting as absorber, preceded by a stacked array of 6 double-sided silicon strip detectors as scatterers. The detector components have been characterized under both offline and online conditions. The LaBr3:Ce crystal exhibits excellent time and energy resolution. Using intense collimated 137Cs and 60Co sources, the monolithic scintillator was scanned on a fine 2D grid to generate a reference library of light amplitude distributions that allows the photon interaction position to be reconstructed using a k-Nearest Neighbour (k-NN) algorithm. Systematic studies were performed to investigate the performance of the reconstruction algorithm, revealing an improvement of the spatial resolution with increasing photon energy to an optimum value of 3.7(1) mm at 1.33 MeV, achieved with the Categorical Average Pattern (CAP) modification of the k-NN algorithm.
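    The k-NN position reconstruction from a reference library of light amplitude distributions can be sketched as follows. The number of readout channels, the grid size and k are placeholder assumptions, and the Categorical Average Pattern refinement mentioned in the record is not reproduced.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        # Reference library: one light-amplitude pattern per known scan position
        # (64 readout channels and a 50 mm grid are assumptions).
        rng = np.random.default_rng(1)
        library_patterns = rng.random((4000, 64))       # placeholder patterns
        library_positions = rng.random((4000, 2)) * 50  # known (x, y) in mm

        knn = NearestNeighbors(n_neighbors=20).fit(library_patterns)

        def reconstruct_position(event_pattern):
            """Estimate the photon interaction position as the mean position of
            the k reference patterns closest to the measured pattern."""
            _, idx = knn.kneighbors(event_pattern.reshape(1, -1))
            return library_positions[idx[0]].mean(axis=0)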

  19. Simultaneous Water Vapor and Dry Air Optical Path Length Measurements and Compensation with the Large Binocular Telescope Interferometer

    Science.gov (United States)

    Defrere, D.; Hinz, P.; Downey, E.; Boehm, M.; Danchi, W. C.; Durney, O.; Ertel, S.; Hill, J. M.; Hoffmann, W. F.; Mennesson, B.; hide

    2016-01-01

    The Large Binocular Telescope Interferometer uses a near-infrared camera to measure the optical path length variations between the two AO-corrected apertures and provide high-angular resolution observations for all its science channels (1.5-13 microns). There is however a wavelength dependent component to the atmospheric turbulence, which can introduce optical path length errors when observing at a wavelength different from that of the fringe sensing camera. Water vapor in particular is highly dispersive and its effect must be taken into account for high-precision infrared interferometric observations as described previously for VLTI/MIDI or the Keck Interferometer Nuller. In this paper, we describe the new sensing approach that has been developed at the LBT to measure and monitor the optical path length fluctuations due to dry air and water vapor separately. After reviewing the current performance of the system for dry air seeing compensation, we present simultaneous H-, K-, and N-band observations that illustrate the feasibility of our feed-forward approach to stabilize the path length fluctuations seen by the LBTI nuller.

  20. Diagnostics and camera strobe timers for hydrogen pellet injectors

    International Nuclear Information System (INIS)

    Bauer, M.L.; Fisher, P.W.; Qualls, A.L.

    1993-01-01

    Hydrogen pellet injectors have been used to fuel fusion experimental devices for the last decade. As part of developments to improve pellet production and velocity, various diagnostic devices were implemented, ranging from witness plates to microwave mass meters to high speed photography. This paper will discuss details of the various implementations of light sources, cameras, synchronizing electronics and other diagnostic systems developed at Oak Ridge for the Tritium Proof-of-Principle (TPOP) experiment at the Los Alamos National Laboratory's Tritium System Test Assembly (TSTA), a system built for the Oak Ridge Advanced Toroidal Facility (ATF), and the Tritium Pellet Injector (TPI) built for the Princeton Tokamak Fusion Test Reactor (TFTR). Although a number of diagnostic systems were implemented on each pellet injector, the emphasis here will be on the development of a synchronization system for high-speed photography using pulsed light sources, standard video cameras, and video recorders. This system enabled near real-time visualization of the pellet shape, size and flight trajectory over a wide range of pellet speeds and at one or two positions along the flight path. Additionally, the system provides synchronization pulses to the data system for pseudo points along the flight path, such as the estimated plasma edge. This was accomplished using an electronic system that took the time measured between sets of light gates, and generated proportionally delayed triggers for light source strobes and pseudo points. Systems were built with two camera stations, one located after the end of the barrel, and a second camera located closer to the main reactor vessel wall. Two or three light gates were used to sense pellet velocity and various spacings were implemented on the three experiments. Both analog and digital schemes were examined for implementing the delay system. A digital technique was chosen
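    The proportional-delay idea, measuring the transit time between two light gates and firing the strobe after a velocity-scaled delay, can be illustrated with a small calculation. The distances below are hypothetical example values, not the dimensions of the TPOP, ATF or TPI installations.

        def strobe_delay(t_gate1, t_gate2, gate_spacing_m, gate_to_camera_m):
            """Infer the pellet speed from the transit time between two light
            gates and return the delay (s) after the second gate at which the
            strobe should fire so the pellet is in the camera's field of view."""
            velocity = gate_spacing_m / (t_gate2 - t_gate1)   # pellet speed, m/s
            return gate_to_camera_m / velocity

        # Example: gates 0.10 m apart, camera station 0.45 m past the second gate,
        # 80 microsecond transit time -> pellet at 1250 m/s, strobe after ~360 us.
        delay = strobe_delay(0.0, 80e-6, 0.10, 0.45)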

  1. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of the velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time.

  2. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    International Nuclear Information System (INIS)

    Martins, Fabio J W A; Foucaut, Jean-Marc; Stanislas, Michel; Thomas, Lionel; Azevedo, Luis F A

    2015-01-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of the velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time. (paper)
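    The MART update at the heart of the algorithms compared in this study can be sketched in a few lines. The dense weighting matrix, relaxation factor and iteration count below are assumptions made for brevity; practical tomo-PIV codes use sparse weights derived from the calibrated voxel-pixel geometry of the cameras.

        import numpy as np

        def mart(W, I, n_voxels, n_iter=5, mu=1.0):
            """Minimal MART sketch: W is an (n_pixels, n_voxels) weighting matrix
            relating voxel intensities E to recorded pixel intensities I, and mu
            is the relaxation factor of the multiplicative update."""
            E = np.ones(n_voxels)
            for _ in range(n_iter):
                for i in range(W.shape[0]):
                    w = W[i]
                    proj = w @ E
                    if proj <= 0.0 or I[i] <= 0.0:
                        continue
                    # multiplicative correction for every voxel seen by pixel i
                    E *= (I[i] / proj) ** (mu * w)
            return E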

  3. SINGLE IMAGE CAMERA CALIBRATION IN CLOSE RANGE PHOTOGRAMMETRY FOR SOLDER JOINT ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. Heinemann

    2016-06-01

    Full Text Available Printed Circuit Boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct functioning of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close-range photogrammetry allows determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured, which allows for single-image camera calibration.

  4. Characterization of a PET Camera Optimized for Prostate Imaging

    International Nuclear Information System (INIS)

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi, Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, RonaldH.; Derenzo, Stephen E.

    2005-01-01

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The distance between the detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated prostate camera has the same sensitivity and resolution, less background (fewer randoms and a lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. The sensitivity for a point source in the center is 946 cps/µCi. The spatial resolution is 4 mm FWHM in the central region

  5. Spectral image reconstruction using an edge preserving spatio-spectral Wiener estimation.

    Science.gov (United States)

    Urban, Philipp; Rosen, Mitchell R; Berns, Roy S

    2009-08-01

    Reconstruction of spectral images from camera responses is investigated using an edge-preserving spatio-spectral Wiener estimation. A Wiener denoising filter and a spectral reconstruction Wiener filter are combined into a single spatio-spectral filter using local propagation of the noise covariance matrix. To preserve edges, the local mean and covariance matrix of the camera responses are estimated by bilateral weighting of neighboring pixels. We derive the edge-preserving spatio-spectral Wiener estimation by means of Bayesian inference and show that it reduces to the standard Wiener reflectance estimation, shifted by a constant reflectance, in the case of vanishing noise. Simulation experiments conducted on a six-channel camera system and on multispectral test images show the performance of the filter, especially in edge regions. A test implementation of the method is provided as a MATLAB script at the first author's website.
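    For reference, the standard (non-spatial) Wiener reflectance estimation that the proposed filter reduces to can be written as a one-line matrix expression; the edge-preserving spatio-spectral extension with bilaterally weighted local statistics is not reproduced here, and the matrix names are generic assumptions.

        import numpy as np

        def wiener_reflectance(c, A, Sigma_r, Sigma_n, r_mean):
            """Wiener estimate of a reflectance spectrum r from a camera response
            c = A r + noise, with A the (channels x wavelengths) system matrix,
            Sigma_r / Sigma_n the reflectance and noise covariances and r_mean
            the mean reflectance."""
            G = Sigma_r @ A.T @ np.linalg.inv(A @ Sigma_r @ A.T + Sigma_n)
            return r_mean + G @ (c - A @ r_mean)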

  6. Reconstructing building mass models from UAV images

    KAUST Repository

    Li, Minglei; Nan, Liangliang; Smith, Neil; Wonka, Peter

    2015-01-01

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first

  7. Globally Consistent Indoor Mapping via a Decoupling Rotation and Translation Algorithm Applied to RGB-D Camera Output

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2017-10-01

    Full Text Available This paper presents a novel RGB-D 3D reconstruction algorithm for the indoor environment. The method can produce globally consistent 3D maps for potential GIS applications. As consumer RGB-D cameras provide noisy depth images, the proposed algorithm decouples the rotation and translation for a more robust camera pose estimation, which makes full use of the information but also prevents inaccuracies caused by noisy depth measurements. The uncertainty in the image depth is related not only to the camera device but also to the environment; hence, a novel uncertainty model for depth measurements was developed using a Gaussian mixture applied to multiple windows. The plane features in the indoor environment contain valuable information about the global structure, which can guide the convergence of the camera pose solutions, so plane and feature point constraints are incorporated in the proposed optimization framework. The proposed method was validated using publicly available RGB-D benchmarks and obtained good-quality trajectories and 3D models, which are difficult for traditional 3D reconstruction algorithms.

  8. A universal multiprocessor system for the fast acquisition and processing of positron camera data

    International Nuclear Information System (INIS)

    Deluigi, B.

    1982-01-01

    In this study the main components of a suitable detection system were worked out and their properties examined. To measure the three-dimensional distribution of radiopharmaceuticals labelled with positron emitters in animal experiments, a positron camera was first constructed. The annihilation quanta are detected by two opposing position-sensitive gamma detectors operated in coincidence. Two commercial camera heads working according to the Anger principle were modified for this purpose and combined into the positron camera by a special interface. With this arrangement, a spatial resolution of 0.8 cm FWHM for a line source in the symmetry plane and a coincidence resolving time 2T of 16 ns FW0.1M were reached. For three-dimensional image reconstruction from positron-camera data, a maximum-likelihood procedure was developed and tested by a Monte Carlo procedure. With a view to this application, a highly flexible multi-microprocessor system was developed. A high computing capacity is reached by distributing several partial problems to different processors and processing them in parallel. The architecture was designed such that the system possesses a high error tolerance and that the computing capacity can be extended without any fundamental limit. (orig./HSI) [de]

  9. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting and different sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data and thus improving the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings, which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
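    A minimal sketch of the per-pixel mapping described above, from raw gray value, sensor temperature and exposure time to a corrected intensity, could look as follows. The feature set, network size and the synthetic training data are assumptions for illustration only.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic placeholder data for one pixel: raw 12-bit gray value,
        # sensor temperature (deg C) and exposure time (ms), with a toy
        # nonlinear "true intensity" as the regression target.
        rng = np.random.default_rng(2)
        X = np.column_stack([rng.uniform(0, 4095, 2000),
                             rng.uniform(20, 60, 2000),
                             rng.uniform(1, 50, 2000)])
        y = 0.95 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] / 50.0

        # A small MLP can capture temperature-dependent dark current and other
        # nonlinearities that a purely linear flat-field correction cannot.
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
        model.fit(X, y)
        corrected = model.predict(np.array([[1800.0, 35.0, 10.0]]))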

  10. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems; for example, the low detection sensitivities result in a very low probability of coincident triple gamma-ray detection, which is necessary for the source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of the images reconstructed using the triple-cone reconstruction (TCR) approach compares to images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from the standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.

  11. Ion movie camera for particle-beam-fusion experiments

    International Nuclear Information System (INIS)

    Stygar, W.A.; Mix, L.P.; Leeper, R.J.; Maenchen, J.; Wenger, D.F.; Mattson, C.R.; Muron, D.J.

    1992-01-01

    A camera with a 3 ns time resolution and a continuous (>100 ns) record length has been developed to image a 10¹²–10¹³ W/cm² ion beam for inertial-confinement-fusion experiments. A thin gold Rutherford-scattering foil placed in the path of the beam scatters ions into the camera. The foil is in a near-optimized scattering geometry and reduces the beam intensity by about seven orders of magnitude. The scattered ions are pinhole-imaged onto a 2D array of 39 p-i-n diode detectors; the outputs are recorded on LeCroy 6880 transient-waveform digitizers. The waveforms are analyzed and combined to produce a 39-pixel movie which can be displayed on an image processor to provide time-resolved horizontal- and vertical-focusing information

  12. Automatic Camera Orientation and Structure Recovery with Samantha

    Science.gov (United States)

    Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.

    2011-09-01

    SAMANTHA is a software capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process both calibrated images or uncalibrated, in which case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, less sensitive to the error accumulation causing drift. We have verified the quality of our reconstructions both qualitatively producing compelling point clouds and quantitatively, comparing them with laser scans serving as ground truth.

  13. 3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform

    Science.gov (United States)

    Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul

    2018-03-01

    This paper describes an approach to realizing the three-dimensional surfaces of objects with cylinder-based shapes, together with the techniques adopted and the strategy developed for the non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, by applying several digital image processing algorithms to the two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surfaces of the objects and is practicable. The shapes and sizes of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions of the objects involved in the test are supported by an analysis in which the maximum percentage error obtained from the computation is approximately 1.4% for the height, and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
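    The tomography-style step underlying the method, reconstructing a horizontal slice from projections taken at the thirty-six turntable angles, can be sketched with scikit-image's Radon transform routines. The synthetic disc used as a slice and the 10-degree angular step are illustrative assumptions.

        import numpy as np
        from skimage.transform import radon, iradon

        # Toy cross-section: a bright disc standing in for one horizontal slice
        # of a cylinder-like object on the turntable.
        size = 128
        yy, xx = np.mgrid[:size, :size]
        slice_img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)

        # 36 projections, one per 10-degree turntable step.
        theta = np.arange(0, 360, 10)
        sinogram = radon(slice_img, theta=theta)

        # Filtered backprojection recovers the slice; stacking reconstructed
        # slices over the object height yields the measurable 3D surface.
        reconstruction = iradon(sinogram, theta=theta)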

  14. Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging

    International Nuclear Information System (INIS)

    Virador, Patrick R.G.

    2000-01-01

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses for the first time, the problem of fully-3D, tomographic reconstruction using a septa-less, stationary, (i.e. no rotation or linear motion), and rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled which lead to missing information. The author presents new Fourier Methods based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of responses (LORs) between the measured interaction points instead of rebinning the events into predefined crystal face LORs which is the only other method to handle DOI information proposed thus far. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest neighbor smoothing in 2D in the radial bins (b) employ a semi-iterative procedure in order to estimate the unsampled data

  15. Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Virador, Patrick R.G. [Univ. of California, Berkeley, CA (United States)

    2000-04-01

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses for the first time, the problem of fully-3D, tomographic reconstruction using a septa-less, stationary, (i.e. no rotation or linear motion), and rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled which lead to missing information. The author presents new Fourier Methods based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of responses (LORs) between the measured interaction points instead of rebinning the events into predefined crystal face LORs which is the only other method to handle DOI information proposed thus far. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest neighbor smoothing in 2D in the radial bins (b) employ a semi-iterative procedure in order to estimate the unsampled data

  16. Using refraction in thick glass plates for optical path length modulation in low coherence interferometry.

    Science.gov (United States)

    Kröger, Niklas; Schlobohm, Jochen; Pösch, Andreas; Reithmeier, Eduard

    2017-09-01

    In Michelson interferometer setups the standard way to generate different optical path lengths between a measurement arm and a reference arm relies on expensive high precision linear stages such as piezo actuators. We present an alternative approach based on the refraction of light at optical interfaces using a cheap stepper motor with high gearing ratio to control the rotation of a glass plate. The beam path is examined and a relation between angle of rotation and change in optical path length is devised. As verification, an experimental setup is presented, and reconstruction results from a measurement standard are shown. The reconstructed step height from this setup lies within 1.25% of the expected value.
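
    For readers who want the textbook relation behind this idea, the sketch below evaluates the standard single-pass optical path difference of a plane-parallel plate of thickness t and index n tilted by the incidence angle theta; the paper's exact geometry (for example a double pass in a Michelson arm) may differ, so this is only an illustration under those assumptions.

      import numpy as np

      def plate_opd(theta, t, n):
          # Single-pass optical path difference of a plane-parallel plate of
          # thickness t and refractive index n at incidence angle theta (radians),
          # relative to the same gap filled with air (standard textbook relation).
          return t * (np.sqrt(n**2 - np.sin(theta)**2) - np.cos(theta))

      def opd_change_on_rotation(theta, t, n):
          # Change in optical path length when the plate is rotated away from
          # normal incidence; double it for a double-pass interferometer arm.
          return plate_opd(theta, t, n) - plate_opd(0.0, t, n)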

  17. Three-dimensional Reconstruction of Dust Particle Trajectories in the NSTX

    International Nuclear Information System (INIS)

    Boeglin, W.U.; Roquemore, A.L.; Maqueda, R.

    2009-01-01

    Highly mobile incandescent dust particles are routinely observed on NSTX using two fast cameras operating in the visible region. An analysis method to reconstruct dust particle trajectories in space using two fast cameras is presented in this paper. Position accuracies of a few millimeters depending on the particle's location have been achieved and particle velocities between 10 and 200 m/s have been observed
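
    A minimal sketch of the standard two-view linear triangulation that underlies this kind of stereo trajectory reconstruction is given below; it is not the authors' implementation, and P1, P2 are assumed 3x4 projection matrices obtained from camera calibration.

      import numpy as np

      def triangulate_point(P1, P2, x1, x2):
          # Linear (DLT) triangulation of a 3D point from its pixel positions
          # x1, x2 in two calibrated cameras with 3x4 projection matrices P1, P2.
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]        # inhomogeneous 3D coordinates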

  18. Novel, full 3D scintillation dosimetry using a static plenoptic camera

    Science.gov (United States)

    Goulet, Mathieu; Rilling, Madison; Gingras, Luc; Beddar, Sam; Beaulieu, Luc; Archambault, Louis

    2014-01-01

    Purpose: Patient-specific quality assurance (QA) of dynamic radiotherapy delivery would gain from being performed using a 3D dosimeter. However, 3D dosimeters, such as gels, have many disadvantages limiting to quality assurance, such as tedious read-out procedures and poor reproducibility. The purpose of this work is to develop and validate a novel type of high resolution 3D dosimeter based on the real-time light acquisition of a plastic scintillator volume using a plenoptic camera. This dosimeter would allow for the QA of dynamic radiation therapy techniques such as intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT). Methods: A Raytrix R5 plenoptic camera was used to image a 10 × 10 × 10 cm3 EJ-260 plastic scintillator embedded inside an acrylic phantom at a rate of one acquisition per second. The scintillator volume was irradiated with both an IMRT and VMAT treatment plan on a Clinac iX linear accelerator. The 3D light distribution emitted by the scintillator volume was reconstructed at a 2 mm resolution in all dimensions by back-projecting the light collected by each pixel of the light-field camera using an iterative reconstruction algorithm. The latter was constrained by a beam's eye view projection of the incident dose acquired using the portal imager integrated with the linac and by physical consideration of the dose behavior as a function of depth in the phantom. Results: The absolute dose difference between the reconstructed 3D dose and the expected dose calculated using the treatment planning software Pinnacle3 was on average below 1.5% of the maximum dose for both integrated IMRT and VMAT deliveries, and below 3% for each individual IMRT incidences. Dose agreement between the reconstructed 3D dose and a radiochromic film acquisition in the same experimental phantom was on average within 2.1% and 1.2% of the maximum recorded dose for the IMRT and VMAT delivery, respectively. Conclusions: Using plenoptic camera

  19. System Architecture of the Dark Energy Survey Camera Readout Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Theresa; /FERMILAB; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; /Barcelona, IFAE; Chappa, Steve; /Fermilab; de Vicente, Juan; /Madrid, CIEMAT; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; /Fermilab; Martinez, Gustavo; /Madrid, CIEMAT; Moore, Todd; /Madrid, CIEMAT /Fermilab /Illinois U., Urbana /Fermilab

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed on the Blanco 4-m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2Kx4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2Kx2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme and early results of system tests.

  20. Reconstruction dynamics of recorded holograms in photochromic glass

    Energy Technology Data Exchange (ETDEWEB)

    Mihailescu, Mona; Pavel, Eugen; Nicolae, Vasile B.

    2011-06-20

    We have investigated the dynamics of the record-erase process of holograms in photochromic glass using continuous-wave Nd:YVO₄ laser radiation (λ = 532 nm). A two-dimensional microgrid pattern was formed and visualized in photochromic glass, and the decay of its diffraction efficiency versus time (during the reconstruction step) gave us information (D, Δn) about the diffusion process inside the material. The recording and reconstruction processes were carried out in an off-axis setup, and the images of the reconstructed object were recorded by a CCD camera. Measurements performed on reconstructed object images, using holograms recorded at different incident laser powers, have shown a two-stage process involved in the silver atom kinetics.

  1. Indoor calibration for stereoscopic camera STC: a new method

    Science.gov (United States)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of the planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand new concept to minimize mass and volume and to allow push-frame imaging. This model required the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other hand, it allows replicating different viewing angles for the considered targets. Neglecting the curvature of Mercury for the sake of simplicity, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  2. Radiometric calibration of wide-field camera system with an application in astronomy

    Science.gov (United States)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    Camera response function (CRF) is widely used for the description of the relationship between scene radiance and image brightness. The most common application of CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to their nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single-Lens Reflex (DSLR) camera.
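
    As one concrete example of the CRF-based HDR workflow discussed above, the sketch below uses OpenCV's Debevec calibration and merge; the file names and exposure times are placeholders, and this is only one of the algorithm families compared in such studies.

      import cv2
      import numpy as np

      # Hypothetical exposure bracket; the times (in seconds) must match the frames.
      files = ["exp_0.01s.png", "exp_0.1s.png", "exp_1s.png"]
      times = np.array([0.01, 0.1, 1.0], dtype=np.float32)
      frames = [cv2.imread(f) for f in files]

      # Estimate the camera response function (Debevec & Malik), then merge the
      # bracket into a radiance map using that CRF.
      crf = cv2.createCalibrateDebevec().process(frames, times)
      hdr = cv2.createMergeDebevec().process(frames, times, crf)
      cv2.imwrite("radiance_map.hdr", hdr)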

  3. 3D Point Cloud Reconstruction from Single Plenoptic Image

    Directory of Open Access Journals (Sweden)

    F. Murgia

    2016-06-01

    Full Text Available Novel plenoptic cameras sample the light field crossing the main camera lens. The information available in a plenoptic image must be processed in order to create the depth map of the scene from a single camera shot. In this paper a novel algorithm for the reconstruction of the 3D point cloud of a scene from a single plenoptic image, taken with a consumer plenoptic camera, is proposed. Experimental analysis is conducted on several test images, and results are compared with state-of-the-art methodologies. The results are very promising, as the quality of the 3D point cloud from a plenoptic image is comparable with the quality obtained with current non-plenoptic methodologies that require more than one image.

  4. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach

    Directory of Open Access Journals (Sweden)

    Kelly de Jesus

    2015-01-01

    Full Text Available This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points, with 8 common points at the water surface, and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1600000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers (respectively). The Root Mean Square (RMS) error of control and validation points was lower with homography than without it for surface and underwater cameras (P≤0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P≥0.47). Without homography, the RMS error of control points was greater for underwater than for surface cameras (P≤0.04) and the opposite was observed for validation points (P≤0.04). It is recommended that future studies using 3D reconstruction should include homography to improve swimming movement analysis accuracy.
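
    The following minimal sketch shows the planar homography estimation step used for image rectification, with OpenCV; the point coordinates are placeholders for illustration, not values from the study.

      import cv2
      import numpy as np

      # Four (or more) correspondences between pixel positions of control points on
      # one calibration plane and their known metric positions on that plane.
      img_pts = np.array([[102, 88], [530, 95], [520, 410], [110, 400]], dtype=np.float32)
      plane_pts = np.array([[0, 0], [2000, 0], [2000, 1500], [0, 1500]], dtype=np.float32)

      # Planar homography from image to plane, used to rectify the image before the
      # direct linear transformation (DLT) step of the 3D reconstruction.
      H, _ = cv2.findHomography(img_pts, plane_pts)
      print(H)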

  5. Real-time pedestrian detection with the videos of car camera

    Directory of Open Access Journals (Sweden)

    Yunling Zhang

    2015-12-01

    Full Text Available Pedestrians in the vehicle path are in danger of being hit, with the risk of severe injury to pedestrians and vehicle occupants. Therefore, real-time pedestrian detection with the video of a vehicle-mounted camera is of great significance to vehicle–pedestrian collision warning and the traffic safety of self-driving cars. In this article, a real-time scheme is proposed based on integral channel features and a graphics processing unit. The proposed method does not need to resize the input image. Moreover, the computationally expensive convolution of the detectors with the input image is converted into the dot product of two larger matrices, which can be computed efficiently using a graphics processing unit. The experiments showed that the proposed method can detect pedestrians in the video of a car camera at 20+ frames per second with acceptable error rates. Thus, it can be applied in real-time detection tasks with the videos of car cameras.
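
    The conversion of detector-to-image convolution into a single matrix product, the trick that makes GPU implementations of such detectors efficient, can be sketched as follows; this is a generic im2col illustration, not the authors' code.

      import numpy as np

      def im2col(image, kh, kw):
          # Rearrange all kh x kw patches of a 2D image into the columns of a matrix,
          # so that evaluating a bank of detector templates becomes one matrix product.
          H, W = image.shape
          cols = []
          for i in range(H - kh + 1):
              for j in range(W - kw + 1):
                  cols.append(image[i:i + kh, j:j + kw].ravel())
          return np.array(cols).T                     # (kh*kw, n_positions)

      def correlate_all(image, filters, kh, kw):
          # filters: each row is one flattened kh x kw detector template (hypothetical).
          return filters @ im2col(image, kh, kw)      # (n_filters, n_positions)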

  6. AUTOMATIC CAMERA ORIENTATION AND STRUCTURE RECOVERY WITH SAMANTHA

    Directory of Open Access Journals (Sweden)

    R. Gherardi

    2012-09-01

    Full Text Available SAMANTHA is a software package capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, is inherently parallel, and is less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, comparing them with laser scans serving as ground truth.

  7. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
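
    A minimal sketch of frame-to-frame motion estimation from tracked surface points via two-view epipolar geometry, using OpenCV and an assumed intrinsic matrix K, is shown below; it illustrates the general approach rather than the authors' algorithm.

      import cv2
      import numpy as np

      def frame_to_frame_motion(pts_prev, pts_curr, K):
          # Estimate the relative camera rotation R and (unit-scale) translation t
          # between two frames from tracked image points via the essential matrix.
          E, mask = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                         method=cv2.RANSAC, threshold=1.0)
          _, R, t, mask = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=mask)
          return R, t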

  8. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This offset causes problems when stitching together individual frames of video extracted from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than required for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in one Polish region of Czarny Dunajec, and the measurements from the panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second application is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  9. Strategic options towards an affordable high-performance infrared camera

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512 pixel uncooled InGaAs system with high sensitivity, low noise, high speed (500 frames per second (FPS)) at full resolution, and low power consumption. This supports market adoption by not only demonstrating the high-performance IR imaging capability and value-add demanded by military and industrial applications, but also by illuminating a path towards justifiable price points essential for adoption in consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, readout electronics compatible with multiple focal plane arrays, and dense or ultra-small pixel pitch devices.

  10. CHALLENGES IN FLYING QUADROTOR UNMANNED AERIAL VEHICLE FOR 3D INDOOR RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    J. Yan

    2017-09-01

    Full Text Available Three-dimensional modelling plays a vital role in indoor 3D tracking, navigation, guidance and emergency evacuation. Reconstruction of indoor 3D models is still problematic, in part, because indoor spaces provide challenges less-documented than their outdoor counterparts. Challenges include obstacles curtailing image and point cloud capture, restricted accessibility and a wide array of indoor objects, each with unique semantics. Reconstruction of indoor environments can be achieved through a photogrammetric approach, e.g. by using image frames, aligned using recurring corresponding image points (CIP) to build coloured point clouds. Our experiments were conducted by flying a QUAV in three indoor environments and later reconstructing 3D models which were analysed under different conditions. Point clouds and meshes were created using Agisoft PhotoScan Professional. We concentrated on flight paths from two vantage points: 1) safety and security while flying indoors and 2) data collection needed for reconstruction of 3D models. We surmised that the main challenges in providing safe flight paths are related to the physical configuration of indoor environments, privacy issues, the presence of people and light conditions. We observed that the quality of recorded video used for 3D reconstruction has a high dependency on surface materials, wall textures and object types being reconstructed. Our results show that 3D indoor reconstruction predicated on video capture using a QUAV is indeed feasible, but close attention should be paid to flight paths and conditions ultimately influencing the quality of 3D models. Moreover, it should be decided in advance which objects need to be reconstructed, e.g. bare rooms or detailed furniture.

  11. Challenges in Flying Quadrotor Unmanned Aerial Vehicle for 3d Indoor Reconstruction

    Science.gov (United States)

    Yan, J.; Grasso, N.; Zlatanova, S.; Braggaar, R. C.; Marx, D. B.

    2017-09-01

    Three-dimensional modelling plays a vital role in indoor 3D tracking, navigation, guidance and emergency evacuation. Reconstruction of indoor 3D models is still problematic, in part, because indoor spaces provide challenges less-documented than their outdoor counterparts. Challenges include obstacles curtailing image and point cloud capture, restricted accessibility and a wide array of indoor objects, each with unique semantics. Reconstruction of indoor environments can be achieved through a photogrammetric approach, e.g. by using image frames, aligned using recurring corresponding image points (CIP) to build coloured point clouds. Our experiments were conducted by flying a QUAV in three indoor environments and later reconstructing 3D models which were analysed under different conditions. Point clouds and meshes were created using Agisoft PhotoScan Professional. We concentrated on flight paths from two vantage points: 1) safety and security while flying indoors and 2) data collection needed for reconstruction of 3D models. We surmised that the main challenges in providing safe flight paths are related to the physical configuration of indoor environments, privacy issues, the presence of people and light conditions. We observed that the quality of recorded video used for 3D reconstruction has a high dependency on surface materials, wall textures and object types being reconstructed. Our results show that 3D indoor reconstruction predicated on video capture using a QUAV is indeed feasible, but close attention should be paid to flight paths and conditions ultimately influencing the quality of 3D models. Moreover, it should be decided in advance which objects need to be reconstructed, e.g. bare rooms or detailed furniture.

  12. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang's calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, their extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and the direct use of the projection model would produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
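
    A basic building block of such refractive ray tracing is the vector form of Snell's law; a minimal sketch (not taken from the paper) is given below.

      import numpy as np

      def refract(d, normal, n1, n2):
          # Refract a unit direction vector d at a surface with unit normal pointing
          # towards the incoming ray, from medium n1 into medium n2, using the vector
          # form of Snell's law. Returns None on total internal reflection.
          d = d / np.linalg.norm(d)
          n = normal / np.linalg.norm(normal)
          eta = n1 / n2
          cos_i = -np.dot(n, d)
          sin2_t = eta**2 * (1.0 - cos_i**2)
          if sin2_t > 1.0:
              return None                    # total internal reflection
          return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n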

  13. Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction.

    Science.gov (United States)

    Lefloch, Damien; Kluge, Markus; Sarbolandi, Hamed; Weyrich, Tim; Kolb, Andreas

    2017-12-01

    Interactive real-time scene acquisition from hand-held depth cameras has recently developed much momentum, enabling applications in ad-hoc object acquisition, augmented reality and other fields. A key challenge to online reconstruction remains error accumulation in the reconstructed camera trajectory, due to drift-inducing instabilities in the range scan alignments of the underlying iterative-closest-point (ICP) algorithm. Various strategies have been proposed to mitigate that drift, including SIFT-based pre-alignment, color-based weighting of ICP pairs, stronger weighting of edge features, and so on. In our work, we focus on surface curvature as a feature that is detectable on range scans alone and hence does not depend on accurate multi-sensor alignment. In contrast to previous work that took curvature into consideration, however, we treat curvature as an independent quantity that we consistently incorporate into every stage of the real-time reconstruction pipeline, including densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering. Using multiple benchmark sequences, and in direct comparison to other state-of-the-art online acquisition systems, we show that our approach significantly reduces drift, both when analyzing individual pipeline stages in isolation, as well as seen across the online reconstruction pipeline as a whole.

  14. Live event reconstruction in an optically read out GEM-based TPC

    Science.gov (United States)

    Brunbauer, F. M.; Galgóczi, G.; Gonzalez Diaz, D.; Oliveri, E.; Resnati, F.; Ropelewski, L.; Streli, C.; Thuiner, P.; van Stenis, M.

    2018-04-01

    Combining strong signal amplification made possible by Gaseous Electron Multipliers (GEMs) with the high spatial resolution provided by optical readout, highly performing radiation detectors can be realized. An optically read out GEM-based Time Projection Chamber (TPC) is presented. The device permits 3D track reconstruction by combining the 2D projections obtained with a CCD camera with timing information from a photomultiplier tube. Owing to the intuitive 2D representation of the tracks in the images and to automated control, data acquisition and event reconstruction algorithms, the optically read out TPC permits live display of reconstructed tracks in three dimensions. An Ar/CF4 (80/20%) gas mixture was used to maximize scintillation yield in the visible wavelength region matching the quantum efficiency of the camera. The device is integrated in a UHV-grade vessel allowing for precise control of the gas composition and purity. Long term studies in sealed mode operation revealed a minor decrease in the scintillation light intensity.
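
    Schematically, the 3D track points are obtained by combining the camera's 2D projection with the photomultiplier timing through the drift velocity; the sketch below illustrates this principle with hypothetical names and units, not the detector's actual reconstruction code.

      import numpy as np

      def reconstruct_track_3d(xy_pixels, hit_times, t0, drift_velocity, mm_per_pixel):
          # x and y come from the camera image, z from the drift time multiplied by
          # the drift velocity; returns an (N, 3) array of track points in mm.
          xy = np.asarray(xy_pixels, float) * mm_per_pixel
          z = (np.asarray(hit_times, float) - t0) * drift_velocity
          return np.column_stack([xy, z])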

  15. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the Slave cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in a medical endoscopic context, such as endoscopic surgical robotics or micro-invasive surgery.
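
    The regulation idea can be illustrated with a simple proportional control step in which the supply voltage is nudged according to the line-period error; the gain, voltage limits and sign convention below are hypothetical placeholders, not values from the paper.

      def sync_control_step(measured_line_period, target_line_period,
                            current_voltage, gain=0.001, v_min=1.6, v_max=2.0):
          # One iteration of an illustrative feedback loop: adjust the sensor supply
          # voltage in proportion to the line-period error so the self-timed camera
          # converges to the desired frequency. Assumes a higher voltage shortens
          # the line period (placeholder assumption).
          error = measured_line_period - target_line_period
          new_voltage = current_voltage + gain * error
          return min(max(new_voltage, v_min), v_max)   # clamp to a safe supply range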

  16. Pseudo real-time coded aperture imaging system with intensified vidicon cameras

    International Nuclear Information System (INIS)

    Han, K.S.; Berzins, G.J.

    1977-01-01

    A coded image displayed on a TV monitor was used to directly reconstruct a decoded image. Both the coded and the decoded images were viewed with intensified vidicon cameras. The coded aperture was a 15-element nonredundant pinhole array. The coding and decoding were accomplished simultaneously during the scanning of a single 16-msec TV frame

  17. A triple GEM gamma camera for medical application

    Energy Technology Data Exchange (ETDEWEB)

    Anulli, F. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Balla, A. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Bencivenni, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Corradi, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); D' Ambrosio, C. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Domenici, D. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Felici, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Gatta, M. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Morone, M.C. [Dipartimento di Biopatologia e Diagnostica per immagini, Universita di Roma Tor Vergata (Italy); INFN - Sezione di Roma Tor Vergata (Italy); Murtas, F. [Laboratori Nazionali di Frascati INFN, Frascati (Italy)]. E-mail: fabrizio.murtas@lnf.infn.it; Schillaci, O. [Dipartimento di Biopatologia e Diagnostica per immagini, Universita di Roma Tor Vergata (Italy)

    2007-03-01

    A 10×10 cm² gamma camera for medical applications has been built using a triple GEM chamber prototype. The photon converters, placed in front of the three GEM foils, have been realized with different technologies. The chamber, supplied with high voltage through a new active divider made in Frascati, is read out through 64 pads, 1 mm² wide, organized in an 8 cm long row, with the LHCb ASDQ chip. This gamma camera can be used both for X-ray movies and for PET-SPECT imaging; the chamber prototype is placed in a scanner system, creating images of 8×8 cm². Several measurements have been performed using phantoms and radioactive sources of Tc-99m (140 keV) and Na-22 (511 keV). Results on spatial resolution and image reconstruction are presented.

  18. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was also carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed and the overall result of the analysis is concluded according to the prototype of the imaging platform.
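
    The core of the silhouette-based (visual hull) reconstruction can be sketched as voxel carving against the silhouette masks; the function below is a generic illustration assuming calibrated 3x4 projection matrices, not the authors' implementation.

      import numpy as np

      def carve_visual_hull(voxels, silhouettes, projections):
          # Keep only voxels whose projection falls inside every binary silhouette.
          # voxels: (N, 3) candidate centres; silhouettes: list of binary images;
          # projections: list of 3x4 camera matrices.
          keep = np.ones(len(voxels), dtype=bool)
          hom = np.hstack([voxels, np.ones((len(voxels), 1))])   # homogeneous coords
          for mask, P in zip(silhouettes, projections):
              uvw = hom @ P.T
              u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
              v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
              inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
              in_sil = np.zeros(len(voxels), dtype=bool)
              in_sil[inside] = mask[v[inside], u[inside]] > 0
              keep &= in_sil
          return voxels[keep]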

  19. Tomographic reconstruction of OH* chemiluminescence in two interacting turbulent flames

    International Nuclear Information System (INIS)

    Worth, Nicholas A; Dawson, James R

    2013-01-01

    The tomographic reconstruction of OH* chemiluminescence was performed on two interacting turbulent premixed bluff-body stabilized flames under steady flow conditions and acoustic excitation. These measurements elucidate the complex three-dimensional (3D) vortex–flame interactions which have previously not been accessible. The experiment was performed using a single camera and intensifier, with multiple views acquired by repositioning the camera, permitting calculation of the mean and phase-averaged volumetric OH* distributions. The reconstructed flame structure and phase-averaged dynamics are compared with OH planar laser-induced fluorescence and flame surface density measurements for the first time. The volumetric data revealed that the large-scale vortex–flame structures formed along the shear layers of each flame collide when the two flames meet, resulting in complex 3D flame structures in between the two flames. With a fairly simple experimental setup, it is shown that the tomographic reconstruction of OH* chemiluminescence in forced flames is a powerful tool that can yield important physical insights into large-scale 3D flame dynamics that are important in combustion instability. (paper)

  20. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    Energy Technology Data Exchange (ETDEWEB)

    Halama, J. [Loyola Univ. Medical Center (United States)

    2016-06-15

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images

  1. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    International Nuclear Information System (INIS)

    Halama, J.

    2016-01-01

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images

  2. A complete system for 3D reconstruction of roots for phenotypic analysis.

    Science.gov (United States)

    Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J

    2015-01-01

    Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated as it is self-calibrating. The system starts with the detection of root tips in root images from an image sequence generated by a turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points on the root boundary, together with a Bayes classification rule. The detected root tips are tracked in the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm which weights the data points by eccentricity. The conics projected from the circular trajectories have complex conjugate intersections, which are the images of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are used to reconstruct a 3D voxel model of the roots. We show results of real 3D reconstructions of roots which are detailed and realistic enough for phenotypic analysis.

  3. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  4. A simple data loss model for positron camera systems

    International Nuclear Information System (INIS)

    Eriksson, L.; Dahlbom, M.

    1994-01-01

    A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead time corrections in existing scanners, although this is possible. Instead, the model is intended to be used for data simulations in order to determine the figures of merit of future camera systems based on state-of-the-art data handling solutions. The model assumes the data loss to be factorized into two components, one describing the detector or block-detector performance and the other the remaining data handling, such as coincidence determination, data transfer and data storage. Two modern positron camera systems have been investigated in terms of this model. These are the Siemens-CTI ECAT EXACT and ECAT EXACT HR systems, which both have an axial field-of-view (FOV) of about 15 cm. They both have retractable septa, can acquire data from the whole volume within the FOV and can reconstruct volume image data. An example is given of how to use the model for live-time calculation in a futuristic large axial FOV cylindrical system
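
    The abstract only states that the loss factorizes into a detector term and a data-handling term; as a purely illustrative stand-in, the sketch below uses non-paralyzable dead-time factors for both components (the paper's actual expressions may differ).

      def observed_rate(true_rate, tau_detector, tau_processing):
          # Toy factorized data-loss model: the observed rate is the true rate
          # multiplied by a detector live-time factor and a data-handling live-time
          # factor. Non-paralyzable dead-time forms are used here only to illustrate
          # the factorization described in the abstract.
          live_detector = 1.0 / (1.0 + true_rate * tau_detector)
          live_processing = 1.0 / (1.0 + true_rate * tau_processing)
          return true_rate * live_detector * live_processing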

  5. Structured Light-Based 3D Reconstruction System for Plants

    OpenAIRE

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud regi...

  6. An ebCMOS camera system for marine bioluminescence observation: The LuSEApher prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dominjon, A., E-mail: a.dominjon@ipnl.in2p3.fr [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Ageron, M. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Billault, M.; Brunner, J. [CNRS/IN2P3, Centre de Physique des Particules de Marseille, Marseille, F-13288 (France); Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Calabria, P. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Chabanat, E. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France); Universite de Lyon, Universite Lyon 1, Lyon F-69003 (France); Chaize, D.; Doan, Q.T.; Guerin, C.; Houles, J.; Vagneron, L. [CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Villeurbanne F-69622 (France)

    2012-12-11

    The ebCMOS camera, called LuSEApher, is a marine bioluminescence recording device adapted to extremely low light levels. This prototype is based on the skeleton of the LUSIPHER camera system originally developed for fluorescence imaging. It has been installed at 2500 m depth off the Mediterranean shore on the site of the ANTARES neutrino telescope. The LuSEApher camera is mounted on the Instrumented Interface Module connected to the ANTARES network for environmental science purposes (European Seas Observatory Network). The LuSEApher is a self-triggered photo-detection system with photon counting ability. The device is presented and its performance, such as single-photon reconstruction, noise characteristics and trigger strategy, is described. The first recorded movies of bioluminescence are analyzed. To our knowledge, such events have never been recorded with this sensitivity and at this frame rate. We believe that this camera concept could open a new window on bioluminescence studies in the deep sea.

  7. Simple method of modelling of digital holograms registering and their optical reconstruction

    International Nuclear Information System (INIS)

    Evtikhiev, N N; Cheremkhin, P A; Krasnov, V V; Kurbatova, E A; Molodtsov, D Yu; Porshneva, L A; Rodin, V G

    2016-01-01

    A technique for modelling the recording of digital holograms and the optical reconstruction of images from these holograms is described. The method takes into account the characteristics of the object, the digital camera's photosensor and the spatial light modulator used to display the digital holograms. Using the technique, equipment can be chosen for experiments so as to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted. (paper)

  8. Be Foil ''Filter Knee Imaging'' NSTX Plasma with Fast Soft X-ray Camera

    International Nuclear Information System (INIS)

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-01-01

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of an m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip

  9. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    Science.gov (United States)

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous research attempts of using AR technology in monocular MIS surgical scenes have been mainly focused on the information overlay without addressing correct spatial calibrations, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real time processing of the point clouds data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement AR augmentations
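
    To give a feel for the dense-surface step, the sketch below runs normal estimation followed by Poisson surface reconstruction on an unorganized point cloud with the Open3D library; Open3D, the parameter values and the file names are stand-ins chosen for illustration and are not stated in the paper.

      import open3d as o3d

      # Unorganized sparse point cloud, e.g. exported SLAM map points (placeholder file name).
      pcd = o3d.io.read_point_cloud("slam_map_points.ply")
      pcd = pcd.voxel_down_sample(voxel_size=0.002)
      pcd.estimate_normals(
          search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
      # Poisson surface reconstruction builds a watertight mesh from the oriented points.
      mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
          pcd, depth=8)
      o3d.io.write_triangle_mesh("dense_surface.ply", mesh)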

  10. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be implemented for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  11. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

    In this book, the design of two new planar patterns for camera calibration of intrinsic parameters is addressed and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated, and 3D Euclidean reconstruction using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homography matrix and fundamental matrix are extensively studied. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can easily be incorporated to improve the reliability of the estimations. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision and related subjects.

  12. Digital reconstruction of Young's fringes using Fresnel transformation

    Science.gov (United States)

    Kulenovic, Rudi; Song, Yaozu; Renninger, P.; Groll, Manfred

    1997-11-01

    This paper deals with the digital numerical reconstruction of Young's fringes from laser speckle photography by means of the Fresnel transformation. The physical model of the optical reconstruction of a specklegram is a near-field Fresnel diffraction phenomenon which can be mathematically described by the Fresnel transformation. Therefore, the interference phenomena can be calculated directly on a microcomputer. If, in addition, a CCD camera is used for specklegram recording, the measurement procedure and evaluation process can be carried out completely digitally. Compared with conventional laser speckle photography, no holographic plates, no wet development process and no optical specklegram reconstruction are needed. These advantages point to a wide range of future scientific and engineering applications. The basic principle of the numerical reconstruction is described, the effects of the experimental parameters on Young's fringes are analyzed and representative results are presented.
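
    A minimal single-FFT Fresnel propagation, the kind of numerical transform that lets Young's fringes be reconstructed on a computer, can be sketched as follows; the discretization details of the paper may differ, and a constant output-plane phase factor (irrelevant for the fringe intensity) is omitted.

      import numpy as np

      def fresnel_transform(field, wavelength, z, dx):
          # Single-FFT Fresnel (near-field) propagation of a sampled complex field
          # over distance z; dx is the sample pitch. Generic textbook discretization.
          n = field.shape[0]
          k = 2 * np.pi / wavelength
          x = (np.arange(n) - n // 2) * dx
          X, Y = np.meshgrid(x, x)
          chirp = np.exp(1j * k / (2 * z) * (X**2 + Y**2))   # quadratic phase factor
          return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * chirp)))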

  13. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    International Nuclear Information System (INIS)

    Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-01-01

    Background: Brain SPECT and PET investigations have showed discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its function. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99m Tc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with

  14. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-08-01

    Background: Brain SPECT and PET investigations have showed discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its function. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0

  15. Oblique Multi-Camera Systems - Orientation and Dense Matching Issues

    Science.gov (United States)

    Rupnik, E.; Nex, F.; Remondino, F.

    2014-03-01

    The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users allowing their use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of the actual oblique commercial systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  16. Scattered and Fluorescent Photon Track Reconstruction in a Biological Tissue

    Directory of Open Access Journals (Sweden)

    Maria N. Kholodtsova

    2014-01-01

    Full Text Available Appropriate analysis of deep regions of biological tissue is important for tumor targeting. This paper concentrates on the analysis of photon paths in biological tissue such as the brain, because the optical probing depths of fluorescent and excitation radiation differ. A method for photon track reconstruction was developed. Images were captured focusing on the transparent wall that lies close and parallel to the source fibres placed in brain tissue phantoms. The images were processed to reconstruct the most probable photon paths between two fibres. Results were compared with Monte Carlo simulations and with the diffusion approximation of the radiative transfer equation. It was shown that the optical probing depth of the excitation radiation is twice that of the fluorescent photons. The way fluorescent radiation spreads was discussed. Because fluorescent and excitation radiation spread in different ways, an effective anisotropy factor, geff, was proposed for fluorescent radiation. For the brain tissue phantoms it was found to be 0.62±0.05 and 0.66±0.05 for irradiation wavelengths of 532 nm and 632.8 nm, respectively. These calculations give more accurate information about the tumor location in biotissue. Reconstruction of photon paths allows determination of the fluorescent and excitation probing depths, and geff can be used as a simplified parameter for calculating the fluorescence probing depth.

  17. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images in a single camera position. Aligning two planar mirrors with the angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by the digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and camera calibration computation takes about 1 min, after the measurement of calibration points. The positioning accuracy with the maximum error of 1.19 mm is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for the EEG positioning.
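
    The core geometric step implied by this record is the triangulation of each matched electrode from two (or more) of the mirror views. Below is a minimal sketch of standard linear (DLT) triangulation; the projection matrices and pixel coordinates are assumed inputs from the calibration step mentioned in the abstract, and this is not the authors' own code.

        import numpy as np

        def triangulate_point(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one electrode.
            P1, P2: 3x4 camera projection matrices for two views;
            x1, x2: (u, v) pixel coordinates of the electrode in each view."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            # The 3D point is the right singular vector of A with the smallest
            # singular value, converted from homogeneous to Euclidean coordinates.
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]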

  18. Precise shape reconstruction by active pattern in total-internal-reflection-based tactile sensor.

    Science.gov (United States)

    Saga, Satoshi; Taira, Ryosuke; Deguchi, Koichiro

    2014-03-01

    We are developing a total-internal-reflection-based tactile sensor in which the shape is reconstructed using an optical reflection. This sensor consists of silicone rubber, an image pattern, and a camera. It reconstructs the shape of the sensor surface from an image of a pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction by employing an optimization method. Furthermore, we propose to use active patterns. Deformation of the reflection image causes reconstruction errors. By controlling the image pattern, the sensor reconstructs the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor, and perform reconstruction experiments using the system. A precise deformation experiment confirms the linearity and precision of the reconstruction.

  19. First Test Of A New High Resolution Positron Camera With Four Area Detectors

    Science.gov (United States)

    van Laethem, E.; Kuijk, M.; Deconinck, Frank; van Miert, M.; Defrise, Michel; Townsend, D.; Wensveen, M.

    1989-10-01

    A PET camera consisting of two pairs of parallel area detectors has been installed at the cyclotron unit of VUB. The detectors are High Density Avalanche Chambers (HIDAC) wire-chambers with a stack of 4 or 6 lead gamma-electron converters, the sensitive area being 30 by 30 cm. The detectors are mounted on a commercial gantry allowing a 180 degree rotation during acquisition, as needed for a fully 3D image reconstruction. The camera has been interfaced to a token-ring computer network consisting of 5 workstations among which the various tasks (acquisition, reconstruction, display) can be distributed. Each coincident event is coded in 48 bits and is transmitted to the computer bus via a 512 kbytes dual ported buffer memory allowing data rates of up to 50 kHz. Fully 3D image reconstruction software has been developed, and includes new reconstruction algorithms allowing a better utilization of the available projection data. Preliminary measurements and imaging of phantoms and small animals (with 18FDG) have been performed with two of the four detectors mounted on the gantry. They indicate the expected 3D isotropic spatial resolution of 3.5 mm (FWHM, line source in air) and a sensitivity of 4 cps/μCi for a centred point source in air, corresponding to typical data rates of a few kHz. This latter figure is expected to improve by a factor of 4 after coupling of the second detector pair, since the coincidence sensitivity of this second detector pair is a factor 3 higher than that of the first one.

  20. Counting neutrons with a commercial S-CMOS camera

    Science.gov (United States)

    Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega

    2018-01-01

    It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, and off line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We will report on this ongoing work which is a result from the collaboration between ESS Bilbao and the ILL. The main progress to be reported is situated on the level of the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, it is necessary that all the processing happens on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it in a reasonable "neutron impact" data flow like from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather important, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with and results will be shown for different setups improving upon the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be at the same time efficient in their recognition of neutron signals, in their rejection of noise signals (internal and external to the camera) but also have to be simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we will give an overview of the part of the road that has already been walked.
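
    As an illustration of the off-line flash counting idea described above, the sketch below thresholds a dark-subtracted frame and counts connected bright blobs. The threshold and minimum blob size are illustrative assumptions, not the ILL/ESS Bilbao settings, and the on-line FPGA implementation follows a different, simpler code path.

        import numpy as np
        from scipy import ndimage

        def count_neutron_flashes(frame, dark, threshold=30.0, min_pixels=2):
            signal = frame.astype(float) - dark        # remove dark/fixed-pattern level
            mask = signal > threshold                  # keep pixels well above noise
            labels, n_blobs = ndimage.label(mask)      # group touching pixels into blobs
            sizes = ndimage.sum(mask, labels, range(1, n_blobs + 1))
            return int(np.sum(np.asarray(sizes) >= min_pixels))  # blobs large enough to be impacts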

  1. Counting neutrons with a commercial S-CMOS camera

    Directory of Open Access Journals (Sweden)

    Patrick Van Esch

    2018-01-01

    Full Text Available It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, and off line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We will report on this ongoing work which is a result from the collaboration between ESS Bilbao and the ILL. The main progress to be reported is situated on the level of the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, it is necessary that all the processing happens on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it in a reasonable “neutron impact” data flow like from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather important, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with and results will be shown for different setups improving upon the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be at the same time efficient in their recognition of neutron signals, in their rejection of noise signals (internal and external to the camera) but also have to be simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we will give an overview of the part of the road that has already been walked.

  2. Gamma-ray detection and Compton camera image reconstruction with application to hadron therapy

    International Nuclear Information System (INIS)

    Frandes, M.

    2010-09-01

    A novel technique for radiotherapy - hadron therapy - irradiates tumors using a beam of protons or carbon ions. Hadron therapy is an effective technique for cancer treatment, since it enables accurate dose deposition due to the existence of a Bragg peak at the end of the particles' range. Precise knowledge of the fall-off position of the dose with millimeter accuracy is critical since hadron therapy has proved its efficiency for tumors which are deep-seated, close to vital organs, or radio-resistant. A major challenge for hadron therapy is the quality assurance of dose delivery during irradiation. Current systems applying positron emission tomography (PET) technologies exploit gamma rays from the annihilation of positrons emitted during the beta decay of radioactive isotopes. However, the generated PET images allow only post-therapy information about the deposited dose. In addition, they are not in direct coincidence with the Bragg peak. A solution is to image the complete spectrum of the emitted gamma rays, including nuclear gamma rays emitted by nuclei excited in inelastic hadron interactions. This emission is isotropic, and has a spectrum ranging from 100 keV up to 20 MeV. However, the measurement of these energetic gamma rays from nuclear reactions exceeds the capability of all existing medical imaging systems. An advanced Compton scattering detection method with electron tracking capability is proposed, and modeled to reconstruct the high-energy gamma-ray events. This Compton detection technique was initially developed to observe gamma rays for astrophysical purposes. A device illustrating the method was designed and adapted to Hadron Therapy Imaging (HTI). It consists of two main sub-systems: a tracker where Compton recoil electrons are measured, and a calorimeter where the scattered gamma rays are absorbed via the photoelectric effect. Considering a hadron therapy scenario, the analysis of generated data was performed, passing through the complete
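
    The event reconstruction rests on standard Compton kinematics: the scattering angle, and hence the cone on which the source must lie, follows from the energies deposited in the tracker and the calorimeter. A back-of-the-envelope sketch (not the thesis code) is:

        import math

        M_E_C2 = 511.0  # electron rest energy in keV

        def compton_scatter_angle(e_scattered_keV, e_total_keV):
            """Scattering angle (degrees) for a photon of total energy e_total_keV
            whose scattered photon carries e_scattered_keV after the first interaction."""
            cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered_keV - 1.0 / e_total_keV)
            cos_theta = max(-1.0, min(1.0, cos_theta))  # guard against measurement rounding
            return math.degrees(math.acos(cos_theta))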

  3. Graph reconstruction with a betweenness oracle

    DEFF Research Database (Denmark)

    Abrahamsen, Mikkel; Bodwin, Greg; Rotenberg, Eva

    2016-01-01

    Graph reconstruction algorithms seek to learn a hidden graph by repeatedly querying a blackbox oracle for information about the graph structure. Perhaps the most well studied and applied version of the problem uses a distance oracle, which can report the shortest path distance between any pair...... of nodes. We introduce and study the betweenness oracle, where bet(a, m, z) is true iff m lies on a shortest path between a and z. This oracle is strictly weaker than a distance oracle, in the sense that a betweenness query can be simulated by a constant number of distance queries, but not vice versa...
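
    The reduction mentioned in the abstract (a betweenness query simulated by a constant number of distance queries) can be sketched as follows, where dist is an assumed distance-oracle callable returning exact shortest-path distances:

        def bet(dist, a, m, z):
            # m lies on a shortest a-z path exactly when the distances add up.
            # Assumes dist returns exact (e.g. integer hop-count) values.
            return dist(a, m) + dist(m, z) == dist(a, z)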

  4. Measurement of defects on the wall by use of the inclination angle of laser slit beam and position tracking algorithm of camera

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Hwan; Yoon, Ji Sup; Jung, Jae Hoo; Hong, Dong Hee; Park, Gee Yong

    2001-01-01

    In this paper, a method of measuring the size of defects on the wall and reconstructing the defect image is proposed based on the estimation algorithm of a camera orientation which uses the declination angle of the line slit beam. To reconstruct the image, an algorithm for estimating the horizontally inclined angle of the CCD camera is presented. This algorithm adopts a 3-dimensional coordinate transformation of the image plane where both the LASER beam and the original image of the defects exist. The estimation equation is obtained by using the information of the beam projected on the wall, and the parameters of this equation are experimentally obtained. With this algorithm, the original image of the defect can be reconstructed into the image which would be obtained by a camera normal to the wall. The results of a series of experiments show that the measuring accuracy of the defect is within a 0.5% error bound of the real defect size under 30 degrees of horizontally inclined angle. Also, the accuracy deteriorates at a rate of 1% for every 10-degree increase of the horizontally inclined angle. The estimation error increases in the range of 30-50 degrees due to the existence of a dead zone of defect depth, and defect length cannot be measured due to the disappearance of image data above 70 degrees. Under water, the measuring accuracy is also influenced by the changed field of view of both the camera and the laser slit beam caused by refraction in the water. The proposed algorithm provides a method of reconstructing the image taken at any arbitrary camera orientation into the image which would be obtained by a camera normal to the wall, and thus it enables the accurate measurement of defect lengths using only a single camera and a laser slit beam.

  5. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

    Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D-laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from their unorganized point sets is common for many diverse areas, including computer graphics, computer vision, computational geometry or reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts the three alternatives. Article in English

  6. Gamma-ray detection and Compton camera image reconstruction with application to hadron therapy; Detection des rayons gamma et reconstruction d'images pour la camera Compton: Application a l'hadrontherapie

    Energy Technology Data Exchange (ETDEWEB)

    Frandes, M.

    2010-09-15

    A novel technique for radiotherapy - hadron therapy - irradiates tumors using a beam of protons or carbon ions. Hadron therapy is an effective technique for cancer treatment, since it enables accurate dose deposition due to the existence of a Bragg peak at the end of the particles' range. Precise knowledge of the fall-off position of the dose with millimeter accuracy is critical since hadron therapy has proved its efficiency for tumors which are deep-seated, close to vital organs, or radio-resistant. A major challenge for hadron therapy is the quality assurance of dose delivery during irradiation. Current systems applying positron emission tomography (PET) technologies exploit gamma rays from the annihilation of positrons emitted during the beta decay of radioactive isotopes. However, the generated PET images allow only post-therapy information about the deposited dose. In addition, they are not in direct coincidence with the Bragg peak. A solution is to image the complete spectrum of the emitted gamma rays, including nuclear gamma rays emitted by nuclei excited in inelastic hadron interactions. This emission is isotropic, and has a spectrum ranging from 100 keV up to 20 MeV. However, the measurement of these energetic gamma rays from nuclear reactions exceeds the capability of all existing medical imaging systems. An advanced Compton scattering detection method with electron tracking capability is proposed, and modeled to reconstruct the high-energy gamma-ray events. This Compton detection technique was initially developed to observe gamma rays for astrophysical purposes. A device illustrating the method was designed and adapted to Hadron Therapy Imaging (HTI). It consists of two main sub-systems: a tracker where Compton recoil electrons are measured, and a calorimeter where the scattered gamma rays are absorbed via the photoelectric effect. Considering a hadron therapy scenario, the analysis of generated data was performed, passing through the complete

  7. First demonstration of real-time gamma imaging by using a handheld Compton camera for particle therapy

    Energy Technology Data Exchange (ETDEWEB)

    Taya, T., E-mail: taka48138@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555 (Japan); Kataoka, J.; Kishimoto, A.; Iwamoto, Y.; Koide, A. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555 (Japan); Nishio, T. [Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima-shi, Hiroshima (Japan); Kabuki, S. [School of Medicine, Tokai University, 143 Shimokasuya, Isehara-shi, Kanagawa (Japan); Inaniwa, T. [National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba-shi, Chiba (Japan)

    2016-09-21

    The use of real-time gamma imaging for cancer treatment in particle therapy is expected to improve the accuracy of the treatment beam delivery. In this study, we demonstrated the imaging of gamma rays generated by the nuclear interactions during proton irradiation, using a handheld Compton camera (14 cm×15 cm×16 cm, 2.5 kg) based on scintillation detectors. The angular resolution of this Compton camera is ∼8° at full width at half maximum (FWHM) for a 137Cs source. We measured the energy spectra of the gamma rays using a LaBr3(Ce) scintillator and photomultiplier tube, and using the handheld Compton camera, performed image reconstruction when using a 70 MeV proton beam to irradiate a water, Ca(OH)2, and polymethyl methacrylate (PMMA) phantom. In the energy spectra of all three phantoms, we found an obvious peak at 511 keV, which was derived from annihilation gamma rays, and in the energy spectrum of the PMMA phantom, we found another peak at 718 keV, which contains some of the prompt gamma rays produced from 10B. Therefore, we evaluated the peak positions of the projection from the reconstructed images of the PMMA phantom. The differences between the peak positions and the Bragg peak position calculated using simulation are 7 mm±2 mm and 3 mm±8 mm, respectively. Although we could quickly acquire online gamma imaging of both of the energy ranges during proton irradiation, we cannot arrive at a clear conclusion that prompt gamma rays sufficiently trace the Bragg peak from these results because of the uncertainty given by the spatial resolution of the Compton camera. We will develop a high-resolution Compton camera in the near future for further study. - Highlights: • Gamma imaging during proton irradiation by a handheld Compton camera is demonstrated. • We were able to acquire the online gamma-ray images quickly. • We are developing a high resolution Compton camera for range verification.

  8. Pose and Shape Reconstruction of a Noncooperative Spacecraft Using Camera and Range Measurements

    Directory of Open Access Journals (Sweden)

    Renato Volpe

    2017-01-01

    Full Text Available Recent interest in on-orbit proximity operations has pushed towards the development of autonomous GNC strategies. In this sense, optical navigation enables a wide variety of possibilities as it can provide information not only about the kinematic state but also about the shape of the observed object. Various mission architectures have been either tested in space or studied on Earth. The present study deals with on-orbit relative pose and shape estimation with the use of a monocular camera and a distance sensor. The goal is to develop a filter which estimates an observed satellite’s relative position, velocity, attitude, and angular velocity, along with its shape, with the measurements obtained by a camera and a distance sensor mounted on board a chaser which is on a relative trajectory around the target. The filter’s efficiency is proved with a simulation on a virtual target object. The results of the simulation, even though relevant to a simplified scenario, show that the estimation process is successful and can be considered a promising strategy for a correct and safe docking maneuver.

  9. A low-count reconstruction algorithm for Compton-based prompt gamma imaging

    Science.gov (United States)

    Huang, Hsuan-Ming; Liu, Chih-Chieh; Jan, Meei-Ling; Lee, Ming-Wei

    2018-04-01

    The Compton camera is an imaging device which has been proposed to detect prompt gammas (PGs) produced by proton–nuclear interactions within tissue during proton beam irradiation. Compton-based PG imaging has been developed to verify proton ranges because PG rays, particularly characteristic ones, have strong correlations with the distribution of the proton dose. However, accurate image reconstruction from characteristic PGs is challenging because the detector efficiency and resolution are generally low. Our previous study showed that point spread functions can be incorporated into the reconstruction process to improve image resolution. In this study, we proposed a low-count reconstruction algorithm to improve the image quality of a characteristic PG emission by pooling information from other characteristic PG emissions. PGs were simulated from a proton beam irradiated on a water phantom, and a two-stage Compton camera was used for PG detection. The results show that the image quality of the reconstructed characteristic PG emission is improved with our proposed method in contrast to the standard reconstruction method using events from only one characteristic PG emission. For the 4.44 MeV PG rays, both methods can be used to predict the positions of the peak and the distal falloff with a mean accuracy of 2 mm. Moreover, only the proposed method can improve the estimated positions of the peak and the distal falloff of 5.25 MeV PG rays, and a mean accuracy of 2 mm can be reached.
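
    For orientation, a generic MLEM update of the kind such binned Compton reconstructions build on is sketched below; it does not reproduce the authors' pooling of information across characteristic PG lines, and the system matrix A is an assumed input.

        import numpy as np

        def mlem_step(x, A, counts, eps=1e-12):
            """One MLEM iteration. x: (n_voxels,) current image estimate;
            A: (n_bins, n_voxels) system matrix; counts: (n_bins,) measured data."""
            proj = A @ x                               # forward projection
            ratio = counts / np.maximum(proj, eps)     # measured / estimated counts
            sens = A.sum(axis=0)                       # per-voxel detection sensitivity
            return x * (A.T @ ratio) / np.maximum(sens, eps)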

  10. Color blending based on viewpoint and surface normal for generating images from any viewpoint using multiple cameras

    OpenAIRE

    Mukaigawa, Yasuhiro; Genda, Daisuke; Yamane, Ryo; Shakunaga, Takeshi

    2003-01-01

    A color blending method for generating a high quality image of human motion is presented. The 3D (three-dimensional) human shape is reconstructed by volume intersection and expressed as a set of voxels. As each voxel is observed as different colors from different cameras, voxel color needs to be assigned appropriately from several colors. We present a color blending method, which calculates voxel color from a linear combination of the colors observed by multiple cameras. The weightings in the...
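
    The exact weighting scheme is cut off in this record; a plausible scheme consistent with the idea (favoring cameras that see the voxel's surface patch head-on) is sketched below, with all names illustrative:

        import numpy as np

        def blend_voxel_color(colors, cam_dirs, normal):
            """colors: (N, 3) RGB values seen by N cameras; cam_dirs: (N, 3) unit
            vectors from the voxel towards each camera; normal: (3,) surface normal."""
            w = np.clip(cam_dirs @ normal, 0.0, None)  # cosine weight, back-facing cameras get 0
            if w.sum() == 0.0:
                return colors.mean(axis=0)
            return (w[:, None] * colors).sum(axis=0) / w.sum()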

  11. On the use of holographic interferometry in deformation analysis with additional degrees of freedom during the reconstruction

    International Nuclear Information System (INIS)

    Schumann, W.; Dubas, M.

    1978-01-01

    In holographic interferometry with two reconstructed wavefronts, additional flexibility is obtained when the fringes may be modified. This is achieved during the reconstruction by moving part of the optical arrangement. First, the equation relating the optical path difference to the displacement and the fringe control parameters and, secondly, those relating the derivative of the optical path difference to the strain and the rotation are worked out and their practical use is discussed. (orig.) [de]

  12. Imaging of turbulent structures and tomographic reconstruction of TORPEX plasma emissivity

    International Nuclear Information System (INIS)

    Iraji, D.; Furno, I.; Fasoli, A.; Theiler, C.

    2010-01-01

    In the TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], a simple magnetized plasma device, low frequency electrostatic fluctuations associated with interchange waves are routinely measured by means of extensive sets of Langmuir probes. To complement the electrostatic probe measurements of plasma turbulence and to study plasma structures smaller than the spatial resolution of the probe array, a nonperturbative direct imaging system has been developed on TORPEX, including a fast framing Photron-APX-RS camera and an image intensifier unit. From the line-integrated camera images, we compute the poloidal emissivity profile of the plasma by applying a tomographic reconstruction technique using a pixel method and solving an overdetermined set of equations by singular value decomposition. This allows comparing statistical, spectral, and spatial properties of visible light radiation with electrostatic fluctuations. The shape and position of the time-averaged reconstructed plasma emissivity are observed to be similar to those of the ion saturation current profile. In the core plasma, excluding the electron cyclotron and upper hybrid resonant layers, the mean value of the plasma emissivity is observed to vary with (Te)^α (ne)^β, in which α=0.25-0.7 and β=0.8-1.4, in agreement with a collisional radiative model. The tomographic reconstruction is applied to the fast camera movie acquired at a rate of 50 kframes/s with 2 μs exposure time to obtain the temporal evolutions of the emissivity fluctuations. Conditional average sampling is also applied to visualize and measure sizes of structures associated with the interchange mode. The ω-time and two-dimensional k-space Fourier analyses of the reconstructed emissivity fluctuations show the same interchange mode that is detected in the ω and k spectra of the ion saturation current fluctuations measured by probes. Small scale turbulent plasma structures can be detected and tracked in the reconstructed emissivity
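
    The pixel-method inversion described above amounts to one linear equation per line of sight, sum_j L_ij e_j = b_i, with L_ij the chord length of ray i in pixel j and e_j the emissivity; an SVD-based least-squares solution of the overdetermined system can be sketched as follows (names and non-negativity clipping are assumptions, not the TORPEX code):

        import numpy as np

        def reconstruct_emissivity(L, b):
            """L: (n_rays, n_pixels) geometry matrix; b: (n_rays,) line-integrated signals."""
            e, *_ = np.linalg.lstsq(L, b, rcond=None)  # SVD-based least-squares solve
            return np.clip(e, 0.0, None)               # emissivity cannot be negative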

  13. Compensation of spatial system response in SPECT with conjugate gradient reconstruction technique

    International Nuclear Information System (INIS)

    Formiconi, A.R.; Pupi, A.; Passeri, A.

    1989-01-01

    A procedure for determination of the system matrix in single photon emission tomography (SPECT) is described which uses a conjugate gradient reconstruction technique to take into account the variable system resolution of a camera equipped with parallel-hole collimators. The procedure involves acquisition of system line spread functions (LSF) in the region occupied by the object studied. Those data are used to generate a set of weighting factors based on the assumption that the LSFs of the collimated camera are of Gaussian shape with full width at half maximum (FWHM) linearly dependent on source depth in the span of image space. Factors are stored on a disc file for subsequent use in reconstruction. Afterwards, reconstruction is performed using the conjugate gradient method with the system matrix modified by incorporation of these precalculated factors to take into account variable geometrical system response. The set of weighting factors is regenerated whenever acquisition conditions are changed (collimator, radius of rotation); with an ultra-high-resolution (UHR) collimator, 2000 weighting factors need to be calculated. (author)
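
    A minimal sketch of such precalculated weighting factors is given below: the collimator response for a source at depth d is modelled as a Gaussian whose FWHM grows linearly with d, sampled on the projection bins. The numerical constants are illustrative assumptions, not the measured LSF parameters from the paper.

        import numpy as np

        def gaussian_weights(depth_cm, bin_offsets_cm, fwhm0_cm=0.4, slope=0.04):
            fwhm = fwhm0_cm + slope * depth_cm         # FWHM assumed linear in source depth
            sigma = fwhm / 2.355                       # convert FWHM to standard deviation
            w = np.exp(-0.5 * (bin_offsets_cm / sigma) ** 2)
            return w / w.sum()                         # normalise to unit area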

  14. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    Directory of Open Access Journals (Sweden)

    Yajie Liao

    2017-06-01

    Full Text Available Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in the real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested in a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a project real-time system and its accuracy is higher than the manufacturer’s calibration.

  15. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    Science.gov (United States)

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in the real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested in a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a project real-time system and its accuracy is higher than the manufacturer's calibration.

  16. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-06-01

    Full Text Available For plant breeding and growth monitoring, accurate measurements of plant structure parameters are very crucial. We have, therefore, developed a high efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracies of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions were the highest, and the number of reconstructed fine-scale 3D model shape surfaces of leaf and stem is the most. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant model and accurate estimation of the plant parameters. They also displayed that our system is a good system for capturing high-resolution 3D images of nursery plants with high efficiency.

  17. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Science.gov (United States)

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are very crucial. We have, therefore, developed a high efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracies of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions were the highest, and the number of reconstructed fine-scale 3D model shape surfaces of leaf and stem is the most. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant model and accurate estimation of the plant parameters. They also displayed that our system is a good system for capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  18. The neutron small-angle camera D11 at the high-flux reactor, Grenoble

    International Nuclear Information System (INIS)

    Ibel, K.

    1976-01-01

    The neutron small-angle scattering system at the high-flux reactor in Grenoble consists of three major parts: the supply of cold neutrons via bent neutron guides; the small-angle camera D11; and the data handling facilities. The camera D11 has an overall length of 80 m. The effective length of the camera is variable. The full length of the collimator before the fixed sample position can be reduced by movable neutron guides; the second flight path of 40 m full length contains detector sites in various positions. Thus a large range of momentum transfers can be used with the same relative resolution. Scattering angles between 5 × 10⁻⁴ and 0.5 rad and neutron wavelengths from 0.2 to 2.0 nm are available. A large-area position-sensitive detector is used which allows simultaneous recording of intensities scattered at different angles; it is a multiwire proportional chamber. 3808 elements of 1 cm² are arranged in a two-dimensional matrix. (Auth.)

  19. Error Evaluation in a Stereovision-Based 3D Reconstruction System

    Directory of Open Access Journals (Sweden)

    Kohler Sophie

    2010-01-01

    Full Text Available The work presented in this paper deals with the performance analysis of the whole 3D reconstruction process of imaged objects, specifically of the set of geometric primitives describing their outline, extracted from a pair of images with known camera models. The proposed analysis focuses on error estimation for the edge detection process, the starting step for the whole reconstruction procedure. The fitting parameters describing the geometric features composing the workpiece to be evaluated are used as quality measures to determine error bounds and finally to estimate the edge detection errors. These error estimates are then propagated up to the final 3D reconstruction step. The suggested error analysis procedure for stereovision-based reconstruction tasks further allows evaluating the quality of the 3D reconstruction. The resulting final error estimates finally make it possible to state whether the reconstruction results fulfill a priori defined criteria, for example dimensional constraints including tolerance information, as required in vision-based quality control applications.

  20. Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras

    Science.gov (United States)

    Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich

    2018-01-01

    A superconducting Figure-8 stellarator type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations on an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets is set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of beam path measurement by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera and, in a next step, two cameras arranged at 90° to each other were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions are successfully verified experimentally, and previous experimental results have been confirmed. The transport of H+ and H2+ ion beams with energies of 7 keV and at beam currents of about 1 mA is investigated successfully.

  1. Spiral scan long object reconstruction through PI line reconstruction

    International Nuclear Information System (INIS)

    Tam, K C; Hu, J; Sourbelle, K

    2004-01-01

    The response of a point object in a cone beam (CB) spiral scan is analysed. Based on the result, a reconstruction algorithm for long object imaging in spiral scan cone beam CT is developed. A region-of-interest (ROI) of the long object is scanned with a detector smaller than the ROI, and a portion of it can be reconstructed without contamination from overlaying materials. The top and bottom surfaces of the ROI are defined by two sets of PI lines near the two ends of the spiral path. With this novel definition of the top and bottom ROI surfaces and through the use of projective geometry, it is straightforward to partition the cone beam image into regions corresponding to projections of the ROI, the overlaying objects or both. This also simplifies computation at source positions near the spiral ends, and makes it possible to reduce radiation exposure near the spiral ends substantially through simple hardware collimation. Simulation results to validate the algorithm are presented

  2. Development of a time-of-flight Compton camera prototype for online control of ion therapy and medical imaging

    International Nuclear Information System (INIS)

    Ley, Jean-Luc

    2015-01-01

    Hadron therapy is one of the modalities available for treating cancer. This modality uses light ions (protons, carbon ions) to destroy cancer cells. Such particles offer ballistic accuracy thanks to their quasi-rectilinear trajectory, their finite range and the dose maximum located at the end of their path. Compared to conventional radiotherapy, this makes it possible to spare the healthy tissue located upstream and downstream of the tumor. One of this modality's quality assurance challenges is to control the positioning of the dose deposited by the ions in the patient. One possibility to perform this control is to detect the prompt gammas emitted during nuclear reactions induced along the ion path in the patient. A Compton camera prototype, theoretically able to maximize the detection efficiency of the prompt gammas, is being developed under a regional collaboration. This camera was the main focus of my thesis, in particular the following points: i) studying, through Monte Carlo simulations, the operation of the prototype under construction, particularly with respect to the expected counting rates on the different types of accelerators used in hadron therapy; ii) conducting simulation studies on the use of this camera in clinical imaging; iii) characterising the silicon detectors (scatterer); iv) confronting Geant4 simulations of the camera's response with beam measurements obtained with a demonstrator. As a result, the Compton camera prototype developed makes it possible to control the localization of the dose deposition in proton therapy at the scale of a single spot, provided that the intensity of the clinical proton beam is reduced by a factor of 200 (intensity of 10⁸ protons/s). An application of the Compton camera in nuclear medicine appears attainable with radioisotopes of energy greater than 300 keV. These initial results must be confirmed by more realistic simulations (homogeneous and heterogeneous PMMA targets). Tests with the progressive

  3. OBLIQUE MULTI-CAMERA SYSTEMS – ORIENTATION AND DENSE MATCHING ISSUES

    Directory of Open Access Journals (Sweden)

    E. Rupnik

    2014-03-01

    Full Text Available The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users allowing their use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of the actual oblique commercial systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  4. QUALITY ASSESSMENT OF 3D RECONSTRUCTION USING FISHEYE AND PERSPECTIVE SENSORS

    Directory of Open Access Journals (Sweden)

    C. Strecha

    2015-03-01

    Full Text Available Recent mathematical advances, growing alongside the use of unmanned aerial vehicles, have not only overcome the restriction of roll and pitch angles during flight but also enabled us to apply non-metric cameras in photogrammetric method, providing more flexibility for sensor selection. Fisheye cameras, for example, advantageously provide images with wide coverage; however, these images are extremely distorted and their non-uniform resolutions make them more difficult to use for mapping or terrestrial 3D modelling. In this paper, we compare the usability of different camera-lens combinations, using the complete workflow implemented in Pix4Dmapper to achieve the final terrestrial reconstruction result of a well-known historical site in Switzerland: the Chillon Castle. We assess the accuracy of the outcome acquired by consumer cameras with perspective and fisheye lenses, comparing the results to a laser scanner point cloud.

  5. Fluorescence-enhanced optical imaging in large tissue volumes using a gain-modulated ICCD camera

    International Nuclear Information System (INIS)

    Godavarty, Anuradha; Eppstein, Margaret J; Zhang, Chaoyang; Theru, Sangeeta; Thompson, Alan B; Gurfinkel, Michael; Sevick-Muraca, Eva M

    2003-01-01

    A novel image-intensified charge-coupled device (ICCD) imaging system has been developed to perform 3D fluorescence tomographic imaging in the frequency-domain using near-infrared contrast agents. The imager is unique since it (i) employs a large tissue-mimicking phantom, which is shaped and sized to resemble a female breast and part of the extended chest-wall region, and (ii) enables rapid data acquisition in the frequency-domain by using a gain-modulated ICCD camera. Diffusion model predictions are compared to experimental measurements using two different referencing schemes under two different experimental conditions of perfect and imperfect uptake of fluorescent agent into a target. From these experimental measurements, three-dimensional images of fluorescent absorption were reconstructed using a computationally efficient variant of the approximate extended Kalman filter algorithm. The current work represents the first time that 3D fluorescence-enhanced optical tomographic reconstructions have been achieved from experimental measurements of the time-dependent light propagation on a clinically relevant breast-shaped tissue phantom using a gain-modulated ICCD camera

  6. Construct and face validity of a virtual reality-based camera navigation curriculum.

    Science.gov (United States)

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    Science.gov (United States)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples back to the Earth. To achieve these goals, investigation of lunar surface topography and geological structure within sampling area seems to be extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on CE-5 lander. It consists of two optical systems which installed on a camera rotating platform. Optical images of sampling area can be obtained by PCAM in the form of a two-dimensional image and a stereo images pair can be formed by left and right PCAM images. Then lunar terrain can be reconstructed based on photogrammetry. Installation parameters of PCAM with respect to CE-5 lander are critical for the calculation of exterior orientation elements (EO) of PCAM images, which is used for lunar terrain reconstruction. In this paper, types of PCAM installation parameters and coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. Then research contents such as observation program and specific solution methods of installation parameters are introduced. Parametric solution accuracy is analyzed according to observations obtained by PCAM scientifically validated experiment, which is used to test the authenticity of PCAM detection process, ground data processing methods, product quality and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. So the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  8. A flexible geometry Compton camera for industrial gamma ray imaging

    International Nuclear Information System (INIS)

    Royle, G.J.; Speller, R.D.

    1996-01-01

    A design for a Compton scatter camera is proposed which is applicable to gamma ray imaging within limited access industrial sites. The camera consists of a number of single element detectors arranged in a small cluster. Coincidence circuitry enables the detectors to act as a scatter camera. Positioning the detector cluster at various locations within the site, and subsequent reconstruction of the recorded data, allows an image to be obtained. The camera design allows flexibility to cater for limited space or access simply by positioning the detectors in the optimum geometric arrangement within the space allowed. The quality of the image will be limited but imaging could still be achieved in regions which are otherwise inaccessible. Computer simulation algorithms have been written to optimize the various parameters involved, such as geometrical arrangement of the detector cluster and the positioning of the cluster within the site, and to estimate the performance of such a device. Both scintillator and semiconductor detectors have been studied. A prototype camera has been constructed which operates three small single element detectors in coincidence. It has been tested in a laboratory simulation of an industrial site. This consisted of a small room (2 m wide x 1 m deep x 2 m high) into which the only access points were two 6 cm diameter holes in a side wall. Simple images of Cs-137 sources have been produced. The work described has been done on behalf of BNFL for applications at their Sellafield reprocessing plant in the UK

  9. Tomographic Reconstruction of Tracer Gas Concentration Profiles in a Room with the Use of a Single OP-FTIR and Two Iterative Algorithms: ART and PWLS.

    Science.gov (United States)

    Park, Doo Y; Fessler, Jeffrey A; Yost, Michael G; Levine, Steven P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 × 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (156 pixels) exceeds that of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, called the penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded into some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, is a system applicable to both environmental and industrial settings.
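
    For reference, a textbook ART (Kaczmarz) sweep of the kind compared against PWLS above looks as follows; the relaxation factor is an illustrative choice and this is not the study's implementation:

        import numpy as np

        def art_sweep(x, A, b, relaxation=0.1):
            """x: (n_pixels,) concentrations; A: (n_paths, n_pixels) path-length matrix;
            b: (n_paths,) measured path-integral concentrations."""
            x = np.asarray(x, dtype=float).copy()
            for a_i, b_i in zip(A, b):
                norm = a_i @ a_i
                if norm > 0:
                    x += relaxation * (b_i - a_i @ x) / norm * a_i  # correct along ray i
            return np.clip(x, 0.0, None)               # concentrations are non-negative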

  10. MODIFIED PATH METHODOLOGY FOR OBTAINING INTERVAL-SCALED POSTURAL ASSESSMENTS OF FARMWORKERS.

    Science.gov (United States)

    Garrison, Emma B; Dropkin, Jonathan; Russell, Rebecca; Jenkins, Paul

    2018-01-29

    Agricultural workers perform tasks that frequently require awkward and extreme postures that are associated with musculoskeletal disorders (MSDs). The PATH (Posture, Activity, Tools, Handling) system currently provides a sound methodology for quantifying workers' exposure to these awkward postures on an ordinal scale of measurement, which places restrictions on the choice of analytic methods. This study reports a modification of the PATH methodology that instead captures these postures as degrees of flexion, an interval-scaled measurement. Rather than making live observations in the field, as in PATH, the postural assessments were performed on photographs using ImageJ photo analysis software. Capturing the postures in photographs permitted more careful measurement of the degrees of flexion. The current PATH methodology requires that the observer in the field be trained in the use of PATH, whereas the single photographer used in this modification requires only sufficient training to maintain the proper camera angle. Ultimately, these interval-scale measurements could be combined with other quantitative measures, such as those produced by electromyograms (EMGs), to provide more sophisticated estimates of future risk for MSDs. Further, these data can provide a baseline from which the effects of interventions designed to reduce hazardous postures can be calculated with greater precision. Copyright© by the American Society of Agricultural Engineers.
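
    The interval-scaled measurement itself reduces to a joint angle computed from digitised landmark points in the photograph (as one might obtain with ImageJ). A small sketch, with landmark names purely illustrative:

        import numpy as np

        def flexion_angle(proximal, joint, distal):
            """Angle in degrees at `joint` between the joint->proximal and
            joint->distal segments, each given as (x, y) image coordinates."""
            u = np.asarray(proximal, float) - np.asarray(joint, float)
            v = np.asarray(distal, float) - np.asarray(joint, float)
            cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))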

  11. The cloud monitor by an infrared camera at the Telescope Array experiment

    International Nuclear Information System (INIS)

    Shibata, F.

    2011-01-01

    The measurement of extensive air showers using fluorescence detectors (FDs) is affected by the condition of the atmosphere. In particular, the FD aperture is limited by cloudiness. If cloud lies on the light path from an extensive air shower to the FDs, the fluorescence photons are strongly absorbed. Therefore, cloudiness of the FD field of view (FOV) is one of the important quality-cut conditions in FD analysis. In the Telescope Array (TA), an infrared (IR) camera with 320x236 pixels and a field of view of 25.8 deg. x 19.5 deg. has been installed at an observation site for cloud monitoring during FD observations. This IR camera measures the temperature of the sky every 30 min during FD observation. The IR camera is mounted on a steering table whose elevation and azimuth can be adjusted. In the resulting temperature maps, clouds appear at a higher temperature than areas of cloudless sky. In this paper, we discuss the quality of the cloud monitoring data, the analysis method, and the current cloudiness quality-cut condition in FD analysis.

  12. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    Science.gov (United States)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10⁻⁵ s is used together with a pixelated phase-mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface, the reference and object beams with orthogonal polarization states can be made to coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.
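
    The phase computation performed by a pixelated phase-mask camera can be illustrated with the standard four-bucket formula; the 2x2 super-pixel layout assumed below (0°, 90°, 180°, 270° phase shifts) is a common arrangement and may differ from the actual sensor described here.

    ```python
    import numpy as np

    def phase_from_pixelated_mask(frame, wavelength=532e-9):
        """Wrapped phase and optical path difference from a pixelated phase-mask frame."""
        I0   = frame[0::2, 0::2]       # assumed 0 deg pixels
        I90  = frame[0::2, 1::2]       # assumed 90 deg pixels
        I180 = frame[1::2, 1::2]       # assumed 180 deg pixels
        I270 = frame[1::2, 0::2]       # assumed 270 deg pixels
        phase = np.arctan2(I270 - I90, I0 - I180)     # four-bucket formula, radians
        opd = phase * wavelength / (2.0 * np.pi)      # wrapped optical path difference
        return phase, opd
    ```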

  13. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera based on a high-definition camera has been developed. The resulting underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, about six spent-fuel IDs can be identified at a time from a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20x zoom. Noise caused by radiation below 15 Gy/h does not affect the images. (author)

  14. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    Science.gov (United States)

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
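
    For reference, the baseline Stokes extraction that such systems are usually compared against (per-channel bilinear interpolation of the microgrid mosaic) can be sketched as follows; the assumed 0°/45°/90°/135° super-pixel layout is hypothetical, and the paper's own method additionally exploits correlations between the two color bands.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    def stokes_from_microgrid(raw):
        """Linear Stokes images from a microgrid mosaic via bilinear interpolation."""
        I0   = raw[0::2, 0::2]
        I45  = raw[0::2, 1::2]
        I90  = raw[1::2, 1::2]
        I135 = raw[1::2, 0::2]
        # upsample each analyzer channel back to the full sensor resolution
        I0, I45, I90, I135 = (zoom(I, 2, order=1) for I in (I0, I45, I90, I135))
        S0 = 0.5 * (I0 + I45 + I90 + I135)
        S1 = I0 - I90
        S2 = I45 - I135
        return S0, S1, S2
    ```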

  15. Reconstruction Algorithms in Undersampled AFM Imaging

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen

    2016-01-01

    This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length, and thereby the scanning time, as well as the amount of interaction between the AFM probe and the specimen. It can easily be applied on conventional AFM hardware. Due to undersampling, it is then necessary to further process the acquired image in order to reconstruct an approximation of the image. Based on real AFM cell images, our simulations reveal that using a simple raster scanning pattern in combination with conventional image interpolation performs very well. Moreover, this combination enables a reduction of the scanning time by a factor of 10 while retaining an average reconstruction quality of around 36 dB PSNR.

  16. Application of Super-Resolution Image Reconstruction to Digital Holography

    Directory of Open Access Journals (Sweden)

    Zhang Shuqun

    2006-01-01

    Full Text Available We describe a new application of super-resolution image reconstruction to digital holography which is a technique for three-dimensional information recording and reconstruction. Digital holography has suffered from the low resolution of CCD sensors, which significantly limits the size of objects that can be recorded. The existing solution to this problem is to use optics to bandlimit the object to be recorded, which can cause the loss of details. Here super-resolution image reconstruction is proposed to be applied in enhancing the spatial resolution of digital holograms. By introducing a global camera translation before sampling, a high-resolution hologram can be reconstructed from a set of undersampled hologram images. This permits the recording of larger objects and reduces the distance between the object and the hologram. Practical results from real and simulated holograms are presented to demonstrate the feasibility of the proposed technique.

  17. Observations of the Perseids 2012 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.

    2012-09-01

    The Perseids are one of the most prominent annual meteor showers, occurring every summer when the stream of dust particles originating from Halley-type comet 109P/Swift-Tuttle intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August, producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer to monitor the Perseids activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract and is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive back-illuminated 1024x1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal). Figure 1: A meteor captured by the SPOSH cameras simultaneously during the 2011 observing campaign in Greece; the horizon, including surrounding mountains, can be seen in the image corners as a result of the large FOV of the camera. The observations will be made on the Greek Peloponnese peninsula, monitoring the post-peak activity of the Perseids during a one-week period around the August New Moon (14th to 21st). Two SPOSH cameras will be deployed at two remote sites at high altitude for the triangulation of meteor trajectories captured at both stations simultaneously. The observations during this time interval will give us the possibility to study the poorly observed post-maximum branch of the Perseid stream and to compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories

  18. Proton computed tomography images with algebraic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Bruzzi, M. [Physics and Astronomy Department, University of Florence, Florence (Italy); Civinini, C.; Scaringella, M. [INFN - Florence Division, Florence (Italy); Bonanno, D. [INFN - Catania Division, Catania (Italy); Brianzi, M. [INFN - Florence Division, Florence (Italy); Carpinelli, M. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Cirrone, G.A.P.; Cuttone, G. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Presti, D. Lo [INFN - Catania Division, Catania (Italy); Physics and Astronomy Department, University of Catania, Catania (Italy); Maccioni, G. [INFN – Cagliari Division, Cagliari (Italy); Pallotta, S. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Randazzo, N. [INFN - Catania Division, Catania (Italy); Romano, F. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Sipala, V. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Talamonti, C. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Vanzi, E. [Fisica Sanitaria, Azienda Ospedaliero-Universitaria Senese, Siena (Italy)

    2017-02-11

    A prototype of a proton Computed Tomography (pCT) system for hadron therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Reconstructed tomography images with r.m.s. density resolutions down to ~1% and spatial resolutions <1 mm, achieved within processing times of ~15 min for a 512×512 pixel image, prove that this technique will be beneficial if used instead of X-ray CT in hadron therapy.

  19. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)

    2017-02-11

    The Compton camera, which images the gamma-ray distribution by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energies. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that, for 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a {sup 137}Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of simultaneously measured sources of different energies ({sup 22}Na [511 keV], {sup 137}Cs [662 keV], and {sup 54}Mn [834 keV]).

  20. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactory but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but the inclined cameras also show radial symmetric distortions exceeding 5 μm, even though these were reported as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  1. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera to an AGV is a suitable approach for autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; after calibration and ground testing, the camera is mounted and integrated with the Pioneer mobile robot and used to extract information about obstacles. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles by representing them on a grid of cells of suitable size; these camera data are converted into Cartesian coordinates for entry into the workspace grid map, as sketched below. A more suitable camera mounting angle is determined by analysing the camera's performance discrepancies, such as pixel detection, the detection rate and the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface; this mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which show that the postulated application of the ToF camera in the AGV is not straightforward. Later, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented to perform a real-time experiment.
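
    A minimal sketch of the depth-to-grid-map step mentioned above is given below; the pinhole intrinsics, cell size and grid dimensions are made-up values, and a real system would additionally filter points by height above ground, which is where the mounting-angle analysis matters.

    ```python
    import numpy as np

    def depth_to_grid(depth, fx=120.0, cx=100.0, cell=0.1, grid_shape=(100, 100)):
        """Mark grid cells that contain a ToF return as occupied (0 = free, 1 = occupied)."""
        rows, cols = depth.shape
        u = np.tile(np.arange(cols), (rows, 1))       # pixel column indices
        z = depth                                     # range along the optical axis [m]
        x = (u - cx) * z / fx                         # lateral offset [m], pinhole model
        grid = np.zeros(grid_shape, dtype=np.uint8)
        valid = z > 0
        gx = (x[valid] / cell + grid_shape[1] / 2).astype(int)
        gz = (z[valid] / cell).astype(int)
        keep = (gx >= 0) & (gx < grid_shape[1]) & (gz >= 0) & (gz < grid_shape[0])
        grid[gz[keep], gx[keep]] = 1
        return grid
    ```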

  2. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  3. The Influence of Face Angle and Club Path on the Resultant Launch Angle of a Golf Ball

    Directory of Open Access Journals (Sweden)

    Paul Wood

    2018-02-01

    Full Text Available A two-part experimental study was conducted in order to better understand how the delivered face angle and club path of a golf club influence the initial launch direction of a golf ball for various club types. A robust understanding of how these parameters influence the ball direction has implications for both coaches and club designers. The first study used a large sample of golfers hitting shots with different clubs. Initial ball direction was measured with a Foresight Sports camera system, while club delivery parameters were recorded with a Vicon motion capture system. The second study used a golf robot and a Vision Research camera to measure club and ball parameters. Results from these experiments show that the launch direction fell closer to the face angle than to the club path. The percentage toward the face angle ranged from 61% to 83%, where 100% designates a launch angle entirely toward the face angle.
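
    A back-of-the-envelope illustration of the reported result: if the launch direction lies a fraction w of the way from the club path towards the face angle, with w between 0.61 and 0.83 depending on the club, it can be estimated with the hypothetical helper below.

    ```python
    def launch_direction(face_angle_deg, club_path_deg, face_weight=0.75):
        """Estimated launch direction, assuming it lies face_weight of the way
        from the club path towards the face angle (face_weight in [0.61, 0.83])."""
        return club_path_deg + face_weight * (face_angle_deg - club_path_deg)

    # e.g. face 4 deg open, path 1 deg, 75 % face influence -> about 3.25 deg
    print(launch_direction(4.0, 1.0))
    ```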

  4. Photometric Lunar Surface Reconstruction

    Science.gov (United States)

    Nefian, Ara V.; Alexandrov, Oleg; Morattlo, Zachary; Kim, Taemin; Beyer, Ross A.

    2013-01-01

    Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.

  5. Speed of sound and photoacoustic imaging with an optical camera based ultrasound detection system

    Science.gov (United States)

    Nuster, Robert; Paltauf, Guenther

    2017-07-01

    CCD camera based optical ultrasound detection is a promising alternative approach for high resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve high image resolution, the speed of sound (SOS) must be taken into account in the image reconstruction algorithm. Hence, the proposed work presents the idea and a first implementation of how speed-of-sound imaging can be added to a previously developed camera-based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.

  6. 3D tomographic imaging with the γ-eye planar scintigraphic gamma camera

    Science.gov (United States)

    Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.

    2017-11-01

    γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effects of varying the number of iterations, the voxel size (1.25 mm default voxel size, reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.
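
    The family of algorithms compared above (OSEM, OSMAPOSL, OSSPS) is built on the maximum-likelihood expectation-maximization update; a bare-bones MLEM iteration, without subsets, priors or PSF modelling, is sketched below with a hypothetical system matrix.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """Basic MLEM: A is the system matrix (bins x voxels), y the measured projections."""
        x = np.ones(A.shape[1])                   # uniform initial image
        sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
        for _ in range(n_iter):
            proj = A @ x                          # forward projection
            ratio = y / np.maximum(proj, eps)     # measured / estimated projections
            x *= (A.T @ ratio) / np.maximum(sens, eps)
        return x
    ```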

  7. A Leader-path-following formation system for AGVs with multi-sensor data fusion based vehicle tracking

    Science.gov (United States)

    Yao, Wen; Zhao, Xijun; Yu, Yufeng; Fang, Yongkun; Wang, Chao; Yang, Tianfu

    2017-09-01

    Caravans composed of vehicles with different functionality or trafficability raise the demand that a formation system structure should allow vehicles to deviate from the followed path when necessary. In this paper, a formation system is developed for autonomous ground vehicles (AGVs) that follow the path of a leader vehicle while retaining the ability to deviate from the reference path. In addition, it improves the robustness of preceding-vehicle localization by fusing Lidar tracking and camera tracking results with the predecessor's global position within an extended Kalman filter (EKF), in case one or more sources of preceding-vehicle localization become unreliable. The system is applied to real AGV platforms and won 3rd place in an AGV competition in China.
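
    A minimal sketch of this kind of fusion is shown below: a constant-velocity filter for the preceding vehicle's planar position that applies one measurement update per available source (Lidar, camera, predecessor GNSS) and simply skips a source when it is unreliable. Since the position measurement model here is linear, the EKF update reduces to the standard Kalman form; all noise values are invented.

    ```python
    import numpy as np

    def predict(x, P, dt, q=0.5):
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
        return F @ x, F @ P @ F.T + q * np.eye(4)

    def update(x, P, z, r):
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)    # only position is measured
        S = H @ P @ H.T + r * np.eye(2)
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

    x, P = np.zeros(4), np.eye(4) * 10.0
    x, P = predict(x, P, dt=0.1)
    for z, r in [(np.array([5.1, 0.2]), 0.05),       # Lidar track
                 (np.array([5.3, 0.1]), 0.20),       # camera track
                 (np.array([4.9, 0.0]), 1.00)]:      # predecessor GNSS position
        x, P = update(x, P, z, r)
    ```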

  8. Study of a new architecture of gamma cameras with Cd/ZnTe/CdTe semiconductors; Etude d'une nouvelle architecture de gamma camera a base de semi-conducteurs CdZnTe /CdTe

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, L

    2007-11-15

    This thesis studies new semiconductors for gamma cameras in order to improve image quality in nuclear medicine. Chapter 1 reviews the general principles of gamma imaging, describing the radiotracers, the detection chain and the acquisition types of Anger gamma cameras. The physiological, physical and technological limits of the camera are then highlighted, to better identify the needs of future gamma cameras. Chapter 2 is dedicated to a literature review. First, semiconductors used in gamma imaging are presented, in particular CdTe and CdZnTe, distinguishing planar detectors from monolithic pixelated detectors. Second, the classical collimators of gamma cameras, most of which are used in clinical routine, are described: their geometry, characteristics, advantages and drawbacks. Chapter 3 reviews the state of the art of simulation codes dedicated to medical imaging and of reconstruction methods in gamma imaging; this review introduces the simulation software and the reconstruction methods used in this thesis. Chapter 4 presents the new gamma camera architecture proposed in this thesis work. It is structured in three parts. The first part justifies the use of CdZnTe semiconductor detectors, in particular monolithic pixelated detectors, by highlighting their advantages over scintillator-based detection modules. The second part presents CdZnTe-based gamma cameras (prototypes or commercial products) and their associated collimators, as well as the interest of combining CdZnTe detectors with classical collimators. Finally, the third part presents the HiSens architecture in detail. Chapter 5 describes the two simulation software packages used in this thesis to estimate the performance of the Hi

  9. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  10. RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    M. Nakagawa

    2017-09-01

    Full Text Available Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  11. Cavlectometry: Towards holistic reconstruction of large mirror objects

    KAUST Repository

    Balzer, Jonathan

    2014-12-01

    We introduce a method based on the deflectometry principle for the reconstruction of specular objects exhibiting significant size and geometric complexity. A key feature of our approach is the deployment of a Cave Automatic Virtual Environment (CAVE) as pattern generator. To unfold the full power of this experimental setup, an optical encoding scheme is developed which accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, the background for estimation of the camera pose, necessary for calibrating the sensor system. Experiments suggest a significant gain of coverage in single measurements compared to previous methods. © 2014 IEEE.

  12. Ceres Photometry and Albedo from Dawn Framing Camera Images

    Science.gov (United States)

    Schröder, S. E.; Mottola, S.; Keller, H. U.; Li, J.-Y.; Matz, K.-D.; Otto, K.; Roatsch, T.; Stephan, K.; Raymond, C. A.; Russell, C. T.

    2015-10-01

    The Dawn spacecraft is in orbit around dwarf planet Ceres. The onboard Framing Camera (FC) [1] is mapping the surface through a clear filter and 7 narrow-band filters at various observational geometries. Generally, Ceres' appearance in these images is affected by shadows and shading, effects which become stronger for larger solar phase angles, obscuring the intrinsic reflective properties of the surface. By means of photometric modeling we attempt to remove these effects and reconstruct the surface albedo over the full visible wavelength range. Knowledge of the albedo distribution will contribute to our understanding of the physical nature and composition of the surface.

  13. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from the standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are performed through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes a set of combined benchmarking metrics, which includes both quality and speed parameters.

  14. Simultaneous determination of sample thickness, tilt, and electron mean free path using tomographic tilt images based on Beer-Lambert law.

    Science.gov (United States)

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction. However, these parameters can currently only be determined using trial 3-D reconstructions. An accurate electron mean free path plays a significant role in modeling the image formation process, which is essential for the simulation of electron microscopy images and for model-based iterative 3-D reconstruction methods; however, its value is voltage- and sample-dependent and has only been experimentally measured for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, to simultaneously determine the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least-squares optimization problem that utilizes the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of the Z-dimension size of the tomogram. Interestingly, we have also found that nearly all samples are a few degrees tilted relative to the electron beam. Compensation of the intrinsic sample tilt can result in horizontal structures and a reduced Z-dimension of the tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve the performance of tomographic reconstruction for a wide range of samples. Copyright © 2015 Elsevier Inc. All rights reserved.
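
    The Beer-Lambert fit at the heart of such a method can be sketched as below. Note that, from the total intensity of a single tilt series alone, only the ratio thickness / mean free path (together with the intrinsic tilt) is identifiable; separating the two requires the additional constraints that tomoThickness exploits, so this sketch with simulated data only illustrates the fitting step.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def model(params, theta):
        log_I0, ratio, alpha = params                  # ratio = thickness / mean free path
        return log_I0 - ratio / np.cos(theta + alpha)  # Beer-Lambert along the tilted path

    def residuals(params, theta, log_I):
        return model(params, theta) - log_I

    # simulated tilt series: ratio 1.2, intrinsic sample tilt 3 degrees
    theta = np.deg2rad(np.arange(-60, 61, 2))
    log_I = model([0.0, 1.2, np.deg2rad(3.0)], theta) + 0.01 * np.random.randn(theta.size)

    fit = least_squares(residuals, x0=[0.0, 1.0, 0.0], args=(theta, log_I))
    log_I0, ratio, alpha_rad = fit.x
    ```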

  15. A METHOD FOR SELF-CALIBRATION IN SATELLITE WITH HIGH PRECISION OF SPACE LINEAR ARRAY CAMERA

    Directory of Open Access Journals (Sweden)

    W. Liu

    2016-06-01

    Full Text Available At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually performed using data from a ground calibration field after the images have been captured. The entire process is complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. On the basis of a large number of on-orbit calibrations, we found that, owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field. Thus, regular calibration using field data cannot be ensured. This article proposes a real-time self-calibration method for a space linear array camera on a satellite using the optical auto-collimation principle. A collimating light source and small matrix-array CCD devices are installed inside the load system of the satellite; these use the same light path as the linear array camera. We can extract the location changes of the cross marks on the matrix-array CCD to determine the real-time variations in the focal length and angle parameters of the linear array camera. The on-orbit status of the camera is rapidly obtained using this method. On the one hand, the camera's variations can be tracked accurately and its attitude adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be performed quickly, which improves the efficiency and reliability of photogrammetric processing.

  16. Compton camera study for high efficiency SPECT and benchmark with Anger system

    Science.gov (United States)

    Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.

    2017-12-01

    Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application

  17. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish monitoring system consists of two parts: a waterproof box housing the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We decided to use a tablet PC because it is small, cheap, relatively fast and has low power consumption. On the computer we use software with advanced motion detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, as a backup, to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of these photographs has already been prepared, estimating the fish species and how frequently they pass through the fish pass.

  18. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and it was concluded that consumer grade digital cameras are expected to become useful photogrammetric devices for various close-range application fields. On the other hand, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we are faced with the question of whether mobile phone cameras can take the place of consumer grade digital cameras in close-range photogrammetric applications. To evaluate the potential of mobile phone cameras in close-range photogrammetry, a comparative evaluation between mobile phone cameras and consumer grade digital cameras is carried out in this paper with respect to lens distortion, reliability, stability and robustness. The calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicality of mobile phone cameras for close-range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer grade digital cameras and to expand the market in digital photogrammetric fields.

  19. Hyperspectral imaging using a color camera and its application for pathogen detection

    Science.gov (United States)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match the counterpart pixels in the hyperspectral images, which were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) had previously been developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective than PMLR, especially with higher-order polynomial regressions. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image
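
    The PLSR spectral-recovery step can be sketched as below with scikit-learn, mapping calibrated RGB reflectance to hyperspectral reflectance using co-registered training pixels; the array shapes and the random training data are placeholders only.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    rgb_train = rng.random((1000, 3))            # N co-registered pixels, 3 RGB bands
    spectra_train = rng.random((1000, 121))      # e.g. 400-1000 nm sampled in 5 nm steps

    pls = PLSRegression(n_components=3)
    pls.fit(rgb_train, spectra_train)            # learn the RGB -> spectrum mapping

    rgb_new = rng.random((50, 3))
    spectra_pred = pls.predict(rgb_new)          # (50, 121) recovered spectra
    ```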

  20. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  1. Use of the geometric mean of opposing planar projections in pre-reconstruction restoration of SPECT images

    International Nuclear Information System (INIS)

    Boulfelfel, D.; Rangayyan, R.M.; Hahn, L.J.; Kloiber, R.

    1992-01-01

    This paper presents a restoration scheme for single photon emission computed tomography (SPECT) images that performs restoration before reconstruction (pre-reconstruction restoration) from planar (projection) images. In this scheme, the pixel-by-pixel geometric mean of each pair of opposing (conjugate) planar projections is computed prior to the reconstruction process. The averaging process is shown to help in making the degradation phenomenon less dependent on the distance of each point of the object from the camera. The restoration filters investigated are the Wiener and power spectrum equalization filters. (author)
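
    The geometric-mean step itself is simple; a sketch is given below, assuming projections are stored as (angle, row, column) over a full 360° orbit with an even number of views, so that the conjugate of view i is view i + N/2 mirrored left-right. The storage layout and mirroring convention are assumptions for illustration.

    ```python
    import numpy as np

    def conjugate_geometric_mean(projections):
        """Pixel-by-pixel geometric mean of opposing (conjugate) planar projections."""
        half = projections.shape[0] // 2
        front = projections[:half]
        back = projections[half:][:, :, ::-1]    # opposing views, mirrored left-right
        return np.sqrt(front * back)
    ```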

  2. STREAK CAMERA MEASUREMENTS OF THE APS PC GUN DRIVE LASER

    Energy Technology Data Exchange (ETDEWEB)

    Dooling, J. C.; Lumpkin, A. H.

    2017-06-25

    We report recent pulse-duration measurements of the APS PC Gun drive laser at both the second harmonic and fourth harmonic wavelengths. The drive laser is a Nd:Glass-based chirped pulse amplifier (CPA) operating at an IR wavelength of 1053 nm, twice frequency-doubled to obtain UV output for the gun. A Hamamatsu C5680 streak camera and an M5675 synchroscan unit are used for these measurements; the synchroscan unit is tuned to 119 MHz, the 24th subharmonic of the linac S-band operating frequency. Calibration is accomplished both electronically and optically. Electronic calibration utilizes a programmable delay line in the 119 MHz rf path. The optical delay uses an etalon with a known spacing between reflecting surfaces, coated for the visible SH wavelength. The IR pulse duration is monitored with an autocorrelator. Fitting the projected profiles of the streak camera images with Gaussians, the UV rms pulse durations are found to vary from 2.1 ps to 3.5 ps as the IR varies from 2.2 ps to 5.2 ps.

  3. COMPARISON BETWEEN RGB AND RGB-D CAMERAS FOR SUPPORTING LOW-COST GNSS URBAN NAVIGATION

    Directory of Open Access Journals (Sweden)

    L. Rossi

    2018-05-01

    Full Text Available Pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, which prevent correct reception of the satellite signal. The bridging of GNSS outages, as well as the vehicle attitude reconstruction, can be achieved by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D and RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low cost, ease of use and raw data accessibility. The latter has been selected for the high quality of the acquired images and for the possibility of mounting fixed focal length lenses with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter differs, because RGB-D cameras acquire both RGB and depth data, which allows the scale problem, typical of image-only solutions, to be solved. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that the use of a Kinect device to support a u-blox low-cost receiver led to a trajectory with decimetre accuracy, which is 15% better than the one obtained when using the Canon EOS M camera.

  4. Development of the geoCamera, a System for Mapping Ice from a Ship

    Science.gov (United States)

    Arsenault, R.; Clemente-Colon, P.

    2012-12-01

    The geoCamera produces maps of the ice surrounding an ice-capable ship by combining images from one or more digital cameras with the ship's position and attitude data. Maps are produced along the ship's path, with the achievable width and resolution depending on camera mounting height as well as camera resolution and lens parameters. Our system has produced maps up to 2000 m wide at 1 m resolution. Once installed and calibrated, the system is designed to operate automatically, producing maps in near real time and making them available to on-board users via existing information systems. The resulting small-scale maps complement existing satellite-based products as well as on-board observations. Development versions were temporarily deployed in Antarctica on the RV Nathaniel B. Palmer in 2010 and in the Arctic on the USCGC Healy in 2011. A permanent system was deployed during the summer of 2012 on the USCGC Healy. To make the system attractive to other ships of opportunity, design goals include using existing ship systems when practical, using low-cost commercial off-the-shelf components if additional hardware is necessary, automating the process to virtually eliminate adding to the workload of ship technicians, and making the software components modular and flexible enough to allow more seamless integration with a ship's particular IT system.

  5. Automatic Texture Optimization for 3D Urban Reconstruction

    Directory of Open Access Journals (Sweden)

    LI Ming

    2017-03-01

    Full Text Available In order to solve the problem of texture optimization in 3D city reconstruction using multi-lens oblique images, this paper presents a method of seamless texture model reconstruction. First, it corrects the radiometric information of the images using camera response functions and the image dark channel. Then, based on the correspondence between the terrain triangular mesh surface model and the images, it performs occlusion detection by a sparse triangulation method and establishes the list of visible textures for each triangle. Finally, combining the topology of the triangles in the 3D triangular mesh surface model with the means and variances of the images, it constructs a graph-cuts-based texture optimization algorithm under the Markov random field (MRF) framework to solve the discrete label problem of texture selection and clustering; this ensures the consistency of adjacent triangles in texture mapping and achieves seamless texture reconstruction of the city. The experimental results verify the validity and superiority of the proposed method.

  6. Research and Implementation of Robot Path Planning Based onVSLAM

    Directory of Open Access Journals (Sweden)

    Wang Zi-Qiang

    2018-01-01

    Full Text Available In order to solve the path planning problem of warehouse logistics robots in different scenes, this paper proposes a method based on visual simultaneous localization and mapping (VSLAM) to build grid maps of different scenes and to plan paths on these grid maps with the A* algorithm. Firstly, we use VSLAM to reconstruct the environment three-dimensionally. Secondly, based on the three-dimensional environment data, we calculate the accessibility of each grid cell to prepare an occupancy grid map (OGM) for terrain description. Relying on the terrain information, we then use the A* algorithm to solve the path planning problem (a minimal sketch is given below). We also optimize the A* algorithm to improve its efficiency. Lastly, we verify the effectiveness and reliability of the proposed method by simulation and experimental results.
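
    A minimal A* sketch on an occupancy grid of the kind produced from the VSLAM reconstruction is given below (0 = free cell, 1 = occupied), using 4-connectivity and a Manhattan-distance heuristic; it is a generic textbook version, not the optimized variant described in the paper.

    ```python
    import heapq

    def astar(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
        open_set = [(h(start), 0, start, None)]
        came_from, g_best = {}, {start: 0}
        while open_set:
            _, g, cell, parent = heapq.heappop(open_set)
            if cell in came_from:
                continue                                  # already expanded
            came_from[cell] = parent
            if cell == goal:                              # walk back to recover the path
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < g_best.get((nr, nc), float("inf")):
                        g_best[(nr, nc)] = ng
                        heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
        return None                                       # no path exists

    grid = [[0, 0, 0, 1],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 3)))
    ```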

  7. Camera pose estimation for augmented reality in a small indoor dynamic scene

    Science.gov (United States)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or in the presence of dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows, on the one hand, rendering virtual objects in a meaningful way and, on the other hand, improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  8. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  9. Real-Time 3d Reconstruction from Images Taken from AN Uav

    Science.gov (United States)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense and true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. Owing to these characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.

  10. IMAGE ACQUISITION CONSTRAINTS FOR PANORAMIC FRAME CAMERA IMAGING

    Directory of Open Access Journals (Sweden)

    H. Kauhanen

    2012-07-01

    Full Text Available The paper describes an approach to quantify the amount of projective error produced by an offset of projection centres in a panoramic imaging workflow. We have limited this research to panoramic workflows in which several sub-images are taken with a planar image sensor and then stitched together into a large panoramic image mosaic. The aim is to simulate how large the offset can be before it introduces significant error to the dataset. The method uses geometrical analysis to calculate the error in various cases. Constraints for shooting distance, focal length and the depth of the area of interest are taken into account. Considering these constraints, it is possible to safely use even a poorly calibrated panoramic camera rig with a noticeable offset between projection centre locations. The aim is to create datasets suited for photogrammetric reconstruction. Similar constraints can also be used to find recommended areas of the image planes for automatic feature matching and thus improve the stitching of sub-images into full panoramic mosaics. The results are mainly intended for long-focal-length cameras, where the offset between the projection centres of the sub-images may seem significant but the shooting distance is also long. We show that in such situations the offset of the projection centres introduces only negligible error when stitching a metric panorama. Although the main use of the results is with long-focal-length cameras, they are applicable to all focal lengths.
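
    A back-of-the-envelope version of the kind of geometrical estimate the paper makes: under a pinhole model, a projection-centre offset d shifts the image of a point at depth Z by roughly f·d/Z, so the residual misalignment across a depth range [Z_near, Z_far] is about f·d·(1/Z_near - 1/Z_far). The helper and numbers below are illustrative assumptions, not values from the paper.

    ```python
    def parallax_pixels(f_mm, pixel_um, offset_m, z_near_m, z_far_m):
        """Approximate stitching parallax (in pixels) caused by a projection-centre offset."""
        shift_mm = f_mm * offset_m * (1.0 / z_near_m - 1.0 / z_far_m)
        return shift_mm / (pixel_um * 1e-3)

    # 300 mm lens, 5 um pixels, 5 cm offset, scene depth spanning 100-110 m -> ~2.7 px
    print(parallax_pixels(300.0, 5.0, 0.05, 100.0, 110.0))
    ```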

  11. Reconstruction of multiple line source attenuation maps

    International Nuclear Information System (INIS)

    Celler, A.; Sitek, A.; Harrop, R.

    1996-01-01

    A simple configuration for a transmission source for single photon emission computed tomography (SPECT) was proposed, which utilizes a series of collimated line sources parallel to the axis of rotation of a camera. The detector is equipped with a standard parallel-hole collimator. We have demonstrated that this type of source configuration can be used to generate sufficient data for the reconstruction of the attenuation map when using 8-10 line sources spaced by 3.5-4.5 cm for a 30 x 40 cm detector at a 65 cm distance from the sources. Transmission data for a nonuniform thorax phantom were simulated, then binned and reconstructed using filtered backprojection (FBP) and iterative methods. The optimum maps are obtained with data binned into 2-3 bins and FBP reconstruction. The activity in the source was investigated for uniform and exponential activity distributions, as was the effect of gaps and overlaps of the neighboring fan beams. A prototype of the line source has been built and the experimental verification of the technique has started.

  12. Reconstructing the 1935 Columbia River Gorge: A Topographic and Orthophoto Experiment

    Science.gov (United States)

    Fonstad, M. A.; Major, J. H.; O'Connor, J. E.; Dietrich, J. T.

    2017-12-01

    The last decade has seen a revolution in the mapping of rivers and near-river environments. Much of this has been associated with a new type of photogrammetry: structure from motion (SfM) and multi-view stereo techniques. Through SfM, 3D surfaces are reconstructed from nonstructured image groups with poorly calibrated cameras whose locations need not be known. Modern SfM imaging is greatly improved by careful flight planning, well-planned ground control or high-precision direct georeferencing, and well-understood camera optics. The ease of SfM, however, begs the question: how well does it work on archival photos taken without the foreknowledge of SfM techniques? In 1935, the Army Corps of Engineers took over 800 vertical aerial photos for a 160-km-long stretch of the Columbia River Gorge and adjacent areas in Oregon and Washington. These photos pre-date completion of three hydroelectric dams and reservoirs in this reach, and thus provide rich information on the historic pre-dam riverine, geologic, and cultural environments. These photos have little to no metadata associated with them, such as camera calibration reports, so traditional photogrammetry techniques are exceedingly difficult to apply. Instead, we apply SfM to these archival photos, and test the resulting digital elevation model (DEM) against lidar data for features inferred to be unchanged in the past 80 years. Few, if any, of the quality controls recommended for SfM are available for these 1935 photos; they are scanned paper positives with little overlap taken with an unknown optical system in high altitude flight paths. Nevertheless, in almost all areas, the SfM analysis produced a high quality orthophoto of the gorge with low horizontal errors - most better than a few meters. The DEM created looks highly realistic, and in many areas has a vertical error of a few meters. However, the vertical errors are spatially inconsistent, with some wildly large, likely because of the many poorly constrained links in
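
    A small sketch of the kind of accuracy check described above, comparing an SfM-derived DEM against a lidar DEM over cells assumed unchanged. The array file names and the stable-area mask are placeholder assumptions.

```python
# Vertical-accuracy check between an SfM DEM and a reference lidar DEM,
# restricted to cells assumed unchanged over the interval (inputs are placeholders).
import numpy as np

sfm_dem = np.load("sfm_dem.npy")      # co-registered SfM elevation grid
lidar_dem = np.load("lidar_dem.npy")  # reference lidar elevation grid
stable = np.load("stable_mask.npy").astype(bool)  # e.g. bedrock, unchanged roads

diff = (sfm_dem - lidar_dem)[stable]
diff = diff[np.isfinite(diff)]
print("mean error (m):", diff.mean())
print("RMSE (m):", np.sqrt((diff ** 2).mean()))
print("95th percentile |error| (m):", np.percentile(np.abs(diff), 95))
```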

  13. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  14. Image reconstruction under non-Gaussian noise

    DEFF Research Database (Denmark)

    Sciacchitano, Federica

    During acquisition and transmission, images are often blurred and corrupted by noise. One of the fundamental tasks of image processing is to reconstruct the clean image from a degraded version. The process of recovering the original image from the data is an example of inverse problem. Due...... to the ill-posedness of the problem, the simple inversion of the degradation model does not give any good reconstructions. Therefore, to deal with the ill-posedness it is necessary to use some prior information on the solution or the model and the Bayesian approach. Additive Gaussian noise has been......D thesis intends to solve some of the many open questions for image restoration under non-Gaussian noise. The two main kinds of noise studied in this PhD project are the impulse noise and the Cauchy noise. Impulse noise is due to for instance the malfunctioning pixel elements in the camera sensors, errors...

  15. Design study of a Compton camera for prompts-gamma imaging during ion beam therapy

    International Nuclear Information System (INIS)

    Richard, Marie-Helene

    2012-01-01

    Ion beam therapy is an innovative radiotherapy technique using mainly carbon ion and proton irradiations. Its aim is to improve the current treatment modalities. Because of the sharpness of the dose distributions, a control of the dose, if possible in real time, is highly desirable. A possibility is to detect the prompt gamma rays emitted subsequent to the nuclear fragmentations occurring during the treatment of the patient. First, two different Compton cameras (double and single scattering) were optimised by means of Monte Carlo simulations. The response of the camera to a photon point source with a realistic energy spectrum was studied. Then, the response of the camera to the irradiation of a water phantom by a proton beam was simulated. It was first compared with measurements performed with small-size detectors. Then, using the previous measurements, we evaluated the counting rates expected in clinical conditions. In the current set-up of the camera, these counting rates are quite high. Pile-up and random coincidences will be problematic. Finally we demonstrate that the detection system is capable of detecting a longitudinal shift of ± 5 mm in the Bragg peak, even with the current reconstruction algorithm. (author)

  16. On camera-based smoke and gas leakage detection

    Energy Technology Data Exchange (ETDEWEB)

    Nyboe, Hans Olav

    1999-07-01

    Gas detectors are found in almost every part of industry and in many homes as well. An offshore oil or gas platform may host several hundred gas detectors. The ability of the common point and open path gas detectors to detect leakages depends on their location relative to the location of a gas cloud. This thesis describes the development of a passive volume gas detector, that is, one that will detect a leakage anywhere in the area monitored. After the consideration of several detection techniques it was decided to use an ordinary monochrome camera as sensor. Because a gas leakage may perturb the index of refraction, parts of the background appear to be displaced from their true positions, and it is necessary to develop algorithms that can deal with small differences between images. The thesis develops two such algorithms. Many image regions can be defined and several feature values can be computed for each region. The value of the features depends on the pattern in the image regions. The classes studied in this work are: reference, gas, smoke and human activity. Tests show that observations belonging to these classes can be classified with fairly high accuracy. The features in the feature set were chosen and developed for this particular application. Basically, the features measure the magnitude of pixel differences, the size of detected phenomena and image distortion. Interesting results from many experiments are presented. Most importantly, the experiments show that apparent motion caused by a gas leakage or heat convection can be detected by means of a monochrome camera. Small leakages of methane can be detected at a range of about four metres. Other gases, such as butane, whose densities differ more from the density of air than the density of methane does, can be detected further from the camera. Gas leakages large enough to cause condensation have been detected at a camera distance of 20 metres. 59 refs., 42 figs., 13 tabs.
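
    A sketch of the kind of image-difference features described above (magnitude of pixel differences and size of connected changed regions between a reference frame and the current frame). The threshold and the feature definitions are simplified placeholders, not the thesis' exact feature set.

```python
# Simple frame-difference features for camera-based leak/smoke detection (illustrative).
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # background frame
cur = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)     # frame under test

diff = cv2.absdiff(cur, ref)
_, mask = cv2.threshold(diff, 10, 255, cv2.THRESH_BINARY)
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)

features = {
    "mean_abs_diff": float(diff.mean()),                 # magnitude of change
    "changed_fraction": float((mask > 0).mean()),        # size of detected phenomenon
    "largest_region_px": int(stats[1:, cv2.CC_STAT_AREA].max()) if n_labels > 1 else 0,
}
print(features)  # such values would feed a reference/gas/smoke/activity classifier
```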

  17. System architecture for high speed reconstruction in time-of-flight positron tomography

    International Nuclear Information System (INIS)

    Campagnolo, R.E.; Bouvier, A.; Chabanas, L.; Robert, C.

    1985-06-01

    A new generation of Time Of Flight (TOF) positron tomograph with high resolution and high count rate capabilities is under development in our group. After a short review of the data acquisition process and image reconstruction in a TOF PET camera, we present the data acquisition system, which achieves a data transfer rate of 0.8 mega events per second, or more if necessary, in list mode. We describe the reconstruction process based on a five-stage pipeline architecture using home-made processors. The expected performance with this architecture is a reconstruction time of six seconds per image (256x256 pixels) of one million events. This time could be reduced to 4 seconds. We conclude with the future developments of the system.

  18. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba began long ago to manufacture black-and-white radiation-resistant camera tubes employing nonbrowning face-plate glass for ITV cameras used in nuclear power plants. Now, in response to the increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  19. Positron Emission Tomography with Three-Dimensional Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Erlandsson, K.

    1996-10-01

    The development of two different low-cost scanners for positron emission tomography (PET) based on 3D acquisition is presented. The first scanner consists of two rotating scintillation cameras, and produces quantitative images, which have been shown to be clinically useful. The second one is a system with two opposed sets of detectors, based on the limited angle tomography principle, dedicated to mammographic studies. The development of low-cost PET scanners can increase the clinical impact of PET, which is an expensive modality, only available at a few centres world-wide and mainly used as a research tool. A 3D reconstruction method was developed that utilizes all the available data. The size of the data-sets is considerably reduced, using the single-slice rebinning approximation. The 3D reconstruction is divided into 1D axial deconvolution and 2D transaxial reconstruction, which makes it relatively fast. This method was developed for the rotating scanner, but was also implemented for multi-ring scanners with and without inter-plane septa. An iterative 3D reconstruction method was developed for the limited angle scanner, based on the new concept of 'mobile pixels', which reduces the finite pixel errors and leads to an improved signal to noise ratio. 100 refs.

  20. Positron Emission Tomography with Three-Dimensional Reconstruction

    International Nuclear Information System (INIS)

    Erlandsson, K.

    1996-10-01

    The development of two different low-cost scanners for positron emission tomography (PET) based on 3D acquisition is presented. The first scanner consists of two rotating scintillation cameras, and produces quantitative images, which have been shown to be clinically useful. The second one is a system with two opposed sets of detectors, based on the limited angle tomography principle, dedicated to mammographic studies. The development of low-cost PET scanners can increase the clinical impact of PET, which is an expensive modality, only available at a few centres world-wide and mainly used as a research tool. A 3D reconstruction method was developed that utilizes all the available data. The size of the data-sets is considerably reduced, using the single-slice rebinning approximation. The 3D reconstruction is divided into 1D axial deconvolution and 2D transaxial reconstruction, which makes it relatively fast. This method was developed for the rotating scanner, but was also implemented for multi-ring scanners with and without inter-plane septa. An iterative 3D reconstruction method was developed for the limited angle scanner, based on the new concept of 'mobile pixels', which reduces the finite pixel errors and leads to an improved signal to noise ratio. 100 refs.

  1. Hard paths, soft paths or no paths? Cross-cultural perceptions of water solutions

    Science.gov (United States)

    Wutich, A.; White, A. C.; White, D. D.; Larson, K. L.; Brewis, A.; Roberts, C.

    2014-01-01

    In this study, we examine how development status and water scarcity shape people's perceptions of "hard path" and "soft path" water solutions. Based on ethnographic research conducted in four semi-rural/peri-urban sites (in Bolivia, Fiji, New Zealand, and the US), we use content analysis to conduct statistical and thematic comparisons of interview data. Our results indicate clear differences associated with development status and, to a lesser extent, water scarcity. People in the two less developed sites were more likely to suggest hard path solutions, less likely to suggest soft path solutions, and more likely to see no path to solutions than people in the more developed sites. Thematically, people in the two less developed sites envisioned solutions that involve small-scale water infrastructure and decentralized, community-based solutions, while people in the more developed sites envisioned solutions that involve large-scale infrastructure and centralized, regulatory water solutions. People in the two water-scarce sites were less likely to suggest soft path solutions and more likely to see no path to solutions (but no more likely to suggest hard path solutions) than people in the water-rich sites. Thematically, people in the two water-rich sites seemed to perceive a wider array of unrealized potential soft path solutions than those in the water-scarce sites. On balance, our findings are encouraging in that they indicate that people are receptive to soft path solutions in a range of sites, even those with limited financial or water resources. Our research points to the need for more studies that investigate the social feasibility of soft path water solutions, particularly in sites with significant financial and natural resource constraints.

  2. In-process 3D geometry reconstruction of objects produced by direct light projection

    DEFF Research Database (Denmark)

    Andersen, Ulrik Vølcker; Pedersen, David Bue; Hansen, Hans Nørgaard

    2013-01-01

    al. 2011), this method has shown its potential with 3D printing (3DP) and selective laser sintering additive manufacturing processes, where it is possible to directly capture the geometrical features of each individual layer during a build job using a digital camera. When considering the process...... equipment such as coordinate measuring machines cannot be verified easily. This problem is addressed by developing an in-line reverse engineering and 3D reconstruction method that allows a true-to-scale reconstruction of a part being additively manufactured. In earlier works (Pedersen et al. 2010; Hansen et...

  3. Path coupling and aggregate path coupling

    CERN Document Server

    Kovchegov, Yevgeniy

    2018-01-01

    This book describes and characterizes an extension to the classical path coupling method applied to statistical mechanical models, referred to as aggregate path coupling. In conjunction with large deviations estimates, the aggregate path coupling method is used to prove rapid mixing of Glauber dynamics for a large class of statistical mechanical models, including models that exhibit discontinuous phase transitions which have traditionally been more difficult to analyze rigorously. The book shows how the parameter regions for rapid mixing for several classes of statistical mechanical models are derived using the aggregate path coupling method.

  4. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  5. Hanford Environmental Dose Reconstruction Project monthly report

    International Nuclear Information System (INIS)

    Finch, S.M.

    1991-10-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; environmental pathways and dose estimates.

  6. Pinhole single-photon emission tomography reconstruction based on median root prior

    International Nuclear Information System (INIS)

    Sohlberg, Antti; Kuikka, Jyrki T.; Ruotsalainen, Ulla

    2003-01-01

    The maximum likelihood expectation maximisation (ML-EM) algorithm can be used to reduce reconstruction artefacts produced by filtered backprojection (FBP) methods in pinhole single-photon emission tomography (SPET). However, ML-EM suffers from noise propagation along iterations, which leads to quantitatively unpleasant reconstruction results. To avoid this increase in noise, the median root prior (MRP) algorithm for pinhole SPET was implemented. Projection data of a line source and Picker's thyroid phantom were collected using a single-head gamma camera with a pinhole collimator. MRP was added to existing pinhole ML-EM reconstruction algorithm and the phantom studies were reconstructed using MRP, ML-EM and FBP for comparison. Coefficients of variation, contrasts and full-widths at half-maximum were calculated and showed a clear reduction in noise without significant loss of resolution or decrease in contrast when MRP was applied. MRP also produced visually pleasing images even with high iteration numbers, free of the checkerboard-type noise patterns which are typical of ML-EM images. (orig.)
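
    A minimal sketch of an ML-EM update with a median root prior in the usual one-step-late form, for a generic dense system matrix. The 3x3 median window, the beta value and the stand-in data in the usage comment are assumptions for illustration, not the authors' implementation.

```python
# ML-EM reconstruction with a median root prior (MRP) correction, illustrative only.
import numpy as np
from scipy.ndimage import median_filter

def mrp_em(A, proj, shape, n_iter=20, beta=0.3):
    """A: (n_bins, n_voxels) system matrix; proj: measured projections; shape: image shape."""
    x = np.ones(A.shape[1])                      # initial emission estimate
    sens = A.sum(axis=0) + 1e-12                 # sensitivity image
    for _ in range(n_iter):
        ratio = proj / (A @ x + 1e-12)           # measured / estimated projections
        x_em = x * (A.T @ ratio) / sens          # standard ML-EM step
        med = median_filter(x.reshape(shape), size=3).ravel() + 1e-12
        x = x_em / (1.0 + beta * (x - med) / med)  # median root prior correction
    return x.reshape(shape)

# Example with random stand-in data:
# x_hat = mrp_em(np.random.rand(500, 64 * 64), np.random.rand(500), (64, 64))
```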

  7. Reconstructing building mass models from UAV images

    KAUST Repository

    Li, Minglei

    2015-07-26

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.

  8. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, obtaining the corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct the sound source signal in 3D space in an environment with airflow, instead of the numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers were conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between the AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  9. Realisation of a gamma emission tomograph by a servo-controlled camera and bed

    International Nuclear Information System (INIS)

    Guzman-Torres, D.R.

    1980-07-01

    We took part in the building of a transverse axial emission tomograph intended for nuclear medicine. The following three points were dealt with: mathematical, the choice of processing algorithm; electronic, the development of equipment; experimental, the testing of the system built. On the mathematical side, following a survey of reconstruction methods, we studied the use of a reconstruction algorithm based on filtering of the projections by convolution, which gives good spatial resolution. We also proposed a means to solve the computing time/image quality trade-off, leading to a satisfactory result within a shorter total investigation time. In this way the computing time has been reduced by a factor of three. In the electronics field we built an interface between the bed, the gamma camera and the computer already in the laboratory. The present instrument corresponds to version no. 2. The system controls the bed and gamma camera, which are operated from the computer. Experimentally, by checking the calculations with a phantom made up of small emitting sources and finding their exact positions, we were able to demonstrate our ability to locate active foci in the patient. While the results obtained are encouraging from the image restitution viewpoint, the study of problems related to self-absorption inside the organ and to statistical noise has still to be continued [fr]

  10. Reconstruction of measurable three-dimensional point cloud model based on large-scene archaeological excavation sites

    Science.gov (United States)

    Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing

    2017-01-01

    This paper outlines a low-cost, user-friendly photogrammetric technique with nonmetric cameras to obtain digital sequence images of excavation sites, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a certain number of global control points of the excavation site, to reconstruct high-precision, measurable three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratio, high inclination, unstable altitudes, and significant ground elevation changes affecting image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method can maintain the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes, but has lower accuracy when reconstructing a 3-D model of a small scene at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for accurate location of cultural relics, archaeological excavations, investigation, and site protection planning. This proposed method has comprehensive application value.

  11. Performance of a direct detection camera for off-axis electron holography

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Shery L.Y., E-mail: shery.chang@asu.edu [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); LeRoy Eyring Center for Solid State Science, Arizona State University, Tempe, AZ 85287 (United States); Dwyer, Christian [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Department of Physics, Arizona State University, Tempe, AZ 85287 (United States); Barthel, Juri; Boothroyd, Chris B.; Dunin-Borkowski, Rafal E. [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany)

    2016-02-15

    The performance of a direct detection camera (DDC) is evaluated in the context of off-axis electron holographic experiments in a transmission electron microscope. Its performance is also compared directly with that of a conventional charge-coupled device (CCD) camera. The DDC evaluated here can be operated either by the detection of individual electron events (counting mode) or by the effective integration of many such events during a given exposure time (linear mode). It is demonstrated that the improved modulation transfer functions and detective quantum efficiencies of both modes of the DDC give rise to significant benefits over the conventional CCD cameras, specifically, a significant improvement in the visibility of the holographic fringes and a reduction of the statistical error in the phase of the reconstructed electron wave function. The DDC's linear mode, which can handle higher dose rates, allows optimisation of the dose rate to achieve the best phase resolution for a wide variety of experimental conditions. For suitable conditions, the counting mode can potentially utilise a significantly lower dose to achieve a phase resolution that is comparable to that achieved using the linear mode. The use of multiple holograms and correlation techniques to increase the total dose in counting mode is also demonstrated. - Highlights: • Performance of a direct detection camera for off-axis electron holography has been evaluated. • Better holographic fringe visibility and phase resolution are achieved using DDC. • Both counting and linear modes offered by DDC are advantageous for different dose regimes.

  12. Performance of a direct detection camera for off-axis electron holography

    International Nuclear Information System (INIS)

    Chang, Shery L.Y.; Dwyer, Christian; Barthel, Juri; Boothroyd, Chris B.; Dunin-Borkowski, Rafal E.

    2016-01-01

    The performance of a direct detection camera (DDC) is evaluated in the context of off-axis electron holographic experiments in a transmission electron microscope. Its performance is also compared directly with that of a conventional charge-coupled device (CCD) camera. The DDC evaluated here can be operated either by the detection of individual electron events (counting mode) or by the effective integration of many such events during a given exposure time (linear mode). It is demonstrated that the improved modulation transfer functions and detective quantum efficiencies of both modes of the DDC give rise to significant benefits over the conventional CCD cameras, specifically, a significant improvement in the visibility of the holographic fringes and a reduction of the statistical error in the phase of the reconstructed electron wave function. The DDC's linear mode, which can handle higher dose rates, allows optimisation of the dose rate to achieve the best phase resolution for a wide variety of experimental conditions. For suitable conditions, the counting mode can potentially utilise a significantly lower dose to achieve a phase resolution that is comparable to that achieved using the linear mode. The use of multiple holograms and correlation techniques to increase the total dose in counting mode is also demonstrated. - Highlights: • Performance of a direct detection camera for off-axis electron holography has been evaluated. • Better holographic fringe visibility and phase resolution are achieved using DDC. • Both counting and linear modes offered by DDC are advantageous for different dose regimes.

  13. PROCEDURE ENABLING SIMULATION AND IN-DEPTH ANALYSIS OF OPTICAL EFFECTS IN CAMERA-BASED TIME-OF-FLIGHT SENSORS

    Directory of Open Access Journals (Sweden)

    M. Baumgart

    2018-05-01

    Full Text Available This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help understand experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach and use the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray-paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
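
    The following illustrates the optical-path-length bookkeeping for a continuous-wave ToF pixel: each ray path contributes a phasor whose phase is set by its optical path length, and the recovered depth follows from the resulting phase. The modulation frequency and the example paths are made-up numbers, not values from the paper.

```python
# Apparent depth of a CW ToF pixel from one or several ray paths (illustrative).
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 20e6               # assumed modulation frequency, Hz

def apparent_depth(path_lengths_m, amplitudes):
    """Combine several ray paths (direct + multipath) into one measured depth."""
    phases = 2 * np.pi * F_MOD * (np.asarray(path_lengths_m) / C)
    phasor = np.sum(np.asarray(amplitudes) * np.exp(1j * phases))
    measured_phase = np.angle(phasor) % (2 * np.pi)
    return C * measured_phase / (4 * np.pi * F_MOD)   # one-way depth

direct = 2 * 1.50                       # round-trip path of a wall at 1.5 m
bounce = 2 * 1.50 + 0.80                # extra 0.8 m travelled via a reflector
print(apparent_depth([direct], [1.0]))               # ~1.50 m, single path
print(apparent_depth([direct, bounce], [1.0, 0.4]))  # biased by multipath
```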

  14. Path Creation, Path Dependence and Breaking Away from the Path

    OpenAIRE

    Wang, Jens; Hedman, Jonas; Tuunainen, Virpi Kristiina

    2016-01-01

    The explanation of how and why firms succeed or fail is a recurrent research challenge. This is particularly important in the context of technological innovations. We focus on the role of historical events and decisions in explaining such success and failure. Using a case study of Nokia, we develop and extend a multi-layer path dependence framework. We identify four layers of path dependence: technical, strategic and leadership, organizational, and external collaboration. We show how path dep...

  15. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce better images than conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  16. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd glass laser acts as an ultra-fast and periodic shutter, with an opening time of a few ps. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing us to study very fast effects [fr]

  17. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

    Science.gov (United States)

    Waag, Robert C; Astheimer, Jeffrey P

    2005-05-01

    Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.
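
    The recursion in the paper is Laplacian-based; the sketch below shows a generic least-squares integration of phase differences via an FFT Poisson solve, only to illustrate the idea of recovering an aberration phase screen from adjacent-element phase differences. It assumes periodic boundaries and is not the authors' exact algorithm.

```python
# Least-squares phase from x/y phase differences via an FFT-based Poisson solve.
import numpy as np

def phase_from_differences(dx, dy):
    """dx, dy: phase differences along x and y between adjacent positions (same shape)."""
    ny, nx = dx.shape
    # Divergence of the measured gradient field (right-hand side of the Poisson equation).
    rho = (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))
    # Solve laplacian(phi) = rho with periodic boundaries in the Fourier domain.
    wx = 2 * np.pi * np.fft.fftfreq(nx)
    wy = 2 * np.pi * np.fft.fftfreq(ny)
    denom = (2 * np.cos(wx)[None, :] - 2) + (2 * np.cos(wy)[:, None] - 2)
    denom[0, 0] = 1.0                      # the overall constant offset is arbitrary
    return np.real(np.fft.ifft2(np.fft.fft2(rho) / denom))
```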

  18. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers prominent advantages of full-frame measurements using a single high-speed camera but without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
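
    A sketch of a simple linear crosstalk correction separating the blue-path and red-path sub-images recorded in one colour frame, as described above. The 2x2 mixing matrix is a stand-in for whatever coefficients a real calibration would provide.

```python
# Linear colour-crosstalk correction for a two-path single-camera setup (illustrative).
import numpy as np

# Assumed crosstalk: each recorded channel is a mix of the two optical paths.
M = np.array([[0.92, 0.08],     # recorded red  = 0.92*red_path  + 0.08*blue_path
              [0.06, 0.94]])    # recorded blue = 0.06*red_path  + 0.94*blue_path
M_inv = np.linalg.inv(M)

def separate_paths(red_channel, blue_channel):
    """Return the crosstalk-corrected red-path and blue-path sub-images."""
    stacked = np.stack([red_channel, blue_channel], axis=-1).astype(float)
    corrected = stacked @ M_inv.T
    return corrected[..., 0], corrected[..., 1]
```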

  19. POLE PHOTOGRAMMETRY WITH AN ACTION CAMERA FOR FAST AND ACCURATE SURFACE MAPPING

    Directory of Open Access Journals (Sweden)

    J. A. Gonçalves

    2016-06-01

    Full Text Available High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches may be needed, either because higher resolution is required or the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras have high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos, with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure from motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example for soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in very fast field work. If an improved accuracy is needed, since image resolution is 1/4 cm, it can be achieved using a total station for the control point survey, although the field work time increases.

  20. Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping

    Science.gov (United States)

    Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.

    2016-06-01

    High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches may be needed, either because higher resolution is required or the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras have high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos, with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure from motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example for soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in very fast field work. If an improved accuracy is needed, since image resolution is 1/4 cm, it can be achieved using a total station for the control point survey, although the field work time increases.

  1. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows for the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow for the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test out several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
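
    A minimal intrinsic-calibration sketch with OpenCV, the same basic operation the ROS camera_calibration package wraps; repeating it over several image sets is one way to observe the run-to-run variance discussed above. The checkerboard size, square size and image file pattern are placeholder assumptions.

```python
# Checkerboard-based intrinsic camera calibration with OpenCV (illustrative).
import cv2
import glob
import numpy as np

pattern = (9, 6)                              # inner corners of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm squares

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):        # placeholder image names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("camera matrix:\n", K)
```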

  2. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, pan/tilt controller) were designed on the concept of remote control. Two types of radiation tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  3. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, pan/tilt controller) were designed on the concept of remote control. Two types of radiation tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  4. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest growing consumer markets today is camera phones. During the past few years total volume has been growing fast, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality but in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels is affecting both. On the other hand, reliability and miniaturization are key drivers for product development, as well as cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper trade-offs related to optics and their effects on image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  5. Improved depth estimation with the light field camera

    Science.gov (United States)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is now available in a single capture. Lytro, Inc. also provides depth estimation from a single-shot capture with a light field camera such as the Lytro Illum. This Lytro depth estimation contains much correct depth information and can be used for higher quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimation by combining defocus, correspondence and Lytro depth estimations. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field displays.
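
    A sketch of the two cues described above, computed from a 4D light field L[v, u, y, x] (angular v, u; spatial y, x). How the cues and the Lytro map are fused (weights, regularisation) is omitted; the per-depth refocusing step is only indicated in comments, so this is an illustration of the cue definitions rather than the full algorithm.

```python
# Defocus and correspondence responses of a refocused 4D light field (illustrative).
import numpy as np

def defocus_response(L_refocused):
    """Defocus cue: spatial gradient magnitude after angular integration."""
    img = L_refocused.mean(axis=(0, 1))          # integrate over the aperture
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)                      # high where the refocus is sharp

def correspondence_response(L_refocused):
    """Correspondence cue: angular variance (low where all views agree)."""
    return L_refocused.var(axis=(0, 1))

# For each candidate depth, the light field would first be sheared/refocused to
# that depth; per pixel, the depth maximising the defocus response and minimising
# the correspondence response gives the two raw estimates to fuse with the Lytro map.
```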

  6. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Finch, S.M.; McMakin, A.H.

    1991-04-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; and environmental pathways and dose estimates.

  7. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
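
    A sketch of deriving such a response curve: integrate the background-subtracted stellar signal in each frame and fit measured output against the known input brightness. The polynomial model, the background handling and the variable names are assumptions for illustration only.

```python
# Fit a system response curve (output signal vs. known input brightness), illustrative.
import numpy as np

def integrated_signal(frame, background):
    """Sum of background-subtracted counts in the source region of one frame."""
    return float(np.clip(frame - background, 0, None).sum())

def response_curve(frames, background, known_brightness, degree=3):
    """Fit measured output signal as a polynomial of the known input brightness."""
    outputs = np.array([integrated_signal(f, background) for f in frames])
    coeffs = np.polyfit(known_brightness, outputs, degree)
    return np.poly1d(coeffs), outputs

# calib_fn, _ = response_curve(frames, dark_frame, brightness_per_frame)
# Science frames would later be corrected by inverting calib_fn frame by frame.
```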

  8. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    Energy Technology Data Exchange (ETDEWEB)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P

    2000-07-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10^-2 seems possible in the near future. (author)

  9. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    International Nuclear Information System (INIS)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P.

    2000-01-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10^-2 seems possible in the near future. (author)

  10. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  11. Optical design of the comet Shoemaker-Levy speckle camera

    Energy Technology Data Exchange (ETDEWEB)

    Bissinger, H. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    An optical design is presented in which the Lick 3 meter telescope and a bare CCD speckle camera system were used to image the collision sites of the Shoemaker-Levy 9 comet with the planet Jupiter. The brief overview includes the optical constraints and system layout. The choice of a Risley prism combination to compensate for the time-dependent atmospheric chromatic changes is described. Plate scale and signal-to-noise ratio curves resulting from imaging reference stars are compared with theory. Comparisons are made between uncorrected and reconstructed images of Jupiter's impact sites. The results confirm that speckle imaging techniques can be used over an extended time period to provide a method to image large extended objects.

  12. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Full Text Available Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  13. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay rate of the illuminating light with distance, due to the divergence of the light, is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high resolution real-time operation, simplicity, compactness, light weight, portability, and yet low fabrication cost. The feasibility of various potential applications is also included.
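
    The following is a simplified illustration of the divergence-ratio idea: two images of the same scene are taken with an axial point source at two known positions, and because the illumination decays as 1/d^2, the per-pixel intensity ratio depends only on distance, not on surface reflectance. The on-axis geometry and the numbers are assumptions, not the Divcam's actual optical layout.

```python
# Distance from the ratio of two images lit from source positions separated axially.
import numpy as np

def distance_from_ratio(img_near, img_far, baseline_m):
    """img_near/img_far: images lit from the nearer/farther source position;
    baseline_m: axial separation between the two source positions."""
    ratio = img_near / np.clip(img_far, 1e-9, None)        # = ((d + b) / d)^2
    root = np.sqrt(np.clip(ratio, 1.0 + 1e-9, None))
    return baseline_m / (root - 1.0)                        # per-pixel distance d

# Example: a target at 2.0 m with a 0.1 m source baseline gives ratio (2.1/2.0)^2.
print(distance_from_ratio(np.array([[(2.1 / 2.0) ** 2]]), np.array([[1.0]]), 0.1))
```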

  14. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mejia, J.; Galvis-Alonso, O.Y., E-mail: mejia_famerp@yahoo.com.b [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Biologia Molecular; Castro, A.A. de; Simoes, M.V. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Clinica Medica; Leite, J.P. [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). Fac. de Medicina. Dept. de Neurociencias e Ciencias do Comportamento; Braga, J. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Div. de Astrofisica

    2010-11-15

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology. (author)

  15. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...... a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection......, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras....

  16. Correction of head motion artifacts in SPECT with fully 3-D OS-EM reconstruction

    International Nuclear Information System (INIS)

    Fulton, R.R.

    1998-01-01

    Full text: A method which relies on continuous monitoring of head position has been developed to correct for head motion in SPECT studies of the brain. Head position and orientation are monitored during data acquisition by an inexpensive head tracking system (ADL-1, Shooting Star Technology, Rosedale, British Columbia). Motion correction involves changing the projection geometry to compensate for motion (using data from the head tracker), and reconstructing with a fully 3-D OS-EM algorithm. The reconstruction algorithm can accommodate any number of movements and any projection geometry. A single iteration of 3-D OS-EM using all available projections provides a satisfactory 3-D reconstruction, essentially free of motion artifacts. The method has been validated in studies of the 3-D Hoffman brain phantom. Multiple 360-degree acquisitions, each with the phantom in a different position, were performed on a Trionix triple head camera. Movements were simulated by combining projections from the different acquisitions. Accuracy was assessed by comparison with a motion-free reconstruction, visually and by calculating mean squared error (MSE). Motion correction reduced distortion perceptibly and, depending on the motions applied, improved MSE by up to an order of magnitude. Three-dimensional reconstruction of the 128 x 128 x 128 data set took 2 minutes on a SUN Ultra 1 workstation. This motion correction technique can be retro-fitted to existing SPECT systems and could be incorporated in future SPECT camera designs. It appears to be applicable in PET as well as SPECT, to be able to correct for any head movements, and to have the potential to improve the accuracy of tomographic brain studies under clinical imaging conditions.

  17. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    Directory of Open Access Journals (Sweden)

    Heegwang Kim

    2017-12-01

    Full Text Available Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
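
    The final recovery step described above follows the standard atmospheric scattering model I = J·t + A·(1 − t). The sketch below simply inverts that model given an estimated transmission map and atmospheric light; the estimation steps themselves (optical flow, disparity, iterative refinement) are not reproduced here, and the clipping threshold is an assumption.

```python
import numpy as np

def defog(I, t, A, t_min=0.1):
    """Recover the fog-free image J from the model I = J * t + A * (1 - t).

    I : (H, W, 3) observed foggy image, float values in [0, 1]
    t : (H, W)    estimated transmission map
    A : (3,)      estimated atmospheric light
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # avoid division by near-zero transmission
    J = (I - A) / t + A                     # invert the scattering model per pixel
    return np.clip(J, 0.0, 1.0)
```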

  18. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    Science.gov (United States)

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.

  19. PHOTOMETRIC STEREO SHAPE-AND-ALBEDO-FROM-SHADING FOR PIXEL-LEVEL RESOLUTION LUNAR SURFACE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    W. C. Liu

    2017-07-01

    Full Text Available Shape and Albedo from Shading (SAfS) techniques recover pixel-wise surface details based on the relationship between terrain slopes, illumination and imaging geometry, and the energy response (i.e., image intensity) captured by the sensing system. Multiple images with different illumination geometries (i.e., photometric stereo) can provide better SAfS surface reconstruction due to the increase in observations. Photometric stereo SAfS is suitable for detailed surface reconstruction of the Moon and other extra-terrestrial bodies due to the availability of photometric stereo and the less complex surface reflecting properties (i.e., albedo) of the target bodies as compared to the Earth. Considering only one photometric stereo pair (i.e., two images), pixel-variant albedo is still a major obstacle to satisfactory reconstruction and it needs to be regulated by the SAfS algorithm. The illumination directional difference between the two images also becomes an important factor affecting the reconstruction quality. This paper presents a photometric stereo SAfS algorithm for pixel-level resolution lunar surface reconstruction. The algorithm includes a hierarchical optimization architecture for handling pixel-variant albedo and improving performance. With the use of Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) photometric stereo images, the reconstructed topography (i.e., the DEM) is compared with the DEM produced independently by photogrammetric methods. This paper also addresses the effect of illumination directional difference between one photometric stereo pair on the reconstruction quality of the proposed algorithm by both mathematical and experimental analysis. In this case, LROC NAC images under multiple illumination directions are utilized by the proposed algorithm for experimental comparison. The mathematical derivation suggests an illumination azimuthal difference of 90 degrees between two images is recommended to achieve
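
    For the general three-or-more-image photometric-stereo case that SAfS builds on, the per-pixel surface normal and albedo follow from a least-squares fit of the Lambertian model I = albedo·(n·l). The sketch below is that textbook step, not the authors' hierarchical two-image algorithm; array shapes and names are illustrative.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Per-pixel albedo and normal from the Lambertian model I = rho * (n . l).

    images     : (K, H, W) image stack under K known illumination directions
    light_dirs : (K, 3) unit illumination direction for each image
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                                  # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)         # G = rho * n, shape (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)                    # unit surface normals
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)
```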

  20. Teaching quantum physics by the sum over paths approach and GeoGebra simulations

    International Nuclear Information System (INIS)

    Malgieri, M; Onorato, P; De Ambrosis, A

    2014-01-01

    We present a research-based teaching sequence in introductory quantum physics using the Feynman sum over paths approach. Our reconstruction avoids the historical pathway, and starts by reconsidering optics from the standpoint of the quantum nature of light, analysing both traditional and modern experiments. The core of our educational path lies in the treatment of conceptual and epistemological themes, peculiar of quantum theory, based on evidence from quantum optics, such as the single photon Mach–Zehnder and Zhou–Wang–Mandel experiments. The sequence is supported by a collection of interactive simulations, realized in the open source GeoGebra environment, which we used to assist students in learning the basics of the method, and help them explore the proposed experimental situations as modeled in the sum over paths perspective. We tested our approach in the context of a post-graduate training course for pre-service physics teachers; according to the data we collected, student teachers displayed a greatly improved understanding of conceptual issues, and acquired significant abilities in using the sum over path method for problem solving. (paper)
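
    Numerically, the sum-over-paths method amounts to adding one unit phasor exp(2πiL/λ) per allowed path and squaring the modulus of the sum. The two-slit toy example below illustrates the idea in plain Python rather than GeoGebra; the geometry and wavelength are made-up values, not taken from the teaching sequence.

```python
import numpy as np

wavelength = 500e-9                       # 500 nm light (illustrative)
slit_y = np.array([-1e-4, 1e-4])          # two slits, 0.2 mm apart
L = 1.0                                   # source-to-slit and slit-to-screen distance (m)
screen_y = np.linspace(-0.02, 0.02, 1000) # points on the detection screen

intensity = np.zeros_like(screen_y)
for i, y in enumerate(screen_y):
    # one path per slit: source -> slit (length ~L) plus slit -> screen point
    path_lengths = L + np.sqrt(L**2 + (y - slit_y)**2)
    phasors = np.exp(2j * np.pi * path_lengths / wavelength)  # one phasor per path
    intensity[i] = abs(phasors.sum())**2                      # coherent sum, then modulus squared
```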

  1. The Polyakov path integral over bordered surfaces 3 (The BRST extended closed string off-shell amplitudes)

    International Nuclear Information System (INIS)

    Jaskolski, Z.

    1991-05-01

    The geometrical approach to the functional integral over Faddeev-Popov ghost fields is developed and applied to construct the BRST extension of the off-shell closed string amplitudes in the constant curvature gauge. In this gauge the overlap path integral for off-shell amplitudes is evaluated. It leads to the nonlocal sewing procedure generating all off-shell amplitudes from the cubic interaction vertex. The general scheme of the reconstruction of a covariant closed string field theory from the off-shell amplitudes is discussed within the path integral framework. (author). 30 refs

  2. Novel Low Cost 3D Surface Model Reconstruction System for Plant Phenotyping

    Directory of Open Access Journals (Sweden)

    Suxing Liu

    2017-09-01

    Full Text Available Accurate high-resolution three-dimensional (3D models are essential for a non-invasive analysis of phenotypic characteristics of plants. Previous limitations in 3D computer vision algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present an image-based 3D plant reconstruction system that can be achieved by using a single camera and a rotation stand. Our method is based on the structure from motion method, with a SIFT image feature descriptor. In order to improve the quality of the 3D models, we segmented the plant objects based on the PlantCV platform. We also deducted the optimal number of images needed for reconstructing a high-quality model. Experiments showed that an accurate 3D model of the plant was successfully could be reconstructed by our approach. This 3D surface model reconstruction system provides a simple and accurate computational platform for non-destructive, plant phenotyping.
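
    A hedged sketch of the feature-matching front end such a structure-from-motion pipeline relies on, using OpenCV's SIFT implementation and Lowe's ratio test; the pose estimation, PlantCV segmentation and dense reconstruction stages are not shown, and the file paths and ratio threshold are placeholders.

```python
import cv2

def match_sift(img1_path, img2_path, ratio=0.75):
    """SIFT keypoint matching between two views of the plant (Lowe ratio test)."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher()
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]  # ratio test
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2
```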

  3. Progress towards a semiconductor Compton camera for prompt gamma imaging during proton beam therapy for range and dose verification

    Science.gov (United States)

    Gutierrez, A.; Baker, C.; Boston, H.; Chung, S.; Judson, D. S.; Kacperek, A.; Le Crom, B.; Moss, R.; Royle, G.; Speller, R.; Boston, A. J.

    2018-01-01

    The main objective of this work is to test a new semiconductor Compton camera for prompt gamma imaging. Our device is composed of three active layers: a Si(Li) detector as a scatterer and two high purity Germanium detectors as absorbers of high-energy gamma rays. We performed Monte Carlo simulations using the Geant4 toolkit to characterise the expected gamma field during proton beam therapy and have made experimental measurements of the gamma spectrum with a 60 MeV passive scattering beam irradiating a phantom. In this proceeding, we describe the status of the Compton camera and present the first preliminary measurements with radioactive sources and their corresponding reconstructed images.

  4. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  5. Development of a tomographic system adapted to 3D measurement of contaminated wounds based on the Cacao concept (Computer aided collimation Gamma Camera); Developpement a partir du concept CACAO (Camera A Collimation Assistee par Ordinateur) d'un systeme tomographique adapte a la mesure 3D de plaies contaminees

    Energy Technology Data Exchange (ETDEWEB)

    Douiri, A

    2002-03-01

    The computer aided collimation gamma camera (CACAO in French) is a gamma camera using a collimator with large holes, a supplementary linear scanning motion during the acquisition and a dedicated reconstruction program taking full account of the source depth. The CACAO system was introduced to improve both the sensitivity and the resolution in nuclear medicine. This thesis focuses on the design of a fast and robust reconstruction algorithm in the CACAO project. We start by an overview of tomographic imaging techniques in nuclear medicine. After modelling the physical CACAO system, we present the complete reconstruction program which involves three steps: 1) shift and sum 2) deconvolution and filtering 3) rotation and sum. The deconvolution is the critical step that decreases the signal to noise ratio of the reconstructed images. We propose a regularized multi-channel algorithm to solve the deconvolution problem. We also present a fast algorithm based on Splines functions and preserving the high quality of the reconstructed images for the shift and the rotation steps. Comparisons of simulated reconstructed images in 2D and 3D for the conventional system (CPHC) and CACAO demonstrate the ability of CACAO system to increase the quality of the SPECT images. Finally, this study concludes with an experimental approach with a pixellated detector conceived for a 3D measurement of contaminated wounds. This experimentation proves the possible advantages of coupling the CACAO project with pixellated detectors. Moreover, a variety of applications could fully benefit from the CACAO system, such as low activity imaging, the use of high-energy gamma isotopes and the visualization of deep organs. Moreover the combination of the CACAO system with a pixels detector may open up further possibilities for the future of nuclear medicine. (author)

  6. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically generate...

  7. A stacked CdTe pixel detector for a compton camera

    International Nuclear Information System (INIS)

    Oonuki, Kousuke; Tanaka, Takaaki; Watanabe, Shin; Takeda, Shin'ichiro; Nakazawa, Kazuhiro; Ushio, Masayoshi; Mitani, Takefumi; Takahashi, Tadayuki; Tajima, Hiroyasu

    2007-01-01

    We are developing a semiconductor Compton telescope to explore the universe in the energy band from several tens of keV to a few MeV. A detector material of combined Si strip and CdTe pixel is used to cover the energy range around 60 keV. For energies above several hundred keV, in contrast, the higher detection efficiency of CdTe semiconductor in comparison with Si is expected to play an important role as both an absorber and a scatterer. In order to demonstrate the spectral and imaging capability of a CdTe-based Compton camera, we developed a Compton telescope consisting of a stack of CdTe pixel detectors as a small scale prototype. With this prototype, we succeeded in reconstructing images and spectra by solving the Compton kinematics within the energy band from 122 to 662 keV. The energy resolution (FWHM) of reconstructed spectra is 7.3 keV at 511 keV. The angular resolution obtained at 511 keV is measured to be 12.2 deg. (FWHM).
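
    Solving the Compton kinematics per event comes down to recovering the scattering angle from the two energy deposits, via cos θ = 1 − m_e c² (1/E_absorbed − 1/E_total). A minimal sketch with made-up event values is shown below; the event selection and cone back-projection used to form images are not included.

```python
import numpy as np

M_E_C2 = 511.0   # electron rest energy in keV

def compton_angle(e_scatterer, e_absorber):
    """Scattering angle (degrees) from the energies deposited in the two layers.

    e_scatterer : energy (keV) given to the Compton electron in the first layer
    e_absorber  : energy (keV) of the scattered photon absorbed in the second layer
    """
    e_total = e_scatterer + e_absorber
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorber - 1.0 / e_total)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical event: a 511 keV photon depositing 170 keV in the scatterer
print(compton_angle(170.0, 341.0))
```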

  8. ComPath: comparative enzyme analysis and annotation in pathway/subsystem contexts

    Directory of Open Access Journals (Sweden)

    Kim Sun

    2008-03-01

    Full Text Available Abstract Background Once a new genome is sequenced, one of the important questions is to determine the presence and absence of biological pathways. Analysis of biological pathways in a genome is a complicated task since a number of biological entities are involved in pathways and biological pathways in different organisms are not identical. Computational pathway identification and analysis thus involves a number of computational tools and databases and is typically done in comparison with pathways in other organisms. This computational requirement is much beyond the capability of biologists, so information systems for reconstructing, annotating, and analyzing biological pathways are much needed. We introduce a new comparative pathway analysis workbench, ComPath, which integrates various resources and computational tools using an interactive spreadsheet-style web interface for reliable pathway analyses. Results ComPath allows users to compare biological pathways in multiple genomes using a spreadsheet style web interface where various sequence-based analyses can be performed either to compare enzymes (e.g. sequence clustering) and pathways (e.g. pathway hole identification), to search a genome for de novo prediction of enzymes, or to annotate a genome in comparison with reference genomes of choice. To fill in pathway holes or make de novo enzyme predictions, multiple computational methods such as FASTA, Whole-HMM, CSR-HMM (a method of our own introduced in this paper), and PDB-domain search are integrated in ComPath. Our experiments show that FASTA and CSR-HMM search methods generally outperform Whole-HMM and PDB-domain search methods in terms of sensitivity, but FASTA search performs poorly in terms of specificity, detecting more false positives as the E-value cutoff increases. Overall, the CSR-HMM search method performs best in terms of both sensitivity and specificity. Gene neighborhood and pathway neighborhood (global network) visualization tools can be used

  9. Automatic detection of patient position for incorporation in exact 3D reconstruction for emission tomography

    International Nuclear Information System (INIS)

    Kyme, A.; Hutton, B.; Hatton, R.; Skerrett, D.

    2000-01-01

    Full text: SPECT involves acquiring a set of projection images using one or more rotating gamma cameras. The projections are then reconstructed to create transverse slices. Patient motion during the scan can introduce inconsistencies into the data leading to artifacts. There remains a need for robust and effective motion correction. One approach uses the (corrupt) data itself to derive the patient position at each projection angle. Corrected data is periodically incorporated into a 3-D reconstruction. Fundamental aspects of the algorithm mechanics, particularly performance in the presence of Poisson noise, have been examined. Brain SPECT studies were simulated using a digital version of the Huffman brain phantom. Projection datasets with Poisson noise imposed, generated for different positions of the phantom, were combined and reconstructed to produce motion-corrupted reconstructions. To examine the behaviour of the cost function as the object position was changed, the corrupted re-construction was forward projected and the mean square difference (MSD) between the resulting re-projections and corresponding original projections was calculated. The ability to detect mis-positioned projections for different degrees of freedom, the importance of using dual-head camera geometry, and the effect of smoothing the original projections prior to the MSD calculation were assessed. Re-projection of the corrupt reconstruction was able to correctly identify mis-positioned projection data. The degree of movement as defined by MSD was more easily identified for translations than for rotations. Noise resulted in an increasing bias that made it difficult to distinguish the minimum MSD, particularly for z-axis rotations. This was improved by median filtering of projections. Right-angled dual-head geometry is necessary to provide stability to the algorithm and to better identify motion in all 6 degrees of freedom. These findings will assist the optimisation of a fully automated motion
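
    A hedged sketch of the cost-function step described above: the motion-corrupted reconstruction is forward projected and the mean squared difference (MSD) against each original projection flags mis-positioned views. The forward projector is assumed to exist elsewhere, and the outlier threshold is an illustrative choice rather than the authors' criterion.

```python
import numpy as np

def projection_msd(measured, reprojected, smooth=None):
    """Mean squared difference per projection angle.

    measured    : (n_angles, n_u, n_v) acquired projections
    reprojected : (n_angles, n_u, n_v) forward projections of the current reconstruction
    smooth      : optional callable applied to each measured projection (e.g. a median filter)
    """
    msd = np.empty(measured.shape[0])
    for k, (p, r) in enumerate(zip(measured, reprojected)):
        if smooth is not None:
            p = smooth(p)
        msd[k] = np.mean((p - r) ** 2)
    return msd

def flag_misaligned(msd, n_sigma=2.0):
    """Projections whose MSD stands out from the rest are candidates for repositioning."""
    return np.where(msd > msd.mean() + n_sigma * msd.std())[0]
```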

  10. Analyzing octopus movements using three-dimensional reconstruction.

    Science.gov (United States)

    Yekutieli, Yoram; Mitelman, Rea; Hochner, Binyamin; Flash, Tamar

    2007-09-01

    Octopus arms, as well as other muscular hydrostats, are characterized by a very large number of degrees of freedom and a rich motion repertoire. Over the years, several attempts have been made to elucidate the interplay between the biomechanics of these organs and their control systems. Recent developments in electrophysiological recordings from both the arms and brains of behaving octopuses mark significant progress in this direction. The next stage is relating these recordings to the octopus arm movements, which requires an accurate and reliable method of movement description and analysis. Here we describe a semiautomatic computerized system for 3D reconstruction of an octopus arm during motion. It consists of two digital video cameras and a PC computer running custom-made software. The system overcomes the difficulty of extracting the motion of smooth, nonrigid objects in poor viewing conditions. Some of the trouble is explained by the problem of light refraction in recording underwater motion. Here we use both experiments and simulations to analyze the refraction problem and show that accurate reconstruction is possible. We have used this system successfully to reconstruct different types of octopus arm movements, such as reaching and bend initiation movements. Our system is noninvasive and does not require attaching any artificial markers to the octopus arm. It may therefore be of more general use in reconstructing other nonrigid, elongated objects in motion.
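
    At the core of any two-camera reconstruction of arm points is linear triangulation from the two projection matrices. The standard DLT sketch below is a generic illustration, not the authors' semiautomatic system; camera calibration and the underwater refraction correction discussed above are assumed to be handled upstream.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two calibrated cameras.

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,) pixel coordinates of the same arm point in each view
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # solution is the right singular vector of smallest singular value
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean 3D point
```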

  11. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    Science.gov (United States)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purpose: students are helped understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor set up based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit in dynamic and interactive mode a completed experiment at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements from the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the experiment running, DIC analysis output quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction has been based on low-cost and open-source solutions.
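
    Digital image correlation at its simplest tracks each surface patch by maximising normalised cross-correlation between a template from one frame and a search window in the next. The sketch below uses OpenCV's matchTemplate as a stand-in for the (unspecified) DIC software used in the study; patch and search sizes are placeholders.

```python
import cv2

def track_patch(frame_a, frame_b, center, half=15, search=40):
    """Pixel displacement of one surface patch between two consecutive frames.

    frame_a, frame_b : greyscale images (uint8)
    center           : (x, y) patch centre in frame_a
    half             : half-size of the square template
    search           : half-size of the search window in frame_b
    """
    x, y = center
    template = frame_a[y - half:y + half, x - half:x + half]
    window = frame_b[y - search:y + search, x - search:x + search]
    score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(score)        # location of the best correlation peak
    dx = best[0] + half - search                # displacement in x (pixels)
    dy = best[1] + half - search                # displacement in y (pixels)
    return dx, dy
```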

  12. IMAGE-BASED RECONSTRUCTION AND ANALYSIS OF DYNAMIC SCENES IN A LANDSLIDE SIMULATION FACILITY

    Directory of Open Access Journals (Sweden)

    M. Scaioni

    2017-12-01

    Full Text Available The application of image processing and photogrammetric techniques to dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purpose: students are helped understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor set up based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit in dynamic and interactive mode a completed experiment at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements from the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the experiment running, DIC analysis output quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction has been based on low-cost and open-source solutions.

  13. Caveat emptor: limitations of the automated reconstruction of metabolic pathways in Plasmodium.

    Science.gov (United States)

    Ginsburg, Hagai

    2009-01-01

    The functional reconstruction of metabolic pathways from an annotated genome is a tedious and demanding enterprise. Automation of this endeavor using bioinformatics algorithms could cope with the ever-increasing number of sequenced genomes and accelerate the process. Here, the manual reconstruction of metabolic pathways in the functional genomic database of Plasmodium falciparum--Malaria Parasite Metabolic Pathways--is described and compared with pathways generated automatically as they appear in PlasmoCyc, metaSHARK and the Kyoto Encyclopedia for Genes and Genomes. A critical evaluation of this comparison discloses that the automatic reconstruction of pathways generates manifold paths that need an expert manual verification to accept some and reject most others based on manually curated gene annotation.

  14. Biologically inspired EM image alignment and neural reconstruction.

    Science.gov (United States)

    Knowles-Barley, Seymour; Butcher, Nancy J; Meinertzhagen, Ian A; Armstrong, J Douglas

    2011-08-15

    Three-dimensional reconstruction of consecutive serial-section transmission electron microscopy (ssTEM) images of neural tissue currently requires many hours of manual tracing and annotation. Several computational techniques have already been applied to ssTEM images to facilitate 3D reconstruction and ease this burden. Here, we present an alternative computational approach for ssTEM image analysis. We have used biologically inspired receptive fields as a basis for a ridge detection algorithm to identify cell membranes, synaptic contacts and mitochondria. Detected line segments are used to improve alignment between consecutive images and we have joined small segments of membrane into cell surfaces using a dynamic programming algorithm similar to the Needleman-Wunsch and Smith-Waterman DNA sequence alignment procedures. A shortest path-based approach has been used to close edges and achieve image segmentation. Partial reconstructions were automatically generated and used as a basis for semi-automatic reconstruction of neural tissue. The accuracy of partial reconstructions was evaluated and 96% of membrane could be identified at the cost of 13% false positive detections. An open-source reference implementation is available in the Supplementary information. Contact: seymour.kb@ed.ac.uk; douglas.armstrong@ed.ac.uk. Supplementary data are available at Bioinformatics online.
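
    The shortest-path edge-closing step can be illustrated with a plain Dijkstra search over a pixel graph whose step costs are low where the ridge detector responds strongly. This is a generic stand-in, not the authors' implementation; 4-connectivity and the cost convention are assumptions.

```python
import heapq
import numpy as np

def close_edge(cost, start, goal):
    """4-connected shortest path between two open membrane end points.

    cost  : (H, W) array, small where the ridge detector responds strongly
    start : (row, col) open end of one membrane segment
    goal  : (row, col) open end of the segment to join
    """
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                              # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal                     # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```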

  15. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  16. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical s...

  17. Quality control and performance measurement of gamma cameras. S.F.P.M. report nr 28. Update of the S.F.P.H. reports Performance assessment and quality control of scintillation cameras: plane mode (1992), tomographic mode (1996), whole-body mode (1997)

    International Nuclear Information System (INIS)

    Petegnief, Yolande; Barrau, Corinne; Coulot, Jeremy; Guilhem, Marie Therese; Hapdey, Sebastien; Vrigneaud, Jean-Marc; Metayer, Yann; Picone, Magali; Ricard, Marcel; Salvat, Cecile; Bouchet, Francis; Ferrer, Ludovic; Murat, Caroline

    2012-01-01

    This report aims at providing students and professionals with a comprehensive guide related to quality control and to performance measurement on gamma cameras. It completes and updates three previous reports published by the SFPM during the 1990s related to the different acquisition modes for this modality of medical imagery: plane imagery, whole-body scanning, and tomography. The authors present the operation principle of scintillation cameras, the characteristics of a scintillation camera, analytic and algebraic algorithms of tomographic reconstruction, and the various software corrections applied in single-photon imaging (corrections of the attenuation effect, of the scattering effect, of the collimator response effect, and of the partial volume effect). In the next part, they present the various characteristics, parameters and issues related to performance measurement for the three addressed modes (plane, whole body, tomographic). The last part presents various aspects of the organisation of quality control and of performance follow-up: regulatory context, reference documents, internal quality control program.

  18. A computational geometry framework for the optimisation of atom probe reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Felfer, Peter [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); Institute for General Materials Properties, Department of Materials Science, Friedrich-Alexander University Erlangen-Nürnberg, 91058 Erlangen (Germany); Cairney, Julie [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia)

    2016-10-15

    In this paper, we present pathways for improving the reconstruction of atom probe data on a coarse (>10 nm) scale, based on computational geometry. We introduce a way to iteratively improve an atom probe reconstruction by adjusting it, so that certain known shape criteria are fulfilled. This is achieved by creating an implicit approximation of the reconstruction through a barycentric coordinate transform. We demonstrate the application of these techniques to the compensation of trajectory aberrations and the iterative improvement of the reconstruction of a dataset containing a grain boundary. We also present a method for obtaining a hull of the dataset in both detector and reconstruction space. This maximises data utilisation, and can be used to compensate for ion trajectory aberrations caused by residual fields in the ion flight path through a ‘master curve’ and correct for overall shape deviations in the data. - Highlights: • An atom probe reconstruction can be iteratively improved by using shape constraints. • An atom probe reconstruction can be inverted using barycentric coordinate transforms. • Hulls for atom probe datasets can be obtained from 2D detector outlines that are co-reconstructed with the data. • Ion trajectory compressions caused by instrument-specific residual fields in the drift tube can be corrected.

  19. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  20. Path Dependency

    OpenAIRE

    Mark Setterfield

    2015-01-01

    Path dependency is defined, and three different specific concepts of path dependency – cumulative causation, lock in, and hysteresis – are analyzed. The relationships between path dependency and equilibrium, and path dependency and fundamental uncertainty are also discussed. Finally, a typology of dynamical systems is developed to clarify these relationships.

  1. 3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion.

    Science.gov (United States)

    Zhang, Yu; Ye, Mao; Manocha, Dinesh; Yang, Ruigang

    2017-07-06

    We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or simple parametric surfaces. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.

  2. Hubble Space Telescope, Faint Object Camera

    Science.gov (United States)

    1981-01-01

    This drawing illustrates Hubble Space Telescope's (HST's) Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data that is free of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.

  3. Process simulation in digital camera system

    Science.gov (United States)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall off and the optical part of the CMOS sensor. The electrical part consists of the Bayer sampling, interpolation, signal to noise ratio, dynamic range, analog to digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
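
    Two of the colour-processing blocks named above are simple enough to show directly: conversion from CIE XYZ to linear sRGB via the standard 3×3 matrix, followed by the piecewise sRGB transfer curve (the "gamma correction"). The values below use the standard D65 sRGB matrix, which may differ from the exact conversion used in the paper.

```python
import numpy as np

# Standard XYZ (D65) -> linear sRGB matrix
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz):
    """CIE XYZ (D65) -> gamma-encoded sRGB, both as (H, W, 3) float arrays in [0, 1]."""
    rgb_lin = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)
    # piecewise sRGB transfer curve
    return np.where(rgb_lin <= 0.0031308,
                    12.92 * rgb_lin,
                    1.055 * rgb_lin ** (1 / 2.4) - 0.055)
```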

  4. Born iterative reconstruction using perturbed-phase field estimates.

    Science.gov (United States)

    Astheimer, Jeffrey P; Waag, Robert C

    2008-10-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.

  5. Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET

    Energy Technology Data Exchange (ETDEWEB)

    Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es

    2010-04-07

    A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
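
    The ordered-subsets update that such a precalculated sparse system matrix feeds can be written compactly. The sketch below uses scipy.sparse and assumes the projection bins have already been partitioned into subsets by angle; it is an illustration of the OSEM step, not the authors' optimised, symmetry-exploiting implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

def osem(A, y, subsets, n_iters=4, eps=1e-12):
    """Ordered-subsets EM with a precomputed sparse system matrix.

    A       : (n_bins, n_voxels) scipy.sparse CSR system matrix
    y       : (n_bins,) measured counts
    subsets : list of index arrays partitioning the projection bins by angle
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for rows in subsets:
            A_s = A[rows]                                   # sub-matrix for this subset
            sens = np.asarray(A_s.sum(axis=0)).ravel()      # subset sensitivity image
            forward = A_s @ x                               # expected counts for the subset
            ratio = y[rows] / np.maximum(forward, eps)
            x *= (A_s.T @ ratio) / np.maximum(sens, eps)    # multiplicative update
    return x
```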

  6. Radial polar histogram: obstacle avoidance and path planning for robotic cognition and motion control

    Science.gov (United States)

    Wang, Po-Jen; Keyawa, Nicholas R.; Euler, Craig

    2012-01-01

    In order to achieve highly accurate motion control and path planning for a mobile robot, an obstacle avoidance algorithm that provided a desired instantaneous turning radius and velocity was generated. This type of obstacle avoidance algorithm, which has been implemented in California State University Northridge's Intelligent Ground Vehicle (IGV), is known as Radial Polar Histogram (RPH). The RPH algorithm utilizes raw data in the form of a polar histogram that is read from a Laser Range Finder (LRF) and a camera. A desired open block is determined from the raw data utilizing a navigational heading and an elliptical approximation. The left and right most radii are determined from the calculated edges of the open block and provide the range of possible radial paths the IGV can travel through. In addition, the calculated obstacle edge positions allow the IGV to recognize complex obstacle arrangements and to slow down accordingly. A radial path optimization function calculates the best radial path between the left and right most radii and is sent to motion control for speed determination. Overall, the RPH algorithm allows the IGV to autonomously travel at average speeds of 3 mph while avoiding all obstacles, with a processing time of approximately 10 ms.
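
    A much-simplified sketch of the open-block search the RPH algorithm performs: laser ranges are thresholded into free and blocked beams, consecutive free beams are grouped into sectors, and the sector closest to the desired heading yields the left-most and right-most admissible angles. The clearance threshold and LRF data format are made-up; the elliptical approximation and speed logic of the paper are not reproduced.

```python
import numpy as np

def find_open_sector(angles, ranges, desired_heading, clearance=2.0):
    """Pick the free angular sector nearest the navigational heading.

    angles          : (N,) beam angles of the laser range finder (rad)
    ranges          : (N,) measured ranges (m); large values mean free space
    desired_heading : preferred travel direction (rad)
    clearance       : minimum range (m) for a beam to count as free
    """
    free = ranges > clearance
    sectors, start = [], None
    for i, f in enumerate(free):                 # group consecutive free beams
        if f and start is None:
            start = i
        elif not f and start is not None:
            sectors.append((start, i - 1))
            start = None
    if start is not None:
        sectors.append((start, len(free) - 1))
    if not sectors:
        return None                              # fully blocked scan
    centres = [0.5 * (angles[a] + angles[b]) for a, b in sectors]
    best = int(np.argmin([abs(c - desired_heading) for c in centres]))
    a, b = sectors[best]
    return angles[a], angles[b]                  # edges of the chosen open block
```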

  7. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Finch, S.M.; McMakin, A.H.

    1992-06-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Battelle Pacific Northwest Laboratories under contract with the Centers for Disease Control. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates

  8. Imaging performance of a multiwire proportional-chamber positron camera

    International Nuclear Information System (INIS)

    Perez-Mendez, V.; Del Guerra, A.; Nelson, W.R.; Tam, K.C.

    1982-08-01

    A new design - fully three-dimensional - Positron Camera is presented, made of six MultiWire Proportional Chamber modules arranged to form the lateral surface of a hexagonal prism. A true coincidence rate of 56000 c/s is expected with an equal accidental rate for a 400 μCi activity uniformly distributed in an approx. 3 l water phantom. A detailed Monte Carlo program has been used to investigate the dependence of the spatial resolution on the geometrical and physical parameters. A spatial resolution of 4.8 mm FWHM has been obtained for an 18F point-like source in a 10 cm radius water phantom. The main properties of the limited angle reconstruction algorithms are described in relation to the proposed detector geometry.

  9. Development of plenoptic infrared camera using low dimensional material based photodetectors

    Science.gov (United States)

    Chen, Liangliang

    expressed in a compressive approach. The following computational algorithms are applied to reconstruct images beyond 2D static information. Super-resolution signal processing was then used to enhance the image spatial resolution. The whole camera system delivers richly detailed content for infrared spectrum sensing.

  10. Ted Irving and the Arc of APW Paths

    Science.gov (United States)

    Kent, D. V.

    2014-12-01

    Ted Irving's last two published papers neatly encapsulate his seminal contributions to the delineation of ever-important apparent polar wander (APW) paths. His final (210th) paper [Creer & Irving, 2012 Earth Sciences History] describes in detail how Ken Creer and he when still graduate students at Cambridge started to generate and assemble paleomagnetic data for the first APW path, for then only the UK; the paper was published 60 years ago and happened to be Ted's first [Creer, Irving & Runcorn, 1954 JGE]. Only 10 years later, there was already a lengthy reference list of paleomagnetic results available from most continents that had been compiled in pole lists he published in GJRAS from 1960 to 1965 and included in an appendix in his landmark book "Paleomagnetism" [Irving, 1964 Wiley] in support of wide ranging discussions of continental drift and related topics in chapters like 'Paleolatitudes and paleomeridians.' A subsequent innovation was calculating running means of poles indexed to a numerical geologic time scale [Irving, 1977 Nature], which with independent tectonic reconstructions as already for Gondwana allowed constructions of more detailed composite APW paths. His 1977 paper also coined Pangea B for an earlier albeit contentious configuration for the supercontinent that refuses to go away. Gliding over much work on APW tracks and hairpins in the Precambrian, we come to Ted's penultimate (209th) paper [Kent & Irving, 2010 JGR] in which individual poles from short-lived large igneous provinces were grouped and most sedimentary poles, many rather venerable, excluded as likely to be biased by variable degrees of inclination error. The leaner composite APW path helped to resurrect the Baja BC scenario of Cordilleran terrane motions virtually stopped in the 1980s by APW path techniques that relied on a few key but alas often badly skewed poles. The new composite APW path also revealed several major features, such as a huge polar shift of 30° in 15 Myr in the

  11. Quivers of Bound Path Algebras and Bound Path Coalgebras

    Directory of Open Access Journals (Sweden)

    Dr. Intan Muchtadi

    2010-09-01

    Full Text Available Algebras and coalgebras can be represented as quivers (directed graphs), and from a quiver we can construct algebras and coalgebras called path algebras and path coalgebras. In this paper we show that the quiver of a bound path coalgebra (resp. algebra) is the dual quiver of its bound path algebra (resp. coalgebra).

  12. Fractional path planning and path tracking

    International Nuclear Information System (INIS)

    Melchior, P.; Jallouli-Khlif, R.; Metoui, B.

    2011-01-01

    This paper presents the main results of the application of the fractional approach in path planning and path tracking. A new robust path planning design for a mobile robot was studied in a dynamic environment. The normalized attractive force applied to the robot is based on a fictitious fractional attractive potential. This method allows robust path planning to be obtained despite robot mass variation. The danger level of each obstacle is characterized by the fractional order of the repulsive potential of the obstacles. Under these conditions, the robot dynamic behavior was studied by analyzing its X - Y path planning with dynamic target or dynamic obstacles. The case of simultaneously mobile obstacles and target is also considered. The influence of the robot mass variation is studied and the robustness analysis of the obtained path shows the robustness improvement due to the non integer order properties. A preshaping approach is used to reduce system vibration in motion control. Desired system inputs are altered so that the system finishes the requested move without residual vibration. This technique, developed by N.C. Singer and W.P. Seering, is used for flexible structure control, particularly in the aerospace field. In a previous work, this method was extended for explicit fractional derivative systems and applied to second generation CRONE control; the robustness was also studied. CRONE (the French acronym of Commande Robuste d'Ordre Non Entier) control system design is a frequency-domain based methodology using complex fractional integration.
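
    The attractive/repulsive potential idea behind this approach can be illustrated with a conventional potential-field step, in which the repulsive exponent plays the role of an obstacle's danger level. This generic sketch does not reproduce the fractional-order derivation of the paper; gains, exponents and the step size are placeholders.

```python
import numpy as np

def potential_step(pos, target, obstacles, orders, k_att=1.0, k_rep=0.5, step=0.05):
    """One gradient-descent step on an attractive/repulsive potential field.

    pos, target : (2,) current robot and target positions
    obstacles   : (M, 2) obstacle positions
    orders      : (M,) repulsion exponent per obstacle, acting as its danger level
    """
    force = k_att * (target - pos)                # attractive pull toward the target
    for obs, n in zip(obstacles, orders):
        d = pos - obs
        r = np.linalg.norm(d) + 1e-9
        force += k_rep * d / r ** (n + 1)         # repulsion ~ 1/r**n; the exponent sets its reach
    return pos + step * force / (np.linalg.norm(force) + 1e-9)
```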

  13. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. A Virtual Camera software, that was developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  14. Development of a tomographic system adapted to 3D measurement of contaminated wounds based on the Cacao concept (Computer aided collimation Gamma Camera)

    International Nuclear Information System (INIS)

    Douiri, A.

    2002-03-01

    The computer aided collimation gamma camera (CACAO in French) is a gamma camera using a collimator with large holes, a supplementary linear scanning motion during the acquisition and a dedicated reconstruction program taking full account of the source depth. The CACAO system was introduced to improve both the sensitivity and the resolution in nuclear medicine. This thesis focuses on the design of a fast and robust reconstruction algorithm in the CACAO project. We start by an overview of tomographic imaging techniques in nuclear medicine. After modelling the physical CACAO system, we present the complete reconstruction program which involves three steps: 1) shift and sum 2) deconvolution and filtering 3) rotation and sum. The deconvolution is the critical step that decreases the signal to noise ratio of the reconstructed images. We propose a regularized multi-channel algorithm to solve the deconvolution problem. We also present a fast algorithm based on Splines functions and preserving the high quality of the reconstructed images for the shift and the rotation steps. Comparisons of simulated reconstructed images in 2D and 3D for the conventional system (CPHC) and CACAO demonstrate the ability of CACAO system to increase the quality of the SPECT images. Finally, this study concludes with an experimental approach with a pixellated detector conceived for a 3D measurement of contaminated wounds. This experimentation proves the possible advantages of coupling the CACAO project with pixellated detectors. Moreover, a variety of applications could fully benefit from the CACAO system, such as low activity imaging, the use of high-energy gamma isotopes and the visualization of deep organs. Moreover the combination of the CACAO system with a pixels detector may open up further possibilities for the future of nuclear medicine. (author)

  15. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20 C, the CLASP cameras exceeded the low-noise performance requirements, demonstrating their suitability as UV, EUV and X-ray science cameras at MSFC.

  16. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  17. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  18. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used with the quality metrics even though the camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers and, also, novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work expanded with visual noise measurement and updates of the latest mobile phone versions.
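
    A single benchmarking score of the kind proposed can be formed by normalising each quality and speed metric against a reference device and combining the ratios with weights. The metric names, weights and values in this sketch are placeholders, not the paper's metric set or weighting.

```python
def benchmark_score(measured, reference, weights, higher_is_better):
    """Weighted geometric-mean style score across quality and speed metrics.

    measured, reference : dicts of metric values for the device under test and a reference device
    weights             : dict of non-negative weights per metric (summing to 1)
    higher_is_better    : dict of booleans, e.g. True for sharpness, False for shot-to-shot time
    """
    score = 1.0
    for name, w in weights.items():
        ratio = measured[name] / reference[name]
        if not higher_is_better[name]:
            ratio = 1.0 / ratio          # invert so that larger is always better
        score *= ratio ** w
    return score

# Hypothetical example with two metrics and equal weights
print(benchmark_score({"mtf50": 0.32, "shot_to_shot_s": 1.4},
                      {"mtf50": 0.30, "shot_to_shot_s": 1.0},
                      {"mtf50": 0.5, "shot_to_shot_s": 0.5},
                      {"mtf50": True, "shot_to_shot_s": False}))
```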

  19. Robotic Online Path Planning on Point Cloud.

    Science.gov (United States)

    Liu, Ming

    2016-05-01

    This paper deals with the path-planning problem for mobile wheeled or tracked robots which drive in 2.5-D environments, where the traversable surface is usually considered as a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using a raw point cloud as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as a meshing process, which generally has high computational complexity. Instead, we utilize the output of a 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics computing unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps the modeling of the latent traversable surface. Using the proposed metric, we show by experiments that the geodesic in the 3-D tensor space leads to rational path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot maneuver while considering the minimum travel distance.

  20. Application of iterative reconstruction in dynamic studies

    International Nuclear Information System (INIS)

    Meikle, S.R.

    1998-01-01

    Full text: The conventional approach to analysing dynamic tomographic data (SPECT or PET) is to reconstruct projections corresponding to each time interval separately and then fit a suitable tracer kinetic model to the dynamic sequence (method 1). This approach assumes that the tracer distribution remains static during any given time interval and, for practical reasons, filtered back-projection (FBP) is the preferred reconstruction algorithm. However, alternative approaches exist which lend themselves to iterative algorithms, such as EM. One approach is to fit the model directly to the projection data, followed by EM reconstruction of the parameter estimates (method 2). This requires that the tracer model can be expressed as a linear function of the unknown model parameters. A third alternative is to incorporate the tracer model into the reconstruction algorithm (method 3). Such an extension was described during the early development of the EM algorithm, referred to as the EM parametric image reconstruction algorithm (EM-PIRA). We have investigated these various strategies for analysing dynamic data and their relative pros and cons. Tracer modelling was performed using a general model, referred to as spectral analysis, which makes no restriction on the number of physiological compartments and satisfies the linearity requirement of method 2. A kinetic software phantom was created and used to test the convergence and noise properties of the different approaches. In summary, method 2 is the most practical as it reduces the number of reconstructions by at least an order of magnitude and provides improved signal-to-noise ratios compared with method 1. EM-PIRA allows greater flexibility in the choice of parametric images and appears to have a regularising effect on convergence. Methods 2 and 3 are also better suited to dynamic scanning with a rotating camera, as they can potentially account for changes in tracer distribution between projections.
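
    Method 2 hinges on the tracer model being linear in its parameters, so the time-activity curve of each projection bin can be fitted independently before a single reconstruction per parameter. A minimal sketch of that fitting step, assuming a spectral-analysis-style basis of decaying exponentials convolved with an input function; the basis choice, names and use of non-negative least squares are illustrative assumptions, not the abstract's own implementation:

```python
import numpy as np
from scipy.optimize import nnls

def fit_spectral_basis(proj_tac, times, input_fn, betas):
    """Fit a linear 'spectral' model to one projection bin's time-activity curve.

    proj_tac : (T,) measured counts in one projection bin over T time frames
    times    : (T,) frame mid-times
    input_fn : (T,) arterial/reference input function sampled at the same times
    betas    : (K,) decay constants defining the exponential basis
    Returns the K non-negative spectral coefficients for this bin.
    """
    dt = np.gradient(times)
    A = np.zeros((len(times), len(betas)))
    for k, b in enumerate(betas):
        kern = np.exp(-b * times)
        # Basis function: input function convolved with exp(-beta * t)
        A[:, k] = np.convolve(input_fn * dt, kern)[: len(times)]
    coeffs, _ = nnls(A, proj_tac)
    return coeffs

# Each coefficient image is then obtained by reconstructing (e.g. with EM) the
# sinogram formed from that coefficient across all projection bins.
```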

  1. The assessment of postural control and the influence of a secondary task in people with anterior cruciate ligament reconstructed knees using a Nintendo Wii Balance Board.

    Science.gov (United States)

    Howells, Brooke E; Clark, Ross A; Ardern, Clare L; Bryant, Adam L; Feller, Julian A; Whitehead, Timothy S; Webster, Kate E

    2013-09-01

    Postural control impairments may persist following anterior cruciate ligament (ACL) reconstruction. The effect of a secondary task on postural control has, however, not been determined. The purpose of this case-control study was to compare postural control in patients following ACL reconstruction with healthy individuals, with and without a secondary task. 45 patients (30 men and 15 women) participated at least 6 months following primary ACL reconstruction surgery. Participants were individually matched by age, gender and sports activity to healthy controls. Postural control was measured using a Nintendo Wii Balance Board and customised software during static single-leg stance and with the addition of a secondary task. The secondary task required participants to match the movement of an oscillating marker by adducting and abducting their arm. The outcome measures were centre of pressure (CoP) path length in the medial-lateral and anterior-posterior directions and CoP total path length. When compared with the control group, the anterior-posterior path length significantly increased in the ACL reconstruction patients' operated (12.3%, p=0.02) and non-operated limbs (12.8%, p=0.02) for the single-task condition, and the non-operated limb (11.5%, p=0.006) for the secondary task condition. The addition of a secondary task significantly increased CoP path lengths in all measures in both the ACL reconstruction and control groups. ACL reconstruction patients showed a reduced ability in both limbs to control the movement of the body in the anterior-posterior direction. The secondary task affected postural control by comparable amounts in patients after ACL reconstruction and healthy controls. Devices for the objective measurement of postural control, such as the one used in this study, may help clinicians to more accurately identify patients with deficits who may benefit from targeted neuromuscular training programs.
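
    The CoP path-length measures used here are simple sums of sample-to-sample excursions of the centre-of-pressure trace. A minimal sketch of how they could be computed from a Balance Board CoP time series (array names are illustrative, not the study's software):

```python
import numpy as np

def cop_path_lengths(cop_ml, cop_ap):
    """Path lengths from medial-lateral and anterior-posterior CoP traces.

    cop_ml, cop_ap : (N,) CoP coordinates sampled over the trial (e.g. in cm)
    Returns (ML path length, AP path length, total path length).
    """
    d_ml = np.diff(cop_ml)
    d_ap = np.diff(cop_ap)
    ml_path = np.sum(np.abs(d_ml))             # excursion in the ML direction
    ap_path = np.sum(np.abs(d_ap))             # excursion in the AP direction
    total_path = np.sum(np.hypot(d_ml, d_ap))  # resultant CoP trajectory length
    return ml_path, ap_path, total_path
```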

  2. BrachyView: proof-of-principle of a novel in-body gamma camera for low dose-rate prostate brachytherapy.

    Science.gov (United States)

    Petasecca, M; Loo, K J; Safavi-Naeini, M; Han, Z; Metcalfe, P E; Meikle, S; Pospisil, S; Jakubek, J; Bucci, J A; Zaider, M; Lerch, M L F; Qi, Y; Rosenfeld, A B

    2013-04-01

    The conformity of the achieved dose distribution to the treatment plan strongly correlates with the accuracy of seed implantation in a prostate brachytherapy treatment procedure. Incorrect seed placement leads to both short and long term complications, including urethral and rectal toxicity. The authors present BrachyView, a novel concept of a fast intraoperative treatment planning system, to provide real-time seed placement information based on in-body gamma camera data. BrachyView combines the high spatial resolution of a pixellated silicon detector (Medipix2) with the volumetric information acquired by a transrectal ultrasound (TRUS). The two systems will be embedded in the same probe so as to provide anatomically correct seed positions for intraoperative planning and postimplant dosimetry. Dosimetric calculations are based on the TG-43 method using the real position of the seeds. The purpose of this paper is to demonstrate the feasibility of BrachyView using the Medipix2 pixel detector and a pinhole collimator to reconstruct the real-time 3D position of low dose-rate brachytherapy seeds in a phantom. BrachyView incorporates three Medipix2 detectors coupled to a multipinhole collimator. Three-dimensionally triangulated seed positions from multiple planar images are used to determine the seed placement in a PMMA prostate phantom in real time. MATLAB codes were used to test the reconstruction method and to optimize the device geometry. The results presented in this paper show a 3D position reconstruction accuracy of the seed in the range of 0.5-3 mm for a 10-60 mm seed-to-detector distance interval (Z direction), respectively. The BrachyView system also demonstrates a spatial resolution of 0.25 mm in the XY plane for sources at 10 mm distance from Medipix2 detector plane, comparable to the theoretical value calculated for an equivalent gamma camera arrangement. The authors successfully demonstrated the capability of BrachyView for real-time imaging (using a 3 s
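
    Seed localisation from several pinhole images reduces to intersecting the back-projected rays from each pinhole. A minimal least-squares sketch of that triangulation step, assuming each detection is represented by a pinhole position and a unit direction toward the seed; the geometry handling in the actual MATLAB code may differ:

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares intersection of rays (o_i + t * d_i) from multiple pinholes.

    origins    : (M, 3) pinhole positions
    directions : (M, 3) unit vectors from each pinhole toward the seed image
    Returns the 3D point minimising the summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```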

  3. BrachyView: Proof-of-principle of a novel in-body gamma camera for low dose-rate prostate brachytherapy

    International Nuclear Information System (INIS)

    Petasecca, M.; Loo, K. J.; Safavi-Naeini, M.; Han, Z.; Metcalfe, P. E.; Lerch, M. L. F.; Qi, Y.; Rosenfeld, A. B.; Meikle, S.; Pospisil, S.; Jakubek, J.; Bucci, J. A.; Zaider, M.

    2013-01-01

    Purpose: The conformity of the achieved dose distribution to the treatment plan strongly correlates with the accuracy of seed implantation in a prostate brachytherapy treatment procedure. Incorrect seed placement leads to both short and long term complications, including urethral and rectal toxicity. The authors present BrachyView, a novel concept of a fast intraoperative treatment planning system, to provide real-time seed placement information based on in-body gamma camera data. BrachyView combines the high spatial resolution of a pixellated silicon detector (Medipix2) with the volumetric information acquired by a transrectal ultrasound (TRUS). The two systems will be embedded in the same probe so as to provide anatomically correct seed positions for intraoperative planning and postimplant dosimetry. Dosimetric calculations are based on the TG-43 method using the real position of the seeds. The purpose of this paper is to demonstrate the feasibility of BrachyView using the Medipix2 pixel detector and a pinhole collimator to reconstruct the real-time 3D position of low dose-rate brachytherapy seeds in a phantom. Methods: BrachyView incorporates three Medipix2 detectors coupled to a multipinhole collimator. Three-dimensionally triangulated seed positions from multiple planar images are used to determine the seed placement in a PMMA prostate phantom in real time. MATLAB codes were used to test the reconstruction method and to optimize the device geometry. Results: The results presented in this paper show a 3D position reconstruction accuracy of the seed in the range of 0.5–3 mm for a 10–60 mm seed-to-detector distance interval (Z direction), respectively. The BrachyView system also demonstrates a spatial resolution of 0.25 mm in the XY plane for sources at 10 mm distance from Medipix2 detector plane, comparable to the theoretical value calculated for an equivalent gamma camera arrangement. The authors successfully demonstrated the capability of BrachyView for

  4. Observations of the Perseids 2013 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.

    2013-09-01

    Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle, having a mass of approximately 1 g or larger, with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with different comets gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and having their origin in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer observing the activity of the Perseids meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera is equipped with a highly sensitive back-illuminated CCD chip having a pixel resolution of 1024x1024. The custom-made fish-eye lens offers a 120°x120° field of view (168° over the diagonal), making the monitoring of nearly the whole night sky possible (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites located at high altitudes in the Greek Peloponnese peninsula. The baseline of ∼50km

  5. Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

    International Nuclear Information System (INIS)

    Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.

    1985-01-01

    Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT) providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (Maximum Likelihood) algorithm, although other solution algorithms, such as Least Squares, could be employed. While the present results specifically consider the reconstruction of camera-based Single Photon Emission Computed Tomographic (SPECT) images, the technique is equally valid for Positron Emission Tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented which demonstrate the quantitative compensation for scatter and attenuation for a two-dimensional (single slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three-dimensional reconstruction, including compensation for out-of-plane scatter.
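
    The EM (maximum-likelihood) solution of the resulting linear system follows the standard emission-tomography update, with the Monte Carlo-derived matrix playing the role of the system matrix. A generic sketch (not the authors' code), assuming a dense system matrix H with H[i, j] equal to the probability that a decay in voxel j is detected in projection bin i:

```python
import numpy as np

def ml_em(H, projections, n_iters=50):
    """Standard ML-EM iterations for projections ~ Poisson(H @ x)."""
    n_bins, n_voxels = H.shape
    x = np.ones(n_voxels)             # flat initial activity estimate
    sens = H.sum(axis=0)              # sensitivity image (column sums of H)
    for _ in range(n_iters):
        forward = H @ x
        forward[forward == 0] = 1e-12  # guard against division by zero
        ratio = projections / forward
        x *= (H.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```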

  6. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  7. A reconstruction algorithm for helical cone-beam SPECT

    International Nuclear Information System (INIS)

    Weng, Y.; Zeng, G.L.; Gullberg, G.T.

    1993-01-01

    Cone-beam SPECT provides improved sensitivity for imaging small organs like the brain and heart. However, current cone-beam tomography with the focal point traversing a planar orbit does not acquire sufficient data to give an accurate reconstruction. In this paper, the authors employ a data-acquisition method which obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections can be obtained with the focal point traversing a helix surrounding the patient. An implementation of Grangeat's algorithm for helical cone-beam projections is developed. The algorithm requires a rebinning step to convert cone-beam data to parallel-beam data, which are then reconstructed using the 3D Radon inversion. A fast new rebinning scheme is developed which uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. This algorithm is shown to produce fewer artifacts than the commonly used Feldkamp algorithm when applied to either a circular planar orbit or a helical orbit acquisition. The algorithm can easily be extended to any arbitrary orbit.

  8. SU-E-T-574: Feasibility of Using the Calypso System for HDR Interstitial Catheter Reconstruction

    International Nuclear Information System (INIS)

    Li, J S; Ma, C

    2014-01-01

    Purpose: It is always a challenge to reconstruct an interstitial catheter for high dose rate (HDR) brachytherapy on patient CT or MR images. This work aims to investigate the feasibility of using the Calypso system (Varian Medical, CA) for HDR catheter reconstruction, utilizing its accuracy in tracking the electromagnetic transponder location. Methods: The experiment was done with a phantom that has an HDR interstitial catheter embedded inside. A CT scan with a slice thickness of 1.25 mm was taken of this phantom with two Calypso beacon transponders in the catheter. The two transponders were connected with a wire. The Calypso system was used to record the beacon transponders' locations in real time while they were gently pulled out with the wire. The initial locations of the beacon transponders were used for registration with the CT image, and the detected transponder locations were used for the catheter path reconstruction. The reconstructed catheter path was validated on the CT image. Results: The HDR interstitial catheter was successfully reconstructed from the transponders' coordinates recorded by the Calypso system in real time as the transponders were pulled along the catheter. After registration with the CT image, the shape and location of the reconstructed catheter were evaluated against the CT image, and the result shows an accuracy of 2 mm anywhere in the Calypso detectable region, which is within a 10 cm x 10 cm x 10 cm cubic box for the current system. Conclusion: It is feasible to use the Calypso system for HDR interstitial catheter reconstruction. The obstacle to its clinical use is the size of the beacon transponder, whose diameter is larger than that of most interstitial catheters used in the clinic. Developing smaller transponders and supporting software and hardware for this application is necessary before it can be adopted for clinical use.
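
    Once the time series of transponder positions has been recorded during the pull-back, the catheter path can be recovered by ordering the samples along the pull direction and fitting a smooth space curve. A minimal sketch using a smoothing spline, with the CT registration assumed to have been applied already and parameter names chosen for illustration:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def reconstruct_catheter_path(positions, smoothing=2.0, n_out=200):
    """Fit a smooth 3D curve through recorded transponder positions.

    positions : (N, 3) transponder coordinates (mm), already registered to CT,
                recorded sequentially while the transponders are withdrawn.
    Returns an (n_out, 3) array of points sampled along the fitted path.
    """
    x, y, z = positions.T
    # Parametric smoothing spline through the ordered samples
    tck, _ = splprep([x, y, z], s=smoothing)
    u = np.linspace(0.0, 1.0, n_out)
    return np.column_stack(splev(u, tck))
```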

  9. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    Science.gov (United States)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is much needed. The near infrared (NIR) optical tomographic technique has great potential to be utilized as a non-invasive imaging tool (due to its low cost and portability) to image embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebro-vascular hemodynamics and brain injury. As compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for pre-clinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are able to obtain good reconstructed images with two recently developed algorithms: (1) the depth compensation algorithm (DCA) and (2) the globally convergent method (GCM). In this study, we demonstrate the volumetric tomographic reconstruction results obtained from the tissue phantom; the latter method has great potential to determine and monitor the effect of anti-stroke therapies.

  10. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-alpha wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  11. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  12. Re-constructing historical Adélie penguin abundance estimates by retrospectively accounting for detection bias.

    Science.gov (United States)

    Southwell, Colin; Emmerson, Louise; Newbery, Kym; McKinlay, John; Kerry, Knowles; Woehler, Eric; Ensor, Paul

    2015-01-01

    Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.
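
    The re-construction step amounts to dividing each raw count by a camera-derived adjustment factor for the proportion of the population present at the time of the count, and carrying the uncertainty of that factor through to the revised estimate. A minimal Monte Carlo sketch of the idea; the distributional assumptions here are mine, not the authors':

```python
import numpy as np

def reconstruct_abundance(raw_count, attendance_mean, attendance_sd,
                          n_draws=10000, seed=0):
    """Revised abundance estimate with propagated detection-bias uncertainty.

    raw_count       : historical raw count of birds or occupied nests
    attendance_mean : camera-derived mean proportion of the population present
                      at the time of the count (0-1)
    attendance_sd   : uncertainty of that proportion
    Returns the median and a 95% interval of the adjusted abundance.
    """
    rng = np.random.default_rng(seed)
    # Assumed normal distribution for the adjustment factor, truncated to (0, 1]
    att = np.clip(rng.normal(attendance_mean, attendance_sd, n_draws), 1e-3, 1.0)
    adjusted = raw_count / att
    lo, med, hi = np.percentile(adjusted, [2.5, 50, 97.5])
    return med, (lo, hi)
```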

  13. Re-constructing historical Adélie penguin abundance estimates by retrospectively accounting for detection bias.

    Directory of Open Access Journals (Sweden)

    Colin Southwell

    Full Text Available Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.

  14. Use of 3D reconstruction to correct for patient motion in SPECT

    International Nuclear Information System (INIS)

    Fulton, R.R.; Hutton, B.F.; Braun, M.; Ardekani, B.; Larkin, R.

    1994-01-01

    Patient motion occurring during data acquisition in single photon emission computed tomography (SPECT) can cause serious reconstruction artefacts. We have developed a new approach to correct for head motion in brain SPECT. Prior to any motion, acquired projections are treated as conventional projections. When head motion occurs, it is measured by a motion monitoring system, and subsequent projection data are mapped to 'virtual' projections. The appropriate position of each virtual projection is determined by applying the converse of the patient's accumulated motion to the actual camera projection. Conventional and virtual projections, taken together, form a consistent set that can be reconstructed using a three-dimensional (3D) algorithm. The technique has been tested on a range of simulated rotational movements, both within and out of the transaxial plane. For all simulated movements, the motion-corrected images exhibited better agreement with a motion-free reconstruction than did the uncorrected images. (Author)

  15. A photogrammetry-based system for 3D surface reconstruction of prosthetics and orthotics.

    Science.gov (United States)

    Li, Guang-kun; Gao, Fan; Wang, Zhi-gang

    2011-01-01

    The objective of this study is to develop an innovative close range digital photogrammetry (CRDP) system using commercial digital SLR cameras to measure and reconstruct the 3D surface of prosthetics and orthotics. This paper describes the instrumentation, techniques and preliminary results of the proposed system. The technique works by taking pictures of the object from multiple view angles. The series of pictures was post-processed via feature point extraction, point matching and 3D surface reconstruction. In comparison with traditional methods such as laser scanning, the major advantages of our instrument include lower cost, compact and easy-to-use hardware, satisfactory measurement accuracy, and significantly less measurement time. Besides its potential applications in prosthetics and orthotics surface measurement, the simple setup and its ease of use will make it suitable for various 3D surface reconstructions.
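
    The pipeline described (feature extraction, point matching, multi-view reconstruction) maps onto standard photogrammetry building blocks. A hedged two-image sketch of the first two stages using OpenCV; the study's own software, detector choice and thresholds are not specified, so the ones below are illustrative:

```python
import cv2

def match_features(img_path_a, img_path_b, ratio=0.75):
    """Detect and match SIFT keypoints between two views of the object."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive correspondences
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b  # matched 2D points, ready for pose estimation and triangulation
```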

  16. Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies

    Science.gov (United States)

    Knyaz, Vladimir A.

    2002-04-01

    An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, beginning with manual 3D model generation and finishing with fully automated reverse engineering systems. The progress in CCD sensors and computers provides the background for the integration of photogrammetry, as an accurate 3D data source, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurement and generation of computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurement is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X implements the complete technology of 3D reconstruction for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various tasks of 3D reconstruction. The paper describes the techniques used for non-contact measurement and the methods providing the metric characteristics of the reconstructed 3D model. Also, the results of applying the system to 3D reconstruction of complex industrial objects are presented.

  17. Medieval Settlement Formation in Catalonia: Villages, their Territories and communication paths

    Directory of Open Access Journals (Sweden)

    Jordi BOLÒS

    2014-04-01

    Full Text Available This study focuses its attention on Catalonia and points to the importance of using several literary sources as a means of identifying the main characteristics of Catalan settlements throughout the Early Middle Ages (6th-10th centuries). Apart from the need to use written and archaeological documents, the study highlights the importance of understanding and interpreting place-names and of reconstructing landscape history. Special emphasis is placed on the value of interpreting documents, maps and orthophotomaps as witnesses that allow us to know the boundaries of Early Medieval settlements. At the centre of these boundaries stand several small population centres (hamlets) and a church. The agricultural territories of several villages are reconstructed. Likewise, the study relates population with communication paths, churches and necropolises of the Early Middle Ages.

  18. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  19. Electro-optical system for the high speed reconstruction of computed tomography images

    International Nuclear Information System (INIS)

    Tresp, V.

    1989-01-01

    An electro-optical system for the high-speed reconstruction of computed tomography (CT) images has been built and studied. The system is capable of reconstructing high-contrast and high-resolution images at video rate (30 images per second), which is more than two orders of magnitude faster than the reconstruction rate achieved by the special-purpose digital computers used in commercial CT systems. The filtered back-projection algorithm which was implemented in the reconstruction system requires the filtering of all projections with a prescribed filter function. A space-integrating acousto-optical convolver, a surface acoustic wave filter and a digital finite-impulse-response filter were used for this purpose and their performances were compared. The second part of the reconstruction, the back projection of the filtered projections, is computationally very expensive. An optical back projector has been built which maps the filtered projections onto the two-dimensional image space using an anamorphic lens system and a prism image rotator. The reconstructed image is viewed by a video camera, routed through a real-time image-enhancement system, and displayed on a TV monitor. The system reconstructs parallel-beam projection data and, in a modified version, is also capable of reconstructing fan-beam projection data. This extension is important since the latter are the kind of projection data actually acquired in high-speed X-ray CT scanners. The reconstruction system was tested by reconstructing precomputed projection data of phantom images. These were stored in a special-purpose projection memory and transmitted to the reconstruction system as an electronic signal. In this way, a projection measurement system that acquires projections sequentially was simulated.
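
    The two stages implemented optically here, ramp filtering of each projection and back projection over all angles, are the textbook filtered back-projection algorithm. A compact numerical sketch for parallel-beam data, purely as a software analogue of the electro-optical pipeline described (the overall scaling is approximate):

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg):
    """Parallel-beam FBP. sinogram has shape (n_angles, n_detectors)."""
    n_angles, n_det = sinogram.shape
    # Ramp filter applied along the detector axis in the frequency domain
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))

    # Back-project each filtered projection across the image grid
    coords = np.arange(n_det) - n_det / 2.0
    x, y = np.meshgrid(coords, coords)
    image = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = x * np.cos(theta) + y * np.sin(theta)  # detector coordinate of each pixel
        image += np.interp(t, coords, proj, left=0.0, right=0.0)
    return image * np.pi / n_angles
```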

  20. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. Common multi-CCD cameras, such as 3CCD color cameras, use the same kind of CCD for capturing different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet for capturing higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  1. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions, including indoor and outdoor environments, varying illumination and the presence of in-scene motion, on varying computational platforms.

  2. Value of coincidence gamma camera PET for diagnosing head and neck tumors: functional imaging and image coregistration

    International Nuclear Information System (INIS)

    Dresel, S.; Brinkbaeumer, K.; Schmid, R.; Hahn, K.

    2001-01-01

    54 patients suffering from head and neck tumors (30 m, 24 f, age: 32-67 years) were examined using dedicated PET and coincidence gamma camera PET after injection of 185-350 MBq [18F]FDG. Examinations were carried out on the dedicated PET first (Siemens ECAT Exact HR+), followed by a scan on the coincidence gamma camera PET (Picker Prism 2000 XP-PCD, Marconi Axis g-PET 2 AZ). Dedicated PET was acquired in 3D mode; coincidence gamma camera PET was performed in list mode using an axial filter. Reconstruction of data was performed iteratively on both the dedicated PET and the coincidence gamma camera PET. All patients received a CT scan in multislice technique (Siemens Somatom Plus 4, Marconi MX 8000). Image coregistration was performed on an Odyssey workstation (Marconi). All findings were verified by the gold standard, histology, or in case of negative histology by follow-up. Results: Using dedicated PET the primary or recurrent lesion was correctly diagnosed in 47/48 patients, using coincidence gamma camera PET in 46/48 patients and using CT in 25/48 patients. Metastatic disease in cervical lymph nodes was diagnosed in 17/18 patients with dedicated PET, in 16/18 patients with coincidence gamma camera PET and in 15/18 with CT. False-positive results with regard to lymph node metastasis were seen in one patient for dedicated PET and hybrid PET, respectively, and in 18 patients for CT. In a total of 11 patients, previously unknown metastatic lesions were seen with dedicated PET and with coincidence gamma camera PET elsewhere in the body (lung: n = 7, bone: n = 3, liver: n = 1). Additional malignant disease other than the head and neck tumor was found in 4 patients. (orig.)

  3. Feynman's path integrals and Bohm's particle paths

    International Nuclear Information System (INIS)

    Tumulka, Roderich

    2005-01-01

    Both Bohmian mechanics, a version of quantum mechanics with trajectories, and Feynman's path integral formalism have something to do with particle paths in space and time. The question thus arises how the two ideas relate to each other. In short, the answer is, path integrals provide a re-formulation of Schroedinger's equation, which is half of the defining equations of Bohmian mechanics. I try to give a clear and concise description of the various aspects of the situation. (letters and comments)
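
    For reference, the re-formulation invoked here is the standard path-integral propagator, which reproduces the Schroedinger evolution (textbook form, not specific to this letter):

```latex
K(x_b, t_b; x_a, t_a) = \int \mathcal{D}[x(t)]\, \exp\!\Big(\tfrac{i}{\hbar}\, S[x(t)]\Big),
\qquad
S[x(t)] = \int_{t_a}^{t_b} L\big(x(t), \dot{x}(t)\big)\, dt,
\qquad
\psi(x_b, t_b) = \int K(x_b, t_b; x_a, t_a)\, \psi(x_a, t_a)\, dx_a .
```

    Propagating the wave function this way is equivalent to solving Schroedinger's equation, which supplies one half of the Bohmian dynamics; the guidance equation for the particle trajectories supplies the other half.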

  4. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    Science.gov (United States)

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.

  5. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  6. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    Science.gov (United States)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based close-range observation of space targets. In order to solve the problem that a traditional binocular vision system cannot work normally after interference, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path, so that it is imaged with the target on the same focal plane; this is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system and the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the physical position of the standard reference object does not change. The camera's external parameters can then be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4 degrees and the left camera is rotated by 0.2° in elevation. This method can realize online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.

  7. Multi-Dimensional Path Queries

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    1998-01-01

    We present the path-relationship model that supports multi-dimensional data modeling and querying. A path-relationship database is composed of sets of paths and sets of relationships. A path is a sequence of related elements (atoms, paths, and sets of paths). A relationship is a binary path ... to create nested path structures. We present an SQL-like query language that is based on path expressions, and we show how to use it to express multi-dimensional path queries that are suited for advanced data analysis in decision support environments like data warehousing environments.

  8. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    Science.gov (United States)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which can speed up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in reconstruction quality and reconstruction time.
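
    The SL0 family replaces the l0 norm with a smooth surrogate that is gradually sharpened, alternating an optimisation step on the surrogate with a projection back onto the measurement constraint. A simplified sketch of such an iteration using a tanh-based smoothing as described; the plain gradient step below stands in for the paper's Newton-direction update, and all parameter values are illustrative:

```python
import numpy as np

def tanh_sl0(A, x, sigma_decrease=0.7, mu=2.0, inner_iters=3, sigma_min=1e-3):
    """Sparse solution of A s = x via a smoothed-l0 iteration.

    Uses tanh(s^2 / (2 sigma^2)) as the smooth surrogate of |s|_0; a plain
    gradient step replaces the Newton direction of the modified algorithm.
    """
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                          # minimum-norm starting point
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            u = np.minimum(s**2 / (2.0 * sigma**2), 50.0)
            grad = (s / sigma**2) / np.cosh(u)**2   # d/ds tanh(s^2 / 2 sigma^2)
            s = s - mu * sigma**2 * grad            # descend on the surrogate
            s = s - A_pinv @ (A @ s - x)            # project onto {s : A s = x}
        sigma *= sigma_decrease
    return s
```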

  9. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    Science.gov (United States)

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of the PDRP to discriminate PD patients from NC, the differences and correlation between the corresponding subject scores, and ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC for all studied reconstruction algorithms. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993), and subject scores across algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of the PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. The PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  10. Polygonal-path approximations on the path spaces of quantum-mechanical systems: properties of the polygonal paths

    International Nuclear Information System (INIS)

    Exner, P.; Kolerov, G.I.

    1981-01-01

    Properties of the subset of polygonal paths in the Hilbert space H of paths referring to a d-dimensional quantum-mechanical system are examined. Using the reproducing kernel technique we prove that each element of H is approximated by polygonal paths uniformly with respect to the 'norm' of time-interval partitions. This result will be applied in the second part of the present paper to prove consistency of the uniform polygonal-path extension of the Feynman maps.

  11. Stochastic microstructure characterization and reconstruction via supervised learning

    International Nuclear Information System (INIS)

    Bostanabad, Ramin; Bui, Anh Tuan; Xie, Wei; Apley, Daniel W.; Chen, Wei

    2016-01-01

    Microstructure characterization and reconstruction have become indispensable parts of computational materials science. The main contribution of this paper is to introduce a general methodology for practical and efficient characterization and reconstruction of stochastic microstructures based on supervised learning. The methodology is general in that it can be applied to a broad range of microstructures (clustered, porous, and anisotropic). By treating the digitized microstructure image as a set of training data, we generically learn the stochastic nature of the microstructure via fitting a supervised learning model to it (we focus on classification trees). The fitted supervised learning model provides an implicit characterization of the joint distribution of the collection of pixel phases in the image. Based on this characterization, we propose two different approaches to efficiently reconstruct any number of statistically equivalent microstructure samples. We test the approach on five examples and show that the spatial dependencies within the microstructures are well preserved, as evaluated via correlation and lineal-path functions. The main advantages of our approach stem from having a compact empirically-learned model that characterizes the stochastic nature of the microstructure, which not only makes reconstruction more computationally efficient than existing methods, but also provides insight into morphological complexity.
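
    The supervised-learning view treats each pixel's phase as the response and the phases of a causal neighbourhood as predictors; the fitted classifier is then sampled pixel by pixel to synthesise statistically equivalent microstructures. A small binary-phase sketch with scikit-learn; the neighbourhood shape and raster-scan sampling scheme are simplifying assumptions, not the authors' exact procedure:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_and_reconstruct(img, out_shape, half=3, seed=0):
    """Characterise a binary microstructure image and sample a reconstruction.

    img : 2D array of 0/1 phases (training microstructure).
    Predictors for pixel (i, j) are the previously visited pixels in a causal
    window above and to the left of it, in raster-scan order.
    """
    rng = np.random.default_rng(seed)
    offsets = [(di, dj) for di in range(-half, 1) for dj in range(-half, half + 1)
               if (di, dj) < (0, 0)]          # causal neighbours only
    H, W = img.shape
    X, y = [], []
    for i in range(half, H):
        for j in range(half, W - half):
            X.append([img[i + di, j + dj] for di, dj in offsets])
            y.append(img[i, j])
    tree = DecisionTreeClassifier(min_samples_leaf=20).fit(X, y)

    # Raster-scan sampling of a new, statistically equivalent microstructure
    rec = rng.integers(0, 2, out_shape)       # random boundary/initial fill
    classes = list(tree.classes_)
    for i in range(half, out_shape[0]):
        for j in range(half, out_shape[1] - half):
            feats = [[rec[i + di, j + dj] for di, dj in offsets]]
            p1 = tree.predict_proba(feats)[0][classes.index(1)] if 1 in classes else 0.0
            rec[i, j] = int(rng.random() < p1)
    return rec
```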

  12. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    Science.gov (United States)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.
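
    A useful reference point for the illumination-dependence discussed here is the classical per-pixel photometric stereo solve, in which intensities under several known illumination directions are inverted for a surface normal and albedo. A minimal Lambertian sketch; the paper's SAfS formulation additionally handles lunar reflectance behaviour and regularisation, which this does not:

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Per-pixel albedo and surface normal from multiple illuminations.

    intensities : (K, H, W) images under K known illumination conditions
    light_dirs  : (K, 3) unit vectors toward the light source for each image
    Assumes a Lambertian surface: I_k = albedo * (light_k . normal).
    """
    K, H, W = intensities.shape
    I = intensities.reshape(K, -1)                        # (K, H*W)
    # Least-squares solve L @ g = I for g = albedo * normal at every pixel
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)
    return albedo.reshape(H, W), normals.reshape(3, H, W)
```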

  13. Hanford Environmental Dose Reconstruction Project monthly report, August 1992

    International Nuclear Information System (INIS)

    McMakin, A.H.; Cannon, S.D.; Finch, S.M.

    1992-01-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; and environmental pathway and dose estimates.

  14. Interaction mean free path measurements for relativistic heavy ion fragments using CR39 plastic track detectors

    International Nuclear Information System (INIS)

    Drechsel, H.; Brechtmann, C.; Dreute, J.; Sonntag, S.; Trakowski, W.; Beer, J.; Heinrich, W.

    1984-01-01

    This paper describes an experiment measuring the interaction mean free paths for charge-changing nuclear collisions of relativistic heavy ion fragments. We use a stack of CR39 plastic nuclear track detectors that was irradiated with 1.8 GeV/nucleon 40Ar ions at the Berkeley Bevalac. About 1.5 x 10^7 etch cones were measured in this experiment using an automatic measuring system. By tracing the etch cones over successive plastic foils, the particle trajectories in the stack were reconstructed. For 14185 trajectories with 6444 nuclear collisions of fragments with charge 9-15, the interaction mean free path in the plastic was determined. (orig.)
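
    In essence, the quantity extracted from such trajectory data is the total traversed path length divided by the number of observed charge-changing collisions, with a counting uncertainty; schematically (a generic estimator, not the authors' exact fitting procedure):

```latex
\hat{\lambda} = \frac{\sum_i \ell_i}{N_{\mathrm{int}}},
\qquad
\frac{\sigma_{\hat{\lambda}}}{\hat{\lambda}} \approx \frac{1}{\sqrt{N_{\mathrm{int}}}},
```

    where \ell_i is the path length traversed in the detector stack by fragment i and N_int is the number of charge-changing interactions observed along those trajectories.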

  15. Using a Thermal Imaging Camera to Locate Perforators on the Lower Limb

    Directory of Open Access Journals (Sweden)

    Sharad P. Paul

    2017-05-01

    Full Text Available Reconstruction of the lower limb presents a complex problem after skin cancer surgery, as the proximity of skin and bone presents vascular and technical challenges. Studies on vascular anatomy have confirmed that the vascular plane on the lower limb lies deep to the deep fascia. Yet, many flaps are routinely raised superficial to this plane and therefore flap failure rates in the lower limb are high. Fascio-cutaneous flaps based on perforators offer a better cosmetic alternative to skin grafts. In this paper, we detail the use of a thermal imaging camera to identify perforator 'compartments' that can help in designing such flaps.

  16. Reconstructing Global-scale Ionospheric Outflow With a Satellite Constellation

    Science.gov (United States)

    Liemohn, M. W.; Welling, D. T.; Jahn, J. M.; Valek, P. W.; Elliott, H. A.; Ilie, R.; Khazanov, G. V.; Glocer, A.; Ganushkina, N. Y.; Zou, S.

    2017-12-01

    The question of how many satellites it would take to accurately map the spatial distribution of ionospheric outflow is addressed in this study. Given an outflow spatial map, the image is reconstructed from a limited number of virtual satellite pass extractions from the original values. An assessment is conducted of the goodness of fit as a function of the number of satellites in the reconstruction, placement of the satellite trajectories relative to the polar cap and auroral oval, season and universal time (i.e., dipole tilt relative to the Sun), geomagnetic activity level, and interpolation technique. It is found that the accuracy of the reconstructions increases sharply from one to a few satellites, but then improves only marginally with additional spacecraft beyond 4. Increased dwell time of the satellite trajectories in the auroral zone improves the reconstruction; therefore a high-but-not-exactly-polar orbit is most effective for this task. Local time coverage is also an important factor, shifting the auroral zone to different locations relative to the virtual satellite orbit paths. The expansion and contraction of the polar cap and auroral zone with geomagnetic activity influence the coverage of the key outflow regions, with different optimal orbit configurations for each level of activity. Finally, it is found that reconstructing each magnetic latitude band individually produces a better fit to the original image than a 2-D image reconstruction method (e.g., triangulation). A high-latitude, high-altitude constellation mission concept is presented that achieves acceptably accurate outflow reconstructions.

  17. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  18. Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2010-02-01

    We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.

  19. Optical wedge method for spatial reconstruction of particle trajectories

    International Nuclear Information System (INIS)

    Asatiani, T.L.; Alchudzhyan, S.V.; Gazaryan, K.A.; Zograbyan, D.Sh.; Kozliner, L.I.; Krishchyan, V.M.; Martirosyan, G.S.; Ter-Antonyan, S.V.

    1978-01-01

    A technique of optical wedges allowing the full reconstruction of pictures of events in space is considered. The technique is used for the detection of particle tracks in optical wide-gap spark chambers by photographing in one projection. The optical wedges are refracting right-angle plastic prisms positioned between the camera and the spark chamber so that through them both ends of the track are photographed. A method for calibrating measurements is given, and an estimate is made of the accuracy of the determination of the second projection with the help of the optical wedges.

  20. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify the safety interlock, which shuts down the camera and pan-and-tilt inside the tank vapor space during loss of purge pressure, and to confirm that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  1. Image reconstruction from projections and its application in emission computer tomography

    International Nuclear Information System (INIS)

    Kuba, Attila; Csernay, Laszlo

    1989-01-01

    Computer tomography is an imaging technique for producing cross sectional images by reconstruction from projections. Its two main branches are called transmission and emission computer tomography, TCT and ECT, resp. After an overview of the theory and practice of TCT and ECT, the first Hungarian ECT type MB 9300 SPECT consisting of a gamma camera and Ketronic Medax N computer is described, and its applications to radiological patient observations are discussed briefly. (R.P.) 28 refs.; 4 figs

  2. A novel PET camera calibration method

    International Nuclear Information System (INIS)

    Yerian, K.; Hartz, R.K.; Gaeta, J.M.; Marani, S.; Wong, W.H.; Bristow, D.; Mullani, N.A.

    1985-01-01

    Reconstructed time-of-flight PET images must be corrected for differences in the sensitivity of detector pairs, variations in the TOF gain between groups of detector pairs, and for shifts in the detector-pair timing windows. These calibration values are measured for each detector-pair coincidence line using a positron emitting ring source. The quality of the measured value for a detector pair depends on its statistics. To improve statistics, algorithms are developed which derive individual detector calibration values for efficiency, TOF offsets, and TOF fwhm from the raw detector-pair measurements. For the author's current TOFPET system there are 162,000 detector pairs which are reduced to 720 individual detector values. The data for individual detectors are subsequently recombined, improving the statistical quality of the resultant detector-pair values. In addition, storage requirements are significantly reduced by saving the individual detector values. These parameters are automatically evaluated on a routine basis and problem detectors reported for adjustment or replacement. Decomposing the detector-pair measurements into individual detector values significantly improves the calibration values used to correct camera artifacts in PET imaging
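
    As a rough illustration of the idea of decomposing detector-pair measurements into individual detector values, the sketch below estimates per-detector efficiencies from simulated pair efficiencies under the simple model eff(i, j) ≈ e(i)·e(j). This is a generic fan-sum-style estimate under stated assumptions, not the TOFPET group's actual calibration algorithm, and it ignores the TOF offset and FWHM terms.

        import numpy as np

        rng = np.random.default_rng(1)
        n_det = 720
        true_e = np.clip(rng.normal(1.0, 0.1, n_det), 0.5, 1.5)

        # Simulated pair efficiencies with measurement noise; the diagonal is unused.
        pair = np.outer(true_e, true_e) * rng.normal(1.0, 0.05, (n_det, n_det))
        np.fill_diagonal(pair, 0.0)

        # Iteratively refine e[i] as the average of pair[i, j] / e[j] over partners j.
        e = np.ones(n_det)
        for _ in range(20):
            e = (pair / e[None, :]).sum(axis=1) / (n_det - 1)
            e /= e.mean()                          # fix the arbitrary overall scale

        ref = true_e / true_e.mean()
        print("max relative error:", np.max(np.abs(e - ref) / ref))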

  3. Comparison of 3d Reconstruction Services and Terrestrial Laser Scanning for Cultural Heritage Documentation

    Science.gov (United States)

    Rasztovits, S.; Dorninger, P.

    2013-07-01

    Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user while recent web-services operate completely automatically. An indisputable advantage of image based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web-services for image based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series of different digital cameras. Two different web-services, namely Arc3D and AutoDesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of the generation of the models is given considering the interactive and processing time costs.

  4. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires higher fringe density of projected patterns which, in turn, leads to severe phase ambiguities that must be solved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high-fringe-density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. In addition, the redundant information from multiple phase consistency checks is fully exploited through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
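
    A minimal sketch of the wrapped-phase computation that underlies three-step PSP, assuming fringe images I_k = A + B·cos(φ + δ_k) with shifts δ_k ∈ {-2π/3, 0, +2π/3}; the multi-view consistency-based unwrapping described above is not reproduced here.

        import numpy as np

        def wrapped_phase(i1, i2, i3):
            """Wrapped phase in (-π, π] from three images shifted by -2π/3, 0, +2π/3."""
            return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

        # Synthetic dense fringes over a small image.
        x = np.linspace(0.0, 20.0 * np.pi, 512)
        phi_true = np.tile(x, (256, 1))
        imgs = [0.5 + 0.4 * np.cos(phi_true + d) for d in (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)]

        phi = wrapped_phase(*imgs)
        print(phi.shape, float(phi.min()), float(phi.max()))   # values wrapped to (-π, π]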

  5. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras that have been under way at Kinki University for more than ten years and are currently being pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been carried out, (1) on users' requirements for high-speed multi-framing and video cameras through questionnaires and hearings, and (2) on the current availability of cameras of this sort through searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same as the one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way, and it will hopefully be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  6. 3D Reconstruction of the Retinal Arterial Tree Using Subject-Specific Fundus Images

    Science.gov (United States)

    Liu, D.; Wood, N. B.; Xu, X. Y.; Witt, N.; Hughes, A. D.; Thom, S. A. McG.

    Systemic diseases, such as hypertension and diabetes, are associated with changes in the retinal microvasculature. Although a number of studies have been performed on the quantitative assessment of the geometrical patterns of the retinal vasculature, previous work has been confined to two-dimensional (2D) analyses. In this paper, we present an approach to obtain a 3D reconstruction of the retinal arteries from a pair of 2D retinal images acquired in vivo. A simple essential matrix based self-calibration approach was employed for the "fundus camera-eye" system. Vessel segmentation was performed using a semi-automatic approach and correspondence between points from different images was calculated. The results of 3D reconstruction show the centreline of retinal vessels and their 3D curvature clearly. Three-dimensional reconstruction of the retinal vessels is feasible and may be useful in future studies of the retinal vasculature in disease.
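
    A minimal sketch of linear (DLT) two-view triangulation of a corresponding point given 3x4 projection matrices, the core geometric step once calibration and correspondence are available; the essential-matrix self-calibration described above is not reproduced, and the toy cameras and point below are made up.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """x1, x2: pixel coordinates (u, v) of the same point in the two views."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]                      # inhomogeneous 3D point

        # Toy example: identity intrinsics, second camera shifted along x.
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
        X_true = np.array([0.2, -0.1, 4.0])
        x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
        x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
        print(triangulate(P1, P2, x1, x2))           # ≈ [0.2, -0.1, 4.0]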

  7. Capturing complex human behaviors in representative sports contexts with a single camera.

    Science.gov (United States)

    Duarte, Ricardo; Araújo, Duarte; Fernandes, Orlando; Fonseca, Cristina; Correia, Vanda; Gazimba, Vítor; Travassos, Bruno; Esteves, Pedro; Vilar, Luís; Lopes, José

    2010-01-01

    In recent years, several motion analysis methods have been developed without considering representative contexts for sports performance. The purpose of this paper was to explain and underscore a straightforward method to measure human behavior in these contexts. Procedures combining manual video tracking (with the TACTO device) and bidimensional reconstruction (through direct linear transformation) using a single camera were used in order to capture kinematic data required to compute collective variable(s) and control parameter(s). These procedures were applied to a 1vs1 association football task as an illustrative subphase of team sports and will be presented in a tutorial fashion. Preliminary analysis of distance and velocity data identified a collective variable (difference between the distance of the attacker and the defender to a target defensive area) and two nested control parameters (interpersonal distance and relative velocity). Findings demonstrated that the complementary use of TACTO software and direct linear transformation permits capturing and reconstructing complex human actions in their context in a low dimensional space (information reduction).
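
    A hedged sketch of the planar special case of the direct linear transformation used with a single camera: a homography from image pixels to pitch coordinates estimated from four known landmarks, which can then map tracked player positions onto the field plane. All coordinates below are illustrative, not data from the study.

        import numpy as np
        import cv2

        # Pixel positions of four field landmarks and their real-world positions (m).
        img_pts   = np.array([[102, 540], [1180, 552], [1010, 160], [260, 148]], dtype=np.float32)
        field_pts = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 15.0], [0.0, 15.0]], dtype=np.float32)

        H, _ = cv2.findHomography(img_pts, field_pts)

        # Map tracked pixel positions (e.g., attacker and defender) to field coordinates.
        tracked = np.array([[[640.0, 360.0]], [[700.0, 300.0]]], dtype=np.float32)
        print(cv2.perspectiveTransform(tracked, H).reshape(-1, 2))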

  8. Improved 3D reconstruction in smart-room environments using ToF imaging

    DEFF Research Database (Denmark)

    Guðmundsson, Sigurjón Árni; Pardas, Montse; Casas, Josep R.

    2010-01-01

    This paper presents the use of Time-of-Flight (ToF) cameras in smart-rooms and how this leads to improved results in segmenting the people in the room from the background and consequently better 3D reconstruction of foreground objects. A calibrated rig consisting of one Swissranger SR3100 Time-of...... of eliminating regional artifacts and therefore creating a more robust input for higher level applications such as people tracking or human motion analysis....

  9. An automated 3D reconstruction method of UAV images

    Science.gov (United States)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle system (UAV) images will be presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetric techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combination of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing the triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
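
    A hedged sketch of the first stage of such a pipeline (feature extraction and pairwise matching with ORB); the SfM/MVS chain and the image-topology strategy described above are not reproduced, and the file names are hypothetical.

        import cv2

        img1 = cv2.imread("uav_frame_000.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("uav_frame_001.jpg", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=4000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Cross-checked brute-force matching on binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        print(f"{len(matches)} putative matches; best distance {matches[0].distance:.0f}")

        # A robust relative pose estimate would follow, e.g. via the essential matrix:
        # E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)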

  10. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and determine equipment used and the benefits realized. Basic closed circuit television camera (CCTV) systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposal cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs

  11. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for the infrared camera and visible CCD camera uses a common large-aperture reflective collimator, target wheel, frame-grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise (a brief sketch is given below). An athermal optical design is adopted to reduce the shift of the collimator's focal position when the environmental temperature changes, and the image quality of the large-field-of-view collimator and the test accuracy are also improved. Its performance matches that of comparable foreign systems at a much lower cost. It is expected to have a good market.
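
    A minimal sketch of multiple-frame averaging for random (temporal) noise suppression, using synthetic frames: averaging N uncorrelated frames reduces the noise standard deviation by roughly a factor of sqrt(N).

        import numpy as np

        rng = np.random.default_rng(2)
        clean = np.full((240, 320), 100.0)
        frames = clean + rng.normal(0.0, 5.0, (32,) + clean.shape)   # 32 noisy frames

        averaged = frames.mean(axis=0)
        print("single-frame noise:", frames[0].std(), "averaged noise:", averaged.std())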

  12. Near Real-Time Ground-to-Ground Infrared Remote-Sensing Combination and Inexpensive Visible Camera Observations Applied to Tomographic Stack Emission Measurements

    Directory of Open Access Journals (Sweden)

    Philippe de Donato

    2018-04-01

    Full Text Available Evaluation of the environmental impact of gas plumes from stack emissions at the local level requires precise knowledge of the spatial development of the cloud, its evolution over time, and quantitative analysis of each gaseous component. With extensive developments, remote-sensing ground-based technologies are becoming increasingly relevant to such an application. The difficulty of determining the exact 3-D thickness of the gas plume in real time has meant that the various gas components are mainly expressed using correlation coefficients of gas occurrences and path concentration (ppm.m). This paper focuses on a synchronous and inexpensive multi-angle approach combining three high-resolution visible cameras (GoPro-Hero3) and a scanning infrared (IR) gas system (SIGIS, Bruker). Measurements are performed at an NH3-emitting industrial site (NOVACARB Society, Laneuveville-devant-Nancy, France). The visible images were processed by a first geometrical reconstruction gOcad® protocol to build a 3-D envelope of the gas plume, which allows estimation of the plume’s thickness corresponding to the 2-D infrared grid measurements. NH3 concentration data could thereby be expressed in ppm and have been interpolated using a second gOcad® interpolation algorithm allowing a precise volume visualization of the NH3 distribution in the flue gas stream.
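
    A hedged sketch of the unit conversion implied above: path-integrated concentrations (ppm·m) on the IR scan grid are divided by the plume thickness (m) along each line of sight, as estimated from the reconstructed 3-D envelope, to obtain a mean concentration in ppm. The small arrays below are synthetic placeholders, not measurement data.

        import numpy as np

        ppm_m     = np.array([[120.0, 300.0, 80.0],
                              [ 60.0, 450.0, 90.0]])        # IR retrieval, ppm·m
        thickness = np.array([[  2.0,   5.0,  1.5],
                              [  0.0,   6.0,  2.0]])        # plume thickness, m (0 = outside plume)

        inside = thickness > 0.0
        ppm = np.where(inside, ppm_m / np.where(inside, thickness, 1.0), np.nan)
        print(ppm)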

  13. Design of Microwave Camera for Breast Cancer Detection

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy

    2008-01-01

    is then used to reconstruct an image, which consists of a spatial distribution of the complex permittivity in the imaging domain. Using this image the cancer tissue can be detected due to its dielectric property contrast compared to normal tissue. The instrument employs a multichannel high sensitive...... superheterodyne architecture, enabling parallel coherent measurements. In this way, mechanical scanning, which is commonly used in measurements of an electromagnetic field distribution, is avoided. The system presented is the first reported 3D microwave breast imaging camera with parallel signal detection....... The hardware operates in the frequency range 0.3 – 3 GHz. The noise floor is below -140 dBm over the bandwidth of the system. The dynamic range depends on the available incident power range and is limited by the channel to channel isolation of 140 dB. The work presented in this thesis encompasses a wide range...

  14. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target.

    Science.gov (United States)

    Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian

    2017-08-01

    Vision measurement on the basis of structured light plays a significant role in optical inspection research. The 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projective point and five non-collinear points that are randomly selected from the target are adopted to construct a projection invariant. The closed-form solutions for the 3D laser points are obtained from the homogeneous linear equations generated from the projection invariants. The optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions for the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are obtained by minimizing the optimization function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of the image quantity, the lens distortion and the noise are investigated in the experiments, which demonstrate that the reconstruction approach is effective for accurate testing in the measurement system.
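
    A hedged, generic sketch of the nonlinear refinement step: minimizing a re-projection error with scipy's least_squares under a simple pinhole model. For a well-posed toy problem two views are used here; the paper's single-camera, projection-invariant formulation and its distortion parameters are not reproduced.

        import numpy as np
        from scipy.optimize import least_squares

        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        # Two views: the second camera is translated 0.1 m along x.
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

        def project(P, X):
            x = P @ np.append(X, 1.0)
            return x[:2] / x[2]

        X_true = np.array([0.05, -0.02, 1.2])
        obs = [project(P1, X_true) + np.array([0.3, -0.2]),
               project(P2, X_true) + np.array([-0.1, 0.4])]   # noisy pixel observations

        def residuals(X):
            return np.concatenate([project(P1, X) - obs[0], project(P2, X) - obs[1]])

        sol = least_squares(residuals, x0=np.array([0.0, 0.0, 1.0]))
        print(sol.x)   # ≈ X_true up to the injected pixel noise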

  15. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  16. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  17. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  18. New method to analyse internal disruptions with five-camera soft x-ray tomography on RTP

    Energy Technology Data Exchange (ETDEWEB)

    Tanzi, C.P. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands); Blank, H.J. de [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany)

    1994-12-31

    The five-camera soft x-ray diagnostic on the Rijnhuizen Tokamak Project (RTP) offers a wealth of information on sawteeth. Using four or five cameras, tomographic images with 7 poloidal harmonics have been obtained throughout sawtooth crashes and precursor oscillations. The purpose of this paper is to determine whether the precursors are ideal MHD modes or can be attributed to the resistive growth of a magnetic island. In practice, the detection of the topology of magnetic surfaces from the reconstructed tomographic images is complicated by the fact that (except during the final phase of the collapse) the time dependence is dominated by rotation of the m = 1 displacement. A novel method allows one to define quantities, e.g. the plasma volume where the emissivity is within a certain range, whose change is determined only by cross-field transport or reconnection, and is not affected by m = 1 convection and by rotation. (author) 6 refs., 2 figs.

  19. New method to analyse internal disruptions with five-camera soft x-ray tomography on RTP

    International Nuclear Information System (INIS)

    Tanzi, C.P.; Blank, H.J. de

    1994-01-01

    The five-camera soft x-ray diagnostic on the Rijnhuizen Tokamak Project (RTP) offers a wealth of information on sawteeth. Using four or five cameras, tomographic images with 7 poloidal harmonics have been obtained throughout sawtooth crashes and precursor oscillations. The purpose of this paper is to determine whether the precursors are ideal MHD modes or can be attributed to the resistive growth of a magnetic island. In practice, the detection of the topology of magnetic surfaces from the reconstructed tomographic images is complicated by the fact that (except during the final phase of the collapse) the time dependence is dominated by rotation of the m = 1 displacement. A novel method allows one to define quantities, e.g. the plasma volume where the emissivity is within a certain range, whose change is determined only by cross-field transport or reconnection, and is not affected by m = 1 convection and by rotation. (author) 6 refs., 2 figs

  20. Pulled Motzkin paths

    International Nuclear Information System (INIS)

    Janse van Rensburg, E J

    2010-01-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed paths models to their natural trinomial counterparts.
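
    As background to the combinatorics mentioned above, the sketch below computes the ordinary Motzkin numbers (the counts of unweighted Motzkin paths) two ways, by the standard recurrence and by the binomial–Catalan sum; the pulled-path generating functions over trinomial coefficients are in the paper itself and are not reproduced here.

        from math import comb

        def motzkin_recurrence(n_max):
            m = [1, 1]                                   # M_0, M_1
            for n in range(1, n_max):
                m.append(m[n] + sum(m[k] * m[n - 1 - k] for k in range(n)))
            return m[: n_max + 1]

        def motzkin_sum(n):
            # M_n = sum_k C(n, 2k) * Catalan(k)
            return sum(comb(n, 2 * k) * comb(2 * k, k) // (k + 1) for k in range(n // 2 + 1))

        print(motzkin_recurrence(10))
        print([motzkin_sum(n) for n in range(11)])       # both give 1, 1, 2, 4, 9, 21, ...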

  1. Pulled Motzkin paths

    Energy Technology Data Exchange (ETDEWEB)

    Janse van Rensburg, E J, E-mail: rensburg@yorku.c [Department of Mathematics and Statistics, York University, Toronto, ON, M3J 1P3 (Canada)

    2010-08-20

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed paths models to their natural trinomial counterparts.

  2. Pulled Motzkin paths

    Science.gov (United States)

    Janse van Rensburg, E. J.

    2010-08-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed paths models to their natural trinomial counterparts.

  3. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user...... model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...... camera control in games is discussed....

  4. DG TOMO: A new method for tomographic reconstruction

    International Nuclear Information System (INIS)

    Freitas, D. de; Feschet, F.; Cachin, F.; Geissler, B.; Bapt, A.; Karidioula, I.; Martin, C.; Kelly, A.; Mestas, D.; Gerard, Y.; Reveilles, J.P.; Maublant, J.

    2006-01-01

    Aim: FBP and OSEM are the most popular tomographic reconstruction methods in scintigraphy. FBP is a simple method, but reconstruction artifacts are generated whose correction degrades the spatial resolution. OSEM takes account of statistical fluctuations but noise strongly increases after a certain number of iterations. We compare a new method of tomographic reconstruction based on discrete geometry (DG TOMO) to FBP and OSEM. Materials and methods: Acquisitions were performed on a three-head gamma-camera (Philips) with a NEMA Phantom containing six spheres of sizes from 10 to 37 mm inner diameter, filled with around 325 MBq/l of technetium-99m. The spheres were positioned in water containing 3 MBq/l of technetium-99m. Acquisitions were realized during a 180° rotation around the phantom in 25-s steps. DG TOMO has been developed in our laboratory in order to minimize the number of projections at acquisition. Two tomographic reconstructions utilizing 32 and 16 projections with FBP, OSEM and DG TOMO were performed and transverse slices were compared. Results: FBP with 32 projections detects only the activity in the three largest spheres (diameter ≥22 mm). With 16 projections, the star effect is predominant and the contrast of the third sphere is very low. OSEM with 32 projections provides a better image but the three smallest spheres (diameter ≤17 mm) are difficult to distinguish. With 16 projections, the three smaller spheres are not detectable. The results of DG TOMO are similar to those of OSEM. Conclusion: Since the parameters of DG TOMO can be further optimized, this method appears to be a promising alternative for tomoscintigraphic reconstruction
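
    A reference sketch of plain FBP with scikit-image (the baseline method compared above); DG TOMO itself is not publicly described in code form and is not reproduced. Reducing the number of projections from 32 to 16 illustrates the streak ("star") artifacts mentioned in the results. The filter_name argument assumes a recent scikit-image (≥ 0.19).

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, resize

        phantom = resize(shepp_logan_phantom(), (128, 128))

        for n_proj in (32, 16):
            theta = np.linspace(0.0, 180.0, n_proj, endpoint=False)
            sinogram = radon(phantom, theta=theta)
            recon = iradon(sinogram, theta=theta, filter_name="ramp")
            err = np.sqrt(np.mean((recon - phantom) ** 2))
            print(f"{n_proj} projections: RMS error {err:.3f}")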

  5. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Distributed video coding (DVC) is a new paradigm for video compression based on the information theoretical results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimating quality, complexity, and rate distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion but it is still inferior to the Inter No Motion and Inter Motion modes.

  6. Integration of real-time 3D capture, reconstruction, and light-field display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  7. An efficient simultaneous reconstruction technique for tomographic particle image velocimetry

    Science.gov (United States)

    Atkinson, Callum; Soria, Julio

    2009-10-01

    To date, Tomo-PIV has involved the use of the multiplicative algebraic reconstruction technique (MART), where the intensity of each 3D voxel is iteratively corrected to satisfy one recorded projection, or pixel intensity, at a time. This results in reconstruction times of multiple hours for each velocity field and requires considerable computer memory in order to store the associated weighting coefficients and intensity values for each point in the volume. In this paper, a rapid and less memory-intensive reconstruction algorithm is presented based on a multiplicative line-of-sight (MLOS) estimation that determines possible particle locations in the volume, followed by simultaneous iterative correction. Reconstructions of simulated images are presented for two simultaneous algorithms (SART and SMART) as well as the now standard MART algorithm, which indicate that the same accuracy as MART can be achieved 5.5 times faster or 77 times faster with 15 times less memory if the processing and storage of the weighting matrix is considered. Application of MLOS-SMART and MART to a turbulent boundary layer at Reθ = 2200 using a 4-camera Tomo-PIV system with a volume of 1,000 × 1,000 × 160 voxels is discussed. Results indicate improvements in reconstruction speed of 15 times that of MART with a precalculated weighting matrix, or 65 times if calculation of the weighting matrix is considered. Furthermore, the memory needed to store a large weighting matrix and volume intensity is reduced by almost 40 times in this case.
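
    A hedged sketch of the basic MART update on a tiny toy system W·f = p, where W holds line-of-sight weights, f the voxel intensities and p the recorded pixel values: each projection multiplicatively corrects the voxels along its line of sight. The MLOS masking and the simultaneous SART/SMART variants compared above are not reproduced, and the random system below is illustrative only.

        import numpy as np

        rng = np.random.default_rng(3)
        n_vox, n_pix = 40, 80
        W = rng.random((n_pix, n_vox)) * (rng.random((n_pix, n_vox)) < 0.2)   # sparse weights
        f_true = rng.random(n_vox)
        p = W @ f_true

        f = np.ones(n_vox)                      # positive initial guess
        mu = 1.0                                # relaxation exponent
        for _ in range(50):                     # sweeps over all projections
            for i in range(n_pix):
                est = W[i] @ f
                if est > 0.0 and p[i] > 0.0:
                    f *= (p[i] / est) ** (mu * W[i])

        residual = np.linalg.norm(W @ f - p) / np.linalg.norm(p)
        print("relative projection residual:", residual)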

  8. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a +-2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
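
    A hedged sketch of the kinematic idea described above: given the camera pose and the target position (known from manipulator position sensors) as 4 x 4 homogeneous transforms and a point, compute the PAN and TILT angles that aim the camera at the target, and apply a ±2° deadband before commanding motion. The frames and numbers are illustrative, not the ORNL implementation.

        import numpy as np

        def pan_tilt_to_target(T_world_cam, target_world):
            """Return (pan, tilt) in degrees that point the camera's +z axis at the target."""
            T_cam_world = np.linalg.inv(T_world_cam)
            t = T_cam_world @ np.append(target_world, 1.0)        # target in camera frame
            pan = np.degrees(np.arctan2(t[0], t[2]))              # rotation about camera y
            tilt = np.degrees(np.arctan2(-t[1], np.hypot(t[0], t[2])))
            return pan, tilt

        T_world_cam = np.eye(4)                                    # camera at origin, looking +z
        target = np.array([0.5, -0.2, 3.0])                        # end-effector position (m)

        pan, tilt = pan_tilt_to_target(T_world_cam, target)
        deadband = 2.0                                             # degrees
        command = tuple(a if abs(a) > deadband else 0.0 for a in (pan, tilt))
        print(f"pan={pan:.2f} deg, tilt={tilt:.2f} deg, command={command}")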

  9. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  10. Path Expressions

    Science.gov (United States)

    1975-06-01

    Traditionally, synchronization of concurrent processes is coded inline by operations on semaphores or similar objects. Path expressions move the...discussion about a variety of synchronization primitives. An analysis of their relative power is found in [3]. Path expressions do not introduce yet another synchronization primitive. A path expression relates to such primitives as a for- or while-statement of an ALGOL-like language relates to a JUMP.
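
    An illustrative sketch only: the kind of inline semaphore code that a path expression would replace with a single declarative constraint (the schematic notation in the comment is not the report's exact syntax).

        # A path expression such as
        #     path write ; read end
        # would declare "each read must be preceded by a completed write";
        # below is the equivalent constraint coded inline with a semaphore.
        import threading

        data_ready = threading.Semaphore(0)
        buffer = []

        def write(value):
            buffer.append(value)        # produce
            data_ready.release()        # inline synchronisation code

        def read():
            data_ready.acquire()        # inline synchronisation code
            return buffer.pop(0)

        t = threading.Thread(target=lambda: write(42))
        t.start()
        print(read())                   # blocks until the write has happened
        t.join()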

  11. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    Science.gov (United States)

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485

  12. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce superior images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  13. Hanford Environmental Dose Reconstruction Project, Quarterly report, September--November 1993

    International Nuclear Information System (INIS)

    Cannon, S.D.; Finch, S.M.

    1993-01-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates

  14. TESTING THE LOW-COST RPAS POTENTIAL IN 3D CULTURAL HERITAGE RECONSTRUCTION

    OpenAIRE

    M. Bolognesi; A. Furini; V. Russo; A. Pellegrinelli; P. Russo

    2015-01-01

    In order to analyze the potential as well as the limitations of low-cost RPAS photogrammetric systems for architectural cultural heritage reconstruction, some tests were performed with a small RPAS equipped with an ultralight camera. The tests were carried out at a site of remarkable historical interest. A large number of images were taken with the camera’s optical axis in vertical and oblique positions. Images were processed with the commercial software PhotoScan by Agisoft and numerous mode...

  15. The eye of the camera: effects of security cameras on pro-social behavior

    NARCIS (Netherlands)

    van Rompay, T.J.L.; Vonk, D.J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  16. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    Science.gov (United States)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve existing features to their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state-of-the-art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving the parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance, both in good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras
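
    A hedged sketch of a generic contrast-based sharpness measure (variance of the Laplacian) evaluated over candidate focus positions, of the kind maximized in passive AF search; this is not the dissertation's specific filter-switching metric, and the frame-capture function and focus positions below are synthetic stand-ins.

        import cv2
        import numpy as np

        def sharpness(gray):
            return cv2.Laplacian(gray, cv2.CV_64F).var()

        def autofocus(capture_frame, positions):
            """Return the focus position with the largest sharpness measure."""
            scores = [sharpness(cv2.cvtColor(capture_frame(p), cv2.COLOR_BGR2GRAY))
                      for p in positions]
            return positions[int(np.argmax(scores))], scores

        # Synthetic scene: blur grows as the lens moves away from position 5.
        rng = np.random.default_rng(4)
        scene = (rng.random((240, 320, 3)) * 255).astype(np.uint8)

        def fake_capture(pos):
            k = 2 * abs(pos - 5) + 1                 # odd kernel size off focus
            return cv2.GaussianBlur(scene, (k, k), 0)

        best, _ = autofocus(fake_capture, list(range(11)))
        print("best focus position:", best)          # expected: 5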

  17. Local self-similarity descriptor for point-of-interest reconstruction of real-world scenes

    International Nuclear Information System (INIS)

    Gao, Xianglu; Wan, Weibing; Zhao, Qunfei; Zhang, Xianmin

    2015-01-01

    Scene reconstruction is utilized commonly in close-range photogrammetry, with diverse applications in fields such as industry, biology, and aerospace. However, existing surface or wireframe three-dimensional (3D) model reconstruction approaches are either too complex or too inflexible to accommodate various types of real-world scenes. This paper proposes an algorithm for acquiring point-of-interest (referred to throughout the study as POI) coordinates in 3D space, based on multi-view geometry and a local self-similarity descriptor. After reconstructing several POIs specified by a user, a concise and flexible target object measurement method, which obtains the distance between POIs, is described in detail. The proposed technique is able to measure targets with high accuracy even in the presence of obstacles and non-Lambertian surfaces. The method is so flexible that target objects can be measured with a handheld digital camera. Experimental results further demonstrate the effectiveness of the algorithm. (paper)

  18. A NEW AUTOMATIC SYSTEM CALIBRATION OF MULTI-CAMERAS AND LIDAR SENSORS

    Directory of Open Access Journals (Sweden)

    M. Hassanein

    2016-06-01

    Full Text Available In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially the UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor to achieve accurate results and for the successful processing of the system measurements, especially with the different types of measurements provided by the LIDAR and the cameras. The system calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially for disaster monitoring applications. Also, many of the present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for calibration of integrated LIDAR and multi-camera systems is presented. The new proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse image-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the image-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR (a sketch of this registration step is given below). In the presented technique a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates. The choice of such target geometry was to ensure enough conditions for the convergence of registration between the constructed 3D point clouds from the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated
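
    A hedged sketch of the point cloud registration step using Open3D's ICP (assumes a recent Open3D, ≥ 0.10, for the pipelines API); the file names, initial alignment and correspondence threshold are hypothetical placeholders, and the generation of the image-driven cloud is not shown.

        import numpy as np
        import open3d as o3d

        cam_cloud   = o3d.io.read_point_cloud("images_driven_cloud.ply")   # from the imaged target
        lidar_cloud = o3d.io.read_point_cloud("lidar_cloud.ply")

        threshold = 0.05                      # max correspondence distance (m)
        init = np.eye(4)                      # rough initial alignment

        result = o3d.pipelines.registration.registration_icp(
            cam_cloud, lidar_cloud, threshold, init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())

        print("fitness:", result.fitness)
        print("camera-to-LIDAR transform:\n", result.transformation)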

  19. SU-C-207A-03: Development of Proton CT Imaging System Using Thick Scintillator and CCD Camera

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, S; Uesaka, M [The University of Tokyo, Tokyo (Japan); Nishio, T; Tsuneda, M [Hiroshima University, Hiroshima (Japan); Matsushita, K [Rikkyo University, Tokyo (Japan); Kabuki, S [Tokai University, Isehara (Japan)

    2016-06-15

    Purpose: In the treatment planning of proton therapy, Water Equivalent Length (WEL), which is the parameter for the calculation of dose and proton range, is derived from the X-ray CT (xCT) image by xCT-WEL conversion. However, an error of a few percent in the accuracy of proton range calculation through this conversion has been reported. The purpose of this study is to construct a proton CT (pCT) imaging system for an evaluation of this error. Methods: The pCT imaging system was constructed with a thick scintillator and a cooled CCD camera, which acquires a two-dimensional image of the scintillation light integrated along the beam direction. The pCT image is reconstructed by the FBP method using a correction relating the light intensity to the residual range of the proton beam. An experiment for the demonstration of this system was performed with a 70-MeV proton beam provided by the NIRS cyclotron. The pCT images of several objects reconstructed from the experimental data were evaluated quantitatively. Results: Three-dimensional pCT images of several objects were reconstructed experimentally. A fine structure of approximately 1 mm was clearly observed. The position resolution of the pCT image was almost the same as that of the xCT image, and the error of the proton CT pixel value was up to 4%. The deterioration of image quality was caused mainly by the effect of multiple Coulomb scattering. Conclusion: We designed and constructed the pCT imaging system using a thick scintillator and a CCD camera, and the system was evaluated in an experiment using a 70-MeV proton beam. Three-dimensional pCT images of several objects were acquired by the system. This work was supported by JST SENTAN Grant Number 13A1101 and JSPS KAKENHI Grant Number 15H04912.

  20. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting becomes a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach based on a rough analysis of the imaging systems is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
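
    A hedged sketch of fitting a camera-to-reference colour mapping from colour-tile measurements; a plain affine least-squares fit is used here in place of the neural-network mapping described above, and the tile data are synthetic.

        import numpy as np

        rng = np.random.default_rng(5)
        n_tiles = 24
        reference = rng.random((n_tiles, 3))                       # reference RGB per tile

        # Simulated response of the camera to be compensated (gain, mixing, offset).
        M_true = np.array([[0.90, 0.05, 0.00],
                           [0.02, 1.10, 0.03],
                           [0.00, 0.04, 0.85]])
        offset_true = np.array([0.02, -0.01, 0.03])
        camera = reference @ M_true.T + offset_true + rng.normal(0.0, 0.005, (n_tiles, 3))

        # Fit reference ≈ [camera, 1] · A by least squares (an affine 4x3 mapping).
        X = np.hstack([camera, np.ones((n_tiles, 1))])
        A, *_ = np.linalg.lstsq(X, reference, rcond=None)

        corrected = X @ A
        print("RMS error before:", np.sqrt(np.mean((camera - reference) ** 2)))
        print("RMS error after: ", np.sqrt(np.mean((corrected - reference) ** 2)))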

  1. Three-Dimensional Reconstruction of a Gas Bubble Trajectory in Liquid

    Directory of Open Access Journals (Sweden)

    Augustyniak Jakub

    2014-01-01

    Full Text Available The identification of the shape of the bubble trajectory is crucial for understanding the mechanism of bubble motion in liquid. This paper presents a technique for 3D bubble trajectory reconstruction using a single high-speed camera and a system of mirrors. In the experiment a glass tank filled with distilled water was used. The nozzle through which the bubbles were generated was placed in the centre of the tank. The movement of the bubbles was recorded with a high-speed camera, the Phantom v1610, at 600 fps. Image analysis techniques were applied to determine the coordinates of the mass centre of each bubble image. The 3D trajectory of a bubble can be obtained by using triangulation methods. The measurement error of the imaging computer tomography has been estimated. The maximum measurement error was equal to ±0.65 mm. Trajectories of subsequently departing bubbles were visualized.

  2. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    Science.gov (United States)

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
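
    A hedged sketch of the rigid-alignment idea underlying such coregistration: a Kabsch (SVD) fit of corresponding landmark points from a photo-based model to MRI space. The point sets below are synthetic stand-ins, not the toolbox's surface-matching implementation.

        import numpy as np

        def rigid_fit(src, dst):
            """Return R (3x3) and t (3,) minimising ||R·src + t - dst||."""
            src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
            U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:            # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = dst.mean(0) - R @ src.mean(0)
            return R, t

        rng = np.random.default_rng(6)
        photo_pts = rng.random((6, 3)) * 100.0                   # landmarks in photo-model space (mm)
        a = np.radians(20.0)
        R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                           [np.sin(a),  np.cos(a), 0.0],
                           [0.0, 0.0, 1.0]])
        mri_pts = photo_pts @ R_true.T + np.array([5.0, -3.0, 12.0]) + rng.normal(0.0, 0.3, (6, 3))

        R, t = rigid_fit(photo_pts, mri_pts)
        residual = np.linalg.norm(photo_pts @ R.T + t - mri_pts, axis=1)
        print("mean registration error (mm):", residual.mean())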

  3. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  4. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  5. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper describes an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function that assumes two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  6. Deflection tomography of a complex flow field based on the visualization of projection array

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Bin; Miao Zhanli, E-mail: zb-sh@163.com [College of Electromechanical Engineering, Qingdao University of Science and Technology, Qingdao, Shandong 266061 (China)

    2011-02-01

    Tomographic techniques are used for the investigation of complex flow fields by means of deflectometric methods. A new deflection tomographic setup for obtaining an array of multidirectional deflectograms is presented. Deflection projections at different viewing angles can be captured synchronously under the same optical path conditions and arranged on the camera in two rows with three views in each row. The Tikhonov regularization method is used to reconstruct the temperature distribution from the deflectometric projection data, and the conjugate gradient method is used to compute the regularized solution of the least-squares equations. The asymmetric flame temperature distribution in a horizontal section was reconstructed from limited-view-angle projections. The experimental results of reconstruction from real projection data were satisfactory when compared with direct thermocouple measurements.
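
    A minimal sketch of a Tikhonov-regularized reconstruction solved with conjugate gradients, i.e. (AᵀA + λI)x = Aᵀb, as described above. The projection matrix A, the deflection data b and the regularization weight λ are random placeholders, not the setup of this record.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def tikhonov_cg(A, b, lam, maxiter=200):
    """Solve (A^T A + lam * I) x = A^T b with conjugate gradients."""
    n = A.shape[1]
    op = LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x) + lam * x)
    x, _ = cg(op, A.T @ b, maxiter=maxiter)
    return x

# Hypothetical projection geometry and deflection data
A = np.random.rand(600, 400)          # path-integral (projection) matrix
b = A @ np.random.rand(400)           # simulated deflection measurements
temperature_field = tikhonov_cg(A, b, lam=1e-2)
```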

  7. Zero-Slack, Noncritical Paths

    Science.gov (United States)

    Simons, Jacob V., Jr.

    2017-01-01

    The critical path method/program evaluation and review technique method of project scheduling is based on the importance of managing a project's critical path(s). Although a critical path is the longest path through a network, its location in large projects is facilitated by the computation of activity slack. However, logical fallacies in…
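
    For illustration, a minimal forward/backward pass that computes earliest and latest start times and hence activity slack; zero-slack activities lie on the critical path. The activity network below is a toy example, not from the article.

```python
def cpm_slack(net):
    """net: {activity: (duration, [predecessors])}, topologically ordered."""
    order = list(net)
    es = {}                                                  # earliest start
    for a in order:
        es[a] = max((es[p] + net[p][0] for p in net[a][1]), default=0)
    finish = max(es[a] + net[a][0] for a in order)
    ls = {a: finish - net[a][0] for a in order}              # latest start
    for a in reversed(order):
        succs = [s for s in order if a in net[s][1]]
        if succs:
            ls[a] = min(ls[s] for s in succs) - net[a][0]
    return {a: ls[a] - es[a] for a in order}                 # slack per activity

# Toy network: zero-slack activities form the critical path (here A-C-D)
network = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}
print(cpm_slack(network))
```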

  8. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers

  9. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New, enhanced and sophisticated technologies for fuel services are, for example, two shielded color camera systems for use under water and close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (less than a tenth of a millimetre) or cracks and for analyzing surface appearances on irradiated fuel rod cladding or fuel assembly structure parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  10. A path integral for heavy-quarks in a hot plasma

    CERN Document Server

    Beraudo, A.; Faccioli, P.; Garberoglio, G.; 10.1016/j.nuclphysa.2010.06.007

    2010-01-01

    We propose a model for the propagation of a heavy-quark in a hot plasma, to be viewed as a first step towards a full description of the dynamics of heavy quark systems in a quark-gluon plasma, including bound state formation. The heavy quark is treated as a non relativistic particle interacting with a fluctuating field, whose correlator is determined by a hard thermal loop approximation. This approximation, which concerns only the medium in which the heavy quark propagates, is the only one that is made, and it can be improved. The dynamics of the heavy quark is given exactly by a quantum mechanical path integral that is calculated in this paper in the Euclidean space-time using numerical Monte Carlo techniques. The spectral function of the heavy quark in the medium is then reconstructed using a Maximum Entropy Method. The path integral is also evaluated exactly in the case where the mass of the heavy quark is infinite; one then recovers known results concerning the complex optical potential that controls the ...

  11. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix

    2014-11-19

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  12. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix; Egiazarian, Karen; Kautz, Jan; Pulli, Kari; Steinberger, Markus; Tsai, Yun-Ta; Rouf, Mushfiqur; Pająk, Dawid; Reddy, Dikpal; Gallo, Orazio; Liu, Jing; Heidrich, Wolfgang

    2014-01-01

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  13. Simulation-based evaluation and optimization of a new CdZnTe gamma-camera architecture (HiSens)

    International Nuclear Information System (INIS)

    Robert, Charlotte; Montemont, Guillaume; Rebuffel, Veronique; Guerin, Lucie; Verger, Loick; Buvat, Irene

    2010-01-01

    A new gamma-camera architecture named HiSens is presented and evaluated. It consists of a parallel-hole collimator, a pixelated CdZnTe (CZT) detector associated with specific electronics for 3D localization, and dedicated reconstruction algorithms. To gain efficiency, a high-aperture collimator is used. The spatial resolution is preserved thanks to accurate 3D localization of the interactions inside the detector, based on a fine sampling of the CZT detector and on the depth-of-interaction information. The performance of this architecture is characterized using Monte Carlo simulations in both planar and tomographic modes. Detective quantum efficiency (DQE) computations are then used to optimize the collimator aperture. In planar mode, the simulations show that the fine CZT detector pixelization increases the system sensitivity by a factor of 2 compared to a standard Anger camera, without loss in spatial resolution. These results are then validated against experimental data. In SPECT, Monte Carlo simulations confirm the merits of the HiSens architecture observed in planar imaging.

  14. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    Science.gov (United States)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2010-04-01

    OPTRA has developed an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize the design and build and detail system characterization and test of a prototype I-OP-FTIR instrument. System characterization includes radiometric performance and spectral resolution. Results from a series of tomographic reconstructions of sulfur hexafluoride plumes in a laboratory setting are also presented.

  15. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
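
    A sketch of the core step described above: estimating the essential matrix between an intrinsically calibrated camera pair with the 5-point method inside RANSAC and decomposing it into a relative pose. The intrinsics and matched keypoints below are placeholders, and OpenCV's estimator stands in for the implementation used in the paper.

```python
import numpy as np
import cv2

# Hypothetical shared intrinsics and matched keypoints between a camera pair
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
pts1 = np.random.rand(100, 2) * [1920, 1080]     # keypoints in camera 1
pts2 = pts1 + np.random.randn(100, 2)            # corresponding keypoints in camera 2

# 5-point algorithm inside RANSAC, then decomposition into relative pose
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
# R and t hold the relative orientation and (unit-norm) translation direction
```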

  16. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing 'all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed 3D shape in settings relevant to robotic inspection and assembly systems.

  17. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms that had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots from all 5 cameras and the registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfactory number for the camera calibration. In a first

  18. Hanford Environmental Dose Reconstruction Project monthly report

    International Nuclear Information System (INIS)

    Finch, S.M.

    1990-12-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The project is divided into the following technical tasks, which correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; and environmental pathways and dose estimates. 3 figs., 3 tabs

  19. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    Science.gov (United States)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir looking cameras is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. A MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of the test flights.

  20. GPS tomography. Validation of reconstructed 3-D humidity fields with radiosonde profiles

    Energy Technology Data Exchange (ETDEWEB)

    Shangguan, M.; Bender, M.; Ramatschi, M.; Dick, G.; Wickert, J. [Helmholtz Centre Potsdam, German Research Centre for Geosciences (GFZ), Potsdam (Germany); Raabe, A. [Leipzig Institute for Meteorology (LIM), Leipzig (Germany); Galas, R. [Technische Univ. Berlin (Germany). Dept. for Geodesy and Geoinformation Sciences

    2013-11-01

    Water vapor plays an important role in meteorological applications; GeoForschungsZentrum (GFZ) therefore developed a tomographic system to derive 3-D distributions of the tropospheric water vapor above Germany using GPS data from about 300 ground stations. Input data for the tomographic reconstructions are generated by the Earth Parameter and Orbit determination System (EPOS) software of the GFZ, which provides zenith total delay (ZTD), integrated water vapor (IWV) and slant total delay (STD) data operationally with a temporal resolution of 2.5 min (STD) and 15 min (ZTD, IWV). The water vapor distribution in the atmosphere is derived by tomographic reconstruction techniques. The quality of the solution is dependent on many factors such as the spatial coverage of the atmosphere with slant paths, the spatial distribution of their intersections and the accuracy of the input observations. Independent observations are required to validate the tomographic reconstructions and to get precise information on the accuracy of the derived 3-D water vapor fields. To determine the quality of the GPS tomography, more than 8000 vertical water vapor profiles at 13 German radiosonde stations were used for the comparison. The radiosondes were launched twice a day (at 00:00 UTC and 12:00 UTC) in 2007. In this paper, parameters of the entire profiles such as the wet refractivity, and the zenith wet delay have been compared. Before the validation the temporal and spatial distribution of the slant paths, serving as a basis for tomographic reconstruction, as well as their angular distribution were studied. The mean wet refractivity differences between tomography and radiosonde data for all points vary from -1.3 to 0.3, and the root mean square is within the range of 6.5-9. About 32% of 6803 profiles match well, 23% match badly and 45% are difficult to classify as they match only in parts.

  1. Tomographic Small-Animal Imaging Using a High-Resolution Semiconductor Camera

    Science.gov (United States)

    Kastis, GA; Wu, MC; Balzer, SJ; Wilson, DW; Furenlid, LR; Stevenson, G; Barber, HB; Barrett, HH; Woolfenden, JM; Kelly, P; Appleby, M

    2015-01-01

    We have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64 x 64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm x 5.3 cm x 3.7 cm tungsten-aluminum housing. The detector is a 2.5 cm x 2.5 cm x 0.15 cm slab of CdZnTe connected to a 64 x 64 multiplexer readout via indium-bump bonding. The collimator is 7 mm thick, with a 0.38 mm pitch that matches the detector pixel pitch. We obtained a series of projections by rotating the object in front of the camera. The axis of rotation was vertical and about 1.5 cm away from the collimator face. Mouse holders were made out of acrylic plastic tubing to facilitate rotation and the administration of gas anesthetic. Acquisition times were varied from 60 sec to 90 sec per image for a total of 60 projections at an equal spacing of 6 degrees between projections. We present tomographic images of a line phantom and mouse bone scan and assess the properties of the system. The reconstructed images demonstrate spatial resolution on the order of 1–2 mm. PMID:26568676

  2. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    Directory of Open Access Journals (Sweden)

    Jun Tan

    2014-05-01

    Full Text Available Curb detection is an essential component of Autonomous Land Vehicles (ALVs), and is especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain to model the consistency of curb points, which utilizes the continuity of the curb, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter outliers, parameterize the curbs and give confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
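
    The row-by-row linking of curb candidates can be sketched as a small dynamic program that trades detection score against horizontal smoothness. The candidate lists and the jump penalty below are toy values; the paper's Markov-chain formulation is richer than this.

```python
import numpy as np

def link_curb_points(candidates, jump_penalty=0.5):
    """candidates[r] = list of (column, score) curb candidates for image row r.
    Dynamic programming picks one candidate per row so that detection scores
    stay high while the column changes slowly between neighbouring rows."""
    cols = [np.array([c for c, _ in row], float) for row in candidates]
    score = [np.array([s for _, s in row], float) for row in candidates]
    cost, back = [-score[0]], [None]
    for r in range(1, len(candidates)):
        trans = jump_penalty * np.abs(cols[r][:, None] - cols[r - 1][None, :])
        total = trans + cost[-1][None, :]
        back.append(total.argmin(axis=1))
        cost.append(total.min(axis=1) - score[r])
    path = [int(cost[-1].argmin())]
    for r in range(len(candidates) - 1, 0, -1):
        path.append(int(back[r][path[-1]]))
    path.reverse()
    return [cols[r][i] for r, i in enumerate(path)]

# Toy candidates: (column, detection score) per row
rows = [[(100, 0.9), (300, 0.4)], [(102, 0.8), (290, 0.5)], [(105, 0.7)]]
print(link_curb_points(rows))        # -> columns of the selected curb path
```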

  3. Stochastic Reconstruction and Interpolation of Precipitation Fields Using Combined Information of Commercial Microwave Links and Rain Gauges

    Science.gov (United States)

    Haese, B.; Hörning, S.; Chwala, C.; Bárdossy, A.; Schalge, B.; Kunstmann, H.

    2017-12-01

    For the reconstruction and interpolation of precipitation fields, we present the application of a stochastic approach called Random Mixing. Generated fields are based on a data set consisting of rain gauge observations and path-averaged rain rates estimated from Commercial Microwave Link (CML) derived information. Precipitation fields are obtained as linear combinations of unconditional spatial random fields, where the spatial dependence structure is described by copulas. The weights of the linear combination are optimized such that the observations and the spatial structure of the precipitation observations are reproduced. The innovation of the approach is that this strategy enables the simulation of ensembles of precipitation fields of any size. Each ensemble member is in concordance with the observed path-averaged CML-derived rain rates and additionally reflects the observed rainfall variability along the CML paths. The ensemble spread additionally allows an estimation of the uncertainty of the reconstructed precipitation fields. The method is demonstrated both for a synthetic data set and a real-world data set in southern Germany. While the synthetic example allows an evaluation against a known reference, the second example demonstrates the applicability to real-world observations. The generated precipitation fields of both examples reproduce the spatial precipitation pattern well. A performance evaluation of Random Mixing compared to Ordinary Kriging demonstrates an improvement in the reconstruction of the observed spatial variability. Random Mixing is concluded to be a beneficial new approach for the provision of precipitation fields and ensembles of them, in particular when different measurement types are combined.
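
    A toy illustration of the linear-combination idea: build a few smooth unconditional random fields and solve for mixing weights so that the combination honours point observations at gauge locations. The grid size, correlation length and least-squares weight solve are arbitrary stand-ins; the actual Random Mixing method works with copula-based fields and also constrains the CML path averages.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
ny, nx, n_fields = 50, 50, 30

# Smooth unconditional random fields (toy stand-in for copula-based fields)
fields = np.stack([gaussian_filter(rng.standard_normal((ny, nx)), sigma=5)
                   for _ in range(n_fields)])

# Hypothetical rain-gauge observations at a few grid cells: (row, col, value)
gauges = [(10, 12, 1.4), (25, 40, 0.2), (44, 5, 3.1)]
G = np.array([fields[:, r, c] for r, c, _ in gauges])    # (n_obs, n_fields)
z = np.array([v for _, _, v in gauges])

# Mixing weights such that the linear combination honours the observations
w, *_ = np.linalg.lstsq(G, z, rcond=None)
field = np.tensordot(w, fields, axes=1)                  # conditioned field
print([round(field[r, c], 2) for r, c, _ in gauges])     # ~ observed values
```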

  4. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  5. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the viewpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the differences between imaging analysis methods based on geometric optics and physical optics are also shown in simulations. (paper)

  6. Adaptive Pulsed Laser Line Extraction for Terrain Reconstruction using a Dynamic Vision Sensor

    Directory of Open Access Journals (Sweden)

    Christian eBrandli

    2014-01-01

    Full Text Available Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions at pulsing frequencies of up to 500 Hz were achieved using a 3 mW line laser at a distance of 45 cm, using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid-prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm.

  7. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
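
    A sketch of the homography step: corresponding feet points collected in two overlapping views yield a plane-induced homography (robustly, via RANSAC), which can then transfer a target's feet location between cameras. The point lists below are placeholders rather than tracked data.

```python
import numpy as np
import cv2

# Hypothetical corresponding feet locations "dropped" by tracked targets
feet_cam1 = (np.random.rand(30, 1, 2) * [640, 480]).astype(np.float32)
feet_cam2 = (feet_cam1 * 0.9 + 20.0).astype(np.float32)   # stand-in for view 2

# Plane-induced homography, robust to mistracked points via RANSAC
H, inlier_mask = cv2.findHomography(feet_cam1, feet_cam2, cv2.RANSAC, 3.0)

# Transfer a new target's feet position from camera 1 into camera 2
new_feet = np.array([[[320.0, 400.0]]], dtype=np.float32)
feet_in_cam2 = cv2.perspectiveTransform(new_feet, H)
```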

  8. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  9. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera

  10. Decision about buying a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ganatra, R D

    1993-12-31

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera 1 tab., 1 fig

  11. Path Creation, Path Dependence and Breaking Away from the Path: Re-Examining the Case of Nokia

    OpenAIRE

    Wang, Jens; Hedman, Jonas; Tuunainen, Virpi Kristiina

    2016-01-01

    The explanation of how and why firms succeed or fail is a recurrent research challenge. This is particularly important in the context of technological innovations. We focus on the role of historical events and decisions in explaining such success and failure. Using a case study of Nokia, we develop and extend a multi-layer path dependence framework. We identify four layers of path dependence: technical, strategic and leadership, organizational, and external collaboration. We show how path dep...

  12. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  13. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  14. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  15. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line array detector. The recent development of the X-ray flat panel detector has made fast CT imaging feasible and practical. This paper therefore explains the arrangement of a new detection system, which uses the existing high-resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat-panel-detector-based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. Hence this project is divided into two major tasks: firstly to develop the image reconstruction algorithm, and secondly to integrate the X-ray imaging components into one CT system. An image reconstruction algorithm using the filtered back-projection method is developed and compared to other techniques. MATLAB is the tool used for the simulations and computations in this project. (Author)
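
    A minimal filtered back-projection example on a simulated sinogram, shown here with scikit-image in Python as an illustrative stand-in for the authors' MATLAB implementation.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)             # test object
angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles (deg)

sinogram = radon(image, theta=angles)                    # simulated projections
fbp = iradon(sinogram, theta=angles, filter_name="ramp") # filtered back-projection

rms_error = np.sqrt(np.mean((fbp - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```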

  16. Path-dependent functions

    International Nuclear Information System (INIS)

    Khrapko, R.I.

    1985-01-01

    A uniform description of various path-dependent functions is presented with the help of an expansion of the type of the Taylor series. So-called ''path-integrals'' and ''path-tensors'' are introduced, which are systems of many-component quantities whose values are defined for arbitrary paths in a coordinated region of space in such a way that they contain complete information on the path. These constructions are considered as elementary path-dependent functions and are used instead of power monomials in the usual Taylor series. Coefficients of such an expansion are interpreted as partial derivatives dependent on the order of the differentiations, or else as nonstandard covariant derivatives called two-point derivatives. Some examples of path-dependent functions are presented. A space curvature tensor is considered whose geometric properties are determined by the (non-transitive) translator of parallel transport of a general type. A covariant operation leading to the ''extension'' of tensor fields is pointed out

  17. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities and operates on the four head position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by the nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization and subsequent processing of the energy signal in a multichannel analyzer, transfer of the data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  18. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
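
    The standard single-camera calibration workflow the abstract refers to can be sketched with OpenCV: detect chessboard corners in several views and estimate the intrinsic and extrinsic parameters. The image folder and board geometry below are hypothetical, and this Python sketch only parallels the Matlab implementation mentioned in the record.

```python
import glob
import numpy as np
import cv2

board = (9, 6)            # inner-corner grid of a hypothetical chessboard
square = 25.0             # square size in mm
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):             # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Intrinsics (camera matrix, distortion) and per-view extrinsics (rvecs, tvecs)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("re-projection RMS error:", rms)
```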

  19. RELATIVE AND ABSOLUTE CALIBRATION OF A MULTIHEAD CAMERA SYSTEM WITH OBLIQUE AND NADIR LOOKING CAMERAS FOR A UAS

    Directory of Open Access Journals (Sweden)

    F. Niemeyer

    2013-08-01

    Full Text Available Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir looking cameras is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. A MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of the test flights.

  20. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  1. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed to operate in previously unknown, GPS-denied environments. It proposes novel electronics, aircraft, aircraft systems, procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  2. DMSA SPECT imaging using oblique reconstruction in a paediatric population - benefits and technical considerations

    International Nuclear Information System (INIS)

    Parsons, G.; Ford, M.; Crisp, J.; Bernard, E.; Howman-Giles, R.

    1997-01-01

    Full text: DMSA renal scans are frequently requested for the diagnosis and follow-up of acute pyelonephritis and cortical scarring. This study was designed to: 1. evaluate oblique reconstruction of DMSA SPECT over standard plane reconstruction and planar imaging; and 2. report on the technical aspects important in obtaining high quality DMSA SPECT, particularly in neonates. Over seven months, 210/231 (91%) of DMSA scans were performed with SPECT on children from age nine days to 16 years, the median age being 2.5 years. 65 patients (31%) were under one year and 39 (18%) were under six months. Planar and SPECT imaging with standard plane reconstruction and oblique reorientation was performed on the Siemens triple-headed gamma camera. High quality SPECT images were obtained on the smallest babies using a paediatric palette, and were of comparable quality to those of older children. At the time of reporting, the nuclear medicine physician assessed the diagnostic value of the three types of data presented: (1) planar images; (2) standard plane SPECT reconstruction; and (3) oblique SPECT reconstruction. Cortical defects were identified separately for upper, middle and lower poles. Three physicians concluded that high quality SPECT is superior to planar images when assessing the renal cortex. In addition, oblique reorientation is superior to standard reconstruction, particularly at the upper and lower poles. SPECT is now performed routinely on patients of all ages, and the oblique sagittal and coronal reorientation is now used in place of the standard reconstruction

  3. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi; Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

    Working in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities and medical centers requires that radiation exposure be taken into account, and such jobs can be carried out by remote observation and operation. However, the cameras used in general industry are vulnerable to radiation, so radiation-tolerant cameras are needed for radiation environments. Applications of radiation-tolerant camera systems include the nuclear industry, radioactive medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for the inspection of nuclear boilers, the exchange of pellets and the inspection of nuclear waste. The nuclear developed countries have made efforts to develop radiation-tolerant cameras, and they now have many kinds of radiation-tolerant cameras which can tolerate a total dose of 10^6-10^8 rad. In this report, we examine the state of the art in radiation-tolerant cameras and analyze the technology. We hope this report raises interest in developing radiation-tolerant cameras and helps upgrade the level of domestic technology.

  4. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
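
    The image quality metrics named above can be computed straightforwardly from repeated reconstructions of a known phantom; a minimal sketch with placeholder arrays:

```python
import numpy as np

def image_quality_metrics(reconstructions, truth):
    """reconstructions: (n_replicates, ...) repeated reconstructions of the
    same phantom; truth: the known activity map."""
    mean_img = reconstructions.mean(axis=0)
    bias = mean_img - truth
    variance = reconstructions.var(axis=0)
    mse = np.mean((reconstructions - truth) ** 2)   # average mean squared error
    return bias, variance, mse

# Hypothetical phantom and noisy replicate reconstructions
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
recs = truth + 0.1 * np.random.randn(50, 64, 64)
bias, var, mse = image_quality_metrics(recs, truth)
print(f"mean |bias| = {np.abs(bias).mean():.4f}, average MSE = {mse:.4f}")
```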

  5. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  6. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    McMakin, A.H.; Cannon, S.D.; Finch, S.M.

    1992-07-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The Technical Steering Panel (TSP) directing the project consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon, Washington, and Idaho, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; and environmental pathways and dose estimates. Progress is discussed

  7. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  8. Regional cerebral blood flow measurement using N-isopropyl-p-[123I] iodoamphetamine and rotating gamma camera emission computed tomography

    International Nuclear Information System (INIS)

    Matsuda, Hiroshi; Seki, Hiroyasu; Ishida, Hiroko

    1985-01-01

    Thirty-one regional cerebral blood flow (rCBF) measurements were performed on 26 patients with cerebrovascular accidents using N-isopropyl-p-[123I]iodoamphetamine (123I-IMP) and rotating gamma camera emission computed tomography (ECT). The equation for determining rCBF is F = 100·R·Cb/(N·A), where F is rCBF in ml/100 g/min, R is the constant withdrawal rate of arterial blood in ml/min, Cb is the brain activity concentration in μCi/g, A is the total activity (5 min) in the withdrawn arterial whole blood in μCi, and N is the fraction of A that is true tracer activity (0.75). In determining Cb at 5 min after injection, reconstructed counts from 35 min to 59 min were corrected to represent those from 4 min to 5 min using the time-activity curve for the entire brain from immediately after injection to 30 min. Reconstructed counts of the central region in the tomographic image were corrected to 118% of the obtained values, based on the counting-rate ratio between peripheral and central regions of interest obtained from a phantom study. Brain mean blood flow values ranged from 11 to 39 ml/100 g/min. In 119 cortical regions obtained from 11 measurements in 9 patients, there was a significant correlation (r = 0.41) between rCBF values obtained with 123I-IMP and rotating gamma camera ECT and those from the 133Xe inhalation method. rCBF measurement using 123I-IMP and rotating gamma camera ECT is not only a relatively noninvasive measurement for the entire brain but also allows three-dimensional evaluation. Moreover, it is superior in spatial resolution and accuracy to the conventional 133Xe clearance method. (author)
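
    A quick numeric illustration of the quoted flow equation F = 100·R·Cb/(N·A); the input values below are invented solely to show how the quantities combine, not data from the study.

```python
def rcbf(R, Cb, A, N=0.75):
    """F = 100 * R * Cb / (N * A)  [ml/100 g/min]
    R: arterial withdrawal rate (ml/min), Cb: brain activity concentration (uCi/g),
    A: total activity in the withdrawn blood over 5 min (uCi), N: true-tracer fraction."""
    return 100.0 * R * Cb / (N * A)

# Hypothetical values, only to show how the quantities combine
print(f"{rcbf(R=1.0, Cb=0.9, A=4.0):.1f} ml/100 g/min")   # -> 30.0
```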

  9. Dense Matching Comparison Between Census and a Convolutional Neural Network Algorithm for Plant Reconstruction

    Science.gov (United States)

    Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.

    2018-05-01

    3D reconstruction of plants is difficult to implement because the complex distribution of leaves greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but it may perform variably depending on the matching cost algorithm used. In this paper two matching cost computation algorithms, the Census transform and an algorithm based on a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable and of acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.
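
    For readers unfamiliar with the Census matching cost, the sketch below implements a basic 3x3 Census transform in NumPy; the window size and usage are illustrative choices, not the exact settings of the paper.

      # 3x3 Census transform: each pixel becomes an 8-bit code of comparisons with
      # its neighbours; the matching cost is the Hamming distance between codes.
      import numpy as np

      def census_transform(img: np.ndarray) -> np.ndarray:
          """Return an 8-bit Census code per pixel of a 2-D grayscale image."""
          h, w = img.shape
          padded = np.pad(img, 1, mode="edge")
          centre = padded[1:-1, 1:-1]
          offsets = [(-1, -1), (-1, 0), (-1, 1),
                     (0, -1),           (0, 1),
                     (1, -1),  (1, 0),  (1, 1)]
          census = np.zeros((h, w), dtype=np.uint8)
          for bit, (dy, dx) in enumerate(offsets):
              neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
              census |= (neighbour < centre).astype(np.uint8) << bit
          return census

      # Semi-Global Matching then aggregates these per-pixel Hamming costs along
      # several scan directions to produce the disparity map.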

  10. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    Allemand, R.; Bourdel, J.; Gariod, R.; Laval, M.; Levy, G.; Thomas, G.

    1975-01-01

    The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and the image amplification camera are at present the two main instruments with which acceptable spatial and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera, the electron amplifier tube camera using a semiconductor target, and CdTe and HgI2 detectors. [fr]

  11. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  12. Technical quality assessment of an optoelectronic system for movement analysis

    International Nuclear Information System (INIS)

    Di Marco, R; Patanè, F; Cappa, P; Rossi, S

    2015-01-01

    The Optoelectronic Systems (OS) are largely used in gait analysis to evaluate the motor performance of healthy subjects and patients. The accuracy of marker trajectory reconstruction depends on several aspects: the number of cameras, the dimension and position of the calibration volume, and the chosen calibration procedure. In this paper we propose a methodology to evaluate the effects of the mentioned sources of error on the reconstruction of marker trajectories. The novel contribution of the present work consists in the dimension of the tested calibration volumes, which is comparable with the ones normally used in gait analysis; in addition, to simulate trajectories during clinical gait analysis, we provide non-default paths for markers as inputs. Several calibration procedures are implemented and the same trial is processed with each calibration file, also considering different camera configurations. The RMSEs between the measured trajectories and the optimal ones are calculated for each comparison. To investigate the significant differences between the computed indices, an ANOVA analysis is implemented. The RMSE is sensitive to the variations of the considered calibration volume and the camera configurations, and it is always below 43 mm.
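
    The RMSE index used for the comparisons is straightforward to compute. A minimal sketch, assuming each marker trajectory is stored as an N x 3 array of positions in millimetres (array names and data are hypothetical):

      # RMSE between a reconstructed marker trajectory and its reference path.
      import numpy as np

      def trajectory_rmse(measured: np.ndarray, reference: np.ndarray) -> float:
          """Both inputs are (N, 3) arrays of marker positions, in mm."""
          errors = np.linalg.norm(measured - reference, axis=1)   # per-sample 3-D error
          return float(np.sqrt(np.mean(errors ** 2)))

      # Synthetic illustration: a noisy copy of a straight-line marker path.
      t = np.linspace(0.0, 1.0, 200)
      reference = np.column_stack([1000.0 * t, 50.0 * t, np.full_like(t, 800.0)])
      measured = reference + np.random.normal(scale=0.5, size=reference.shape)
      print(f"RMSE = {trajectory_rmse(measured, reference):.2f} mm")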

  13. Errors and Uncertainties in Dose Reconstruction for Radiation Effects Research

    Energy Technology Data Exchange (ETDEWEB)

    Strom, Daniel J.

    2008-04-14

    Dose reconstruction for studies of the health effects of ionizing radiation has been carried out for many decades. Major studies have included Japanese bomb survivors, atomic veterans, downwinders of the Nevada Test Site and Hanford, underground uranium miners, and populations of nuclear workers. For such studies to be credible, significant effort must be put into applying the best science to reconstructing unbiased absorbed doses to tissues and organs as a function of time. In many cases, more and more sophisticated dose reconstruction methods have been developed as studies progressed. For the example of the Japanese bomb survivors, the dose surrogate “distance from the hypocenter” was replaced by slant range, and then by T65D doses, DS86 doses, and more recently DS02 doses. Over the years, it has become increasingly clear that an equal level of effort must be expended on the quantitative assessment of uncertainty in such doses, and on reducing and managing uncertainty. In this context, this paper reviews difficulties in terminology, explores the nature of Berkson and classical uncertainties in dose reconstruction through examples, and proposes a path forward for Joint Coordinating Committee for Radiation Effects Research (JCCRER) Project 2.4 that requires a reasonably small level of effort for DOSES-2008.
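
    The distinction between classical and Berkson uncertainty mentioned above can be illustrated with a toy simulation. The sketch below is only a schematic contrast of the two error models, not the dose-reconstruction method of the project; all numbers are invented.

      # Classical error: the recorded dose scatters around each person's true dose.
      # Berkson error: a shared assigned dose is used, and true doses scatter around it.
      import numpy as np

      rng = np.random.default_rng(0)
      true_dose = rng.uniform(0.0, 100.0, size=10_000)               # mGy, hypothetical

      recorded = true_dose + rng.normal(0.0, 10.0, true_dose.size)   # classical model

      assigned = np.round(true_dose, -1)                             # crude group assignment
      true_given_assigned = assigned + rng.normal(0.0, 10.0, assigned.size)  # Berkson model

      # Classical error inflates the spread of the recorded values relative to truth;
      # Berkson error leaves the assigned values less variable than the truth.
      print(np.std(recorded) > np.std(true_dose))             # True
      print(np.std(assigned) < np.std(true_given_assigned))   # True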

  14. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras stand on a hexapod mount that is fully capable of sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod also allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself using observed stars, independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics were designed in-house at Konkoly Observatory. Currently, our instrument is in its testing phase with an operating hexapod and a reduced number of cameras.

  15. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera and a unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.
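
    The stereo branch of the system relies on a standard disparity map. A minimal OpenCV sketch of that step follows; the file names and block-matching parameters are placeholders, not the settings of the deployed sensor platform.

      # Disparity from a rectified stereo pair; nearby objects such as a rider
      # appear as compact regions of large disparity that can be windowed out.
      import cv2

      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)      # placeholder inputs
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

      matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
      disparity = matcher.compute(left, right).astype("float32") / 16.0  # to pixels

      # Keep only close-range pixels as candidate rider regions.
      _, near_mask = cv2.threshold(disparity, 20.0, 255.0, cv2.THRESH_BINARY)
      cv2.imwrite("near_mask.png", near_mask)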

  16. STAR reconstruction improvements for tracking with the heavy flavor tracker

    Science.gov (United States)

    Webb, Jason C.; Lauret, Jérôme; Perevotchikov, Victor; Smirnov, Dmitri; Van Buren, Gene

    2017-10-01

    The reconstruction and identification of charmed hadron decays provides an important tool for the study of heavy quark behavior in the Quark Gluon Plasma. Such measurements require high resolution to topologically identify decay daughters at vertices displaced from the primary collision vertex, which places stringent demands on track reconstruction software. To enable these measurements at RHIC, the STAR experiment has designed and employed the Heavy Flavor Tracker (HFT). It is composed of silicon-based tracking detectors, providing four layers of high-precision position measurements which are used in combination with hits from the Time Projection Chamber (TPC) to reconstruct track candidates. The STAR integrated tracking software (Sti) has delivered a decade of world-class physics. It was designed to leverage the discrete azimuthal symmetry of the detector and its simple radial ordering of components, permitting a flat representation of the detector geometry in terms of concentric cylinders and planes, and an approximate track propagation code. These design choices reflected a careful balancing of competing priorities, trading precision for speed in track reconstruction. To simplify the task of integrating new detectors, tools were developed to automatically generate the Sti geometry model, tying both reconstruction and simulation to the single-source AgML geometry model. The increased precision and complexity of the HFT detector required a careful reassessment of this single geometry path and of the implementation choices. In this paper we will discuss the test suite and regression tools developed to improve reconstruction with the HFT, our lessons learned in tracking with high-precision detectors, and the tradeoffs between precision, speed and ease of use which were required.

  17. Portraiture lens concept in a mobile phone camera

    Science.gov (United States)

    Sheil, Conor J.; Goncharov, Alexander V.

    2017-11-01

    A small form-factor lens was designed for the purpose of portraiture photography, the size of which allows use within a smartphone casing. The current general requirement that mobile cameras perform well in all respects results in a typical, familiar, many-element design. Such designs have little room for improvement, in terms of the available degrees of freedom and highly demanding target metrics such as low f-number and wide field of view. However, the specific application of the current portraiture lens relaxed the requirement of an all-round high-performing lens, allowing improvement of certain aspects at the expense of others. With the main emphasis on reducing depth of field (DoF), the current design takes advantage of the simple geometrical relationship between DoF and pupil diameter. The system has a large aperture, and keeping a reasonable f-number therefore requires a relatively large focal length, which in turn calls for a catadioptric lens design with a double ray path; hence, the field of view is reduced. Compared to typical mobile lenses, the large aperture diameter reduces depth of field by a factor of four.
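
    The geometrical relationship exploited here can be made concrete with the usual thin-lens depth-of-field approximation. A sketch under assumed example values (the focal length, f-numbers, subject distance and circle of confusion are illustrative, not the designed lens parameters):

      # Thin-lens depth-of-field sketch: DoF shrinks as the aperture grows.
      def total_dof(f_mm: float, n: float, s_mm: float, c_mm: float) -> float:
          """Approximate DoF for focal length f, f-number n, subject distance s,
          and circle of confusion c (all lengths in mm); valid while s is below
          the hyperfocal distance."""
          h = f_mm ** 2 / (n * c_mm) + f_mm            # hyperfocal distance
          near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
          far = s_mm * (h - f_mm) / (h - s_mm)
          return far - near

      # Portrait distance of 1.5 m with a 0.01 mm circle of confusion:
      print(total_dof(f_mm=26.0, n=2.0, s_mm=1500.0, c_mm=0.01))   # ~131 mm (wide aperture)
      print(total_dof(f_mm=26.0, n=8.0, s_mm=1500.0, c_mm=0.01))   # ~540 mm (narrow aperture)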

  18. The path of Equal Opportunity at the University of Salerno (1991-2011)

    Directory of Open Access Journals (Sweden)

    Maria Rosaria Pelizzari

    2012-10-01

    Full Text Available The subject of this paper is the story of a policy of small steps, implemented by a group of women engaged in promoting a culture of equal opportunities at the University of Salerno. The reconstruction of the history of the bodies delegated to Equal Opportunities policies is based, first, on the recollections of the author, who was one of the participants in the activities described; it usefully reconstructs a path whose memory might otherwise be lost. Events, people and institutions are presented. This context is of particular importance for the establishment, in 2009, of the Documentation Centre on Gender and Equal Opportunities and, in 2011, of the OGEPO (Observatory for the diffusion of Gender Studies and the culture of Equal Opportunity), which is linked to a web forum providing, among other things, information and advice on reconciling work and family life, labour law, and career issues.

  19. Reconstruction of multiple-pinhole micro-SPECT data using origin ensembles.

    Science.gov (United States)

    Lyon, Morgan C; Sitek, Arkadiusz; Metzler, Scott D; Moore, Stephen C

    2016-10-01

    The authors are currently developing a dual-resolution multiple-pinhole microSPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time such that data need only be acquired until the area of interest can be visualized well-enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space, and can provide estimates of image uncertainty, along with reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. The OE algorithm convergence was evaluated by calculating the total-image entropy and by measuring the counts in a volume-of-interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. For VOI measurements in the heart, liver, bladder, and soft-tissue, MLEM and OE reconstructed images agreed within 6%. Image entropy converged after ∼2000 iterations of OE, while the counts in the heart converged earlier at ∼200 iterations of OE. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M count data set, with some loss of image entropy performance, whereas the same dataset required ∼79 min to complete 1000 iterations of conventional OE. A combination of the two
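
    The total-image entropy quoted above as a convergence measure is simple to compute once the reconstructed image is viewed as the histogram of event origins over voxels. A minimal sketch of the metric only (the count volume is synthetic, and the OE sampler itself is not reproduced here):

      # Total-image entropy of an origin-ensemble reconstruction: entropy of the
      # normalised voxel counts, tracked across OE iterations until it plateaus.
      import numpy as np

      def image_entropy(voxel_counts: np.ndarray) -> float:
          counts = voxel_counts[voxel_counts > 0].astype(float)
          p = counts / counts.sum()
          return float(-np.sum(p * np.log(p)))

      rng = np.random.default_rng(1)
      counts = rng.poisson(lam=2.0, size=(64, 64, 64))   # synthetic count volume
      print(f"entropy = {image_entropy(counts.ravel()):.3f}")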

  20. The impact of reconstruction method on the quantification of DaTSCAN images

    Energy Technology Data Exchange (ETDEWEB)

    Dickson, John C.; Erlandsson, Kjell; Hutton, Brian F. [UCLH NHS Foundation Trust and University College London, Institute of Nuclear Medicine, London (United Kingdom); Tossici-Bolt, Livia [Southampton University Hospitals NHS Trust, Department of Medical Physics, Southampton (United Kingdom); Sera, Terez [University of Szeged, Department of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Varrone, Andrea [Psychiatry Section and Stockholm Brain Institute, Karolinska Institute, Department of Clinical Neuroscience, Stockholm (Sweden); Tatsch, Klaus [EANM/European Network of Excellence for Brain Imaging, Vienna (Austria)

    2010-01-15

    Reconstruction of DaTSCAN brain studies using OS-EM iterative reconstruction offers better image quality and more accurate quantification than filtered back-projection. However, reconstruction must proceed for a sufficient number of iterations to achieve stable and accurate data. This study assessed the impact of the number of iterations on the image quantification, comparing the results of the iterative reconstruction with filtered back-projection data. A striatal phantom filled with {sup 123}I using striatal to background ratios between 2:1 and 10:1 was imaged on five different gamma camera systems. Data from each system were reconstructed using OS-EM (which included depth-independent resolution recovery) with various combinations of iterations and subsets to achieve up to 200 EM-equivalent iterations and with filtered back-projection. Using volume of interest analysis, the relationships between image reconstruction strategy and quantification of striatal uptake were assessed. For phantom filling ratios of 5:1 or less, significant convergence of measured ratios occurred close to 100 EM-equivalent iterations, whereas for higher filling ratios, measured uptake ratios did not display a convergence pattern. Assessment of the count concentrations used to derive the measured uptake ratio showed that nonconvergence of low background count concentrations caused peaking in higher measured uptake ratios. Compared to filtered back-projection, OS-EM displayed larger uptake ratios because of the resolution recovery applied in the iterative algorithm. The number of EM-equivalent iterations used in OS-EM reconstruction influences the quantification of DaTSCAN studies because of incomplete convergence and possible bias in areas of low activity due to the nonnegativity constraint in OS-EM reconstruction. Nevertheless, OS-EM using 100 EM-equivalent iterations provides the best linear discriminatory measure to quantify the uptake in DaTSCAN studies. (orig.)
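
    For orientation, the update behind the EM-equivalent iteration counts above is the standard MLEM step; OS-EM applies the same step to one subset of projections at a time, so EM-equivalent iterations equal iterations times subsets. A minimal dense-matrix sketch follows (the array sizes are hypothetical, and the resolution recovery used in the study is not modelled):

      # One MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x)).
      import numpy as np

      def mlem_iteration(x, A, y, eps=1e-12):
          """x: image estimate (n_voxels,), A: system matrix (n_bins, n_voxels),
          y: measured projections (n_bins,). Non-negativity is preserved by design."""
          sensitivity = A.sum(axis=0)                 # A^T 1
          forward = A @ x                             # expected projections
          ratio = y / np.maximum(forward, eps)
          return x * (A.T @ ratio) / np.maximum(sensitivity, eps)

      # Tiny noise-free toy system (2 voxels, 3 bins) to show convergence:
      A = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
      x_true = np.array([4.0, 1.0])
      y = A @ x_true
      x = np.ones(2)
      for _ in range(200):
          x = mlem_iteration(x, A, y)
      print(x)   # approaches x_true for this consistent toy example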