WorldWideScience

Sample records for camera path reconstruction

  1. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
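
A rough illustration of the path-smoothing idea: the sketch below minimizes a data-fidelity plus temporal total-variation objective over one translation component of the camera path. It is a minimal NumPy sketch under assumed settings, not the authors' particle-keypoint pipeline or their l1-optimized solver, and the names (smooth_camera_path, lam, step) are hypothetical.

```python
import numpy as np

def smooth_camera_path(raw_path, lam=2.0, iters=1000, step=0.1, eps=1e-6):
    """Minimal sketch (assumed, not the paper's solver): smooth a 1D camera-path
    signal by gradient descent on the objective
        E(p) = ||p - raw||^2 + lam * sum_t |p[t+1] - p[t]|.
    The absolute value is smoothed with eps to keep the gradient defined."""
    p = raw_path.astype(float)
    for _ in range(iters):
        diff = np.diff(p)                          # p[t+1] - p[t]
        w = diff / np.sqrt(diff**2 + eps)          # smoothed derivative of |d|
        grad_tv = np.zeros_like(p)
        grad_tv[:-1] -= w                          # contribution w.r.t. p[t]
        grad_tv[1:] += w                           # contribution w.r.t. p[t+1]
        grad = 2.0 * (p - raw_path) + lam * grad_tv
        p = p - step * grad
    return p

# Toy usage: a jittery accumulated horizontal-translation path.
rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(0, 1.0, 200))           # shaky camera path
stable = smooth_camera_path(raw)
```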

  2. A filtered backprojection reconstruction algorithm for Compton camera

    Energy Technology Data Exchange (ETDEWEB)

    Lojacono, Xavier; Maxim, Voichita; Peyrin, Francoise; Prost, Remy [Lyon Univ., Villeurbanne (France). CNRS, Inserm, INSA-Lyon, CREATIS, UMR5220; Zoglauer, Andreas [California Univ., Berkeley, CA (United States). Space Sciences Lab.

    2011-07-01

    In this paper we present a filtered backprojection reconstruction algorithm for Compton camera detectors of particles. Compared to iterative methods, widely used for the reconstruction of images from Compton camera data, analytical methods are fast, easy to implement and avoid convergence issues. The method we propose is exact for an idealized Compton camera composed of two parallel plates of infinite dimension. We show that it copes well with a low number of detected photons simulated from a realistic device. Images reconstructed from both synthetic data and realistic ones obtained with Monte Carlo simulations demonstrate the efficiency of the algorithm. (orig.)

  3. Direct cone beam SPECT reconstruction with camera tilt

    International Nuclear Information System (INIS)

    Jianying Li; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.; Zongjian Cao; Tsui, B.M.W.

    1993-01-01

    A filtered backprojection (FBP) algorithm is derived to perform cone beam (CB) single-photon emission computed tomography (SPECT) reconstruction with camera tilt using circular orbits. This algorithm reconstructs the tilted angle CB projection data directly by incorporating the tilt angle into it. When the tilt angle becomes zero, this algorithm reduces to that of Feldkamp. Experimentally acquired phantom studies using both a two-point source and the three-dimensional Hoffman brain phantom have been performed. The transaxial tilted cone beam brain images and profiles obtained using the new algorithm are compared with those without camera tilt. For those slices which have approximately the same distance from the detector in both tilt and non-tilt set-ups, the two transaxial reconstructions have similar profiles. The two-point source images reconstructed from this new algorithm and the tilted cone beam brain images are also compared with those reconstructed from the existing tilted cone beam algorithm. (author)

  4. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.
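
One way to obtain the per-point uncertainties that drive such guidance is to propagate image noise through the reprojection Jacobian of each triangulated point, Cov ≈ σ² (JᵀJ)⁻¹. The sketch below illustrates only this single-point covariance step under assumed pinhole cameras; it is not the paper's sliding bundle adjustment, and all names are hypothetical.

```python
import numpy as np

def project(P, X):
    """Pinhole projection of a 3D point X with a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def point_covariance(P_list, X, pixel_sigma=0.5, h=1e-6):
    """Minimal sketch (assumed): Gauss-Newton covariance of a triangulated
    point, Cov ≈ sigma^2 * (J^T J)^-1, with J the stacked reprojection
    Jacobians obtained here by finite differences."""
    rows = []
    for P in P_list:
        J = np.zeros((2, 3))
        base = project(P, X)
        for k in range(3):                       # numerical Jacobian
            Xp = X.copy(); Xp[k] += h
            J[:, k] = (project(P, Xp) - base) / h
        rows.append(J)
    J = np.vstack(rows)
    return pixel_sigma**2 * np.linalg.inv(J.T @ J)

# Toy usage: two cameras separated by a small baseline observe one point.
K = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
cov = point_covariance([P1, P2], np.array([0.1, 0.2, 4.0]))
print(np.sqrt(np.diag(cov)))   # 1-sigma uncertainty per axis
```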

  5. Image reconstruction methods for the PBX-M pinhole camera

    International Nuclear Information System (INIS)

    Holland, A.; Powell, E.T.; Fonck, R.J.

    1990-03-01

    This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs

  6. Image reconstruction from limited angle Compton camera data

    International Nuclear Information System (INIS)

    Tomitani, T.; Hirasawa, M.

    2002-01-01

    The Compton camera is used for imaging the directional distribution of γ rays in γ-ray telescopes for astrophysics and for imaging radioisotope distributions in nuclear medicine, without the need for collimators. The camera measures integrals of the γ-ray flux over cones, so an inversion method is needed. Parra found an analytical inversion algorithm based on a spherical harmonics expansion of the projection data; his algorithm is applicable to the full set of projection data. In this paper, six possible reconstruction algorithms that allow image reconstruction from projections with a finite range of scattering angles are investigated. Four algorithms have instability problems and the two others are practical. However, the variance of the reconstructed image diverges in these two cases, so window functions are introduced with which the variance becomes finite at the cost of spatial resolution. The two algorithms are compared in terms of variance. The algorithm based on the inversion of the summed back-projection is superior to the algorithm based on the inversion of the summed projection. (author)

  7. Filtered backprojection proton CT reconstruction along most likely paths

    Energy Technology Data Exchange (ETDEWEB)

    Rit, Simon; Dedes, George; Freud, Nicolas; Sarrut, David; Letang, Jean Michel [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Universite Lyon 1, Centre Leon Berard, 69008 Lyon (France)

    2013-03-15

    Purpose: Proton CT (pCT) has the potential to accurately measure the electron density map of tissues at low doses, but the spatial resolution is prohibitive if the curved paths of protons in matter are not accounted for. The authors propose to account for an estimate of the most likely path of protons in a filtered backprojection (FBP) reconstruction algorithm. Methods: The energy loss of protons is first binned in several proton radiographs at different distances from the proton source to exploit the depth-dependency of the estimate of the most likely path. This process is named distance-driven binning. A voxel-specific backprojection is then used to select the adequate radiograph in the distance-driven binning in order to propagate into the pCT image the best achievable spatial resolution in proton radiographs. The improvement in spatial resolution is demonstrated using Monte Carlo simulations of resolution phantoms. Results: The spatial resolution in the distance-driven binning depended on the distance of the objects from the source and was optimal in the binned radiograph corresponding to that distance. The spatial resolution in the reconstructed pCT images decreased with the depth in the scanned object, but it was always better than that of previous FBP algorithms assuming straight line paths. In a water cylinder with 20 cm diameter, the observed range of spatial resolutions was 0.7-1.6 mm compared to 1.0-2.4 mm at best with a straight line path assumption. The improvement was strongly enhanced in shorter 200° scans. Conclusions: Improved spatial resolution was obtained in pCT images with filtered backprojection reconstruction using most likely path estimates of protons. The improvement in spatial resolution combined with the practicality of FBP algorithms compared to iterative reconstruction algorithms makes this new algorithm a candidate of choice for clinical pCT.

  8. Superficial vessel reconstruction with a multiview camera system

    Science.gov (United States)

    Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

    We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide the deformation methods to compensate for the brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three mono-complementary metal-oxide semiconductor cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering where the 3-D points are compared to surfaces obtained using the thin-plate spline with decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation in virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean square error and mean error) are ∼1 mm. PMID:26759814

  9. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yunsu Bok

    2014-11-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  10. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetry is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and computing the relative orientation from the orientation of a single camera. Experimental results show the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation offers flexibility for dynamic motion analysis, being easier and more efficient.
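
The 3D conformal (seven-parameter similarity) transformation between two sets of matched target coordinates has a closed-form, SVD-based solution. The sketch below is an assumed Procrustes/Umeyama-style illustration of that step, not the authors' implementation; all names and the toy 1:20 scale factor are hypothetical.

```python
import numpy as np

def conformal_3d(src, dst):
    """Minimal sketch (assumed): estimate the 3D conformal (similarity)
    transform dst ≈ s * R @ src + t from matched point sets with the
    SVD-based orthogonal Procrustes solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # avoid reflections
    R = (U @ D @ Vt).T
    s = np.trace(np.diag(S) @ D) / np.sum(A**2)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy usage: recover a known scale/rotation/translation from noisy targets.
rng = np.random.default_rng(1)
src = rng.normal(size=(10, 3))
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
dst = 0.05 * src @ R_true.T + np.array([1.0, 2.0, 3.0]) + rng.normal(0, 1e-4, (10, 3))
s, R, t = conformal_3d(src, dst)
```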

  11. Preparation and tomographic reconstruction of an arbitrary single-photon path qubit

    International Nuclear Information System (INIS)

    Baek, So-Young; Kim, Yoon-Ho

    2011-01-01

    We report methods for preparation and tomographic reconstruction of an arbitrary single-photon path qubit. The arbitrary single-photon path qubit is prepared losslessly by passing the heralded single-photon state from spontaneous parametric down-conversion through a variable beam splitter. Quantum state tomography of the single-photon path qubit is implemented by introducing path-projection measurements based on first-order single-photon quantum interference. Using the state preparation and path-projection measurement methods for the single-photon path qubit, we demonstrate preparation and complete tomographic reconstruction of the single-photon path qubit with arbitrary purity.

  12. Open-path, closed-path and reconstructed aerosol extinction at a rural site.

    Science.gov (United States)

    Gordon, Timothy D; Prenni, Anthony J; Renfro, James R; McClure, Ethan; Hicks, Bill; Onasch, Timothy B; Freedman, Andrew; McMeeking, Gavin R; Chen, Ping

    2018-04-09

    The Handix Scientific Open-Path Cavity Ringdown Spectrometer (OPCRDS) was deployed during summer 2016 in Great Smoky Mountains National Park (GRSM). Extinction coefficients from the relatively new OPCRDS and from a more well-established extinction instrument agreed to within 7%. Aerosol hygroscopic growth (f(RH)) was calculated from the ratio of ambient extinction measured by the OPCRDS to dry extinction measured by a closed-path extinction monitor (Aerodyne's Cavity Attenuated Phase Shift Particulate Matter Extinction Monitor, CAPS PMex). Derived hygroscopicity (RH 1995 at the same site and time of year, which is noteworthy given the decreasing trend for organics and sulfate in the eastern U.S. However, maximum f(RH) values in 1995 were less than half as large as those recorded in 2016-possibly due to nephelometer truncation losses in 1995. Two hygroscopicity parameterizations were investigated using high time resolution OPCRDS+CAPS PMex data, and the K ext model was more accurate than the γ model. Data from the two ambient optical instruments, the OPCRDS and the open-path nephelometer, generally agreed; however, significant discrepancies between ambient scattering and extinction were observed, apparently driven by a combination of hygroscopic growth effects, which tend to increase nephelometer truncation losses and decrease sensitivity to the wavelength difference between the two instruments as a function of particle size. There was not a statistically significant difference in the mean reconstructed extinction values obtained from the original and the revised IMPROVE (Interagency Monitoring of Protected Visual Environments) equations. On average IMPROVE reconstructed extinction was ~25% lower than extinction measured by the OPCRDS, which suggests that the IMPROVE equations and 24-hr aerosol data are moderately successful in estimating current haze levels at GRSM. However, this conclusion is limited by the coarse temporal resolution and the low dynamic range of
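
As a sketch of the quantities discussed above: f(RH) is the ratio of ambient (open-path) to dry (closed-path) extinction, and a power-law γ parameterization is one commonly used fit. The snippet below is an assumed illustration with synthetic numbers, not the study's processing code; the reference RH and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def f_rh(ambient_ext, dry_ext):
    """Hygroscopic growth factor as the ratio of ambient to dry extinction."""
    return ambient_ext / dry_ext

def gamma_model(rh, gamma, rh_dry=20.0):
    """Commonly used gamma power-law: f(RH) = ((100 - RH_dry)/(100 - RH))**gamma."""
    return ((100.0 - rh_dry) / (100.0 - rh)) ** gamma

# Toy usage with synthetic data (values are illustrative only).
rh = np.linspace(30, 90, 50)
frh_obs = gamma_model(rh, 0.5) * (1 + np.random.default_rng(2).normal(0, 0.02, rh.size))
gamma_fit, _ = curve_fit(lambda r, g: gamma_model(r, g), rh, frh_obs, p0=[0.4])
print(gamma_fit)
```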

  13. Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery

    Directory of Open Access Journals (Sweden)

    Marzi Christian

    2017-09-01

    Future fully digital surgical visualization systems enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today's surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow replacing the eyepieces by cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene. Therefore, covered surfaces can result in a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras recording the object from different angles, additional information about the scene is acquired, allowing the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture enhanced 3D topography of a pseudo-surgical scene. This experimental setup would provide images for the reconstruction algorithms and for the generation of multiple observing stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature. Furthermore, the CMO principle allows a more compact design and a lower calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, which are recorded by one camera each. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, allowing images of six different stereo perspectives to be processed. In order to verify the setup, it is modelled in silico. It can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and to provide information about the impact of additionally recorded perspectives on the enhancement of a reconstruction.

  14. Fast image reconstruction for Compton camera using stochastic origin ensemble approach.

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2011-01-01

    The Compton camera has been proposed as a potential imaging tool in astronomy, industry, homeland security, and medical diagnostics. Due to the inherent geometrical complexity of Compton camera data, image reconstruction of distributed sources can be ineffective and/or time-consuming when using standard techniques such as filtered backprojection or maximum likelihood-expectation maximization (ML-EM). In this article, the authors demonstrate a fast reconstruction of Compton camera data using a novel stochastic origin ensembles (SOE) approach based on Markov chains. During image reconstruction, the origins of the measured events are randomly assigned to locations on conical surfaces, which are the Compton camera analogs of lines-of-responses in PET. Therefore, the image is defined as an ensemble of origin locations of all possible event origins. During the course of reconstruction, the origins of events are stochastically moved and the acceptance of the new event origin is determined by the predefined acceptance probability, which is proportional to the change in event density. For example, if the event density at the new location is higher than in the previous location, the new position is always accepted. After several iterations, the reconstructed distribution of origins converges to a quasistationary state which can be voxelized and displayed. Comparison with the list-mode ML-EM reveals that the postfiltered SOE algorithm has similar performance in terms of image quality while clearly outperforming ML-EM in relation to reconstruction time. In this study, the authors have implemented and tested a new image reconstruction algorithm for the Compton camera based on the stochastic origin ensembles with Markov chains. The algorithm uses list-mode data, is parallelizable, and can be used for any Compton camera geometry. SOE algorithm clearly outperforms list-mode ML-EM for simple Compton camera geometry in terms of reconstruction time. The difference in computational time
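
The acceptance rule described above can be sketched as a Metropolis-like sweep over event origins. The code below is an assumed, simplified illustration: the candidate move is a plain Gaussian step standing in for a move constrained to the event's Compton cone, the event density is tracked on a coarse voxel grid, and all names are hypothetical.

```python
import numpy as np

def soe_sweep(origins, density, voxel_size, rng, sigma=5.0):
    """Minimal sketch (assumed, not the authors' implementation) of one
    stochastic-origin-ensembles sweep: each event origin gets a candidate
    position and the move is accepted with probability min(1, d_new/d_old),
    i.e. proportional to the change in local event density."""
    for i in range(origins.shape[0]):
        old = origins[i]
        new = old + rng.normal(0.0, sigma, size=3)        # candidate origin
        v_old = tuple((old // voxel_size).astype(int))
        v_new = tuple((new // voxel_size).astype(int))
        if v_old == v_new:
            continue
        d_old = density.get(v_old, 1)                     # includes this event
        d_new = density.get(v_new, 0) + 1                 # density if moved there
        if d_new >= d_old or rng.random() < d_new / d_old:
            density[v_old] = d_old - 1
            density[v_new] = d_new
            origins[i] = new

# Toy usage: random initial origins on a 2 mm voxel grid.
rng = np.random.default_rng(0)
origins = rng.normal(0, 10, size=(1000, 3))
density = {}
for o in origins:
    key = tuple((o // 2.0).astype(int))
    density[key] = density.get(key, 0) + 1
for _ in range(100):
    soe_sweep(origins, density, voxel_size=2.0, rng=rng)
```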

  15. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    Science.gov (United States)

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  16. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have achieved a large consensus for recreational purposes due to ongoing cost decrease, image resolution and frame rate increase, along with plug-and-play usability. Consequently, they have been recently considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of having a split volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging of simultaneous measurements acquired in both volumes. Characterizing and optimizing the instrumental errors of such a configuration makes the assessment of the instrumental errors of both volumes mandatory. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests were focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (true distance between the two testing markers) was less than 3 mm and the error related to the working volume diagonal was in the range of 1:2000 (3 × 1.3 × 1.5 m³) to 1:7000 (4.5 × 2.2 × 1.5 m³) in agreement with the

  17. Semantically Documenting Virtual Reconstruction: Building a Path to Knowledge Provenance

    Science.gov (United States)

    Bruseker, G.; Guillem, A.; Carboni, N.

    2015-08-01

    The outcomes of virtual reconstructions of archaeological monuments are not just images for aesthetic consumption but rather present a scholarly argument and decision making process. They are based on complex chains of reasoning grounded in primary and secondary evidence that enable a historically probable whole to be reconstructed from the partial remains left in the archaeological record. This paper will explore the possibilities for documenting and storing in an information system the phases of the reasoning, decision and procedures that a modeler, with the support of an archaeologist, uses during the virtual reconstruction process and how they can be linked to the reconstruction output. The goal is to present a documentation model such that the foundations of evidence for the reconstructed elements, and the reasoning around them, are made not only explicit and interrogable but also can be updated, extended and reused by other researchers in future work. Using as a case-study the reconstruction of a kitchen in a Roman domus in Grand, we will examine the necessary documentation requirements, and the capacity to express it using semantic technologies. For our study we adopt the CIDOC-CRM ontological model, and its extensions CRMinf, CRMBa and CRMgeo as a starting point for modelling the arguments and relations.

  18. Noise in off-axis type holograms including reconstruction and CCD camera parameters

    Energy Technology Data Exchange (ETDEWEB)

    Voelkl, Edgar, E-mail: edgar.voelkl@fei.com [FEI Company, 5350 NE Dawson Creek Drive, Hillsboro, OR 97124-5793 (United States)

    2010-02-15

    Phase and amplitude images as contained in digital holograms are commonly extracted via a process called 'reconstruction'. Expressions for the expected noise in these images have been given in the past by several authors; however, the effect of the actual reconstruction process has not been fully appreciated. By starting with the quantum mechanical intensity distribution of the off-axis type interference pattern, then building the digital hologram on an electron-by-electron basis while simultaneously reconstructing the phase/amplitude images and evaluating their noise levels, an expression is derived that consistently describes the noise in simulated and experimental phase/amplitude images and contains the reconstruction parameters. Because of the necessity to discretize the intensity distribution function, the digitization effects of an ideal CCD camera had to be included. Subsequently, this allowed a comparison between real and simulated holograms, which then led to a comparison between the performance of an 'ideal' CCD camera versus a real device. It was concluded that significant improvement of the phase and amplitude noise may be obtained if CCD cameras were optimized for digitizing intensity distributions at low sampling rates.

  19. ITEM-QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    CERN Document Server

    Pauli, Josef; Anton, G

    2002-01-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation maximization problems based on quantum mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems like CT, PET, SPECT, NMR, Compton camera and tomosynthesis, as well as any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton camera.

  20. Improved electromagnetic tracking for catheter path reconstruction with application in high-dose-rate brachytherapy.

    Science.gov (United States)

    Lugez, Elodie; Sadjadi, Hossein; Joshi, Chandra P; Akl, Selim G; Fichtinger, Gabor

    2017-04-01

    Electromagnetic (EM) catheter tracking has recently been introduced in order to enable prompt and uncomplicated reconstruction of catheter paths in various clinical interventions. However, EM tracking is prone to measurement errors which can compromise the outcome of the procedure. Minimizing catheter tracking errors is therefore paramount to improve the path reconstruction accuracy. An extended Kalman filter (EKF) was employed to combine the nonlinear kinematic model of an EM sensor inside the catheter, with both its position and orientation measurements. The formulation of the kinematic model was based on the nonholonomic motion constraints of the EM sensor inside the catheter. Experimental verification was carried out in a clinical HDR suite. Ten catheters were inserted with mean curvatures varying from 0 to [Formula: see text] in a phantom. A miniaturized Ascension (Burlington, Vermont, USA) trakSTAR EM sensor (model 55) was threaded within each catheter at various speeds ranging from 7.4 to [Formula: see text]. The nonholonomic EKF was applied on the tracking data in order to statistically improve the EM tracking accuracy. A sample reconstruction error was defined at each point as the Euclidean distance between the estimated EM measurement and its corresponding ground truth. A path reconstruction accuracy was defined as the root mean square of the sample reconstruction errors, while the path reconstruction precision was defined as the standard deviation of these sample reconstruction errors. The impacts of sensor velocity and path curvature on the nonholonomic EKF method were determined. Finally, the nonholonomic EKF catheter path reconstructions were compared with the reconstructions provided by the manufacturer's filters under default settings, namely the AC wide notch and the DC adaptive filter. With a path reconstruction accuracy of 1.9 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (2.4 mm) by 21% and the raw EM
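
A generic extended Kalman filter step with a motion model that advances the sensor only along its current axis, together with the RMS accuracy metric defined in the abstract, can be sketched as follows. This is an assumed illustration, not the paper's exact nonholonomic formulation; the state layout, noise matrices and names are hypothetical.

```python
import numpy as np

def ekf_step(x, P, z, u, Q, R):
    """Minimal sketch (assumed): one EKF step for an EM sensor advanced inside
    a catheter.  State x = [position (3), unit direction (3)]; the prediction
    moves the position only along the current direction by the insertion
    increment u (a nonholonomic-style constraint), and the measurement z is
    the sensed position + direction."""
    p, d = x[:3], x[3:]
    # --- predict ---
    x_pred = np.concatenate([p + u * d, d])
    F = np.eye(6)
    F[:3, 3:] = u * np.eye(3)                  # d(position)/d(direction)
    P_pred = F @ P @ F.T + Q
    # --- update (measurement model h(x) = x, so H = I) ---
    H = np.eye(6)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - x_pred)
    x_new[3:] /= np.linalg.norm(x_new[3:])     # keep the direction a unit vector
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new

def path_accuracy(estimates, ground_truth):
    """RMS of the sample reconstruction errors, as defined in the abstract."""
    err = np.linalg.norm(estimates - ground_truth, axis=1)
    return np.sqrt(np.mean(err**2))
```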

  1. 3D RECONSTRUCTION OF AN UNDERWATER ARCHAEOLOGICAL SITE: COMPARISON BETWEEN LOW COST CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Capra

    2015-04-01

    The 3D reconstruction with metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest for archaeologists constitutes a starting point for the classification and description of objects in digital format and for their subsequent fruition by users through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, low cost digital cameras (off-the-shelf). Results of tests made on submerged objects with three cameras are presented: Canon PowerShot G12, Intova Sport HD and GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and the comparison of the models derived from the use of the different cameras was performed. The different potentialities of the used cameras are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering an automatic and a semi-automatic approach.

  2. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras

    Science.gov (United States)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction with metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest for archaeologists constitutes a starting point for the classification and description of objects in digital format and for their subsequent fruition by users through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, low cost digital cameras (off-the-shelf). Results of tests made on submerged objects with three cameras are presented: Canon PowerShot G12, Intova Sport HD and GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and the comparison of the models derived from the use of the different cameras was performed. The different potentialities of the used cameras are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering an automatic and a semi-automatic approach.

  3. Reconstruction of data for an experiment using multi-gap spark chambers with six-camera optics

    International Nuclear Information System (INIS)

    Maybury, R.; Daley, H.M.

    1983-06-01

    A program has been developed to reconstruct spark positions in a pair of multi-gap optical spark chambers viewed by six cameras, which were used by a Rutherford Laboratory experiment. The procedure for correlating camera views to calculate spark positions is described. Calibration of the apparatus, and the application of time- and intensity-dependent corrections are discussed. (author)

  4. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with ¹⁸F-fluoride suggest that 3D OSEM can improve image quality of a small animal PET camera.
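
For reference, the multiplicative ML-EM update that OSEM accelerates by cycling over subsets of the data can be sketched as below. This is a generic textbook form with an assumed dense system matrix, not the study's implementation; names and the toy data are hypothetical.

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Minimal sketch (assumed): ML-EM update
        x <- x / (A^T 1) * A^T ( y / (A x) )
    for a system matrix A (detector bins x voxels) and measured counts y.
    OSEM applies the same update to ordered subsets of the rows of A."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float), where=proj > 0)
        x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)
    return x

# Toy usage with a random system matrix (illustrative only).
rng = np.random.default_rng(3)
A = rng.random((200, 50))
x_true = rng.random(50)
y = rng.poisson(A @ x_true).astype(float)
x_rec = mlem(A, y)
```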

  5. Reconstruction of recycling flux from synthetic camera images, evaluated for the Wendelstein 7-X startup limiter

    Science.gov (United States)

    Frerichs, H.; Effenberg, F.; Feng, Y.; Schmitz, O.; Stephey, L.; Reiter, D.; Börner, P.; The W7-X Team

    2017-12-01

    The interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be guided by modeling with the EMC3-EIRENE code. A versatile synthetic diagnostic module, initially developed for the generation of synthetic camera images, has been extended for the evaluation of the inverse problem in which the observable photon flux is related back to the originating particle flux (recycling). An application of this synthetic diagnostic to the startup phase (inboard) limiter in Wendelstein 7-X (W7-X) is presented, and reconstruction of recycling from synthetic observation of

  6. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    Science.gov (United States)

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
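
The linear blend skinning step mentioned above, v' = Σ_j w_j (R_j v + t_j), can be sketched as follows. This is an assumed minimal illustration, not the Realtime SCAPE code; the toy bones and weights are hypothetical.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """Minimal sketch (assumed): each vertex is transformed by a weighted blend
    of per-part rigid transforms, v' = sum_j w_ij * (R_j v + t_j)."""
    out = np.zeros_like(vertices)
    for j, (R, t) in enumerate(transforms):
        out += weights[:, j:j + 1] * (vertices @ R.T + t)
    return out

# Toy usage: two "bones", the second rotated 90 degrees about z.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])        # skinning weights
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
skinned = linear_blend_skinning(verts, w, [(np.eye(3), np.zeros(3)), (Rz, np.zeros(3))])
```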

  7. Recursive 3D-reconstruction of structured scenes using a moving camera - application to robotics

    International Nuclear Information System (INIS)

    Boukarri, Bachir

    1989-01-01

    This thesis is devoted to the perception of a structured environment, and proposes a new method which allows 3D reconstruction of an interesting part of the world using a mobile camera. Our work is divided into three essential parts dedicated to the 2D-information aspect, the 3D-information aspect, and a validation of the method. In the first part, we present a method which produces a topologic and geometric image representation based on 'segment' and 'junction' features. Then, a 2D-matching method based on a hypothesis prediction and verification algorithm is proposed to match features issued from two successive images. The second part deals with 3D-reconstruction using a triangulation technique, and discusses our new method introducing an 'Estimation-Construction-Fusion' process. This ensures a complete and accurate 3D-representation, and a permanent position estimation of the camera with respect to the model. The merging process allows refinement of the 3D-representation using a powerful tool: a Kalman filter. In the last part, experimental results obtained from simulated and real image data are reported to show the efficiency of the method. (author)

  8. Reconstruction in PET cameras with irregular sampling and depth of interaction capability

    International Nuclear Information System (INIS)

    Virador, P.R.G.; Moses, W.W.; Huesman, R.H.

    1998-01-01

    The authors present 2D reconstruction algorithms for a rectangular PET camera capable of measuring depth of interaction (DOI). The camera geometry leads to irregular radial and angular sampling of the tomographic data. DOI information increases sampling density, allowing the use of evenly spaced quarter-crystal width radial bins with minimal interpolation of irregularly spaced data. In the regions where DOI does not increase sampling density (chords normal to crystal faces), fine radial sinogram binning leads to zero efficiency bins if uniform angular binning is used. These zero efficiency sinogram bins lead to streak artifacts if not corrected. To minimize these unnormalizable sinogram bins the authors use two angular binning schemes: Fixed Width and Natural Width. Fixed Width uses a fixed angular width except in the problem regions where appropriately chosen widths are applied. Natural Width uses angle widths which are derived from intrinsic detector sampling. Using a modified filtered-backprojection algorithm to accommodate these angular binning schemes, the authors reconstruct artifact free images with nearly isotropic and position independent spatial resolution. Results from Monte Carlo data indicate that they have nearly eliminated image degradation due to crystal penetration

  9. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost and the ability to view an environment the way it is in reality. This paper investigates the applicability of stereo CCD cameras for 3D reconstruction and presentation of the environment and for geometric measurement within it. For this purpose, a rotating stereo panorama system was established using two CCDs with a base-length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test-field and used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have appropriate geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 meters distance from the camera) can be achieved.

  10. Optimization and verification of image reconstruction for a Compton camera towards application as an on-line monitor for particle therapy

    Science.gov (United States)

    Taya, T.; Kataoka, J.; Kishimoto, A.; Tagawa, L.; Mochizuki, S.; Toshito, T.; Kimura, M.; Nagao, Y.; Kurita, K.; Yamaguchi, M.; Kawachi, N.

    2017-07-01

    Particle therapy is an advanced cancer therapy that uses a feature known as the Bragg peak, in which particle beams suddenly lose their energy near the end of their range. The Bragg peak enables particle beams to damage tumors effectively. To achieve precise therapy, the demand for accurate and quantitative imaging of the beam irradiation region or dosage during therapy has increased. The most common method of particle range verification is imaging of annihilation gamma rays by positron emission tomography. Not only 511-keV gamma rays but also prompt gamma rays are generated during therapy; therefore, the Compton camera is expected to be used as an on-line monitor for particle therapy, as it can image these gamma rays in real time. Proton therapy, one of the most common particle therapies, uses a proton beam of approximately 200 MeV, which has a range of ~25 cm in water. As gamma rays are emitted along the path of the proton beam, quantitative evaluation of the reconstructed images of diffuse sources becomes crucial, but it is far from being fully developed for Compton camera imaging at present. In this study, we first quantitatively evaluated reconstructed Compton camera images of uniformly distributed diffuse sources, and then confirmed that our Compton camera obtained 3% (1σ) and 5% (1σ) uniformity for line and plane sources, respectively. Based on this quantitative study, we demonstrated on-line gamma imaging during proton irradiation. Through these studies, we show that the Compton camera is suitable for future use as an on-line monitor for particle therapy.

  11. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    Directory of Open Access Journals (Sweden)

    Yufu Qu

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.
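
One plausible reading of the "three principal component points" is sketched below: each image's keypoint cloud is summarized by its centroid plus the centroid shifted along the two principal axes, and images are compared through these three points. This is an assumed illustration, not the authors' code; the names and the toy displacement are hypothetical.

```python
import numpy as np

def principal_points(keypoints):
    """Minimal sketch of one plausible reading (assumed): summarize an image's
    2D keypoints by three points, the centroid and the centroid shifted along
    the two principal axes by one standard deviation, via a PCA of the
    keypoint coordinates."""
    mu = keypoints.mean(axis=0)
    C = np.cov((keypoints - mu).T)
    evals, evecs = np.linalg.eigh(C)               # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    p1 = mu + np.sqrt(evals[order[0]]) * evecs[:, order[0]]
    p2 = mu + np.sqrt(evals[order[1]]) * evecs[:, order[1]]
    return np.vstack([mu, p1, p2])

# Toy usage: compare two images by the displacement of their principal points.
rng = np.random.default_rng(4)
kp_a = rng.normal([320, 240], [80, 40], size=(500, 2))
kp_b = kp_a + np.array([15.0, -5.0])               # simulated camera shift
overlap_proxy = np.linalg.norm(principal_points(kp_a) - principal_points(kp_b), axis=1)
```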

  12. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    Science.gov (United States)

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  13. Path integration guided with a quality map for shape reconstruction in the fringe reflection technique

    Science.gov (United States)

    Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu

    2018-04-01

    A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of the gradient data locally, and functions as a guideline for the integration path. The presented method can be employed for wavefront estimation from its slopes over a generally shaped surface, with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by the irregular shape of the surface under test. The performance of QMPI is discussed by simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be easily implemented with no iteration, compared to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision demands.
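
The general idea of letting a quality map steer the integration path can be sketched with a quality-guided flood fill that integrates the gradient field outwards from a seed, highest-quality pixels first. This is an assumed illustration, not the QMPI algorithm itself; the trapezoidal one-step rule and all names are hypothetical.

```python
import heapq
import numpy as np

def quality_guided_integration(gx, gy, quality, seed=(0, 0)):
    """Minimal sketch (assumed): integrate a surface from its gradients by
    flooding outwards from a seed pixel, always processing the highest-quality
    unvisited neighbour next, so noisy slope samples are reached last."""
    h, w = quality.shape
    z = np.zeros((h, w))
    done = np.zeros((h, w), bool)
    done[seed] = True
    heap = []

    def push_neighbours(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not done[ni, nj]:
                heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

    push_neighbours(*seed)
    while heap:
        _, i, j, pi, pj = heapq.heappop(heap)
        if done[i, j]:
            continue
        # one-step trapezoidal integration from the already-integrated parent
        if i != pi:
            z[i, j] = z[pi, pj] + 0.5 * (gy[pi, pj] + gy[i, j]) * (i - pi)
        else:
            z[i, j] = z[pi, pj] + 0.5 * (gx[pi, pj] + gx[i, j]) * (j - pj)
        done[i, j] = True
        push_neighbours(i, j)
    return z

# Toy usage: recover a paraboloid from its analytic slopes.
y, x = np.mgrid[0:64, 0:64].astype(float)
gx, gy = 2 * (x - 32) / 64, 2 * (y - 32) / 64
z = quality_guided_integration(gx, gy, quality=np.ones_like(gx), seed=(32, 32))
```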

  14. Iterative reconstruction of SiPM light response functions in a square-shaped compact gamma camera

    Science.gov (United States)

    Morozov, A.; Alves, F.; Marcos, J.; Martins, R.; Pereira, L.; Solovov, V.; Chepel, V.

    2017-05-01

    Compact gamma cameras with a square-shaped monolithic scintillator crystal and an array of silicon photomultipliers (SiPMs) are actively being developed for applications in areas such as small animal imaging, cancer diagnostics and radiotracer guided surgery. Statistical methods of position reconstruction, which are potentially superior to the traditional centroid method, require accurate knowledge of the spatial response of each photomultiplier. Using both Monte Carlo simulations and experimental data obtained with a camera prototype, we show that the spatial response of all photomultipliers (light response functions) can be parameterized with axially symmetric functions obtained iteratively from flood field irradiation data. The study was performed with a camera prototype equipped with a 30 × 30 × 2 mm³ LYSO crystal and an 8 × 8 array of SiPMs for 140 keV gamma rays. The simulations demonstrate that the images, reconstructed with the maximum likelihood method using the response obtained with the iterative approach, exhibit only minor distortions: the average difference between the reconstructed and the true positions in X and Y directions does not exceed 0.2 mm in the central area of 22 × 22 mm² and 0.4 mm at the periphery of the camera. A similar level of image distortions is shown experimentally with the camera prototype.
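
For context, maximum likelihood positioning with light response functions (LRFs) typically maximizes the Poisson log-likelihood of the observed SiPM signals over candidate positions. The sketch below is an assumed grid-search illustration with Gaussian-shaped toy LRFs, not the authors' camera code; the geometry and names are hypothetical.

```python
import numpy as np

def ml_position(signals, lrf, grid_x, grid_y):
    """Minimal sketch (assumed): maximum-likelihood position estimation in a
    monolithic-crystal camera.  lrf(k, x, y) returns the expected signal of
    SiPM k at position (x, y); the event position is the grid node maximizing
        L(x, y) = sum_k [ n_k * log mu_k(x, y) - mu_k(x, y) ] ."""
    best, best_xy = -np.inf, None
    for x in grid_x:
        for y in grid_y:
            mu = np.array([max(lrf(k, x, y), 1e-9) for k in range(len(signals))])
            ll = np.sum(signals * np.log(mu) - mu)
            if ll > best:
                best, best_xy = ll, (x, y)
    return best_xy

# Toy usage: axially symmetric Gaussian-like LRFs on an 8x8 SiPM array.
pitch, a = 3.75, 6.0
centers = [(-15 + pitch / 2 + i * pitch, -15 + pitch / 2 + j * pitch)
           for i in range(8) for j in range(8)]
def lrf(k, x, y):
    cx, cy = centers[k]
    return 200.0 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * a ** 2))

rng = np.random.default_rng(5)
true_xy = (4.2, -7.5)
n = rng.poisson([lrf(k, *true_xy) for k in range(64)])
grid = np.linspace(-15, 15, 61)
print(ml_position(n, lrf, grid, grid))
```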

  15. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
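
The per-slice back-projection step described above can be sketched by computing, for every pixel of a slice, an approximate distance to the cone's intersection with that plane and thresholding it. The snippet below is an assumed illustration (using an angular-difference distance proxy), not the authors' solution-matrix algorithm; all names and values are hypothetical.

```python
import numpy as np

def backproject_cone_slice(apex, axis, theta, xs, ys, z, threshold=1.0):
    """Minimal sketch (assumed): for one image slice at height z, compute an
    approximate distance of every pixel from the intersection of a Compton
    cone (apex, unit axis, half-angle theta) with the plane, then threshold it
    to a binary intersection curve."""
    X, Y = np.meshgrid(xs, ys)
    P = np.stack([X, Y, np.full_like(X, z)], axis=-1) - apex    # voxel - apex
    r = np.linalg.norm(P, axis=-1)
    cos_a = np.clip(np.tensordot(P, axis, axes=([-1], [0])) / np.maximum(r, 1e-9), -1, 1)
    dist = r * np.abs(np.arccos(cos_a) - theta)   # ~ distance to the cone surface
    return dist < threshold                        # binary intersection curve

# Toy usage: one detector event back-projected onto a single slice.
mask = backproject_cone_slice(apex=np.array([0.0, 0.0, 50.0]),
                              axis=np.array([0.0, 0.0, -1.0]),
                              theta=np.deg2rad(30.0),
                              xs=np.linspace(-40, 40, 161),
                              ys=np.linspace(-40, 40, 161),
                              z=0.0, threshold=1.0)
```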

  16. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.

  17. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    Science.gov (United States)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 to 10 m, depending on field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
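
    The accuracy assessment described above amounts to a nearest-neighbour (cloud-to-cloud) distance between each camera-derived point cloud and the TLS reference. A minimal sketch of that comparison using a k-d tree, rather than CloudCompare itself:

      import numpy as np
      from scipy.spatial import cKDTree

      def cloud_to_cloud_distance(camera_cloud, tls_cloud):
          """Nearest-neighbour distance of every photogrammetric point to the TLS cloud."""
          tree = cKDTree(tls_cloud)          # TLS point cloud, shape (M, 3), taken as ground truth
          d, _ = tree.query(camera_cloud)    # camera-derived cloud, shape (N, 3)
          return d                           # per-point deviation, e.g. np.mean(d) or np.percentile(d, 95)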

  18. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    Directory of Open Access Journals (Sweden)

    K. Thoeni

    2014-06-01

    Full Text Available This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 to 10 m, depending on field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.

  19. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A; Fraga, F A F; Fraga, M M F R; Margato, L M S; Pereira, L [LIP-Coimbra and Departamento de Física, Universidade de Coimbra, Rua Larga, Coimbra (Portugal); Defendi, I; Jurkovic, M [Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), TUM, Lichtenbergstr. 1, Garching (Germany); Engels, R; Kemmerling, G [Zentralinstitut für Elektronik, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, Jülich (Germany); Gongadze, A; Guerard, B; Manzin, G; Niko, H; Peyaud, A; Piscitelli, F [Institut Laue Langevin, 6 Rue Jules Horowitz, Grenoble (France); Petrillo, C; Sacchetti, F [Istituto Nazionale per la Fisica della Materia, Unità di Perugia, Via A. Pascoli, Perugia (Italy); Raspino, D; Rhodes, N J; Schooneveld, E M, E-mail: andrei@coimbra.lip.pt [Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot (United Kingdom); others, and

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/∼andrei/.
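
    Of the reconstruction algorithms mentioned, the Center-of-Gravity estimate is the simplest: the event position is the signal-weighted mean of the sensor positions, and the energy is the summed signal. ANTS itself is a C++/ROOT package; the Python sketch below is only an illustration of that estimator.

      import numpy as np

      def center_of_gravity(signals, pmt_xy):
          """Center-of-Gravity position and energy estimate for one event.

          signals : (N,) charges recorded by the N photomultipliers
          pmt_xy  : (N, 2) known PMT centre positions
          """
          w = np.asarray(signals, dtype=float)
          energy = w.sum()                                   # simple energy estimate
          position = (w[:, None] * np.asarray(pmt_xy)).sum(axis=0) / energy
          return position, energy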

  20. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    Science.gov (United States)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside FDNPS buildings are indispensable to execute decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images resulting from the two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.

  1. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  2. Summer Student Project Report. Parallelization of the path reconstruction algorithm for the inner detector of the ATLAS experiment.

    CERN Document Server

    Maldonado Puente, Bryan Patricio

    2014-01-01

    The inner detector of the ATLAS experiment has two types of silicon detectors used for tracking: Pixel Detector and SCT (semiconductor tracker). Once a proton-proton collision occurs, the resulting particles pass through these detectors and are recorded as hits on the detector surfaces. A medium to high energy particle passes through seven different surfaces of the two detectors, leaving seven hits, while lower energy particles can leave many more hits as they circle through the detector. For a typical event during the expected operational conditions, there are on average 30 000 hits recorded by the sensors. Only high energy particles are of interest for physics analysis and are taken into account for the path reconstruction; thus, a filtering process helps to discard the low energy particles produced in the collision. The following report presents a solution for increasing the speed of the filtering process in the path reconstruction algorithm.

  3. Reconstructing the landing trajectory of the CE-3 lunar probe by using images from the landing camera

    International Nuclear Information System (INIS)

    Liu Jian-Jun; Yan Wei; Li Chun-Lai; Tan Xu; Ren Xin; Mu Ling-Li

    2014-01-01

    An accurate determination of the landing trajectory of Chang'e-3 (CE-3) is significant for verifying orbital control strategy, optimizing orbital planning, accurately determining the landing site of CE-3 and analyzing the geological background of the landing site. Due to complexities involved in the landing process, there are some differences between the planned trajectory and the actual trajectory of CE-3. The landing camera on CE-3 recorded a sequence of the landing process with a frequency of 10 frames per second. These images recorded by the landing camera and high-resolution images of the lunar surface are utilized to calculate the position of the probe, so as to reconstruct its precise trajectory. This paper proposes using the method of trajectory reconstruction by Single Image Space Resection to make a detailed study of the hovering stage at a height of 100 m above the lunar surface. Analysis of the data shows that the closer CE-3 came to the lunar surface, the higher the spatial resolution of images that were acquired became, and the more accurately the horizontal and vertical position of CE-3 could be determined. The horizontal and vertical accuracies were 7.09 m and 4.27 m respectively during the hovering stage at a height of 100.02 m. The reconstructed trajectory can reflect the change in CE-3's position during the powered descent process. A slight movement in CE-3 during the hovering stage is also clearly demonstrated. These results will provide a basis for analysis of orbit control strategy, and will be conducive to the adjustment and optimization of orbit control strategy in follow-up missions.
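
    Single Image Space Resection recovers the exterior orientation (position and attitude) of the camera from ground control points identified in a single frame. A minimal sketch of the same idea using OpenCV's generic PnP solver on synthetic data is given below; the control points, intrinsics and pose are illustrative placeholders, not values from the CE-3 mission.

      import numpy as np
      import cv2

      # Synthetic space resection: recover the camera pose from known ground points
      # and their image projections (all numbers below are made up for illustration).
      K = np.array([[1000.0, 0.0, 640.0],
                    [0.0, 1000.0, 480.0],
                    [0.0, 0.0, 1.0]])
      object_pts = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 2.0],
                             [0.0, 50.0, -3.0], [50.0, 50.0, 1.0],
                             [25.0, 25.0, 4.0]])          # surface control points (m)
      rvec_true = np.array([[0.1], [-0.05], [0.02]])
      tvec_true = np.array([[-25.0], [-20.0], [100.0]])   # camera roughly 100 m above the surface
      image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

      ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
      R, _ = cv2.Rodrigues(rvec)
      camera_position = (-R.T @ tvec).ravel()             # probe position in the surface frame
      print(ok, camera_position)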

  4. Contribution to the reconstruction of scenes made of cylindrical and polyhedral objects from sequences of images obtained by a moving camera

    International Nuclear Information System (INIS)

    Viala, Marc

    1992-01-01

    Environment perception is an important process which enables a robot to perform actions in an unknown scene. Although many sensors exist to 'give sight', the camera seems to play a leading part. This thesis deals with the reconstruction of scenes made of cylindrical and polyhedral objects from sequences of images provided by a moving camera. Two methods are presented. Both are based on the evolution of apparent contours of objects in a sequence. The first approach has been developed considering that camera motion is known. Despite the good results obtained by this method, the specific conditions it requires make its use limited. In order to avoid an accurate evaluation of camera motion, we introduce another method that simultaneously estimates the object parameters and the camera positions. In this approach, only a rough ('poor') knowledge of camera displacements, supplied by the control system of the robotic platform in which the camera is embedded, is needed. An optimal integration of a priori information, as well as the dynamic feature of the state model to estimate, lead us to use the Kalman filter. Experiments conducted with synthetic and real images proved the reliability of these methods. Camera calibration set-up is also suggested to achieve the most accurate scene models resulting from reconstruction processes. (author) [fr

  5. Gamma-ray detection and Compton camera image reconstruction with application to hadron therapy

    International Nuclear Information System (INIS)

    Frandes, M.

    2010-09-01

    A novel technique for radiotherapy - hadron therapy - irradiates tumors using a beam of protons or carbon ions. Hadron therapy is an effective technique for cancer treatment, since it enables accurate dose deposition due to the existence of a Bragg peak at the end of particles range. Precise knowledge of the fall-off position of the dose with millimeters accuracy is critical since hadron therapy proved its efficiency in case of tumors which are deep-seated, close to vital organs, or radio-resistant. A major challenge for hadron therapy is the quality assurance of dose delivery during irradiation. Current systems applying positron emission tomography (PET) technologies exploit gamma rays from the annihilation of positrons emitted during the beta decay of radioactive isotopes. However, the generated PET images allow only post-therapy information about the deposed dose. In addition, they are not in direct coincidence with the Bragg peak. A solution is to image the complete spectrum of the emitted gamma rays, including nuclear gamma rays emitted by inelastic interactions of hadrons to generated nuclei. This emission is isotropic, and has a spectrum ranging from 100 keV up to 20 MeV. However, the measurement of these energetic gamma rays from nuclear reactions exceeds the capability of all existing medical imaging systems. An advanced Compton scattering detection method with electron tracking capability is proposed, and modeled to reconstruct the high-energy gamma-ray events. This Compton detection technique was initially developed to observe gamma rays for astrophysical purposes. A device illustrating the method was designed and adapted to Hadron Therapy Imaging (HTI). It consists of two main sub-systems: a tracker where Compton recoiled electrons are measured, and a calorimeter where the scattered gamma rays are absorbed via the photoelectric effect. Considering a hadron therapy scenario, the analysis of generated data was performed, passing through the complete

  6. Pose and Shape Reconstruction of a Noncooperative Spacecraft Using Camera and Range Measurements

    Directory of Open Access Journals (Sweden)

    Renato Volpe

    2017-01-01

    Full Text Available Recent interest in on-orbit proximity operations has pushed towards the development of autonomous GNC strategies. In this sense, optical navigation enables a wide variety of possibilities as it can provide information not only about the kinematic state but also about the shape of the observed object. Various mission architectures have been either tested in space or studied on Earth. The present study deals with on-orbit relative pose and shape estimation with the use of a monocular camera and a distance sensor. The goal is to develop a filter which estimates an observed satellite’s relative position, velocity, attitude, and angular velocity, along with its shape, with the measurements obtained by a camera and a distance sensor mounted on board a chaser which is on a relative trajectory around the target. The filter’s efficiency is proved with a simulation on a virtual target object. The results of the simulation, even though relevant to a simplified scenario, show that the estimation process is successful and can be considered a promising strategy for a correct and safe docking maneuver.

  7. Light Source Estimation with Analytical Path-tracing

    OpenAIRE

    Kasper, Mike; Keivan, Nima; Sibley, Gabe; Heckman, Christoffer

    2017-01-01

    We present a novel algorithm for light source estimation in scenes reconstructed with a RGB-D camera based on an analytically-derived formulation of path-tracing. Our algorithm traces the reconstructed scene with a custom path-tracer and computes the analytical derivatives of the light transport equation from principles in optics. These derivatives are then used to perform gradient descent, minimizing the photometric error between one or more captured reference images and renders of our curre...

  8. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building the fine 3D model from outdoor to indoor is becoming a necessity for protecting the cultural tourism resources. However, the existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as the materials and textures. On the basis of the information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture existing in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in Hubei Shennongjia Nature Reserve, providing a new method and platform for protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  9. Evaluation of unmanned aerial vehicle shape, flight path and camera type for waterfowl surveys: disturbance effects and species recognition

    Directory of Open Access Journals (Sweden)

    John F. McEvoy

    2016-03-01

    Full Text Available The use of unmanned aerial vehicles (UAVs) for ecological research has grown rapidly in recent years, but few studies have assessed the disturbance impacts of these tools on focal subjects, particularly when observing easily disturbed species such as waterfowl. In this study we assessed the level of disturbance that a range of UAV shapes and sizes had on free-living, non-breeding waterfowl surveyed in two sites in eastern Australia between March and May 2015, as well as the capability of airborne digital imaging systems to provide adequate resolution for unambiguous species identification of these taxa. We found little or no obvious disturbance effects on wild, mixed-species flocks of waterfowl when UAVs were flown at least 60 m above the water level (fixed wing models) or 40 m above individuals (multirotor models). Disturbance in the form of swimming away from the UAV through to leaving the water surface and flying away from the UAV was visible at lower altitudes and when fixed-wing UAVs either approached subjects directly or rapidly changed altitude and/or direction near animals. Using tangential approach flight paths that did not cause disturbance, commercially available onboard optical equipment was able to capture images of sufficient quality to identify waterfowl and even much smaller taxa such as swallows. Our results show that with proper planning of take-off and landing sites, flight paths and careful UAV model selection, UAVs can provide an excellent tool for accurately surveying wild waterfowl populations and provide archival data with fewer logistical issues than traditional methods such as manned aerial surveys.

  10. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm which corresponds to a realistic nuclear medicine environment.
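
    A minimal dense sketch of the LM-OSEM update used in this kind of study is given below. Real Compton-camera implementations use sparse, on-the-fly cone projectors and a measured sensitivity image; the dense per-event weight matrix here is an assumption for illustration only.

      import numpy as np

      def lm_osem(A_events, sensitivity, n_iter=10, n_subsets=8, eps=1e-12):
          """List-Mode OSEM (a minimal dense sketch).

          A_events    : (n_events, n_voxels) back-projection weights of each detected event
                        (one Compton-cone row per event)
          sensitivity : (n_voxels,) detection sensitivity image
          """
          n_events, n_voxels = A_events.shape
          lam = np.ones(n_voxels)
          for _ in range(n_iter):
              for s in range(n_subsets):
                  rows = A_events[s::n_subsets]                 # one subset of events
                  fwd = rows @ lam                              # forward projection per event
                  corr = rows.T @ (1.0 / np.maximum(fwd, eps))  # back-project the ratios
                  lam *= corr / np.maximum(sensitivity, eps)
          return lam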

  11. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    Science.gov (United States)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
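
    At its core, reconstructing a bomb position from two or three synchronized, calibrated cameras is a triangulation problem. A minimal linear (DLT) triangulation sketch is shown below; the calibration of the field cameras and the error modelling developed for the campaigns are not part of this illustration.

      import numpy as np

      def triangulate(projection_matrices, image_points):
          """Linear (DLT) triangulation of one point from two or more calibrated views.

          projection_matrices : list of (3, 4) camera matrices P_i = K_i [R_i | t_i]
          image_points        : list of (2,) pixel coordinates of the same bomb in each view
          """
          rows = []
          for P, (u, v) in zip(projection_matrices, image_points):
              rows.append(u * P[2] - P[0])
              rows.append(v * P[2] - P[1])
          _, _, Vt = np.linalg.svd(np.asarray(rows))
          X = Vt[-1]
          return X[:3] / X[3]          # 3D position in the common (georeferenced) frame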

  12. A direct-view customer-oriented digital holographic camera

    Science.gov (United States)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  13. Image-based path planning for automated virtual colonoscopy navigation

    Science.gov (United States)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening, by reconstructing three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract colon centerline, some time consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions are extracted from the depth images. Camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase the user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning algorithm and rendering algorithm, our VC fly-through navigation system can still guarantee 30 FPS.

  14. Gamma-ray detection and Compton camera image reconstruction with application to hadron therapy; Detection des rayons gamma et reconstruction d'images pour la camera Compton: Application a l'hadrontherapie

    Energy Technology Data Exchange (ETDEWEB)

    Frandes, M.

    2010-09-15

    A novel technique for radiotherapy - hadron therapy - irradiates tumors using a beam of protons or carbon ions. Hadron therapy is an effective technique for cancer treatment, since it enables accurate dose deposition due to the existence of a Bragg peak at the end of particles range. Precise knowledge of the fall-off position of the dose with millimeters accuracy is critical since hadron therapy proved its efficiency in case of tumors which are deep-seated, close to vital organs, or radio-resistant. A major challenge for hadron therapy is the quality assurance of dose delivery during irradiation. Current systems applying positron emission tomography (PET) technologies exploit gamma rays from the annihilation of positrons emitted during the beta decay of radioactive isotopes. However, the generated PET images allow only post-therapy information about the deposed dose. In addition, they are not in direct coincidence with the Bragg peak. A solution is to image the complete spectrum of the emitted gamma rays, including nuclear gamma rays emitted by inelastic interactions of hadrons to generated nuclei. This emission is isotropic, and has a spectrum ranging from 100 keV up to 20 MeV. However, the measurement of these energetic gamma rays from nuclear reactions exceeds the capability of all existing medical imaging systems. An advanced Compton scattering detection method with electron tracking capability is proposed, and modeled to reconstruct the high-energy gamma-ray events. This Compton detection technique was initially developed to observe gamma rays for astrophysical purposes. A device illustrating the method was designed and adapted to Hadron Therapy Imaging (HTI). It consists of two main sub-systems: a tracker where Compton recoiled electrons are measured, and a calorimeter where the scattered gamma rays are absorbed via the photoelectric effect. Considering a hadron therapy scenario, the analysis of generated data was performed, passing through the complete

  15. Contribution to the tracking and the 3D reconstruction of scenes composed of torus from image sequences a acquired by a moving camera

    International Nuclear Information System (INIS)

    Naudet, S.

    1997-01-01

    The three-dimensional perception of the environment is often necessary for a robot to correctly perform its tasks. One solution, based on the dynamic vision, consists in analysing time-varying monocular images to estimate the spatial geometry of the scene. This thesis deals with the reconstruction of torus by dynamic vision. Though this object class is restrictive, it enables to tackle the problem of reconstruction of bent pipes usually encountered in industrial environments. The proposed method is based on the evolution of apparent contours of objects in the sequence. Using the expression of torus limb boundaries, it is possible to recursively estimate the object three-dimensional parameters by minimising the error between the predicted projected contours and the image contours. This process, which is performed by a Kalman filter, does not need a precise knowledge of the camera displacement or any matching of the two limbs belonging to the same object. To complete this work, temporal tracking of objects which deals with occlusion situations is proposed. The approach consists in modeling and interpreting the apparent motion of objects in the successive images. The motion interpretation, based on a simplified representation of the scene, allows to recover pertinent three-dimensional information which is used to manage occlusion situations. Experiments on synthetic and real images prove the validity of the tracking and the reconstruction processes. (author)

  16. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  17. A hidden Markov model for reconstructing animal paths from solar geolocation loggers using templates for light intensity

    NARCIS (Netherlands)

    Rakhimberdiev, Eldar; Winkler, David W; Bridge, Eli; Seavy, Nathaniel E; Sheldon, Daniel; Piersma, Theunis; Saveliev, Anatoly

    2015-01-01

    BACKGROUND: Solar archival tags (henceforth called geolocators) are tracking devices deployed on animals to reconstruct their long-distance movements on the basis of locations inferred post hoc with reference to the geographical and seasonal variations in the timing and speeds of sunrise and sunset.

  18. Contribution to the tracking and the 3D reconstruction of scenes composed of torus from image sequences a acquired by a moving camera; Contribution au suivi et a la reconstruction de scenes constituees d`objet toriques a partir de sequences d`images acquises par une camera mobile

    Energy Technology Data Exchange (ETDEWEB)

    Naudet, S

    1997-01-31

    The three-dimensional perception of the environment is often necessary for a robot to correctly perform its tasks. One solution, based on the dynamic vision, consists in analysing time-varying monocular images to estimate the spatial geometry of the scene. This thesis deals with the reconstruction of torus by dynamic vision. Though this object class is restrictive, it enables to tackle the problem of reconstruction of bent pipes usually encountered in industrial environments. The proposed method is based on the evolution of apparent contours of objects in the sequence. Using the expression of torus limb boundaries, it is possible to recursively estimate the object three-dimensional parameters by minimising the error between the predicted projected contours and the image contours. This process, which is performed by a Kalman filter, does not need a precise knowledge of the camera displacement or any matching of the two limbs belonging to the same object. To complete this work, temporal tracking of objects which deals with occlusion situations is proposed. The approach consists in modeling and interpreting the apparent motion of objects in the successive images. The motion interpretation, based on a simplified representation of the scene, allows to recover pertinent three-dimensional information which is used to manage occlusion situations. Experiments on synthetic and real images prove the validity of the tracking and the reconstruction processes. (author) 127 refs.

  19. How physics teachers approach innovation: An empirical study for reconstructing the appropriation path in the case of special relativity

    Directory of Open Access Journals (Sweden)

    Anna De Ambrosis

    2010-08-01

    Full Text Available This paper concerns an empirical study carried out with a group of high school physics teachers engaged in the Module on relativity of a Master course on the teaching of modern physics. The study is framed within the general research issue of how to promote innovation in school via teachers’ education and how to foster fruitful interactions between research and school practice via the construction of networks of researchers and teachers. In the paper, the problems related to innovation are addressed by focusing on the phase during which teachers analyze an innovative teaching proposal in the perspective of designing their own paths for the class work. The proposal analyzed in this study is Taylor and Wheeler’s approach for teaching special relativity. The paper aims to show that the roots of problems known in the research literature about teachers’ difficulties in coping with innovative proposals, and usually related to the implementation process, can be found and addressed already when teachers approach the proposal and try to appropriate it. The study is heuristic and has been carried out in order to trace the “appropriation path,” followed by the group of teachers, in terms of the main steps and factors triggering the progressive evolution of teachers’ attitudes and competences.

  20. Camera Trajectory fromWide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    It has been suggested to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are more models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. It has also been suggested to generate models by randomized sampling as in RANSAC but to use soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that approach, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to 98.5 % contamination of mismatches with comparable effort as simple RANSAC does for contamination by 84 %. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. We have previously introduced a technique for measuring the size of camera translation relative to the observed scene which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric, e.g. the ground plane, constraint, such as for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed and an
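
    A rough sketch of the kernel (soft) voting idea is given below, using OpenCV's RANSAC essential-matrix estimator in place of the ordered-sampling 5-point solver described above; the binning of the translation direction, the number of runs and the use of hard (rather than Gaussian) votes are illustrative assumptions, not the authors' method.

      import numpy as np
      import cv2

      def vote_motion_direction(pts1, pts2, K, n_runs=50, bins=72):
          """Soft voting for the camera translation direction (a sketch of the idea).

          pts1, pts2 : (N, 2) float arrays of matched points (may contain mismatches)
          K          : (3, 3) intrinsic matrix
          """
          acc = np.zeros((bins, bins))            # azimuth x elevation accumulator
          for _ in range(n_runs):
              E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
              if E is None or E.shape != (3, 3):
                  continue
              _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
              t = t.ravel() / np.linalg.norm(t)
              az = np.degrees(np.arctan2(t[1], t[0])) % 360.0
              el = np.degrees(np.arcsin(np.clip(t[2], -1.0, 1.0))) + 90.0
              i = int(az / 360.0 * bins) % bins
              j = min(int(el / 180.0 * bins), bins - 1)
              acc[i, j] += 1.0                    # a smoothing kernel could be splatted instead
          return np.unravel_index(np.argmax(acc), acc.shape), acc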

  1. A hidden Markov model for reconstructing animal paths from solar geolocation loggers using templates for light intensity.

    Science.gov (United States)

    Rakhimberdiev, Eldar; Winkler, David W; Bridge, Eli; Seavy, Nathaniel E; Sheldon, Daniel; Piersma, Theunis; Saveliev, Anatoly

    2015-01-01

    Solar archival tags (henceforth called geolocators) are tracking devices deployed on animals to reconstruct their long-distance movements on the basis of locations inferred post hoc with reference to the geographical and seasonal variations in the timing and speeds of sunrise and sunset. The increased use of geolocators has created a need for analytical tools to produce accurate and objective estimates of migration routes that are explicit in their uncertainty about the position estimates. We developed a hidden Markov chain model for the analysis of geolocator data. This model estimates tracks for animals with complex migratory behaviour by combining: (1) a shading-insensitive, template-fit physical model, (2) an uncorrelated random walk movement model that includes migratory and sedentary behavioural states, and (3) spatially explicit behavioural masks. The model is implemented in a specially developed open source R package FLightR. We used the particle filter (PF) algorithm to provide relatively fast model posterior computation. We illustrate our modelling approach with analysis of simulated data for stationary tags and of real tracks of both a tree swallow Tachycineta bicolor migrating along the east and a golden-crowned sparrow Zonotrichia atricapilla migrating along the west coast of North America. We provide a model that increases accuracy in analyses of noisy data and movements of animals with complicated migration behaviour. It provides posterior distributions for the positions of animals, their behavioural states (e.g., migrating or sedentary), and distance and direction of movement. Our approach allows biologists to estimate locations of animals with complex migratory behaviour based on raw light data. This model advances the current methods for estimating migration tracks from solar geolocation, and will benefit a fast-growing number of tracking studies with this technology.
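
    FLightR itself is an open-source R package; the following Python sketch shows only the generic bootstrap particle-filter step (resample, propagate, reweight) that underlies this kind of track estimation. The random-walk movement model and the twilight log-likelihood are placeholders, and the two behavioural states of the real model are omitted.

      import numpy as np

      def particle_filter_step(particles, weights, log_likelihood, step_sd=1.0, rng=None):
          """One bootstrap particle-filter step for a 2D (lon, lat) track (illustrative only).

          particles      : (N, 2) current position hypotheses
          weights        : (N,) normalized weights
          log_likelihood : callable mapping (N, 2) positions to per-particle log-likelihoods
                           of the observed twilight (template-fit) data
          """
          rng = np.random.default_rng() if rng is None else rng
          n = len(particles)
          # resample proportionally to the current weights
          idx = rng.choice(n, size=n, p=weights)
          particles = particles[idx]
          # propagate with a simple random-walk movement model
          particles = particles + rng.normal(scale=step_sd, size=particles.shape)
          # reweight with the twilight likelihood
          ll = log_likelihood(particles)
          w = np.exp(ll - np.max(ll))
          return particles, w / w.sum()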

  2. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  3. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  4. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce better images than conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  5. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses from a mode-locked Nd:glass laser acts as an ultra-fast periodic shutter with an opening time of a few ps. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing us to study very fast effects [fr

  6. A Compton camera application for the GAMOS GEANT4-based framework

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J., E-mail: ljh@ns.ph.liv.ac.uk [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Arce, P. [Department of Basic Research, CIEMAT, Madrid (Spain); Judson, D.S.; Boston, A.J.; Boston, H.C.; Cresswell, J.R.; Dormand, J.; Jones, M.; Nolan, P.J.; Sampson, J.A.; Scraggs, D.P.; Sweeney, A. [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Lazarus, I.; Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom)

    2012-04-11

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.

  7. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  8. The image camera of the 17 m diameter air Cherenkov telescope MAGIC

    CERN Document Server

    Ostankov, A P

    2001-01-01

    The image camera of the 17 m diameter MAGIC telescope, an air Cherenkov telescope currently under construction to be installed at the Canary island La Palma, is described. The main goal of the experiment is to cover the unexplored energy window from approx 10 to approx 300 GeV in gamma-ray astrophysics. In its first phase with a classical PMT camera the MAGIC telescope is expected to reach an energy threshold of approx 30 GeV. The operational conditions, the special characteristics of the developed PMTs and their use with light concentrators, the fast signal transfer scheme using analog optical links, the trigger and DAQ organization as well as image reconstruction strategy are described. The different paths being explored towards future camera improvements, in particular the constraints in using silicon avalanche photodiodes and GaAsP hybrid photodetectors in air Cherenkov telescopes are discussed.

  9. Path Dependency

    OpenAIRE

    Mark Setterfield

    2015-01-01

    Path dependency is defined, and three different specific concepts of path dependency – cumulative causation, lock in, and hysteresis – are analyzed. The relationships between path dependency and equilibrium, and path dependency and fundamental uncertainty are also discussed. Finally, a typology of dynamical systems is developed to clarify these relationships.

  10. Reduction in camera-specific variability in [{sup 123}I]FP-CIT SPECT outcome measures by image reconstruction optimized for multisite settings: impact on age-dependence of the specific binding ratio in the ENC-DAT database of healthy controls

    Energy Technology Data Exchange (ETDEWEB)

    Buchert, Ralph; Lange, Catharina [Charite - Universitaetsmedizin Berlin, Department of Nuclear Medicine, Berlin (Germany); Kluge, Andreas; Bronzel, Marcus [ABX-CRO advanced pharmaceutical services Forschungsgesellschaft m.b.H., Dresden (Germany); Tossici-Bolt, Livia [University Hospital Southampton NHS Foundation Trust, Department of Medical Physics, Southampton (United Kingdom); Dickson, John [University College London Hospital NHS Foundation Trust, Institute of Nuclear Medicine, London (United Kingdom); Asenbaum, Susanne [Medical University of Vienna, Department of Nuclear Medicine, Vienna (Austria); Booij, Jan [University of Amsterdam, Department of Nuclear Medicine, Academic Medical Centre, Amsterdam (Netherlands); Kapucu, L. Oezlem Atay [Gazi University, Department of Nuclear Medicine, Faculty of Medicine, Ankara (Turkey); Svarer, Claus [Rigshospitalet and University of Copenhagen, Neurobiology Research Unit, Copenhagen (Denmark); Koulibaly, Pierre-Malick [University of Nice-Sophia Antipolis, Nuclear Medicine Department, Centre Antoine Lacassagne, Nice (France); Nobili, Flavio [University of Genoa, Department of Neuroscience (DINOGMI), Clinical Neurology Unit, Genoa (Italy); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Sabri, Osama [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Sera, Terez [University of Szeged, Department of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Tatsch, Klaus [Municipal Hospital of Karlsruhe Inc, Department of Nuclear Medicine, Karlsruhe (Germany); Borght, Thierry vander [CHU Namur, IREC, Nuclear Medicine Division, Universite catholique de Louvain, Yvoir (Belgium); Laere, Koen van [University Hospital and K.U. Leuven, Nuclear Medicine, Leuven (Belgium); Varrone, Andrea [Karolinska University Hospital, Department of Clinical Neuroscience, Centre for Psychiatry Research, Karolinska Institutet, Stockholm (Sweden); Iida, Hidehiro [National Cerebral and Cardiovascular Center - Research Institute, Osaka (Japan)

    2016-07-15

    Quantitative estimates of dopamine transporter availability, determined with [{sup 123}I]FP-CIT SPECT, depend on the SPECT equipment, including both hardware and (reconstruction) software, which limits their use in multicentre research and clinical routine. This study tested a dedicated reconstruction algorithm for its ability to reduce camera-specific intersubject variability in [{sup 123}I]FP-CIT SPECT. The secondary aim was to evaluate binding in whole brain (excluding striatum) as a reference for quantitative analysis. Of 73 healthy subjects from the European Normal Control Database of [{sup 123}I]FP-CIT recruited at six centres, 70 aged between 20 and 82 years were included. SPECT images were reconstructed using the QSPECT software package which provides fully automated detection of the outer contour of the head, camera-specific correction for scatter and septal penetration by transmission-dependent convolution subtraction, iterative OSEM reconstruction including attenuation correction, and camera-specific ''to kBq/ml'' calibration. LINK and HERMES reconstruction were used for head-to-head comparison. The specific striatal [{sup 123}I]FP-CIT binding ratio (SBR) was computed using the Southampton method with binding in the whole brain, occipital cortex or cerebellum as the reference. The correlation between SBR and age was used as the primary quality measure. The fraction of SBR variability explained by age was highest (1) with QSPECT, independently of the reference region, and (2) with whole brain as the reference, independently of the reconstruction algorithm. QSPECT reconstruction appears to be useful for reduction of camera-specific intersubject variability of [{sup 123}I]FP-CIT SPECT in multisite and single-site multicamera settings. Whole brain excluding striatal binding as the reference provides more stable quantitative estimates than occipital or cerebellar binding. (orig.)

  11. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    Main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low level high resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  12. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen

  13. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a simplified setup as compared with the state of engineering is described, which permits, apart from good localization, also energy discrimination. Behind the usual vacuum image amplifier a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localizing of the signals is achieved by a delay line, energy determination by means of a pulse height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de

  14. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes so that the phototubes can be positioned as close to the scintillator as is possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras

  15. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distance between the individual photomultiplier tubes of the multiple radioisotope camera on the one hand and between the tube configuration and the scintillator plate on the other. For this purpose the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG) [de

  16. Path Expressions

    Science.gov (United States)

    1975-06-01

    Traditionally, synchronization of concurrent processes is coded in line by operations on semaphores or similar objects. Path expressions move the ... discussion about a variety of synchronization primitives. An analysis of their relative power is found in [3]. Path expressions do not introduce yet another synchronization primitive. A path expression relates to such primitives as a for- or while-statement of an ALGOL-like language relates to a JUMP

  17. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
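
    The rotation invariance exploited here is that of the visual angle between two projection rays of a calibrated camera, which can be computed directly from pixel coordinates and the intrinsic matrix. A minimal sketch of that quantity (not the dissertation's full estimator):

      import numpy as np

      def visual_angle(p1, p2, K):
          """Angle between the projection rays of two image points of a calibrated camera.

          p1, p2 : (2,) pixel coordinates; K : (3, 3) intrinsic matrix.
          This angle is invariant to camera rotation, which is what makes it usable
          for isolating the translation direction.
          """
          Kinv = np.linalg.inv(K)
          r1 = Kinv @ np.array([p1[0], p1[1], 1.0])
          r2 = Kinv @ np.array([p2[0], p2[1], 1.0])
          c = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
          return np.arccos(np.clip(c, -1.0, 1.0))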

  18. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which obviates the distortion of locating signals generally caused by the varied light-conductive capacities of the light conductors, in that the flow of light through each light conductor may be varied by means of a shutter. The flow of light through the individual light conductors can thus be balanced on the basis of their light-conductive capacities or properties, so as to preclude a distortion of the locating signals caused by these variations. Each light conductor has associated therewith two shutters which are independently adjustable relative to each other, of which one forms a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be carried out on the basis of the output signals of the photoelectric transducer. (auth)

  19. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera makes pictures of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computer circuits to obtain, at the outputs of the photomultipliers, an analytical function that depends on the instantaneous position of the scintillations in the crystal. The scintillation crystal is flat and spatially corresponds to the production site of the radiation. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, and to each series group belongs a reference axis running perpendicular to it in the crystal plane. The computer circuits are each assigned to a reference axis. For each series of the series group assigned to one of the reference axes, the computer circuit has an adder to produce a scintillation-dependent series signal. Furthermore, the projection of the scintillation onto this reference axis is calculated. A series signal is used for this which originates from a series chosen from two neighbouring photomultiplier series of this group; the scintillation must have appeared between these chosen series, which are termed basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH)

  20. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction for single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this function we put a driving device such as a piezoelectric ceramic in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, which reflects the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences carry different redundant information and some particular prior information, so it is possible to restore a super-resolution image faithfully and effectively. A sampling argument is used to derive the reconstruction principle of super resolution and to analyze the possible degree of resolution improvement in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements; it models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. The results of reconstruction from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.
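
    The paper's reconstruction relies on a learning-based method and a variational Bayesian model. As a much simpler, hedged illustration of why shifted LR frames carry extra information, the sketch below fuses frames with known sub-pixel displacements onto a finer grid (plain shift-and-add with nearest-neighbour placement). The function name, the assumption of known shifts and the rounding strategy are choices made for this example, not details of the paper.

```python
# Hedged sketch: shift-and-add fusion of low-resolution frames with known sub-pixel
# displacements onto a finer grid. Not the variational Bayesian method of the paper;
# it only shows how redundant, shifted LR frames can be combined on a HR grid.
import numpy as np

def shift_and_add(lr_frames, shifts, factor):
    """lr_frames: list of (h, w) arrays; shifts: (dy, dx) per frame in HR pixels; factor: upsampling."""
    h, w = lr_frames[0].shape
    H, W = h * factor, w * factor
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # place each LR sample at its (rounded) location on the HR grid
        Y = np.clip(np.round(ys * factor + dy).astype(int), 0, H - 1)
        X = np.clip(np.round(xs * factor + dx).astype(int), 0, W - 1)
        np.add.at(acc, (Y, X), frame)
        np.add.at(cnt, (Y, X), 1.0)
    cnt[cnt == 0] = 1.0
    return acc / cnt
```

    With an upsampling factor of 2 and a handful of frames whose displacements cover distinct sub-pixel offsets, the fused grid recovers detail that no single LR frame contains; the Bayesian method in the paper additionally estimates the shifts and regularizes the result.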

  1. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the bidimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the size of the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worth exploring. The DSLR cameras are a wide, commercial, and available option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in the DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented and a theoretical deduction for the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  2. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object being in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked continuously over long distances or across multiple cameras, applicable, for example, in wireless sensor networks for surveillance or navigation.

  3. Path Dependence

    DEFF Research Database (Denmark)

    Madsen, Mogens Ove

    The concept of Path Dependence was originally developed within New Institutional Economics by, among others, David, Arthur and North. The concept has spread widely across the social sciences and has undergone a development of its own. This paper argues that the development of the concept has been so extensive that one can now speak of a first and a second generation of the Path Dependence concept. The most recent development of the concept is relevant for the methodology discussions in relation to Keynes...

  4. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions. The robot is a device that can be reprogrammed according to user needs. Wireless networks for remote monitoring can be used to build a robot whose movement can be monitored, followed on a blueprint, and whose chosen path can be traced. This information is sent over a wireless network. For vision, the robot uses a high-resolution camera so that the operator can control the robot and see the surrounding circumstances.

  5. Unified framework for recognition, localization and mapping using wearable cameras.

    Science.gov (United States)

    Vázquez-Martín, Ricardo; Bandera, Antonio

    2012-08-01

    Monocular approaches to simultaneous localization and mapping (SLAM) have recently addressed with success the challenging problem of the fast computation of dense reconstructions from a single, moving camera. Thus, while these approaches initially relied on the detection of a reduced set of interest points to estimate the camera position and the map, they are currently able to reconstruct dense maps from a handheld camera while the camera coordinates are simultaneously computed. However, these maps of 3-dimensional points usually remain meaningless, that is, with no memorable items and without providing a way of encoding spatial relationships between objects and paths. In humans and mobile robotics, landmarks play a key role in the internalization of a spatial representation of an environment. They are memorable cues that can serve to define a region of the space or the location of other objects. In a topological representation of the space, landmarks can be identified and located according to their structural, perceptive or semantic significance and distinctiveness. On the other hand, landmarks may be difficult to locate in a metric representation of the space. Restricted to the domain of visual landmarks, this work describes an approach where the map resulting from a point-based, monocular SLAM is annotated with the semantic information provided by a set of distinguished landmarks. Both features are obtained from the image. Hence, they can be linked by associating to each landmark all those point-based features that are superimposed on the landmark in a given image (key-frame). Visual landmarks are obtained by means of an object-based, bottom-up attention mechanism, which extracts from the image a set of proto-objects. These proto-objects may not always be associated with natural objects, but they will typically constitute significant parts of these scene objects and can be appropriately annotated with semantic information. Moreover, they will be

  6. Path Creation

    DEFF Research Database (Denmark)

    Karnøe, Peter; Garud, Raghu

    2012-01-01

    This paper employs path creation as a lens to follow the emergence of the Danish wind turbine cluster. Supplier competencies, regulations, user preferences and a market for wind power did not pre-exist; all had to emerge in a transformative manner involving multiple actors and artefacts. Competencies emerged through processes and mechanisms such as co-creation that implicated multiple learning processes. The process was not an orderly linear one, as emergent contingencies influenced the learning processes. An implication is that public policy to catalyse clusters cannot be based...

  7. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
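
    For readers unfamiliar with EM-type reconstruction, the following is a hedged sketch of a plain ML-EM update for Poisson emission data, the kind of iteration that the accelerated EM and MAP-EM algorithms above build on. It is not the bin-mode estimator of the paper; the dense system matrix T and all names are illustrative.

```python
# Hedged sketch of a plain ML-EM update for emission tomography. T is the system
# matrix (probability that an emission in voxel j is recorded in data bin i) and
# y the measured counts. Generic illustration only, not the paper's BME/MAP methods.
import numpy as np

def mlem(T, y, n_iter=50, eps=1e-12):
    """Return the ML estimate of voxel intensities lam from counts y ~ Poisson(T @ lam)."""
    n_bins, n_vox = T.shape
    lam = np.ones(n_vox)              # flat initial image
    sens = T.sum(axis=0) + eps        # sensitivity of each voxel
    for _ in range(n_iter):
        proj = T @ lam + eps          # expected counts in each data bin
        lam *= (T.T @ (y / proj)) / sens
    return lam
```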

  8. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  9. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  10. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically generate...

  11. CCD camera system for use with a streamer chamber

    International Nuclear Information System (INIS)

    Angius, S.A.; Au, R.; Crawley, G.C.; Djalali, C.; Fox, R.; Maier, M.; Ogilvie, C.A.; Molen, A. van der; Westfall, G.D.; Tickle, R.S.

    1988-01-01

    A system based on three charge-coupled-device (CCD) cameras is described here. It has been used to acquire images from a streamer chamber and consists of three identical subsystems, one for each camera. Each subsystem contains an optical lens, CCD camera head, camera controller, an interface between the CCD and a microprocessor, and a link to a minicomputer for data recording and on-line analysis. Image analysis techniques have been developed to enhance the quality of the particle tracks. Some steps have been made to automatically identify tracks and reconstruct the event. (orig.)

  12. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the camera's field of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  13. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  14. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  15. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  16. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    Full Text Available The Scheimpflug camera offers a wide range of applications in the fields of typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because the depth of field of a Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet, conventional calibration methods are not applicable in this case because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods of Scheimpflug cameras. This paper presents a survey of recent calibration methods of Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, PCIM, and GNIM are basically equal, while the accuracy of GNIM is slightly lower compared with the other three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  17. Image Reconstruction. Chapter 13

    Energy Technology Data Exchange (ETDEWEB)

    Nuyts, J. [Department of Nuclear Medicine and Medical Imaging Research Center, Katholieke Universiteit Leuven, Leuven (Belgium); Matej, S. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA (United States)

    2014-12-15

    This chapter discusses how 2‑D or 3‑D images of tracer distribution can be reconstructed from a series of so-called projection images acquired with a gamma camera or a positron emission tomography (PET) system [13.1]. This is often called an ‘inverse problem’. The reconstruction is the inverse of the acquisition. The reconstruction is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the ‘forward’ direction, i.e. making software to simulate the acquisition. There are basically two approaches to image reconstruction: analytical reconstruction and iterative reconstruction. The analytical approach is based on mathematical inversion, yielding efficient, non-iterative reconstruction algorithms. In the iterative approach, the reconstruction problem is reduced to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative instead of mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.
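
    As a concrete illustration of the analytical route described above, the sketch below implements a bare-bones parallel-beam filtered back projection: ramp filtering of each projection in the Fourier domain followed by back projection with nearest-neighbour interpolation. It is a didactic sketch under simplified assumptions (equally spaced angles, no windowing of the ramp filter), not the reconstruction used by any particular gamma camera or PET system.

```python
# Hedged sketch of parallel-beam filtered back projection (FBP):
# ramp filter in the Fourier domain, then back projection over all angles.
import numpy as np

def fbp(sinogram, angles_deg):
    """sinogram: (n_angles, n_det) array of projections; returns an (n_det, n_det) image."""
    n_angles, n_det = sinogram.shape
    # Ramp-filter each projection row in the Fourier domain
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
    # Back project the filtered projections
    recon = np.zeros((n_det, n_det))
    mid = (n_det - 1) / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - mid
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = x * np.cos(theta) + y * np.sin(theta) + mid   # detector coordinate hit by each pixel
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon * np.pi / (2 * n_angles)
```

    Iterative methods such as ML-EM replace this closed-form inversion with repeated forward and back projections through an explicit system model, at higher computational cost but with more flexibility in modelling the acquisition.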

  18. A SPECT demonstrator—revival of a gamma camera

    Science.gov (United States)

    Valastyán, I.; Kerek, A.; Molnár, J.; Novák, D.; Végh, J.; Emri, M.; Trón, L.

    2006-07-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only part of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied.

  19. A SPECT demonstrator-revival of a gamma camera

    International Nuclear Information System (INIS)

    Valastyan, I.; Kerek, A.; Molnar, J.; Novak, D.; Vegh, J.; Emri, M.; Tron, L.

    2006-01-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only part of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied

  20. A cosmic ray muon going through CMS with the magnet at full field. The line shows the path of the muon reconstructed from information recorded in the various detectors.

    CERN Multimedia

    Ianna, Osborne

    2007-01-01

    The event display of the event 3981 from the MTCC run 2605. The data has been taken with a magnetic field of 3.8 T. A detailed model of the magnetic field corresponding to 4T is shown as a color gradient from 4T in the center (red) to 0 T outside of the detector (blue). The cosmic muon has been detected by all four detectors participating in the run: the drift tubes, the HCAL, the tracker and the ECAL subdetectors and it has been reconstructed online. The event display shows the reconstructed 4D segments in the drift tubes (magenta), the reconstructed hits in HCAL (blue), the locally reconstructed track in the tracker (green), the uncalibrated rec hits in ECAL (light green). A muon track was reconstructed in the drift tubes and extrapolated back into the detector taking the magnetic field into account (green).

  1. Structured light optical microscopy for three-dimensional reconstruction of technical surfaces

    Science.gov (United States)

    Kettel, Johannes; Reinecke, Holger; Müller, Claas

    2016-04-01

    In microsystems technology, quality control of micro-structured surfaces with different surface properties is playing an ever more important role. The process of quality control incorporates three-dimensional (3D) reconstruction of specularly and diffusively reflecting technical surfaces. Due to the demand for high measurement accuracy and data acquisition rates, structured light optical microscopy has become a valuable solution to this problem, providing high vertical and lateral resolution. However, 3D reconstruction of specularly reflecting technical surfaces still remains a challenge for optical measurement principles. In this paper we present a measurement principle based on structured light optical microscopy which enables 3D reconstruction of specularly and diffusively reflecting technical surfaces. It is realized using the two light paths of a stereo microscope equipped with different magnification levels. The right optical path of the stereo microscope is used to project structured light onto the object surface. The left optical path is used to capture the structured illuminated object surface with a camera. Structured light patterns are generated by a Digital Light Processing (DLP) device in combination with a high power Light Emitting Diode (LED). Structured light patterns are realized as a matrix of discrete light spots to illuminate defined areas on the object surface. The introduced measurement principle is based on multiple point measurements processed in parallel. Analysis of the measured Point Spread Function (PSF) by pattern recognition and model fitting algorithms enables the precise calculation of 3D coordinates. Using exemplary technical surfaces we demonstrate the successful application of our measurement principle.
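
    One step described above is the sub-pixel localisation of each projected light spot by fitting a model to the measured point spread function. The sketch below shows one common way to do this with a 2-D Gaussian model and scipy's curve_fit; the model choice, patch handling and initial guess are assumptions for the example, not the authors' fitting procedure.

```python
# Hedged sketch: sub-pixel spot localisation by fitting a 2-D Gaussian to a small
# image patch containing one projected light spot. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset).ravel()

def fit_spot(patch):
    """Return the (x0, y0) sub-pixel centre of the brightest spot in a small patch."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    p0 = (patch.max() - patch.min(), w / 2.0, h / 2.0, 2.0, patch.min())
    popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
    return popt[1], popt[2]
```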

  2. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  3. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest-growing consumer markets today is camera phones. During the past few years total volume has been growing fast, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels is affecting both. On the other hand, reliability and miniaturization are key drivers for product development, as well as the cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper trade-offs related to optics and their effects on image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  4. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method has simple operation and good flexibility, especially for onsite multiple cameras without a common field of view.

  5. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  6. Path Creation, Path Dependence and Breaking Away from the Path

    OpenAIRE

    Wang, Jens; Hedman, Jonas; Tuunainen, Virpi Kristiina

    2016-01-01

    The explanation of how and why firms succeed or fail is a recurrent research challenge. This is particularly important in the context of technological innovations. We focus on the role of historical events and decisions in explaining such success and failure. Using a case study of Nokia, we develop and extend a multi-layer path dependence framework. We identify four layers of path dependence: technical, strategic and leadership, organizational, and external collaboration. We show how path dep...

  7. Reconstruction of 3D PIV data in complicated experimental arrangements

    Directory of Open Access Journals (Sweden)

    Pavlík David

    2017-01-01

    Full Text Available In this paper a three-dimensional reconstruction of the flow field behind a flat plate representing a wing is presented. The reconstruction is always performed for a pair of 2D vector maps obtained by 3D PIV with two cameras which record the measurement area from different locations. Three-dimensional reconstruction can be obtained in various ways. This paper summarizes two: reconstruction based on known correspondences and reconstruction based on knowledge of the intrinsic and extrinsic parameters of the cameras. The methods can be used in cases when it is impossible to use a calibration pattern or when reconstruction by commercial software fails.

  8. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    ​In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport). ​
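
    The paper formulates placement as a convex binary quadratic program and notes that its solution is almost as fast as a greedy treatment while being of much higher quality. For orientation, the sketch below shows what such a greedy baseline looks like for plain coverage maximisation; the coverage sets are assumed to be precomputed from a visibility analysis, and this is not the BQP method itself.

```python
# Hedged sketch of a greedy coverage baseline for camera placement: repeatedly add
# the candidate camera pose that covers the most not-yet-covered important locations.
# `coverage[i]` is the set of location ids visible from candidate pose i (assumed given).
def greedy_placement(coverage, budget):
    """coverage: list of sets of location ids; budget: number of cameras to place."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(range(len(coverage)),
                   key=lambda i: len(coverage[i] - covered) if i not in chosen else -1)
        if not coverage[best] - covered:     # no pose adds new coverage
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```

    The BQP formulation in the paper goes further by also rewarding pairs of cameras that view the same important location from different directions, which a purely greedy coverage objective cannot express.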

  9. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  10. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce better images than the conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  11. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins

  12. Holographic interferometry using a digital photo-camera

    International Nuclear Information System (INIS)

    Sekanina, H.; Hledik, S.

    2001-01-01

    The possibilities of running digital holographic interferometry using commonly available compact digital zoom photo-cameras are studied. The recently developed holographic setup, suitable especially for digital photo-cameras equipped with a non-detachable objective lens, is used. The method described enables a simple and straightforward way of both recording and reconstructing digital holographic interferograms. The feasibility of the new method is verified by digital reconstruction of the acquired interferograms, using a numerical code based on the fast Fourier transform. Experimental results obtained are presented and discussed. (authors)

  13. Feynman's path integrals and Bohm's particle paths

    International Nuclear Information System (INIS)

    Tumulka, Roderich

    2005-01-01

    Both Bohmian mechanics, a version of quantum mechanics with trajectories, and Feynman's path integral formalism have something to do with particle paths in space and time. The question thus arises how the two ideas relate to each other. In short, the answer is, path integrals provide a re-formulation of Schroedinger's equation, which is half of the defining equations of Bohmian mechanics. I try to give a clear and concise description of the various aspects of the situation. (letters and comments)

  14. Path coupling and aggregate path coupling

    CERN Document Server

    Kovchegov, Yevgeniy

    2018-01-01

    This book describes and characterizes an extension to the classical path coupling method applied to statistical mechanical models, referred to as aggregate path coupling. In conjunction with large deviations estimates, the aggregate path coupling method is used to prove rapid mixing of Glauber dynamics for a large class of statistical mechanical models, including models that exhibit discontinuous phase transitions which have traditionally been more difficult to analyze rigorously. The book shows how the parameter regions for rapid mixing for several classes of statistical mechanical models are derived using the aggregate path coupling method.

  15. A space-time tomography algorithm for the five-camera soft X-ray diagnostic at RTP

    Energy Technology Data Exchange (ETDEWEB)

    Lyadina, E.S.; Tanzi, C.P.; Cruz, D.F. da; Donne, A.J.H. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands)

    1993-12-31

    A five-camera soft X-ray diagnostic with 80 detector channels has been installed on the RTP tokamak with the object of studying MHD processes with a relatively high poloidal mode number (m=4). Numerical tomographic reconstruction algorithms used to reconstruct the plasma emissivity profile are constrained by the characteristics of the system. In particular, high poloidal harmonics, which can be resolved due to the high number of cameras, can be strongly distorted by stochastic and systematic errors. Furthermore, small uncertainties in the relative position of the cameras in a multiple camera system can lead to strong artefacts in the reconstruction. (author) 6 refs., 4 figs.

  16. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10⁶-10⁸ rad is developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, pan/tilt) were designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  17. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10⁶-10⁸ rad is developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, pan/tilt) were designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  18. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  19. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes thereby requiring less photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the costs of the positron camera and improves its performance

  20. Collimator trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    Jaszczak, R.J.

    1977-01-01

    A collimator is provided for a scintillation camera system in which a detector precesses in an orbit about a patient. The collimator is designed to have high resolution and lower sensitivity with respect to radiation traveling in paths lying wholly within planes perpendicular to the cranial-caudal axis of the patient. The collimator has high sensitivity and lower resolution for radiation traveling in other planes. Variations in resolution and sensitivity are achieved by altering the length, spacing or thickness of the septa of the collimator

  1. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual...

  2. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people 6 years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  3. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. Calibration research was conducted using a non-metric camera, different calibration test fields and various software. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras or a description of different optical distortions. The second part of the paper describes the camera calibration process, with details of the calibration methods and models that have been used. The Sony Nex 5 camera calibration has been done using the software Image Master Calib, Matlab - Camera Calibrator application and Agisoft Lens. For the study 2D test fields have been used. As part of the research a comparative analysis of the results has been done.
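
    The calibrations above were run with Image Master Calib, the Matlab Camera Calibrator and Agisoft Lens. As a hedged, generic alternative illustrating the same chessboard-based workflow, the sketch below uses OpenCV; the 9x6 board size, unit square size and image folder are assumptions for the example, not details from the paper.

```python
# Hedged sketch: interior orientation (camera matrix and distortion) of a non-metric
# camera from chessboard images, using OpenCV's standard calibration pipeline.
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners of the assumed chessboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)   # square size = 1 unit

obj_points, img_points, shape = [], [], None
for path in glob.glob("calib/*.jpg"):            # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        shape = gray.shape[::-1]

# Returns the RMS reprojection error, camera matrix, distortion coefficients and per-image poses
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, shape, None, None)
print("RMS reprojection error:", rms)
```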

  4. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets well demonstrate the efficacy of our method over state-of-the-art methods.

  5. Depth profile measurement with lenslet images of the plenoptic camera

    Science.gov (United States)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, without needing to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.

  6. The Compton Camera - medical imaging with higher sensitivity Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    The Compton Camera reconstructs the origin of Compton-scattered X-rays using electronic collimation with Silicon pad detectors instead of the heavy conventional lead collimators in Anger cameras - reaching up to 200 times better sensitivity and a factor of two improvement in resolution. Possible applications are in cancer diagnosis, neurology, neurobiology, and cardiology.

  7. Fractional path planning and path tracking

    International Nuclear Information System (INIS)

    Melchior, P.; Jallouli-Khlif, R.; Metoui, B.

    2011-01-01

    This paper presents the main results of the application of the fractional approach to path planning and path tracking. A new robust path planning design for mobile robots was studied in a dynamic environment. The normalized attractive force applied to the robot is based on a fictitious fractional attractive potential. This method allows robust path planning to be obtained despite robot mass variations. The danger level of each obstacle is characterized by the fractional order of the repulsive potential of the obstacle. Under these conditions, the robot dynamic behavior was studied by analyzing its X - Y path planning with a dynamic target or dynamic obstacles. The case of simultaneously mobile obstacles and target is also considered. The influence of the robot mass variation is studied, and the robustness analysis of the obtained path shows the robustness improvement due to the non-integer order properties. A pre-shaping approach is used to reduce system vibration in motion control. Desired system inputs are altered so that the system finishes the requested move without residual vibration. This technique, developed by N.C. Singer and W.P. Seering, is used for flexible structure control, particularly in the aerospace field. In a previous work, this method was extended to explicit fractional derivative systems and applied to second generation CRONE control; the robustness was also studied. CRONE (the French acronym of Commande Robuste d'Ordre Non Entier) control system design is a frequency-domain based methodology using complex fractional integration.
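
    To make the potential-field idea above concrete, the sketch below runs gradient descent on a potential whose attractive term uses a non-integer exponent on the distance to the target and whose obstacles contribute inverse-square repulsive terms. It is only an illustration of the idea; it is not the paper's fictitious fractional attractive potential, CRONE design or pre-shaping scheme, and all gains and test coordinates are invented.

```python
# Hedged sketch: potential-field path planning with a non-integer exponent `alpha`
# on the attractive term. Gradient descent on
#   U(x) = k_att * |x - goal|^alpha + sum_i k_rep / |x - obs_i|^2
import numpy as np

def plan(start, goal, obstacles, alpha=1.5, k_att=1.0, k_rep=50.0, step=0.05, n_max=2000):
    x = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    path = [x.copy()]
    for _ in range(n_max):
        d_goal = x - goal
        grad = k_att * alpha * np.linalg.norm(d_goal) ** (alpha - 2) * d_goal
        for obs in obstacles:
            d = x - np.asarray(obs, dtype=float)
            grad += -2.0 * k_rep * d / (np.linalg.norm(d) ** 4 + 1e-9)   # repulsive gradient
        x = x - step * grad / (np.linalg.norm(grad) + 1e-9)              # normalized descent step
        path.append(x.copy())
        if np.linalg.norm(x - goal) < 0.1:
            break
    return np.array(path)

route = plan(start=(0.0, 0.0), goal=(5.0, 5.0), obstacles=[(2.5, 2.6)])
```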

  8. Realistic camera noise modeling with application to improved HDR synthesis

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Aelterman, Jan; Pižurica, Aleksandra; Philips, Wilfried

    2012-12-01

    Due to the ongoing miniaturization of digital camera sensors and the steady increase of the "number of megapixels", individual sensor elements of the camera become more sensitive to noise, even deteriorating the final image quality. To get around this problem, sophisticated processing algorithms in the devices can help to maximally exploit the knowledge of the sensor characteristics (e.g., in terms of noise), and offer a better image reconstruction. Although a lot of research focuses on rather simplistic noise models, such as stationary additive white Gaussian noise, only limited attention has gone to more realistic digital camera noise models. In this article, we first present a digital camera noise model that takes several processing steps in the camera into account, such as sensor signal amplification, clipping and post-processing. We then apply this noise model to the reconstruction problem of high dynamic range (HDR) images from a small set of low dynamic range (LDR) exposures of a static scene. In the literature, HDR reconstruction is mostly performed by computing a weighted average, in which the weights are directly related to the observed pixel intensities of the LDR image. In this work, we derive a Bayesian probabilistic formulation of a weighting function that is near-optimal in the MSE sense (or SNR sense) of the reconstructed HDR image, by assuming exponentially distributed irradiance values. We define the weighting function as the probability that the observed pixel intensity is approximately unbiased. The weighting function can be directly computed based on the noise model parameters, which gives rise to different symmetric and asymmetric shapes when electronic noise or photon noise is dominant. We also explain how to deal with the case that some of the noise model parameters are unknown and explain how the camera response function can be estimated using the presented noise model. Finally, experimental results are provided to support our findings.
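
    The weighted-average merge that the paper refines can be sketched as follows. The simple hat-shaped weight here merely de-emphasises pixels near the clipping limits and stands in for the noise-model-derived, near-MSE-optimal weights derived in the paper; linear (gamma-free) images and the clipping thresholds are assumptions of the example.

```python
# Hedged sketch of a weighted-average HDR merge: each LDR exposure contributes the
# irradiance estimate z / t with a weight that falls off near the clipping limits.
import numpy as np

def merge_hdr(images, exposure_times, z_min=0.02, z_max=0.98):
    """images: list of linear float arrays in [0, 1]; exposure_times: matching list of seconds."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for z, t in zip(images, exposure_times):
        # hat weight: 1 in mid-range, 0 at the under/over-exposure thresholds
        w = np.clip(np.minimum(z - z_min, z_max - z) / (0.5 * (z_max - z_min)), 0.0, 1.0)
        num += w * z / t          # weighted irradiance estimate from this exposure
        den += w
    return num / np.maximum(den, 1e-9)
```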

  9. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides a new potential. The paper presents two applications of immersive video in photogrammetry. The first is the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  10. Photometric Lunar Surface Reconstruction

    Science.gov (United States)

    Nefian, Ara V.; Alexandrov, Oleg; Morattlo, Zachary; Kim, Taemin; Beyer, Ross A.

    2013-01-01

    Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.
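    For context on the reflectance model mentioned above, the sketch below evaluates one commonly cited form of the Lunar-Lambertian (McEwen) model, a blend of Lommel-Seeliger and Lambertian terms; the exact weighting used in the paper (which in practice depends on phase angle) may differ, so treat this purely as an illustrative assumption.

        import numpy as np

        def lunar_lambertian(albedo, inc, emi, L=0.5):
            # inc and emi are incidence and emission angles in radians;
            # L blends the Lommel-Seeliger and Lambertian components.
            cos_i, cos_e = np.cos(inc), np.cos(emi)
            lommel_seeliger = 2.0 * cos_i / (cos_i + cos_e)
            return albedo * (L * lommel_seeliger + (1.0 - L) * cos_i)

        # A simulated pixel value is then roughly exposure_time * reflectance,
        # which is the kind of forward model inverted in the joint estimation.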

  11. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
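    The coordinate transformation and sub-pixel interpolation step can be pictured with the sketch below, which resamples a (rings x sectors) retina-like image onto a Cartesian grid using bilinear interpolation; a log-polar pixel layout is assumed here purely for illustration, and the actual sensor geometry and software stack (MIL/OpenCV in VC++) differ.

        import numpy as np

        def logpolar_to_cartesian(retina_img, out_size, r_min=1.0, r_max=None):
            # retina_img: rings x sectors array; output: out_size x out_size image.
            rings, sectors = retina_img.shape
            if r_max is None:
                r_max = out_size / 2.0
            ys, xs = np.mgrid[0:out_size, 0:out_size]
            dx, dy = xs - out_size / 2.0, ys - out_size / 2.0
            r = np.hypot(dx, dy).clip(r_min, r_max)
            theta = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)
            # Fractional (ring, sector) coordinates for every output pixel.
            ring = np.log(r / r_min) / np.log(r_max / r_min) * (rings - 1)
            sect = theta / (2.0 * np.pi) * sectors
            r0 = np.clip(np.floor(ring).astype(int), 0, rings - 2)
            s0 = np.floor(sect).astype(int) % sectors
            s1 = (s0 + 1) % sectors
            fr = np.clip(ring - r0, 0.0, 1.0)
            fs = sect - np.floor(sect)
            top = (1 - fs) * retina_img[r0, s0] + fs * retina_img[r0, s1]
            bot = (1 - fs) * retina_img[r0 + 1, s0] + fs * retina_img[r0 + 1, s1]
            return (1 - fr) * top + fr * bot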

  12. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduce the costs of the positron camera and improve its performance.

  13. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  14. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R ~ 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40,500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulent codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor

  15. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  16. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  17. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    Keywords: polarimetric camera, remote sensing, space systems. Data collections took place at Hermann Hall, Monterey, CA, and on 01 December 2016 at 1226 PST on the rooftop of the Marriott Hotel.

  18. Camera calibration based on the back projection process

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
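    A minimal sketch of the back-projection idea (scoring a calibration in 3D rather than by 2D reprojection error) is given below: detected checkerboard corners are back-projected onto the known board plane and compared with the ideal corner coordinates. The helper names are assumptions for illustration; the paper's refinement additionally feeds this error into a non-linear minimization.

        import numpy as np
        import cv2

        def backprojection_rmse(obj_pts, img_pts, K, dist, rvec, tvec):
            # obj_pts: N x 3 ideal corners on the board plane (Z = 0);
            # img_pts: N x 2 detected corners; K, dist, rvec, tvec from calibration.
            R, _ = cv2.Rodrigues(rvec)
            rays = cv2.undistortPoints(
                img_pts.reshape(-1, 1, 2).astype(np.float32), K, dist).reshape(-1, 2)
            rays = np.hstack([rays, np.ones((len(rays), 1))])  # camera-frame rays
            cam_center = -R.T @ tvec.reshape(3)                # in board coordinates
            errors = []
            for ray, obj in zip(rays, obj_pts):
                d = R.T @ ray                        # ray direction in board frame
                s = -cam_center[2] / d[2]            # intersection with plane Z = 0
                errors.append(np.linalg.norm(cam_center + s * d - obj))
            return float(np.sqrt(np.mean(np.square(errors))))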

  19. Face Liveness Detection Using a Light Field Camera

    Directory of Open Access Journals (Sweden)

    Sooyeon Kim

    2014-11-01

    Full Text Available A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized, from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending against face spoofing attacks, such as printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from a conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves at least 94.78% accuracy, and up to 99.36% accuracy, under different types of spoofing attacks.

  20. Dynamic imaging with coincidence gamma camera

    International Nuclear Information System (INIS)

    Elhmassi, Ahmed

    2008-01-01

    In this paper we develop a technique to calculate dynamic parameters from data acquired using gamma-camera PET (gc PET). Our method is based on an algorithm developed for dynamic SPECT, which processes all of the projection data simultaneously instead of reconstructing a series of static images individually. The algorithm was modified to account for the extra data obtained with gc PET (compared with SPECT). The method was tested using simulated projection data for both a SPECT and a gc PET geometry. These studies showed the ability of the code to reconstruct simulated data over a range of half-lives. The accuracy of the algorithm was measured in terms of the reconstructed half-life and initial activity for the simulated object. The reconstruction of gc PET data showed improvements in half-life and activity of 23% and 20%, respectively, compared to SPECT data (at 50 iterations). The gc PET algorithm was also tested using data from an experimental phantom and, finally, applied to a clinical dataset, where the algorithm was further modified to deal with the situation in which the activity in certain pixels decreases and then increases during the acquisition. (author)
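    To make the notion of a reconstructed half-life and initial activity concrete, the toy sketch below fits A(t) = A0 * exp(-lambda * t) to a time-activity curve by a log-linear least-squares fit. This is only an illustration; in the paper these parameters are estimated per pixel inside the iterative reconstruction, not from a pre-reconstructed curve.

        import numpy as np

        def fit_halflife(times, activities):
            # Returns (A0, half-life) from samples of a mono-exponential decay.
            t = np.asarray(times, dtype=float)
            log_a = np.log(np.asarray(activities, dtype=float))
            slope, intercept = np.polyfit(t, log_a, 1)
            return np.exp(intercept), np.log(2.0) / -slope

        # Example: samples from A0 = 100 with a 6-hour half-life.
        t = np.linspace(0.0, 12.0, 13)
        a0, t_half = fit_halflife(t, 100.0 * np.exp(-np.log(2.0) / 6.0 * t))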

  1. Resolution recovery for Compton camera using origin ensemble algorithm.

    Science.gov (United States)

    Andreyev, A; Celler, A; Ozsahin, I; Sitek, A

    2016-08-01

    Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions

  2. Tomographic image reconstruction using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Paschalis, P.; Giokaris, N.D.; Karabarbounis, A.; Loudos, G.K.; Maintas, D.; Papanicolas, C.N.; Spanoudaki, V.; Tsoumpas, Ch.; Stiliaris, E.

    2004-01-01

    A new image reconstruction technique based on the usage of an Artificial Neural Network (ANN) is presented. The most crucial factor in designing such a reconstruction system is the network architecture and the number of the input projections needed to reconstruct the image. Although the training phase requires a large amount of input samples and a considerable CPU time, the trained network is characterized by simplicity and quick response. The performance of this ANN is tested using several image patterns. It is intended to be used together with a phantom rotating table and the γ-camera of IASA for SPECT image reconstruction
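    As a generic illustration of the idea (a network that maps projection data directly to a reconstructed image), the sketch below defines a small fully connected network in PyTorch; the layer sizes and architecture are assumptions, not the network used with the IASA gamma-camera.

        import torch
        import torch.nn as nn

        class ProjectionToImageNet(nn.Module):
            # Toy fully connected network mapping a flattened sinogram
            # (n_angles x n_bins projections) to an img_size x img_size image.
            def __init__(self, n_angles=16, n_bins=64, img_size=64):
                super().__init__()
                self.img_size = img_size
                self.net = nn.Sequential(
                    nn.Linear(n_angles * n_bins, 2048), nn.ReLU(),
                    nn.Linear(2048, 2048), nn.ReLU(),
                    nn.Linear(2048, img_size * img_size), nn.Sigmoid(),
                )

            def forward(self, sinogram):
                out = self.net(sinogram.flatten(start_dim=1))
                return out.view(-1, self.img_size, self.img_size)

        # Training would minimize, e.g., nn.MSELoss() between the network output
        # and the phantom images used to generate the training projections.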

  3. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing nonbrowning face-plate glass for ITV cameras used in nuclear power plants a long time ago. Now, in response to the increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  4. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process and obtain higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
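    A minimal OpenCV calibration sketch along these lines is shown below (the 8 x 6 inner-corner board giving 48 corners and the file paths are assumptions for illustration); cv2.calibrateCamera estimates the intrinsics together with radial and tangential (decentering) distortion coefficients.

        import glob
        import cv2
        import numpy as np

        pattern = (8, 6)                                   # 48 inner corners
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points = [], []
        for fname in glob.glob("calib/*.png"):             # hypothetical paths
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                corners = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1),
                    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
                obj_points.append(objp)
                img_points.append(corners)

        # rms is the mean reprojection error; dist holds radial and decentering terms.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)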

  5. Path-dependent functions

    International Nuclear Information System (INIS)

    Khrapko, R.I.

    1985-01-01

    A uniform description of various path-dependent functions is presented with the help of an expansion of the Taylor-series type. So-called ''path-integrals'' and ''path-tensors'' are introduced, which are systems of many-component quantities whose values are defined for arbitrary paths in a coordinated region of space in such a way that they contain complete information about the path. These constructions are considered as elementary path-dependent functions and are used instead of power monomials in the usual Taylor series. Coefficients of such an expansion are interpreted as partial derivatives dependent on the order of the differentiations, or else as nonstandard covariant derivatives called two-point derivatives. Some examples of path-dependent functions are presented. The space curvature tensor is considered, whose geometrical properties are determined by the (non-transitive) translator of parallel transport of a general type. A covariant operation leading to the ''extension'' of tensor fields is pointed out

  6. Iterated Leavitt Path Algebras

    International Nuclear Information System (INIS)

    Hazrat, R.

    2009-11-01

    Leavitt path algebras associate to directed graphs a Z-graded algebra and in their simplest form recover the Leavitt algebras L(1,k). In this note, we introduce iterated Leavitt path algebras associated to directed weighted graphs which have natural ± Z grading and in their simplest form recover the Leavitt algebras L(n,k). We also characterize Leavitt path algebras which are strongly graded. (author)

  7. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  8. Omnidirectional sparse visual path following with occlusion-robust feature tracking

    OpenAIRE

    Goedemé, Toon; Tuytelaars, Tinne; Van Gool, Luc; Vanacker, Gerolf; Nuttin, Marnix

    2005-01-01

    Goedemé T., Tuytelaars T., Van Gool L., Vanacker G., Nuttin M., ''Omnidirectional sparse visual path following with occlusion-robust feature tracking'', Proceedings 6th workshop on omnidirectional vision, camera networks and non-classical cameras, 8 pp., October 21, 2005, Beijing, China.

  9. Pulled Motzkin paths

    Energy Technology Data Exchange (ETDEWEB)

    Janse van Rensburg, E J, E-mail: rensburg@yorku.c [Department of Mathematics and Statistics, York University, Toronto, ON, M3J 1P3 (Canada)

    2010-08-20

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed paths models to their natural trinomial counterparts.

  10. Pulled Motzkin paths

    Science.gov (United States)

    Janse van Rensburg, E. J.

    2010-08-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed paths models to their natural trinomial counterparts.

  11. Pulled Motzkin paths

    International Nuclear Information System (INIS)

    Janse van Rensburg, E J

    2010-01-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed paths models to their natural trinomial counterparts.

  12. Aircraft path planning for optimal imaging using dynamic cost functions

    Science.gov (United States)

    Christie, Gordon; Chaudhry, Haseeb; Kochersberger, Kevin

    2015-05-01

    Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" for how these aircraft can best be utilized. The aircraft are becoming smaller, more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus in unmanned systems development. This paper addresses a means of aerial vehicle path planning to provide a realistic optimal path in acquiring imagery for structure from motion (SfM) reconstructions and performing radiation surveys. This method will allow SfM reconstructions to occur accurately and with minimal flight time so that the reconstructions can be executed efficiently. An assumption is made that we have 3D point cloud data available prior to the flight. A discrete set of scan lines are proposed for the given area that are scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach for image-based terrain mapping, which is able to efficiently perform a 3D reconstruction of a large area without the use of GPS data.
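    The 'discrete set of scan lines' idea can be pictured with the sketch below, which lays out boustrophedon scan lines over a rectangular area and makes a crude flight-time estimate; the paper additionally scores candidate lines by scene visibility and folds aircraft dynamics into the cost of transitions, which this illustration omits.

        import numpy as np

        def scan_lines(x_min, x_max, y_min, y_max, spacing):
            # Alternate the sweep direction on successive lines (boustrophedon).
            lines = []
            for i, y in enumerate(np.arange(y_min, y_max + spacing, spacing)):
                start, end = (x_min, y), (x_max, y)
                lines.append((start, end) if i % 2 == 0 else (end, start))
            return lines

        def total_path_time(lines, speed):
            # Segment lengths plus straight-line transitions, ignoring dynamics.
            pts = [p for seg in lines for p in seg]
            dists = [np.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:])]
            return sum(dists) / speed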

  13. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
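    As a rough illustration of how pedestrian detections can constrain intra-camera geometry (not the authors' estimator), the sketch below recovers camera height from the head and foot image coordinates of detected pedestrians under strong simplifying assumptions: a pinhole camera with zero tilt, a known principal point and a known average person height.

        import numpy as np

        def camera_height_from_pedestrians(y_feet, y_heads, person_height=1.75):
            # y_feet, y_heads: vertical image coordinates measured downward from
            # the principal point. With zero tilt, similar triangles give
            # h_cam = H * y_foot / (y_foot - y_head), independent of focal length.
            y_feet = np.asarray(y_feet, dtype=float)
            y_heads = np.asarray(y_heads, dtype=float)
            return person_height * np.median(y_feet / (y_feet - y_heads))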

  14. Multi-Dimensional Path Queries

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    1998-01-01

    We present the path-relationship model that supports multi-dimensional data modeling and querying. A path-relationship database is composed of sets of paths and sets of relationships. A path is a sequence of related elements (atoms, paths, and sets of paths). A relationship is a binary path... to create nested path structures. We present an SQL-like query language that is based on path expressions and we show how to use it to express multi-dimensional path queries that are suited for advanced data analysis in decision support environments like data warehousing environments.

  15. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    DEFF Research Database (Denmark)

    Kristoffersen, Miklas Strøm; Dueholm, Jacob Velling; Gade, Rikke

    2016-01-01

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm...

  16. Time-of-flight cameras principles, methods and applications

    CERN Document Server

    Hansard, Miles; Choi, Ouk; Horaud, Radu

    2012-01-01

    Time-of-flight (TOF) cameras provide a depth value at each pixel, from which the 3D structure of the scene can be estimated. This new type of active sensor makes it possible to go beyond traditional 2D image processing, directly to depth-based and 3D scene processing. Many computer vision and graphics applications can benefit from TOF data, including 3D reconstruction, activity and gesture recognition, motion capture and face detection. It is already possible to use multiple TOF cameras, in order to increase the scene coverage, and to combine the depth data with images from several colour cameras.

  17. Unique Path Partitions

    DEFF Research Database (Denmark)

    Bessenrodt, Christine; Olsson, Jørn Børling; Sellers, James A.

    2013-01-01

    We give a complete classification of the unique path partitions and study congruence properties of the function which enumerates such partitions.

  18. Hamiltonian path integrals

    International Nuclear Information System (INIS)

    Prokhorov, L.V.

    1982-01-01

    The properties of path integrals associated with the allowance for nonstandard terms reflecting the operator nature of the canonical variables are considered. Rules for treating such terms (''equivalence rules'') are formulated. Problems with a boundary, the behavior of path integrals under canonical transformations, and the problem of quantization of dynamical systems with constraints are considered in the framework of the method

  19. Cerebral imaging using 68Ga DTPA and the U.C.S.F. multiwire proportional chamber positron camera

    International Nuclear Information System (INIS)

    Hattner, R.S.; Lim, C.B.; Swann, S.J.; Kaufman, L.; Perez-Mendez, V.; Chu, D.; Huberty, J.P.; Price, D.C.; Wilson, C.B.

    1975-12-01

    A multiwire proportional chamber positron camera consisting of four 48 x 48 cm² detectors linked to a small digital computer has been designed, constructed, and characterized. Initial clinical application to brain imaging using 68Ga DTPA in 10 patients with brain tumors is described. Tomographic image reconstruction is accomplished by an algorithm determining the intersection of the annihilation photon paths in planes of interest. Final image processing utilizes uniformity correction, simple thresholding, and smoothing. The positron brain images were compared to conventional scintillation brain scans and x-ray computerized axial tomograms (CAT) in each case. The positron studies have shown significant mitigation of confusing superficial activity resulting from craniotomy in comparison to conventional brain scans. Central necrosis of lesions observed in the positron images, but not in the conventional scans, has been confirmed in CAT. Modifications of the camera are being implemented to improve image quality, and these changes combined with the tomography inherent in the positron scans are anticipated to result in images superior in information content to conventional brain scans

  20. Vision-based path following using the 1D trifocal tensor

    CSIR Research Space (South Africa)

    Sabatta, D

    2013-05-01

    Full Text Available In this paper we present a vision-based path following algorithm for a non-holonomic wheeled platform capable of keeping the vehicle on a desired path using only a single camera. The algorithm is suitable for teach and replay or leader...

  1. 3D reconstruction based on light field images

    Science.gov (United States)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum camera. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure from motion (SfM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required by traditional cameras. This can effectively address the time-consuming, laborious nature of 3D reconstruction based on traditional digital cameras and achieve a more rapid, convenient and accurate reconstruction.
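    A two-view version of the SIFT-plus-SfM step can be sketched with OpenCV as below; the paper uses many sub-aperture images per light field capture and a full SfM pipeline, so this is only an illustrative reduction to the essential-matrix case.

        import cv2
        import numpy as np

        def two_view_sparse_cloud(img1, img2, K):
            # Match SIFT features, recover relative pose from the essential
            # matrix, and triangulate a sparse point cloud (N x 3).
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(img1, None)
            k2, d2 = sift.detectAndCompute(img2, None)
            matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
            p1 = np.float32([k1[m.queryIdx].pt for m in matches])
            p2 = np.float32([k2[m.trainIdx].pt for m in matches])
            E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
            _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
            P2 = K @ np.hstack([R, t])
            pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
            return (pts4d[:3] / pts4d[3]).T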

  2. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  3. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  4. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  5. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in geometrical or coordinate arrangement defining a detector matrix to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further treating components. The latter components include coordinate and enery channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle

  6. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different than the area of the crystals on the tube in other rows for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other

  7. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used repeatedly to convey the feeling of a man and a woman falling in love. This raises the question of why producers and directors choose certain stylistic features to narrate certain categories of content. Through the analysis of several short film and TV clips, this article explores whether or not there are perceptual aspects related to specific stylistic features that enable them to be used for delimited narrational purposes. The article further attempts to reopen this particular stylistic debate by exploring the embodied aspects of visual perception in relation to specific stylistic features...

  8. Plasma tomographic reconstruction from tangentially viewing camera with background subtraction

    Czech Academy of Sciences Publication Activity Database

    Odstrčil, M.; Mlynář, Jan; Weinzettl, Vladimír; Háček, Pavel; Odstrčil, T.; Verdoolaege, G.; Berta, M.; Szabolics, T.; Bencze, A.

    2014-01-01

    Roč. 85, č. 1 (2014), 013509-013509 ISSN 0034-6748 R&D Projects: GA ČR GAP205/10/2055 Institutional support: RVO:61389021 Keywords : X-ray tomography * edge turbulence * tokamak * NSTX * TCV * COMPASS Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.614, year: 2014 http://scitation.aip.org/content/aip/journal/rsi/85/1/10.1063/1.4862652

  9. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  10. Path integration quantization

    International Nuclear Information System (INIS)

    DeWitt-Morette, C.

    1983-01-01

    Much is expected of path integration as a quantization procedure. Much more is possible if one recognizes that path integration is at the crossroad of stochastic and differential calculus and uses the full power of both stochastic and differential calculus in setting up and computing path integrals. In contrast to differential calculus, stochastic calculus has only comparatively recently become an instrument of thought. It has nevertheless already been used in a variety of challenging problems, for instance in the quantization problem. The author presents some applications of the stochastic scheme. (Auth.)

  11. Two dimensional simplicial paths

    International Nuclear Information System (INIS)

    Piso, M.I.

    1994-07-01

    Paths on the R 3 real Euclidean manifold are defined as 2-dimensional simplicial strips which are orbits of the action of a discrete one-parameter group. It is proven that there exists at least one embedding of R 3 in the free Z-module generated by S 2 (x 0 ). The speed is defined as the simplicial derivative of the path. If mass is attached to the simplex, the free Lagrangian is proportional to the width of the path. In the continuum limit, the relativistic form of the Lagrangian is recovered. (author). 7 refs

  12. Zero-Slack, Noncritical Paths

    Science.gov (United States)

    Simons, Jacob V., Jr.

    2017-01-01

    The critical path method/program evaluation and review technique method of project scheduling is based on the importance of managing a project's critical path(s). Although a critical path is the longest path through a network, its location in large projects is facilitated by the computation of activity slack. However, logical fallacies in…

  13. Remote removal of an obstruction from FFTF [Fast Flux Test Facility] in-service inspection camera track

    International Nuclear Information System (INIS)

    Gibbons, P.W.

    1990-11-01

    Remote techniques and special equipment were used to clear the path of a closed-circuit television camera system that travels on a monorail track around the reactor vessel support arm structure. A tangle of wire-wrapped instrumentation tubing had been inadvertently inserted through a dislocated guide-tube expansion joint and into the camera track area. An externally driven auger device, mounted on the track ahead of the camera to view the procedure, was used to retrieve the tubing. 6 figs

  14. Applying image quality in cell phone cameras: lens distortion

    Science.gov (United States)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism that predicts overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Because the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, from the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction in the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior; therefore, a radial mapping/modeling cannot be used in this case.

  15. A fast algorithm for computer aided collimation gamma camera (CACAO)

    Science.gov (United States)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.

    2000-08-01

    The computer aided collimation gamma camera (CACAO) is aimed at breaking the resolution-sensitivity trade-off of the conventional parallel-hole collimator. It uses larger and longer holes, with an additional linear movement during the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering and rotation is described. Examples of reconstruction are given. This work shows that a simple and fast algorithm, based on a diagonally dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.
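    The shift-and-sum stage mentioned above can be illustrated as follows: each frame acquired at a known collimator displacement is shifted back by that displacement before summation, after which deconvolution and filtering would be applied. The implementation details are assumptions, not the authors' code.

        import numpy as np

        def shift_and_sum(frames, shifts):
            # frames: list of 2D acquisitions; shifts: collimator displacement
            # (in pixels along the scan axis) at which each frame was recorded.
            acc = np.zeros_like(frames[0], dtype=np.float64)
            for frame, s in zip(frames, shifts):
                acc += np.roll(frame, -int(round(s)), axis=0)
            return acc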

  16. Dual-camera design for coded aperture snapshot spectral imaging.

    Science.gov (United States)

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.

  17. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  18. Groebner Finite Path Algebras

    OpenAIRE

    Leamer, Micah J.

    2004-01-01

    Let K be a field and Q a finite directed multi-graph. In this paper I classify all path algebras KQ and admissible orders with the property that all of their finitely generated ideals have finite Groebner bases. MS

  19. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…

  20. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  1. Path planning in changeable environments

    NARCIS (Netherlands)

    Nieuwenhuisen, D.

    2007-01-01

    This thesis addresses path planning in changeable environments. In contrast to traditional path planning that deals with static environments, in changeable environments objects are allowed to change their configurations over time. In many cases, path planning algorithms must facilitate quick

  2. The effect of acquisition interval and spatial resolution on dynamic cardiac imaging with a stationary SPECT camera

    International Nuclear Information System (INIS)

    Roberts, J; Maddula, R; Clackdoyle, R; DiBella, E; Fu, Z

    2007-01-01

    The current SPECT scanning paradigm that acquires images by slow rotation of multiple detectors in body-contoured orbits around the patient is not suited to the rapid collection of tomographically complete data. During rapid image acquisition, mechanical and patient safety constraints limit the detector orbit to circular paths at increased distances from the patient, resulting in decreased spatial resolution. We consider a novel dynamic rotating slant-hole (DyRoSH) SPECT camera that can collect full tomographic data every 2 s, employing three stationary detectors mounted with slant-hole collimators that rotate at 30 rpm. Because the detectors are stationary, they can be placed much closer to the patient than is possible with conventional SPECT systems. We propose that the decoupling of the detector position from the mechanics of rapid image acquisition offers an additional degree of freedom which can be used to improve accuracy in measured kinetic parameter estimates. With simulations and list-mode reconstructions, we consider the effects of different acquisition intervals on dynamic cardiac imaging, comparing a conventional three detector SPECT system with the proposed DyRoSH SPECT system. Kinetic parameters of a two-compartment model of myocardial perfusion for technetium-99m-teboroxime were estimated. When compared to a conventional SPECT scanner for the same acquisition periods, the proposed DyRoSH system shows equivalent or reduced bias or standard deviation values for the kinetic parameter estimates. The DyRoSH camera with a 2 s acquisition period does not show any improvement compared to a DyRoSH camera with a 10 s acquisition period

  3. The effect of acquisition interval and spatial resolution on dynamic cardiac imaging with a stationary SPECT camera

    Science.gov (United States)

    Roberts, J.; Maddula, R.; Clackdoyle, R.; Di Bella, E.; Fu, Z.

    2007-08-01

    The current SPECT scanning paradigm that acquires images by slow rotation of multiple detectors in body-contoured orbits around the patient is not suited to the rapid collection of tomographically complete data. During rapid image acquisition, mechanical and patient safety constraints limit the detector orbit to circular paths at increased distances from the patient, resulting in decreased spatial resolution. We consider a novel dynamic rotating slant-hole (DyRoSH) SPECT camera that can collect full tomographic data every 2 s, employing three stationary detectors mounted with slant-hole collimators that rotate at 30 rpm. Because the detectors are stationary, they can be placed much closer to the patient than is possible with conventional SPECT systems. We propose that the decoupling of the detector position from the mechanics of rapid image acquisition offers an additional degree of freedom which can be used to improve accuracy in measured kinetic parameter estimates. With simulations and list-mode reconstructions, we consider the effects of different acquisition intervals on dynamic cardiac imaging, comparing a conventional three detector SPECT system with the proposed DyRoSH SPECT system. Kinetic parameters of a two-compartment model of myocardial perfusion for technetium-99m-teboroxime were estimated. When compared to a conventional SPECT scanner for the same acquisition periods, the proposed DyRoSH system shows equivalent or reduced bias or standard deviation values for the kinetic parameter estimates. The DyRoSH camera with a 2 s acquisition period does not show any improvement compared to a DyRoSH camera with a 10 s acquisition period.
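    The 'two-compartment model of myocardial perfusion' referred to above is commonly written as a one-tissue model whose tissue curve is the blood input convolved with K1 * exp(-k2 * t); the toy sketch below generates such a curve on the 2 s frames considered in the paper. The input function and parameter values are illustrative assumptions.

        import numpy as np

        def one_tissue_curve(t, blood_curve, k1, k2):
            # Tissue time-activity curve: (blood input) convolved with K1*exp(-k2*t).
            dt = t[1] - t[0]
            kernel = k1 * np.exp(-k2 * t)
            return np.convolve(blood_curve, kernel)[: len(t)] * dt

        t = np.arange(0.0, 120.0, 2.0)              # 2 s frames
        blood = (t / 10.0) * np.exp(-t / 30.0)      # toy blood input function
        tissue = one_tissue_curve(t, blood, k1=0.6, k2=0.3)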

  4. OPERA goes on camera

    CERN Multimedia

    2007-01-01

    OPERA, the experiment which uses the neutrino beam of CERN’s CNGS facility, has delivered its first neutrino "photos". The core of the detector has been commissioned and has produced images of events resulting from neutrino collisions. The reconstruction of the core (a few cubic millimetres!) of a neutrino interaction at OPERA. The neutrino arriving from the left of the image has interacted with the lead of a brick, producing various particles identifiable by their tracks visible in the emulsion.The snapshot is tiny but it was greeted with enthusiasm by the physicists of OPERA. On 2 October, for the first time, the experiment at the Gran Sasso Laboratory in Italy "photographed" an event produced by the beam of neutrinos sent from CERN, 732 kilometres away. One of the 60,000 photosensitive bricks already installed at the heart of the experiment had produced its first particle track. The commissioning of the OPERA experiment began la...

  5. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Full Text Available Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  6. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  7. Climate Reconstructions

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Paleoclimatology Program archives reconstructions of past climatic conditions derived from paleoclimate proxies, in addition to the Program's large holdings...

  8. Reconstructing building mass models from UAV images

    KAUST Repository

    Li, Minglei; Nan, Liangliang; Smith, Neil; Wonka, Peter

    2015-01-01

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first

  9. Longitudinal and transverse digital image reconstruction with a tomographic scanner

    International Nuclear Information System (INIS)

    Pickens, D.R.; Price, R.R.; Erickson, J.J.; Patton, J.A.; Partain, C.L.; Rollo, F.D.

    1981-01-01

    A Siemens Gammasonics PHO/CON-192 Multiplane Imager is interfaced to a digital computer for the purpose of performing tomographic reconstructions from the data collected during a single scan. Data from the two moving gamma cameras as well as camera position information are sent to the computer by an interface designed in the authors' laboratory. Backprojection reconstruction is implemented by the computer. Longitudinal images in whole-body format as well as smaller formats are reconstructed for up to six planes simultaneously from the list mode data. Transverse reconstructions are demonstrated for 201Tl myocardial scans. Post-reconstruction deconvolution processing to remove the blur artifact (characteristic of focal plane tomography) is applied to a multiplane phantom. Digital acquisition of data and reconstruction of images are practical, and can extend the usefulness of the machine when compared with the film output (author)

  10. Hubble Space Telescope, Faint Object Camera

    Science.gov (United States)

    1981-01-01

    This drawing illustrates the Hubble Space Telescope's (HST's) Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data free of the distorting effects of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.

  11. Process simulation in digital camera system

    Science.gov (United States)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
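
    A few of the electrical-path blocks mentioned above (Bayer sampling, photon shot noise, white balance and gamma) are easy to prototype. The following minimal sketch is not the paper's algorithm; the RGGB pattern, full-well capacity and channel gains are assumed values for illustration, and the optical convolution, demosaicing and JPG stages are omitted.

```python
import numpy as np

def bayer_sample(rgb):
    """Keep one colour channel per pixel according to an assumed RGGB pattern."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
    return mosaic

def add_shot_noise(signal, full_well=10000):
    """Photon shot noise: Poisson statistics on the collected electrons."""
    electrons = np.random.poisson(signal * full_well)
    return electrons / full_well

def white_balance_and_gamma(rgb, gains=(1.8, 1.0, 1.6), gamma=2.2):
    """Per-channel gains followed by display gamma encoding (applied after demosaicing)."""
    balanced = np.clip(rgb * np.array(gains), 0.0, 1.0)
    return balanced ** (1.0 / gamma)

# Illustrative usage on a synthetic linear scene (values in 0..1)
scene = np.random.rand(64, 64, 3)
raw = add_shot_noise(bayer_sample(scene))   # sensor output before demosaicing
```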

  12. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.

  13. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  14. A multi-criteria approach to camera motion design for volume data animation.

    Science.gov (United States)

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
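
    As a loose illustration of blending several criteria into a single score per candidate viewpoint (not the authors' multi-criteria solver or force-directed router), one might write something like the following; the preferred viewing distance, the weights and the candidate sampling are all invented for the example.

```python
import numpy as np

def next_camera_position(current, focus, candidates, prev_dir,
                         weights=(1.0, 1.0, 1.0), preferred_dist=2.0):
    """Pick the candidate viewpoint with the best weighted multi-criteria score.

    Toy criteria: stay near a preferred distance from the focus region,
    keep the motion direction coherent with the previous step, and
    penalise large jumps from the current position.
    """
    w_dist, w_coherence, w_jump = weights
    best, best_score = None, -np.inf
    for cand in candidates:
        to_focus = np.linalg.norm(cand - focus)
        step = cand - current
        step_len = np.linalg.norm(step) + 1e-9
        coherence = float(np.dot(step / step_len, prev_dir))
        score = (-w_dist * abs(to_focus - preferred_dist)
                 + w_coherence * coherence
                 - w_jump * step_len)
        if score > best_score:
            best, best_score = cand, score
    return best

rng = np.random.default_rng(3)
candidates = [rng.normal(size=3) for _ in range(50)]
chosen = next_camera_position(np.zeros(3), np.array([0.0, 0.0, 2.0]),
                              candidates, prev_dir=np.array([1.0, 0.0, 0.0]))
print(chosen)
```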

  15. Light field reconstruction robust to signal dependent noise

    Science.gov (United States)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. Firstly, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capturing and prove the concept. The effectiveness of this method is validated with experiments on real captured data.

  16. Versatility of the CFR algorithm for limited angle reconstruction

    International Nuclear Information System (INIS)

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-01-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited-angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  17. Quivers of Bound Path Algebras and Bound Path Coalgebras

    Directory of Open Access Journals (Sweden)

    Dr. Intan Muchtadi

    2010-09-01

    Full Text Available Algebras and coalgebras can be represented as quivers (directed graphs), and from a quiver we can construct algebras and coalgebras called path algebras and path coalgebras. In this paper we show that the quiver of a bound path coalgebra (resp. algebra) is the dual quiver of its bound path algebra (resp. coalgebra).

  18. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera based on a high-definition camera has been developed. As a result of this development, the underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, about six spent-fuel IDs can be identified at a time from a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20x zoom. Noise caused by radiation at dose rates below 15 Gy/h does not affect the images. (author)

  19. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  20. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid state detector formed of high-purity germanium. The central arrangement of the camera carries out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures using idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera.

  1. Distance weighting for improved tomographic reconstructions

    International Nuclear Information System (INIS)

    Koeppe, R.A.; Holden, J.E.

    1984-01-01

    An improved method for the reconstruction of emission computed axial tomography images has been developed. The method is a modification of filtered back-projection, where the back-projected values are weighted to reflect the loss of information, with distance from the camera, which is inherent in gamma camera imaging. This information loss is a result of: loss of spatial resolution with distance, attenuation, and scatter. The weighting scheme can best be described by considering the contributions of any two opposing views to the reconstruction image pixels. The weight applied to the projections of one view is set to equal the relative amount of the original activity that was initially received in that projection, assuming a uniform attenuating medium. This yields a weighting value which is a function of distance into the image, with a value of one for pixels "near the camera", a value of 0.5 at the image center, and a value of zero on the opposite side. Tomographic reconstructions produced with this method show improved spatial resolution when compared to conventional 360° reconstructions. The improvement is in the tangential direction, where simulations have indicated an FWHM improvement of 1 to 1.5 millimeters. The resolution in the radial direction is essentially the same for both methods. Visual inspection of the reconstructed images shows improved resolution and contrast.
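
    A hedged sketch of the weighting idea follows; the attenuation coefficient, object width and sampling are illustrative assumptions consistent with the description above, not the published implementation.

```python
import numpy as np

def opposing_view_weights(depths_cm, total_depth_cm, mu_per_cm=0.15):
    """Relative weights for two opposed views under uniform attenuation.

    The weight of view A at depth d is the fraction of the activity that,
    after attenuation, is received by A rather than by the 180-degree
    opposed view B: close to one near camera A, 0.5 at the image centre,
    and close to zero on the far side. The two weights always sum to one.
    """
    seen_by_a = np.exp(-mu_per_cm * depths_cm)
    seen_by_b = np.exp(-mu_per_cm * (total_depth_cm - depths_cm))
    w_a = seen_by_a / (seen_by_a + seen_by_b)
    return w_a, 1.0 - w_a

# Depths sampled through a 30 cm wide object, measured from camera A
depths = np.linspace(0.0, 30.0, 7)
w_a, w_b = opposing_view_weights(depths, 30.0)
print(np.round(w_a, 3))              # decreases from ~0.99 through 0.5 to ~0.01
assert np.allclose(w_a + w_b, 1.0)   # each pixel is fully covered by the pair
```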

  2. Paths correlation matrix.

    Science.gov (United States)

    Qian, Weixian; Zhou, Xiaojun; Lu, Yingcheng; Xu, Jiang

    2015-09-15

    Both the Jones and Mueller matrices encounter difficulties when physically modeling mixed materials or rough surfaces due to the complexity of light-matter interactions. To address these issues, we derived a matrix called the paths correlation matrix (PCM), which is a probabilistic mixture of Jones matrices of every light propagation path. Because PCM is related to actual light propagation paths, it is well suited for physical modeling. Experiments were performed, and the reflection PCM of a mixture of polypropylene and graphite was measured. The PCM of the mixed sample was accurately decomposed into pure polypropylene's single reflection, pure graphite's single reflection, and depolarization caused by multiple reflections, which is consistent with the theoretical derivation. Reflection parameters of a rough surface can be calculated from PCM decomposition, and the results fit well with the theoretical calculations provided by the Fresnel equations. These theoretical and experimental analyses verify that PCM is an efficient way to physically model light-matter interactions.

  3. Leavitt path algebras

    CERN Document Server

    Abrams, Gene; Siles Molina, Mercedes

    2017-01-01

    This book offers a comprehensive introduction by three of the leading experts in the field, collecting fundamental results and open problems in a single volume. Since Leavitt path algebras were first defined in 2005, interest in these algebras has grown substantially, with ring theorists as well as researchers working in graph C*-algebras, group theory and symbolic dynamics attracted to the topic. Providing a historical perspective on the subject, the authors review existing arguments, establish new results, and outline the major themes and ring-theoretic concepts, such as the ideal structure, Z-grading and the close link between Leavitt path algebras and graph C*-algebras. The book also presents key lines of current research, including the Algebraic Kirchberg Phillips Question, various additional classification questions, and connections to noncommutative algebraic geometry. Leavitt Path Algebras will appeal to graduate students and researchers working in the field and related areas, such as C*-algebras and...

  4. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting or different sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data and thus improving the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings, which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
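
    As a rough illustration of the idea (not the authors' architecture or training data), a per-pixel regression from observed grey value, sensor temperature and exposure time to true intensity could be set up as below; the network size, the toy dark-current model and the use of scikit-learn's MLPRegressor are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training data: features = (observed grey value, temperature, exposure)
rng = np.random.default_rng(0)
n = 5000
temperature = rng.uniform(10.0, 50.0, n)            # deg C
exposure = rng.uniform(1.0, 100.0, n)               # ms
true_intensity = rng.uniform(0.0, 1.0, n)

# Toy sensor model: dark current grows with exposure and roughly doubles every ~7 K
dark_current = 1e-4 * exposure * 2.0 ** ((temperature - 20.0) / 7.0)
observed = np.clip(true_intensity + dark_current + rng.normal(0.0, 0.01, n), 0.0, 1.0)

X = np.column_stack([observed, temperature, exposure])
y = true_intensity

# Small MLP learning the inverse (calibration) mapping
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# Calibrate a new measurement: grey value 0.42 at 35 deg C and 60 ms exposure
print(model.predict([[0.42, 35.0, 60.0]]))
```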

  5. A flexible geometry Compton camera for industrial gamma ray imaging

    International Nuclear Information System (INIS)

    Royle, G.J.; Speller, R.D.

    1996-01-01

    A design for a Compton scatter camera is proposed which is applicable to gamma ray imaging within limited access industrial sites. The camera consists of a number of single element detectors arranged in a small cluster. Coincidence circuitry enables the detectors to act as a scatter camera. Positioning the detector cluster at various locations within the site, and subsequent reconstruction of the recorded data, allows an image to be obtained. The camera design allows flexibility to cater for limited space or access simply by positioning the detectors in the optimum geometric arrangement within the space allowed. The quality of the image will be limited but imaging could still be achieved in regions which are otherwise inaccessible. Computer simulation algorithms have been written to optimize the various parameters involved, such as geometrical arrangement of the detector cluster and the positioning of the cluster within the site, and to estimate the performance of such a device. Both scintillator and semiconductor detectors have been studied. A prototype camera has been constructed which operates three small single element detectors in coincidence. It has been tested in a laboratory simulation of an industrial site. This consisted of a small room (2 m wide x 1 m deep x 2 m high) into which the only access points were two 6 cm diameter holes in a side wall. Simple images of Cs-137 sources have been produced. The work described has been done on behalf of BNFL for applications at their Sellafield reprocessing plant in the UK

  6. The Thinnest Path Problem

    Science.gov (United States)

    2016-07-22

    (The indexed excerpt for this record is garbled text extracted from the source PDF; the recoverable fragments concern reductions of the thinnest path (TP) problem in unit-disk hypergraphs (UDH), the construction used in the proof of Theorem 1, and hyperedges illustrated in the paper's Fig. 6. No coherent abstract survives.)

  7. Path dependence and creation

    DEFF Research Database (Denmark)

    Garud, Raghu; Karnøe, Peter

    This edited volume stems from a conference held in Copenhagen that the authors ran in August of 1997. The authors, aware of the recent work in evolutionary theory and the science of chaos and complexity, challenge the sometimes deterministic flavour of this work. They are interested in uncovering the place of agency in these theories that take history so seriously. In the end, they are as interested in path creation and destruction as they are in path dependence. This book comprises both theoretical and empirical writing. It shows relatively well-known industries such as the automobile...

  8. Reparametrization in the path integral

    International Nuclear Information System (INIS)

    Storchak, S.N.

    1983-01-01

    The question of the invariance of a measure in the n-dimensional path integral under path reparametrization is considered. The non-invariance of the measure, through the Jacobian, is suggested. After the path integral reparametrization, the representation for the Green's function of the Hamilton operator in terms of the path integral with the classical Hamiltonian has been obtained.

  9. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities, and operates from the acquisition of the four position signals of a gamma camera detector head. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. This system includes analog processing of the position signals from the camera, digitization and the subsequent processing of the energy signal in a multichannel analyzer, sending data to a computer via a standard USB port, and processing of the data in a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  10. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician, Johannes Kepler used a small tent camera obscura to trace the scenery.

  11. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, their increasing computational performance, combined with higher storage capacity, allows them to process large amounts of data. In this paper an overview of the current trends in the consumer camera market and technology will be given, along with some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  12. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  13. Resolution recovery for Compton camera using origin ensemble algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Andreyev, A. [Philips Healthcare, Highland Heights, Ohio 44143 (United States); Celler, A. [Medical Imaging Research Group, University of British Columbia and Vancouver Coastal Health Research Institute, Vancouver, BC V5Z 1M9 (Canada); Ozsahin, I.; Sitek, A., E-mail: sarkadiu@gmail.com [Gordon Center for Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 and Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2016-08-15

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image

  14. Resolution recovery for Compton camera using origin ensemble algorithm

    International Nuclear Information System (INIS)

    Andreyev, A.; Celler, A.; Ozsahin, I.; Sitek, A.

    2016-01-01

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image

  15. MEASURING PATH DEPENDENCY

    Directory of Open Access Journals (Sweden)

    Peter Juhasz

    2017-03-01

    Full Text Available While risk management has gained popularity during the last decades, some of the basic risk types are still far out of focus. One of these is path dependency, which refers to the uncertainty of how we reach a certain level of total performance over time. While decision makers are careful in assessing how their position will look at the end of certain periods, little attention is given to how they will get there during the period. The uncertainty of how a process will develop across a shorter period of time is often “eliminated” by simply choosing a longer planning time interval, which makes path dependency one of the most often overlooked business risk types. After reviewing the origin of the problem, we propose and compare seven risk measures to assess path dependency. Traditional risk measures, like the standard deviation of sub-period cash flows, fail to capture this risk type. We conclude that in most cases considering the distribution of the expected cash flow effect caused by the path dependency may offer the best method, but we may need to use several measures at the same time to include all the optimisation limits of the given firm.
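
    One way to see why terminal-only measures miss this risk is a small Monte Carlo sketch: paths with identical expected totals can differ widely in their intermediate positions. The cash-flow model, the drawdown-style measure and all parameter values below are illustrative assumptions, not the seven measures proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cash_flow_paths(n_paths=10000, n_periods=12, mean=1.0, vol=2.0):
    """Monte Carlo sub-period cash flows; returns cumulative positions over time."""
    flows = rng.normal(mean, vol, size=(n_paths, n_periods))
    return flows.cumsum(axis=1)

paths = simulate_cash_flow_paths()
terminal = paths[:, -1]

# Terminal-only risk view: ignores everything that happens inside the horizon
terminal_std = terminal.std()

# One possible path-dependency measure: worst intra-horizon drawdown per path
running_max = np.maximum.accumulate(paths, axis=1)
max_drawdown = (running_max - paths).max(axis=1)

print(f"terminal std:                 {terminal_std:.2f}")
print(f"mean intra-horizon drawdown:  {max_drawdown.mean():.2f}")
```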

  16. Vaginal reconstruction

    International Nuclear Information System (INIS)

    Lesavoy, M.A.

    1985-01-01

    Vaginal reconstruction can be an uncomplicated and straightforward procedure when attention to detail is maintained. The Abbe-McIndoe procedure of lining the neovaginal canal with split-thickness skin grafts has become standard. The use of the inflatable Heyer-Schulte vaginal stent provides comfort to the patient and ease to the surgeon in maintaining approximation of the skin graft. For large vaginal and perineal defects, myocutaneous flaps such as the gracilis island have been extremely useful for correction of radiation-damaged tissue of the perineum or for the reconstruction of large ablative defects. Minimal morbidity and scarring ensue because the donor site can be closed primarily. With all vaginal reconstruction, a compliant patient is a necessity. The patient must wear a vaginal obturator for a minimum of 3 to 6 months postoperatively and is encouraged to use intercourse as an excellent obturator. In general, vaginal reconstruction can be an extremely gratifying procedure for both the functional and emotional well-being of patients

  17. ACL Reconstruction

    Science.gov (United States)

    ... in moderate exercise and recreational activities, or play sports that put less stress on the knees. ACL reconstruction is generally recommended if: You're an athlete and want to continue in your sport, especially if the sport involves jumping, cutting or ...

  18. Afghanistan's Path to Reconstruction: Obstacles, Challenges, and Issues for Congress

    National Research Council Canada - National Science Library

    Margesson, Rhoda

    2002-01-01

    .... The case of Afghanistan may present a special category of crisis, in which the United States and others play a significant role in the war on terrorism while simultaneously providing humanitarian...

  19. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactory but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even though they are described as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  20. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras of the device stand on a hexapod mount that is fully capable of achieving sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod allows smooth operations even if one or two of the legs are stuck. In addition, it can calibrate itself by observed stars independently from both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed in-house at our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  1. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator; the diagnostic consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous as well as for triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit samples of 1280 x 1024 pixels at 444 fps, which amounts to 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and computationally complex. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces. A sensor module (SM) has reduced hardware and functional elements to achieve a small and compact size as well as robust operation in a harsh environment. An image processing and control unit (IPCU) module handles all user-predefined events and runs image processing algorithms to generate trigger signals. Finally, a 10 Gigabit Ethernet compatible image readout card functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described.

  2. Ectomography - a tomographic method for gamma camera imaging

    International Nuclear Information System (INIS)

    Dale, S.; Edholm, P.E.; Hellstroem, L.G.; Larsson, S.

    1985-01-01

    In computerised gamma camera imaging the projections are readily obtained in digital form, and the number of picture elements may be relatively few. This condition makes emission techniques suitable for ectomography - a tomographic technique for directly visualising arbitrary sections of the human body. The camera rotates around the patient to acquire different projections in a way similar to SPECT. This method differs from SPECT, however, in that the camera is placed at an angle to the rotational axis, and receives two-dimensional, rather than one-dimensional, projections. Images of body sections are reconstructed by digital filtration and combination of the acquired projections. The main advantages of ectomography - a high and uniform resolution, a low and uniform attenuation and a high signal-to-noise ratio - are obtained when imaging sections close and parallel to a body surface. The filtration eliminates signals representing details outside the section and gives the section a certain thickness. Ectomographic transverse images of a line source and of a human brain have been reconstructed. Details within the sections are correctly visualised and details outside are effectively eliminated. For comparison, the same sections have been imaged with SPECT. (author)

  3. Nonadiabatic transition path sampling

    International Nuclear Information System (INIS)

    Sherman, M. C.; Corcelli, S. A.

    2016-01-01

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

  4. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and determine equipment used and the benefits realized. Basic closed circuit television camera (CCTV) systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs

  5. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang's calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and the direct use of the projection model would produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
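
    The geometric core of such a refractive projection model is tracing a ray through the flat glass plate with Snell's law. The sketch below is only an illustration of that ingredient (plate thickness, refractive index and ray direction are assumed values), not the calibration method itself.

```python
import numpy as np

def refract(direction, normal, n1, n2):
    """Snell's law in vector form; `normal` points toward the incident medium."""
    d = direction / np.linalg.norm(direction)
    cos_i = -np.dot(normal, d)
    ratio = n1 / n2
    sin_t2 = ratio ** 2 * (1.0 - cos_i ** 2)
    if sin_t2 > 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin_t2)
    return ratio * d + (ratio * cos_i - cos_t) * normal

# Ray travelling downward through a glass plate (surface normal +z,
# thickness 5 mm, assumed refractive index 1.52)
normal = np.array([0.0, 0.0, 1.0])
incoming = np.array([0.3, 0.0, -1.0])
inside = refract(incoming, normal, 1.0, 1.52)     # air -> glass
outgoing = refract(inside, normal, 1.52, 1.0)     # glass -> air (parallel plate)

# Lateral displacement accumulated while crossing the 5 mm plate
thickness = 5.0
path_scale = thickness / abs(inside[2])
lateral_shift = (inside * path_scale)[:2]
print("lateral shift inside plate (mm):", lateral_shift)
print("outgoing direction (parallel to incoming):", outgoing)
```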

  6. PATHS groundwater hydrologic model

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.W.; Schur, J.A.

    1980-04-01

    A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document consists of the description of the PATHS groundwater hydrologic model. The preliminary evaluation capability prepared for WISAP, including the enhancements that were made because of the authors' experience using the earlier capability, is described. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS. It is written for users who have little or no experience with computers. Appendix C is for the programmer. It contains information on how input parameters are passed between programs in the system. It also contains program listings and test case listings. Appendix D is a definition of terms.

  7. Ion movie camera for particle-beam-fusion experiments

    International Nuclear Information System (INIS)

    Stygar, W.A.; Mix, L.P.; Leeper, R.J.; Maenchen, J.; Wenger, D.F.; Mattson, C.R.; Muron, D.J.

    1992-01-01

    A camera with a 3 ns time resolution and a continuous (>100 ns) record length has been developed to image a 10¹²–10¹³ W/cm² ion beam for inertial-confinement-fusion experiments. A thin gold Rutherford-scattering foil placed in the path of the beam scatters ions into the camera. The foil is in a near-optimized scattering geometry and reduces the beam intensity by ∼seven orders of magnitude. The scattered ions are pinhole imaged onto a 2D array of 39 p-i-n diode detectors; outputs are recorded on LeCroy 6880 transient-waveform digitizers. The waveforms are analyzed and combined to produce a 39-pixel movie which can be displayed on an image processor to provide time-resolved horizontal- and vertical-focusing information.

  8. System Architecture of the Dark Energy Survey Camera Readout Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Theresa; /FERMILAB; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; /Barcelona, IFAE; Chappa, Steve; /Fermilab; de Vicente, Juan; /Madrid, CIEMAT; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; /Fermilab; Martinez, Gustavo; /Madrid, CIEMAT; Moore, Todd; /Madrid, CIEMAT /Fermilab /Illinois U., Urbana /Fermilab

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed on the Blanco 4 m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K×4K fully depleted Charge-Coupled Devices (CCDs) and 12 2K×2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme and early results of system tests.

  9. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; hide

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we build for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array, the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  10. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  11. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures.
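
    The baseline that the spherical-sampling search improves upon, a least-squares 3 × 3 matrix mapping camera RGB to XYZ, can be written in a few lines; the patch data below are synthetic placeholders, not measurements from the paper.

```python
import numpy as np

# Synthetic training patches: camera RGB responses and reference XYZ values
rng = np.random.default_rng(0)
camera_rgb = rng.uniform(0.05, 0.95, size=(24, 3))      # e.g. a 24-patch chart
true_matrix = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.7, 0.1],
                        [0.0, 0.1, 0.9]])
reference_xyz = camera_rgb @ true_matrix.T + rng.normal(0.0, 0.01, (24, 3))

# Least-squares fit of the 3x3 characterization matrix M so that XYZ ~= M @ RGB
solution, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
M = solution.T

# Apply the characterization to a new camera measurement
print(M @ np.array([0.4, 0.5, 0.2]))
```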

  12. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  13. Maxillary reconstruction

    Directory of Open Access Journals (Sweden)

    Brown James

    2007-12-01

    Full Text Available This article aims to discuss the various defects that occur with maxillectomy with a full review of the literature and discussion of the advantages and disadvantages of the various techniques described. Reconstruction of the maxilla can be relatively simple for the standard low maxillectomy that does not involve the orbital floor (Class 2). In this situation the structure of the face is less damaged and there are multiple reconstructive options for the restoration of the maxilla and dental alveolus. If the maxillectomy includes the orbit (Class 4), then problems involving the eye (enophthalmos, orbital dystopia, ectropion and diplopia) are avoided, which simplifies the reconstruction. Most controversy is associated with the maxillectomy that involves the orbital floor and dental alveolus (Class 3). A case is made for the use of the iliac crest with internal oblique as an ideal option, but there are other methods which may provide a similar result. A multidisciplinary approach to these patients is emphasised, which should include a prosthodontist with special expertise in these defects.

  14. The Sydney University PAPA camera

    Science.gov (United States)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.

  15. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and picosecond range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performance characteristics of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high-speed electronic cinematography will be illustrated by a few typical applications.

  16. Diagnostics and camera strobe timers for hydrogen pellet injectors

    International Nuclear Information System (INIS)

    Bauer, M.L.; Fisher, P.W.; Qualls, A.L.

    1993-01-01

    Hydrogen pellet injectors have been used to fuel fusion experimental devices for the last decade. As part of developments to improve pellet production and velocity, various diagnostic devices were implemented, ranging from witness plates to microwave mass meters to high speed photography. This paper will discuss details of the various implementations of light sources, cameras, synchronizing electronics and other diagnostic systems developed at Oak Ridge for the Tritium Proof-of-Principle (TPOP) experiment at the Los Alamos National Laboratory's Tritium System Test Assembly (TSTA), a system built for the Oak Ridge Advanced Toroidal Facility (ATF), and the Tritium Pellet Injector (TPI) built for the Princeton Tokamak Fusion Test Reactor (TFTR). Although a number of diagnostic systems were implemented on each pellet injector, the emphasis here will be on the development of a synchronization system for high-speed photography using pulsed light sources, standard video cameras, and video recorders. This system enabled near real-time visualization of the pellet shape, size and flight trajectory over a wide range of pellet speeds and at one or two positions along the flight path. Additionally, the system provides synchronization pulses to the data system for pseudo points along the flight path, such as the estimated plasma edge. This was accomplished using an electronic system that took the time measured between sets of light gates, and generated proportionally delayed triggers for light source strobes and pseudo points. Systems were built with two camera stations, one located after the end of the barrel, and a second camera located closer to the main reactor vessel wall. Two or three light gates were used to sense pellet velocity and various spacings were implemented on the three experiments. Both analog and digital schemes were examined for implementing the delay system. A digital technique was chosen
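
    A hedged sketch of the proportional-delay idea described above: pellet speed is estimated from the transit time between two light gates, and strobe or pseudo-point triggers are scheduled by scaling distances with that measured speed. The gate separation, transit time and target distances are invented for illustration.

```python
def proportional_delays(gate_separation_m, transit_time_s, targets_m):
    """Return trigger delays (measured from the second light gate) for downstream points.

    gate_separation_m : distance between the two light gates
    transit_time_s    : measured time for the pellet to cross that distance
    targets_m         : distances from the second gate to each camera station
                        or pseudo point (e.g. the estimated plasma edge)
    """
    velocity = gate_separation_m / transit_time_s
    return [d / velocity for d in targets_m]

# Example: gates 0.10 m apart, 80 microsecond transit (~1250 m/s pellet),
# camera just past the barrel at 0.35 m and a plasma-edge pseudo point at 2.0 m
delays = proportional_delays(0.10, 80e-6, [0.35, 2.0])
print([f"{d * 1e6:.0f} us" for d in delays])
```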

  17. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral- density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
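
    The response-curve determination can be sketched as follows; the sinusoidal brightness modulation, the soft-saturation sensor model and the polynomial fit are stand-ins chosen for illustration, not the actual calibration software.

```python
import numpy as np

# Known input brightness of the artificial star in each video frame (arbitrary
# units, derived from the known rotation rate of the variable ND filter)
frames = np.arange(300)
input_brightness = 0.5 * (1.0 + np.sin(2.0 * np.pi * frames / 300.0))

# Integrated signal measured in each frame (simulated here with a soft
# saturation and a little read noise standing in for real video data)
rng = np.random.default_rng(2)
measured_signal = np.tanh(2.2 * input_brightness) + rng.normal(0.0, 0.01, frames.size)

# System response curve: measured output as a function of input brightness
response = np.polynomial.Polynomial.fit(input_brightness, measured_signal, deg=5)

# The fitted curve is the calibration later applied frame by frame
print(response(np.array([0.1, 0.5, 0.9])))
```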

  18. Characterization of a PET Camera Optimized for Prostate Imaging

    International Nuclear Information System (INIS)

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi, Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, RonaldH.; Derenzo, Stephen E.

    2005-01-01

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The distance between detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated-prostate camera has the same sensitivity and resolution, less background (less randoms and lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. Sensitivity of a point source in the center is 946 cps/μCi. Spatial resolution is 4 mm FWHM in the central region

  19. Shortest Paths and Vehicle Routing

    DEFF Research Database (Denmark)

    Petersen, Bjørn

    This thesis presents how to parallelize a shortest path labeling algorithm. It is shown how to handle Chvátal-Gomory rank-1 cuts in a column generation context. A Branch-and-Cut algorithm is given for the Elementary Shortest Paths Problem with Capacity Constraint. A reformulation of the Vehicle Routing Problem based on partial paths is presented. Finally, a practical application of finding shortest paths in the telecommunication industry is shown.
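
    For context, a minimal label-setting (Dijkstra-style) shortest-path routine of the kind such labeling algorithms build on is sketched below; the parallelization, rank-1 cuts and resource constraints discussed in the thesis are not modelled here.

```python
# Minimal label-setting (Dijkstra-style) shortest-path sketch for context; the
# thesis parallelizes labeling algorithms with resource constraints, which this
# plain version does not model.
import heapq

def shortest_paths(adj, source):
    """adj: {node: [(neighbor, cost), ...]}; returns {node: distance}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale label, skip it
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(shortest_paths({"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}, "a"))
```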

  20. Oblique Multi-Camera Systems - Orientation and Dense Matching Issues

    Science.gov (United States)

    Rupnik, E.; Nex, F.; Remondino, F.

    2014-03-01

    The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing oblique images to be used in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  1. A triple GEM gamma camera for medical application

    Energy Technology Data Exchange (ETDEWEB)

    Anulli, F. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Balla, A. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Bencivenni, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Corradi, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); D' Ambrosio, C. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Domenici, D. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Felici, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Gatta, M. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Morone, M.C. [Dipartimento di Biopatologia e Diagnostica per immagini, Universita di Roma Tor Vergata (Italy); INFN - Sezione di Roma Tor Vergata (Italy); Murtas, F. [Laboratori Nazionali di Frascati INFN, Frascati (Italy)]. E-mail: fabrizio.murtas@lnf.infn.it; Schillaci, O. [Dipartimento di Biopatologia e Diagnostica per immagini, Universita di Roma Tor Vergata (Italy)

    2007-03-01

    A 10×10 cm² gamma camera for medical applications has been built using a triple GEM chamber prototype. The photon converters, placed in front of the three GEM foils, have been realized with different technologies. The chamber, supplied with high voltage through a new active divider made in Frascati, is read out through 64 pads, 1 mm² wide, organized in an 8 cm long row, with the LHCb ASDQ chip. This gamma camera can be used both for X-ray movies and PET-SPECT imaging; the chamber prototype is placed in a scanner system, creating images of 8×8 cm². Several measurements have been performed using phantoms and radioactive sources of Tc-99m (140 keV) and Na-22 (511 keV). Results on spatial resolution and image reconstruction are presented.

  2. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as an information-storing support. It allows us to take 10 holograms of an object, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to obtain these results easily, no moving parts are used in the set-up [fr

  3. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  4. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    Stout, K.J.

    1980-01-01

    In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  5. Rocket Flight Path

    Directory of Open Access Journals (Sweden)

    Jamie Waters

    2014-09-01

    Full Text Available This project uses Newton’s Second Law of Motion, Euler’s method, basic physics, and basic calculus to model the flight path of a rocket. From this, one can find the height and velocity at any point from launch to the maximum altitude, or apogee. This can then be compared to the actual values to see if the method of estimation is plausible. The rocket used for this project is modeled after Bullistic-1, which was launched by the Society of Aeronautics and Rocketry at the University of South Florida.
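
    A minimal sketch of the Euler's-method integration described above is given below; the thrust, burn time, mass and drag values are placeholders, not the actual Bullistic-1 parameters.

```python
# Euler's-method sketch of a vertical rocket flight under Newton's second law.
# Thrust, burn time, mass, and drag values are illustrative placeholders, not
# the actual Bullistic-1 parameters.
G = 9.81           # m/s^2
DT = 0.01          # s, Euler time step

def simulate(mass=1.5, thrust=60.0, burn_time=3.0, drag_coeff=0.002):
    t, h, v = 0.0, 0.0, 0.0
    while v >= 0.0 or t <= burn_time:          # integrate until apogee
        f_thrust = thrust if t < burn_time else 0.0
        f_drag = -drag_coeff * v * abs(v)       # quadratic drag opposing motion
        a = (f_thrust + f_drag) / mass - G
        v += a * DT                             # Euler update of velocity
        h += v * DT                             # Euler update of height
        t += DT
    return t, h

t_apogee, apogee = simulate()
print(f"apogee ~{apogee:.0f} m at t ~{t_apogee:.1f} s")
```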

  6. JAVA PathFinder

    Science.gov (United States)

    Mehlitz, Peter

    2005-01-01

    JPF is an explicit state software model checker for Java bytecode. Today, JPF is a swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.

  7. Hamiltonian path integrals

    International Nuclear Information System (INIS)

    Prokhorov, L.V.

    1982-01-01

    Problems related to the treatment of operator non-commutativity in the Hamiltonian path integral (HPI) are considered in this review. Integrals are investigated using trajectories in configuration space (nonrelativistic quantum mechanics). Problems related to trajectory integrals in HPI phase space are discussed: the treatment of operator non-commutativity (the extra-terms problem) and the corresponding equivalence rules; the ambiguity of the usual HPI notation; and the transition to curvilinear coordinates. The problem of quantization of dynamical systems with constraints has also been studied. As in the case of canonical transformations, quantization of systems with first-class constraints requires the consideration of extra terms

  8. Path to Prosperity

    OpenAIRE

    Wolfowitz,Paul

    2006-01-01

    Paul Wolfowitz, President of the World Bank, discussed Singapore's remarkable progress along the road from poverty to prosperity which has also been discovered by many other countries in East Asia and around the world. He spoke of how each country must find its own path for people to pursue the same dreams of the chance to go to school, the security of a good job, and the ability to provide a better future for their children. Throughout the world, and importantly in the developing world, ther...

  9. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet for capturing higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.

  10. Pseudo real-time coded aperture imaging system with intensified vidicon cameras

    International Nuclear Information System (INIS)

    Han, K.S.; Berzins, G.J.

    1977-01-01

    A coded image displayed on a TV monitor was used to directly reconstruct a decoded image. Both the coded and the decoded images were viewed with intensified vidicon cameras. The coded aperture was a 15-element nonredundant pinhole array. The coding and decoding were accomplished simultaneously during the scanning of a single 16-msec TV frame

  11. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
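
    The per-camera regulation loop described above can be sketched as a simple proportional controller: measure the line period, compare it with the Master's target, and nudge the sensor supply voltage. The hardware hooks, gain, voltage limits and toy plant below are hypothetical, not the Awaiba/NanEye interface.

```python
# Minimal sketch of the per-camera regulation loop. `read_line_period` and
# `set_supply_voltage` are hypothetical hardware hooks; the gain, limits and
# the toy plant model are illustrative only.

def regulate(read_line_period, set_supply_voltage, target_period_us,
             v_init=1.8, gain=0.001, v_min=1.6, v_max=2.0, steps=1000):
    v = v_init
    for _ in range(steps):
        error = read_line_period() - target_period_us
        # Assumed plant behaviour: a longer-than-target line period means the
        # sensor runs too slowly, so raise the supply voltage (and vice versa).
        v = min(v_max, max(v_min, v + gain * error))
        set_supply_voltage(v)
    return v

# Toy plant: line period shrinks as the supply voltage rises (illustrative).
state = {"v": 1.8}
sim_read = lambda: 50.0 + 40.0 * (2.0 - state["v"])
sim_set = lambda v: state.update(v=v)
print(regulate(sim_read, sim_set, target_period_us=55.0))   # settles near 1.875 V
```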

  12. World nuclear energy paths

    International Nuclear Information System (INIS)

    Connolly, T.J.; Hansen, U.; Jaek, W.; Beckurts, K.H.

    1979-01-01

    In examining the world nuclear energy paths, the following assumptions were adopted: the world economy will grow somewhat more slowly than in the past, leading to reductions in electricity demand growth rates; national and international political impediments to the deployment of nuclear power will gradually disappear over the next few years; further development of nuclear power will proceed steadily, without serious interruption but with realistic lead times for the introduction of advanced technologies. Given these assumptions, this paper attempts a study of possible world nuclear energy developments, disaggregated on a regional and national basis. The scenario technique was used and a few alternative fuel-cycle scenarios were developed. Each is an internally consistent model of technically and economically feasible paths to the further development of nuclear power in an aggregate of individual countries and regions of the world. The main purpose of this modeling exercise was to gain some insight into the probable international locations of reactors and other nuclear facilities, the future requirements for uranium and for fuel-cycle services, and the problems of spent-fuel storage and waste management. The study also presents an assessment of the role that nuclear power might actually play in meeting future world energy demand

  13. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    Science.gov (United States)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  14. Ceres Photometry and Albedo from Dawn Framing Camera Images

    Science.gov (United States)

    Schröder, S. E.; Mottola, S.; Keller, H. U.; Li, J.-Y.; Matz, K.-D.; Otto, K.; Roatsch, T.; Stephan, K.; Raymond, C. A.; Russell, C. T.

    2015-10-01

    The Dawn spacecraft is in orbit around dwarf planet Ceres. The onboard Framing Camera (FC) [1] is mapping the surface through a clear filter and 7 narrow-band filters at various observational geometries. Generally, Ceres' appearance in these images is affected by shadows and shading, effects which become stronger for larger solar phase angles, obscuring the intrinsic reflective properties of the surface. By means of photometric modeling we attempt to remove these effects and reconstruct the surface albedo over the full visible wavelength range. Knowledge of the albedo distribution will contribute to our understanding of the physical nature and composition of the surface.

  15. A novel PET camera calibration method

    International Nuclear Information System (INIS)

    Yerian, K.; Hartz, R.K.; Gaeta, J.M.; Marani, S.; Wong, W.H.; Bristow, D.; Mullani, N.A.

    1985-01-01

    Reconstructed time-of-flight PET images must be corrected for differences in the sensitivity of detector pairs, variations in the TOF gain between groups of detector pairs, and for shifts in the detector-pair timing windows. These calibration values are measured for each detector-pair coincidence line using a positron emitting ring source. The quality of the measured value for a detector pair depends on its statistics. To improve statistics, algorithms are developed which derive individual detector calibration values for efficiency, TOF offsets, and TOF fwhm from the raw detector-pair measurements. For the author's current TOFPET system there are 162,000 detector pairs which are reduced to 720 individual detector values. The data for individual detectors are subsequently recombined, improving the statistical quality of the resultant detector-pair values. In addition, storage requirements are significantly reduced by saving the individual detector values. These parameters are automatically evaluated on a routine basis and problem detectors reported for adjustment or replacement. Decomposing the detector-pair measurements into individual detector values significantly improves the calibration values used to correct camera artifacts in PET imaging
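
    One common way to decompose detector-pair measurements into individual detector values is an iterative fan-sum style fit, sketched below; this is an illustration of the general idea, not necessarily the exact algorithm used for the TOFPET system described above.

```python
# Sketch of decomposing detector-pair efficiencies into per-detector values by
# an iterative fan-sum style fit (a common approach; not necessarily the exact
# algorithm used for the TOFPET system described above).
import numpy as np

def detector_efficiencies(pair_counts, n_iter=20):
    """pair_counts[i, j]: symmetric counts on the line joining detectors i and
    j (0 where no coincidence line exists). Returns per-detector efficiencies
    normalised to a mean of 1."""
    n = pair_counts.shape[0]
    mask = pair_counts > 0
    eff = np.ones(n)
    for _ in range(n_iter):
        # Expected pair rate is proportional to eff[i] * eff[j]; update each
        # detector from the ratio of measured to modelled counts on its fan.
        model = np.outer(eff, eff)
        ratio = np.where(mask, pair_counts / np.maximum(model, 1e-12), 0.0)
        fan = ratio.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
        eff *= np.sqrt(fan)
        eff /= eff.mean()                     # keep the overall scale fixed
    return eff

rng = np.random.default_rng(0)
true_eff = rng.uniform(0.8, 1.2, size=32)
counts = 1000.0 * np.outer(true_eff, true_eff)
np.fill_diagonal(counts, 0.0)                 # no line joins a detector to itself
est = detector_efficiencies(counts)
print(np.max(np.abs(est - true_eff / true_eff.mean())))   # small residual
```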

  16. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Finch, S.M.; McMakin, A.H.

    1991-04-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; and, environmental pathways and dose estimates

  17. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Finch, S.M.; McMakin, A.H.

    1992-06-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Battelle Pacific Northwest Laboratories under contract with the Centers for Disease Control. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates

  18. Automatic Camera Orientation and Structure Recovery with Samantha

    Science.gov (United States)

    Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.

    2011-09-01

    SAMANTHA is a software package capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process both calibrated and uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, comparing them with laser scans serving as ground truth.

  19. AUTOMATIC CAMERA ORIENTATION AND STRUCTURE RECOVERY WITH SAMANTHA

    Directory of Open Access Journals (Sweden)

    R. Gherardi

    2012-09-01

    Full Text Available SAMANTHA is a software package capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process both calibrated and uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, comparing them with laser scans serving as ground truth.

  20. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their uninterrupted technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
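
    The rigid-bar accuracy check described above reduces to comparing reconstructed inter-marker distances against the known bar length; a sketch with a hypothetical array layout is given below.

```python
# Sketch of the rigid-bar accuracy check: compare reconstructed inter-marker
# distances against the known bar length (array layout is hypothetical).
import numpy as np

def bar_length_errors(p1, p2, known_length_mm):
    """p1, p2: (N, 3) arrays of reconstructed marker positions in mm."""
    lengths = np.linalg.norm(p1 - p2, axis=1)
    errors = lengths - known_length_mm
    return errors.mean(), np.abs(errors).mean(), errors.std()

rng = np.random.default_rng(0)
p1 = rng.normal(size=(200, 3)) * 500.0                  # stand-in trajectories
p2 = p1 + np.array([250.0, 0.0, 0.0]) + rng.normal(scale=1.0, size=(200, 3))
print(bar_length_errors(p1, p2, 250.0))
```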

  1. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Directory of Open Access Journals (Sweden)

    Gustavo R D Bernardina

    Full Text Available Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their uninterrupted technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing camera parameters, a rigid bar, carrying two markers at a known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  2. TREE STEM RECONSTRUCTION USING VERTICAL FISHEYE IMAGES: A PRELIMINARY STUDY

    Directory of Open Access Journals (Sweden)

    A. Berveglieri

    2016-06-01

    Full Text Available A preliminary study was conducted to assess a tree stem reconstruction technique based on panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral views are rectified to the same scale across the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed by using the lateral virtual images generated from the vertical fisheye images, with the advantage of using fewer images, taken from a single station.

  3. Paths of Cultural Systems

    Directory of Open Access Journals (Sweden)

    Paul Ballonoff

    2017-12-01

    Full Text Available A theory of cultural structures predicts the objects observed by anthropologists. We here define those which use kinship relationships to define systems. A finite structure we call a partially defined quasigroup (or pdq, as stated by Definition 1 below) on a dictionary (called a natural language) allows prediction of certain anthropological descriptions, using homomorphisms of pdqs onto finite groups. A viable history (defined using pdqs) states how an individual in a population following such history may perform culturally allowed associations, which allows a viable history to continue to survive. The vector states on sets of viable histories identify demographic observables on descent sequences. Paths of vector states on sets of viable histories may determine which histories can exist empirically.

  4. Propagators and path integrals

    Energy Technology Data Exchange (ETDEWEB)

    Holten, J.W. van

    1995-08-22

    Path-integral expressions for one-particle propagators in scalar and fermionic field theories are derived, for arbitrary mass. This establishes a direct connection between field theory and specific classical point-particle models. The role of world-line reparametrization invariance of the classical action and the implementation of the corresponding BRST-symmetry in the quantum theory are discussed. The presence of classical world-line supersymmetry is shown to lead to an unwanted doubling of states for massive spin-1/2 particles. The origin of this phenomenon is traced to a 'hidden' topological fermionic excitation. A different formulation of the pseudo-classical mechanics using a bosonic representation of γ₅ is shown to remove these extra states at the expense of losing manifest supersymmetry. (orig.).

  5. innovation path exploration

    Directory of Open Access Journals (Sweden)

    Li Jian

    2016-01-01

    Full Text Available The world has entered the information age; all kinds of information technologies, such as cloud computing and big data, are developing rapidly, and the concept of “Internet plus” has appeared. The main purpose of “Internet plus” is to provide an opportunity for the further development of enterprises by combining technological, business, and other factors. For enterprises, grasping the impact of “Internet plus” on the market economy will undoubtedly pave the way for their future development. This paper studies the innovation path of enterprise management in the “Internet plus” era, and hopes to put forward some useful opinions and suggestions.

  6. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  7. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  8. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  9. Improved positron emission tomography camera

    International Nuclear Information System (INIS)

    Mullani, N.A.

    1986-01-01

    A positron emission tomography camera having a plurality of rings of detectors positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom, and a plurality of scintillation crystals positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring may be offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes thereby requiring less photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. (author)

  10. Vehicular camera pedestrian detection research

    Science.gov (United States)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway traffic and transportation have become far more convenient, but at the same time traffic safety accidents occur more and more frequently in China. Protecting people's safety and property while facilitating travel has therefore become a top priority. Real-time, accurate information about pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track moving targets ahead of the vehicle. This is a popular topic in intelligent vehicle safety, autonomous navigation, and traffic system research. Based on pedestrian video obtained by the vehicular camera, this paper studies pedestrian detection and trajectory tracking algorithms.
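
    A common off-the-shelf baseline for pedestrian detection in vehicular video is OpenCV's HOG descriptor with its default people detector, sketched below; this is only a baseline illustration, not necessarily the algorithm studied in the paper, and the input file name is hypothetical.

```python
# Baseline pedestrian detection on dashcam video with OpenCV's HOG + linear SVM
# people detector. Sketch only; "dashcam.mp4" is a hypothetical input file.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("dashcam.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) == 27:               # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```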

  11. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.

  12. PET reconstruction

    International Nuclear Information System (INIS)

    O'Sullivan, F.; Pawitan, Y.; Harrison, R.L.; Lewellen, T.K.

    1990-01-01

    In statistical terms, filtered backprojection can be viewed as smoothed Least Squares (LS). In this paper, the authors report on improvement in LS resolution by: incorporating locally adaptive smoothers, imposing positivity and using statistical methods for optimal selection of the resolution parameter. The resulting algorithm has high computational efficiency relative to more elaborate Maximum Likelihood (ML) type techniques (i.e. EM with sieves). Practical aspects of the procedure are discussed in the context of PET and illustrations with computer simulated and real tomograph data are presented. The relative recovery coefficients for a 9mm sphere in a computer simulated hot-spot phantom range from .3 to .6 when the number of counts ranges from 10,000 to 640,000 respectively. The authors will also present results illustrating the relative efficacy of ML and LS reconstruction techniques

  13. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  14. Coaxial fundus camera for opthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which must meet low-light illumination of the human retina, high resolution at the retina, and reflection-free imaging [1]. Those constraints make its optical design very sophisticated, but the most difficult requirements to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  15. Structured Light-Based 3D Reconstruction System for Plants

    OpenAIRE

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud regi...

  16. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  17. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  18. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  19. Path integral in Snyder space

    Energy Technology Data Exchange (ETDEWEB)

    Mignemi, S., E-mail: smignemi@unica.it [Dipartimento di Matematica e Informatica, Università di Cagliari, Viale Merello 92, 09123 Cagliari (Italy); INFN, Sezione di Cagliari, Cittadella Universitaria, 09042 Monserrato (Italy); Štrajn, R. [Dipartimento di Matematica e Informatica, Università di Cagliari, Viale Merello 92, 09123 Cagliari (Italy); INFN, Sezione di Cagliari, Cittadella Universitaria, 09042 Monserrato (Italy)

    2016-04-29

    The definition of path integrals in one- and two-dimensional Snyder space is discussed in detail both in the traditional setting and in the first-order formalism of Faddeev and Jackiw. - Highlights: • The definition of the path integral in Snyder space is discussed using phase space methods. • The same result is obtained in the first-order formalism of Faddeev and Jackiw. • The path integral formulation of the two-dimensional Snyder harmonic oscillator is outlined.

  20. Path integral in Snyder space

    International Nuclear Information System (INIS)

    Mignemi, S.; Štrajn, R.

    2016-01-01

    The definition of path integrals in one- and two-dimensional Snyder space is discussed in detail both in the traditional setting and in the first-order formalism of Faddeev and Jackiw. - Highlights: • The definition of the path integral in Snyder space is discussed using phase space methods. • The same result is obtained in the first-order formalism of Faddeev and Jackiw. • The path integral formulation of the two-dimensional Snyder harmonic oscillator is outlined.

  1. Strategic options towards an affordable high-performance infrared camera

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512 pixel InGaAs uncooled system with high sensitivity, low noise, high frame rate (500 frames per second (FPS)) at full resolution, and low power consumption. Such a system supports market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path towards the justifiable price points essential for consumer-facing application industries such as automotive, medical, and security imaging. Among the strategic options presented are new sensor manufacturing technologies that scale favorably towards automation, multi-focal plane array compatible readout electronics, and dense or ultra-small pixel pitch devices.

  2. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  3. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    International Nuclear Information System (INIS)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A

    2011-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.
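
    The cubic spline path can be written as a cubic Hermite interpolation between the measured entry and exit positions and directions; the sketch below uses a normalised depth parameter and distance-scaled tangents as assumptions for illustration, and is not the GEANT4 or pCT reconstruction code used in the study.

```python
# Sketch of a cubic spline path (CSP) built from a proton's entry/exit
# positions and directions via cubic Hermite interpolation. The normalised
# parameter and tangent scaling are assumptions for illustration.
import numpy as np

def cubic_spline_path(p_in, d_in, p_out, d_out, n_points=100):
    """p_in/p_out: entry/exit positions; d_in/d_out: entry/exit directions.
    Tangents are scaled by the straight-line entry-to-exit distance."""
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    scale = np.linalg.norm(p_out - p_in)
    m0, m1 = scale * np.asarray(d_in, float), scale * np.asarray(d_out, float)
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1          # cubic Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p_in + h10 * m0 + h01 * p_out + h11 * m1

path = cubic_spline_path([0, 0, 0], [0, 0, 1], [2, 1, 100], [0.03, 0.02, 1])
print(path[0], path[-1])    # endpoints match the measured entry and exit
```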

  4. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Energy Technology Data Exchange (ETDEWEB)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A, E-mail: tome@humonc.wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705 (United States)

    2011-02-07

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.

  5. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Science.gov (United States)

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472

  6. Breast Reconstruction After Mastectomy

    Science.gov (United States)

    Overview page covering breast reconstruction after mastectomy: what breast reconstruction is and how surgeons use implants to reconstruct a woman’s breast.

  7. Breast reconstruction - implants

    Science.gov (United States)

    Breast implants surgery; Mastectomy - breast reconstruction with implants; Breast cancer - breast reconstruction with implants ... harder to find a tumor if your breast cancer comes back. Getting breast implants does not take as long as breast reconstruction ...

  8. Color blending based on viewpoint and surface normal for generating images from any viewpoint using multiple cameras

    OpenAIRE

    Mukaigawa, Yasuhiro; Genda, Daisuke; Yamane, Ryo; Shakunaga, Takeshi

    2003-01-01

    A color blending method for generating a high-quality image of human motion is presented. The 3D (three-dimensional) human shape is reconstructed by volume intersection and expressed as a set of voxels. As each voxel is observed with different colors from different cameras, the voxel color needs to be assigned appropriately from those several colors. We present a color blending method which calculates the voxel color from a linear combination of the colors observed by the multiple cameras. The weightings in the...
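
    The abstract is truncated before the weighting scheme, so the sketch below assumes a plausible weighting in which cameras closer to the target viewpoint and better aligned with the surface normal receive larger weights in the linear combination; it is not the paper's exact formula.

```python
# Sketch of per-voxel color blending as a linear combination of camera colors,
# weighting cameras by agreement with the target viewpoint and the surface
# normal. The exact weighting used in the paper is truncated above, so this
# particular formula is an assumption for illustration.
import numpy as np

def blend_voxel_color(colors, cam_dirs, view_dir, normal, eps=1e-6):
    """colors: (K, 3) RGB seen by K cameras; cam_dirs: (K, 3) unit vectors from
    the voxel towards each camera; view_dir/normal: unit vectors."""
    cam_dirs = np.asarray(cam_dirs, float)
    view_dir = np.asarray(view_dir, float)
    normal = np.asarray(normal, float)
    w_view = np.clip(cam_dirs @ view_dir, 0.0, None)   # near the new viewpoint
    w_norm = np.clip(cam_dirs @ normal, 0.0, None)      # facing the surface
    w = w_view * w_norm
    w = w / max(w.sum(), eps)
    return w @ np.asarray(colors, float)

c = blend_voxel_color([[200, 60, 60], [180, 70, 80]],
                      [[0, 0, 1], [0.7, 0, 0.7]],
                      view_dir=[0, 0, 1], normal=[0, 0, 1])
print(c)
```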

  9. Graph reconstruction with a betweenness oracle

    DEFF Research Database (Denmark)

    Abrahamsen, Mikkel; Bodwin, Greg; Rotenberg, Eva

    2016-01-01

    Graph reconstruction algorithms seek to learn a hidden graph by repeatedly querying a blackbox oracle for information about the graph structure. Perhaps the most well studied and applied version of the problem uses a distance oracle, which can report the shortest path distance between any pair of nodes. We introduce and study the betweenness oracle, where bet(a, m, z) is true iff m lies on a shortest path between a and z. This oracle is strictly weaker than a distance oracle, in the sense that a betweenness query can be simulated by a constant number of distance queries, but not vice versa.
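
    The reduction mentioned above can be written out directly: m lies on some shortest a-z path iff dist(a, m) + dist(m, z) = dist(a, z), so one betweenness query costs three distance queries.

```python
# Simulating a betweenness query with three distance queries.
def betweenness(dist, a, m, z):
    """dist(u, v): shortest-path distance oracle."""
    return dist(a, m) + dist(m, z) == dist(a, z)

line_dist = lambda u, v: abs(u - v)        # distances on a path graph 0-1-2-...
print(betweenness(line_dist, 0, 3, 5), betweenness(line_dist, 0, 7, 5))
```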

  10. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    Full Text Available This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it can be handcrafted from practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and today. Aspects of optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
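
    The depth-by-triangulation estimate mentioned above follows, for a rectified stereo pair, from Z = f·B/d with focal length f, baseline B (the separation of the two pinholes) and disparity d; the numbers in the sketch below are illustrative, not measured values.

```python
# Relative depth by triangulation for a rectified stereo pair: Z = f * B / d.
# The example numbers are illustrative, not measured values.
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    if disparity_px <= 0:
        raise ValueError("point at infinity or mismatched correspondence")
    return focal_px * baseline_mm / disparity_px   # depth in mm

print(depth_from_disparity(disparity_px=12.0, focal_px=700.0, baseline_mm=65.0))
```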

  11. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
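
    The record's method was implemented in Matlab; as an illustration of the same single-camera calibration workflow, the sketch below uses OpenCV with a checkerboard target (file names and board geometry are assumptions, not the paper's setup).

```python
# Single-camera calibration sketch with OpenCV and a checkerboard target.
# Image file names and board geometry are hypothetical.
import glob
import cv2
import numpy as np

board = (9, 6)                                     # inner corners per row/col
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)  # unit squares

obj_points, img_points, size = [], [], None
for name in glob.glob("calib_*.png"):
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 size, None, None)
print("reprojection RMS:", rms)
print("intrinsics:\n", K)          # focal lengths and principal point
print("distortion:", dist.ravel()) # lens distortion coefficients
```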

  12. A simple data loss model for positron camera systems

    International Nuclear Information System (INIS)

    Eriksson, L.; Dahlbom, M.

    1994-01-01

    A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead time corrections in existing scanners, although this is possible. Instead the model is intended to be used for data simulations in order to determine the figures of merit of future camera systems, based on state-of-the-art data handling solutions. The model assumes the data loss to be factorized into two components, one describing the detector or block-detector performance and the other the remaining data handling, such as coincidence determination, data transfer and data storage. Two modern positron camera systems have been investigated in terms of this model. These are the Siemens-CTI ECAT EXACT and ECAT EXACT HR systems, which both have an axial field-of-view (FOV) of about 15 cm. They both have retractable septa, can acquire data from the whole volume within the FOV, and can reconstruct volume image data. An example is given of how to use the model for live-time calculation in a futuristic large axial FOV cylindrical system

  13. IMAGE ACQUISITION CONSTRAINTS FOR PANORAMIC FRAME CAMERA IMAGING

    Directory of Open Access Journals (Sweden)

    H. Kauhanen

    2012-07-01

    Full Text Available The paper describes an approach to quantify the amount of projective error produced by an offset of projection centres in a panoramic imaging workflow. We have limited this research to panoramic workflows in which several sub-images using a planar image sensor are taken and then stitched together as a large panoramic image mosaic. The aim is to simulate how large the offset can be before it introduces significant error to the dataset. The method uses geometrical analysis to calculate the error in various cases. Constraints on shooting distance, focal length and the depth of the area of interest are taken into account. Considering these constraints, it is possible to safely use even a poorly calibrated panoramic camera rig with a noticeable offset in projection centre locations. The aim is to create datasets suited for photogrammetric reconstruction. Similar constraints can also be used for finding recommended areas of the image planes for automatic feature matching and thus improve the stitching of sub-images into full panoramic mosaics. The results are mainly designed to be used with long focal length cameras, where the offset of the projection centres of sub-images can seem significant but the shooting distance is also long. We show that in such situations the error introduced by the offset of the projection centres results in only negligible error when stitching a metric panorama. Even if the main use of the results is with cameras of long focal length, they are feasible for all focal lengths.
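
    A minimal version of the geometric argument is the small-angle parallax relation sketched below: an offset e of the projection centre makes points at depths d_near and d_far along the same intended ray separate by roughly e·(1/d_near − 1/d_far) radians. This is a standard relation used here for illustration, not the paper's full derivation, and the numbers are hypothetical.

```python
# Small-angle parallax estimate of the stitching error caused by an offset of
# the projection centre; illustrative numbers, not the paper's test cases.
def stitch_error_px(offset_m, d_near_m, d_far_m, focal_px):
    parallax_rad = offset_m * (1.0 / d_near_m - 1.0 / d_far_m)
    return focal_px * parallax_rad

# Long-lens scenario: 5 mm offset, scene between 80 m and 120 m,
# focal length equivalent to 20,000 px -> roughly 0.4 px, well under a pixel.
print(stitch_error_px(0.005, 80.0, 120.0, 20000.0))
```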

  14. Proton computed tomography images with algebraic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Bruzzi, M. [Physics and Astronomy Department, University of Florence, Florence (Italy); Civinini, C.; Scaringella, M. [INFN - Florence Division, Florence (Italy); Bonanno, D. [INFN - Catania Division, Catania (Italy); Brianzi, M. [INFN - Florence Division, Florence (Italy); Carpinelli, M. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Cirrone, G.A.P.; Cuttone, G. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Presti, D. Lo [INFN - Catania Division, Catania (Italy); Physics and Astronomy Department, University of Catania, Catania (Italy); Maccioni, G. [INFN – Cagliari Division, Cagliari (Italy); Pallotta, S. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Randazzo, N. [INFN - Catania Division, Catania (Italy); Romano, F. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Sipala, V. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Talamonti, C. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Vanzi, E. [Fisica Sanitaria, Azienda Ospedaliero-Universitaria Senese, Siena (Italy)

    2017-02-11

    A prototype of a proton Computed Tomography (pCT) system for hadron-therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Reconstructed tomography images with r.m.s. density resolutions down to ~1% and spatial resolutions <1 mm, achieved within processing times of ~15 min for a 512×512 pixel image, show that this technique will be beneficial if used instead of X-ray CT in hadron-therapy.

  15. Two Generations of Path Dependence

    DEFF Research Database (Denmark)

    Madsen, Mogens Ove

      Even if there is no fully articulated and generally accepted theory of Path Dependence it has eagerly been taken up across a wide range of social sciences - primarily coming from economics. Path Dependence is most of all a metaphor that offers reason to believe, that some political, social...

  16. Chromatic roots and hamiltonian paths

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2000-01-01

    We present a new connection between colorings and hamiltonian paths: If the chromatic polynomial of a graph has a noninteger root less than or equal to t(n) = 2/3 + (1/3)·∛(26 + 6√33) + (1/3)·∛(26 − 6√33) ≈ 1.29559..., then the graph has no hamiltonian path. This result...
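
    A quick numeric sanity check of the quoted constant (not part of the original abstract; the real cube root of the negative term is taken explicitly):

    ```python
    from math import sqrt

    # Evaluate t = 2/3 + (1/3)*cbrt(26 + 6*sqrt(33)) + (1/3)*cbrt(26 - 6*sqrt(33)),
    # where the second cube-root argument is negative, so its real root is used.
    t = 2/3 + (26 + 6*sqrt(33))**(1/3)/3 - (6*sqrt(33) - 26)**(1/3)/3
    print(t)   # ~1.29560, matching the value quoted in the abstract
    ```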

  17. On Hilbert space of paths

    International Nuclear Information System (INIS)

    Exner, P.; Kolerov, G.I.

    1980-01-01

    A Hilbert space of paths, the elements of which are determined by trigonometric series, was proposed and used recently by Truman. This space is shown to consist precisely of all absolutely continuous paths ending in the origin with square-integrable derivatives

  18. Robotic Online Path Planning on Point Cloud.

    Science.gov (United States)

    Liu, Ming

    2016-05-01

    This paper deals with the path-planning problem for mobile wheeled or tracked robots which drive in 2.5-D environments, where the traversable surface is usually considered as a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using a raw point cloud as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as meshing, which generally have high computational complexity. Instead, we utilize the output of a 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics processing unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps to model the latent traversable surface. Using the proposed metric, we show by experiments that the geodesic in the 3-D tensor space leads to reasonable path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot manoeuvre while considering the minimum travel distance.
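
    As a rough illustration of planning directly on a raw point cloud, the sketch below builds a k-nearest-neighbour graph over the points, weights edges by Euclidean length scaled by a cost derived from a per-point surface-saliency value (here simply assumed to be given, standing in for the tensor-voting output), and runs Dijkstra. It is a strong simplification of the method in the abstract; all names and parameters are illustrative.

    ```python
    import heapq
    import numpy as np

    def plan_on_points(points, surface_saliency, start, goal, k=8, alpha=4.0):
        """Toy 2.5-D planner on a raw point cloud: k-NN graph with edge costs
        inflated where the (assumed given) surface saliency is low, then Dijkstra."""
        n = len(points)
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]            # k nearest neighbours
        cost = 1.0 + alpha * (1.0 - surface_saliency)        # low saliency -> expensive
        dist = np.full(n, np.inf); prev = np.full(n, -1); dist[start] = 0.0
        pq = [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist[u]:
                continue
            for v in nbrs[u]:
                w = np.sqrt(d2[u, v]) * 0.5 * (cost[u] + cost[v])
                if d + w < dist[v]:
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        path, u = [], goal
        while u != -1:
            path.append(int(u)); u = prev[u]
        return path[::-1]

    # Toy flat 5x5 patch with a low-saliency ("rough") strip at x = 2.
    pts = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)
    sal = np.where(pts[:, 0] == 2, 0.1, 1.0)
    print(plan_on_points(pts, sal, start=0, goal=24))
    ```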

  19. Collimator changer for scintillation camera

    International Nuclear Information System (INIS)

    Jupa, E.C.; Meeder, R.L.; Richter, E.K.

    1976-01-01

    A collimator changing assembly mounted on the support structure of a scintillation camera is described. A vertical support column is positioned proximate the detector support column, with a plurality of support arms mounted thereon in a rotatable, cantilevered manner at separate vertical positions. Each support arm is adapted to carry one of the plurality of collimators, which are interchangeably mountable on the underside of the detector, and to transport the collimator between a store position remote from the detector and a change position underneath said detector.

  20. Indoor calibration for stereoscopic camera STC: a new method

    Science.gov (United States)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of the planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup providing an indoor reproduction of the flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a single sensor. Its optical model is based on a brand new concept to minimize mass and volume and to allow push-frame imaging. This model required the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to about 1 m in the lab; on the other hand, it allows different viewing angles to be replicated for the considered targets. Neglecting for the sake of simplicity the curvature of Mercury, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir.

  1. On camera-based smoke and gas leakage detection

    Energy Technology Data Exchange (ETDEWEB)

    Nyboe, Hans Olav

    1999-07-01

    Gas detectors are found in almost every part of industry and in many homes as well. An offshore oil or gas platform may host several hundred gas detectors. The ability of the common point and open path gas detectors to detect leakages depends on their location relative to the location of a gas cloud. This thesis describes the development of a passive volume gas detector, that is, one that will detect a leakage anywhere in the area monitored. After consideration of several detection techniques it was decided to use an ordinary monochrome camera as the sensor. Because a gas leakage may perturb the index of refraction, parts of the background appear to be displaced from their true positions, and it is necessary to develop algorithms that can deal with small differences between images. The thesis develops two such algorithms. Many image regions can be defined and several feature values can be computed for each region. The values of the features depend on the pattern in the image regions. The classes studied in this work are: reference, gas, smoke and human activity. Tests show that observations belonging to these classes can be classified with fairly high accuracy. The features in the feature set were chosen and developed for this particular application. Basically, the features measure the magnitude of pixel differences, the size of detected phenomena and image distortion. Interesting results from many experiments are presented. Most importantly, the experiments show that apparent motion caused by a gas leakage or heat convection can be detected by means of a monochrome camera. Small leakages of methane can be detected at a range of about four metres. Other gases, such as butane, whose densities differ more from the density of air than that of methane does, can be detected further from the camera. Gas leakages large enough to cause condensation have been detected at a camera distance of 20 metres. 59 refs., 42 figs., 13 tabs.

  2. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF camera on an AGV is suitable for autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; it is used to extract information about obstacles after calibration and ground testing, and is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more suitable camera mounting angle is determined by analysing the camera's performance characteristics, such as pixel detection, the detection rate, the maximum perceived distance, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which show that the postulated application of the ToF camera in the AGV is not straightforward. Finally, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented in a real-time experiment.
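
    The conversion of depth data into a 2D workspace grid, as described above, might look roughly like the following sketch. It assumes a pinhole ToF model and a camera pitched down by a known tilt; the intrinsics, mounting height, cell size and obstacle threshold are made-up values, not those of the actual PMD/Pioneer setup.

    ```python
    import numpy as np

    def depth_to_grid(depth, fx, fy, cx, cy, tilt_deg, cam_height,
                      cell=0.1, grid_dim=(100, 100), obstacle_height=0.15):
        """Illustrative conversion of a depth image into a 2-D occupancy grid.
        Cells whose points rise more than `obstacle_height` above the ground are
        marked as obstacles (2), the rest as traversable (1); 0 means unknown."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth                                        # range along the optical axis
        x = (u - cx) * z / fx                            # right
        y = (v - cy) * z / fy                            # down
        t = np.deg2rad(tilt_deg)                         # rotate into a ground-aligned frame
        forward = np.cos(t) * z - np.sin(t) * y
        up = cam_height - (np.sin(t) * z + np.cos(t) * y)
        grid = np.zeros(grid_dim, dtype=np.uint8)
        gi = (forward / cell).astype(int)
        gj = (x / cell).astype(int) + grid_dim[1] // 2
        valid = (z > 0) & (gi >= 0) & (gi < grid_dim[0]) & (gj >= 0) & (gj < grid_dim[1])
        grid[gi[valid], gj[valid]] = np.where(up[valid] > obstacle_height, 2, 1)
        return grid

    # Synthetic 8x8 depth image, everything 2 m away, hypothetical intrinsics.
    g = depth_to_grid(np.full((8, 8), 2.0), fx=100, fy=100, cx=4, cy=4,
                      tilt_deg=30, cam_height=0.5)
    print(np.count_nonzero(g))
    ```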

  3. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other is called the working camera. The tracking camera is used for tracking the positions of the MBVS and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, namely that variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  4. Design of a Compton camera for 3D prompt-γ imaging during ion beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Roellinghoff, F., E-mail: roelling@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Richard, M.-H., E-mail: mrichard@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Chevallier, M.; Constanzo, J.; Dauvergne, D. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Freud, N. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Henriquet, P.; Le Foulher, F. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Letang, J.M. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Montarou, G. [LPC, CNRS/IN2P3, Clermont-F. University (France); Ray, C.; Testa, E.; Testa, M. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Walenta, A.H. [Uni-Siegen, FB Physik, Emmy-Noether Campus, D-57068 Siegen (Germany)

    2011-08-21

    We investigate, by means of Geant4 simulations, a real-time method to control the position of the Bragg peak during ion therapy, based on a Compton camera in combination with a beam tagging device (hodoscope) in order to detect the prompt gamma emitted during nuclear fragmentation. The proposed set-up consists of a stack of 2 mm thick silicon strip detectors and a LYSO absorber detector. The γ emission points are reconstructed analytically by intersecting the ion trajectories given by the beam hodoscope and the Compton cones given by the camera. The camera response to a polychromatic point source in air is analyzed with regard to both spatial resolution and detection efficiency. Various geometrical configurations of the camera have been tested. In the proposed configuration, for a typical polychromatic photon point source, the spatial resolution of the camera is about 8.3 mm FWHM and the detection efficiency 2.5×10⁻⁴ (reconstructable photons/emitted photons in 4π). Finally, the clinical applicability of our system is considered and possible starting points for further developments of a prototype are discussed.
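
    The analytic reconstruction step, intersecting the Compton cone with the ion trajectory given by the hodoscope, can be written compactly. The sketch below assumes full energy absorption (incident energy E0 = E1 + E2) and uses made-up positions and energies; it illustrates the geometry only and is not the authors' code.

    ```python
    import numpy as np

    ME_C2 = 0.511  # electron rest energy [MeV]

    def compton_cone_line_intersection(A, B, E1, E2, P0, d):
        """Intersect the Compton cone (apex A = scatterer hit, axis along A-B,
        half-angle from the energies E1 in the scatterer and E2 in the absorber)
        with the beam line P0 + t*d. Returns 0, 1 or 2 candidate emission points."""
        A, B, P0, d = map(np.asarray, (A, B, P0, d))
        E0 = E1 + E2                                       # assume full absorption
        cos_theta = 1.0 - ME_C2 * (1.0 / E2 - 1.0 / E0)
        if abs(cos_theta) > 1.0:
            return []                                      # kinematically impossible event
        axis = (A - B) / np.linalg.norm(A - B)
        d = d / np.linalg.norm(d)
        w = P0 - A
        c2 = cos_theta ** 2
        # ((w + t*d) . axis)^2 = cos^2(theta) * |w + t*d|^2  ->  a*t^2 + b*t + c = 0
        a = (d @ axis) ** 2 - c2 * (d @ d)
        b = 2.0 * ((w @ axis) * (d @ axis) - c2 * (w @ d))
        c = (w @ axis) ** 2 - c2 * (w @ w)
        if abs(a) < 1e-12:
            return []                                      # degenerate configuration
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return []
        pts = [P0 + ((-b + s * np.sqrt(disc)) / (2.0 * a)) * d for s in (+1.0, -1.0)]
        return [p for p in pts if (p - A) @ axis >= 0]     # keep the forward nappe only

    # Hypothetical event: scatterer at x = 10 cm, absorber behind it, beam along z.
    print(compton_cone_line_intersection(A=[10, 0, 0], B=[20, 0, 0],
                                         E1=1.0, E2=3.0,
                                         P0=[0, 0, -50], d=[0, 0, 1]))
    ```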

  5. Simple method of modelling of digital holograms registering and their optical reconstruction

    International Nuclear Information System (INIS)

    Evtikhiev, N N; Cheremkhin, P A; Krasnov, V V; Kurbatova, E A; Molodtsov, D Yu; Porshneva, L A; Rodin, V G

    2016-01-01

    A technique for modeling digital hologram recording and the optical reconstruction of images from these holograms is described. The method takes into account the characteristics of the object, of the digital camera's photosensor and of the spatial light modulator used for displaying the digital holograms. Using the technique, equipment can be chosen for experiments so as to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted. (paper)

  6. Hard paths, soft paths or no paths? Cross-cultural perceptions of water solutions

    Science.gov (United States)

    Wutich, A.; White, A. C.; White, D. D.; Larson, K. L.; Brewis, A.; Roberts, C.

    2014-01-01

    In this study, we examine how development status and water scarcity shape people's perceptions of "hard path" and "soft path" water solutions. Based on ethnographic research conducted in four semi-rural/peri-urban sites (in Bolivia, Fiji, New Zealand, and the US), we use content analysis to conduct statistical and thematic comparisons of interview data. Our results indicate clear differences associated with development status and, to a lesser extent, water scarcity. People in the two less developed sites were more likely to suggest hard path solutions, less likely to suggest soft path solutions, and more likely to see no path to solutions than people in the more developed sites. Thematically, people in the two less developed sites envisioned solutions that involve small-scale water infrastructure and decentralized, community-based solutions, while people in the more developed sites envisioned solutions that involve large-scale infrastructure and centralized, regulatory water solutions. People in the two water-scarce sites were less likely to suggest soft path solutions and more likely to see no path to solutions (but no more likely to suggest hard path solutions) than people in the water-rich sites. Thematically, people in the two water-rich sites seemed to perceive a wider array of unrealized potential soft path solutions than those in the water-scarce sites. On balance, our findings are encouraging in that they indicate that people are receptive to soft path solutions in a range of sites, even those with limited financial or water resources. Our research points to the need for more studies that investigate the social feasibility of soft path water solutions, particularly in sites with significant financial and natural resource constraints.

  7. Counting neutrons with a commercial S-CMOS camera

    Directory of Open Access Journals (Sweden)

    Patrick Van Esch

    2018-01-01

    Full Text Available It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, and off-line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We will report on this ongoing work, which is a result of the collaboration between ESS Bilbao and the ILL. The main progress to be reported concerns the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, it is necessary that all the processing happens on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it into a reasonable "neutron impact" data flow like that from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather high, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with and results will be shown for different setups improving upon the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be at the same time efficient in their recognition of neutron signals, efficient in their rejection of noise signals (internal and external to the camera), but also simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we will give an overview of the part of the road that has already been walked.

  8. Counting neutrons with a commercial S-CMOS camera

    Science.gov (United States)

    Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega

    2018-01-01

    It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, and off-line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We will report on this ongoing work, which is a result of the collaboration between ESS Bilbao and the ILL. The main progress to be reported concerns the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, it is necessary that all the processing happens on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it into a reasonable "neutron impact" data flow like that from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather high, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with and results will be shown for different setups improving upon the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be at the same time efficient in their recognition of neutron signals, in their rejection of noise signals (internal and external to the camera) but also have to be simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we will give an overview of the part of the road that has already been walked.
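
    A heavily simplified, off-line version of the flash-recognition step might look like the following; the thresholds, blob sizes and the synthetic frame are purely illustrative and do not reflect the actual ILL/ESS Bilbao processing chain or its FPGA implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    def count_neutron_flashes(frame, dark, k_sigma=5.0, min_px=2, max_px=60):
        """Subtract a dark reference, threshold well above the background noise and
        keep connected blobs whose size is compatible with a single scintillation
        flash; returns one (row, col) centroid per candidate neutron impact."""
        diff = frame.astype(float) - dark.astype(float)
        thr = k_sigma * diff.std()
        labels, n = ndimage.label(diff > thr)
        sizes = ndimage.sum(diff > thr, labels, index=np.arange(1, n + 1))
        hits = [i + 1 for i, s in enumerate(sizes) if min_px <= s <= max_px]
        return ndimage.center_of_mass(diff, labels, hits)

    # Synthetic 64x64 frame with one bright 3x3 spot on a noisy background.
    rng = np.random.default_rng(0)
    dark = rng.normal(100, 2, (64, 64))
    frame = dark + rng.normal(0, 2, (64, 64))
    frame[30:33, 40:43] += 80
    print(count_neutron_flashes(frame, dark))
    ```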

  9. Observations of the Perseids 2012 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.

    2012-09-01

    The Perseids are one of the most prominent annual meteor showers, occurring every summer when the stream of dust particles originating from Halley-type comet 109P/Swift-Tuttle intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August, producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer monitoring the Perseid activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [0]. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract and is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive back-illuminated 1024x1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal). [Figure 1: A meteor captured simultaneously by the SPOSH cameras during the 2011 observing campaign in Greece; the horizon, including surrounding mountains, can be seen in the image corners as a result of the large FOV of the camera.] The observations will be made on the Greek Peloponnese peninsula, monitoring the post-peak activity of the Perseids during a one-week period around the August New Moon (14th to 21st). Two SPOSH cameras will be deployed at two remote sites at high altitude for the triangulation of meteor trajectories captured at both stations simultaneously. The observations during this time interval will make it possible to study the poorly observed post-maximum branch of the Perseid stream and compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories

  10. Three-Dimensional Reconstruction Optical System Using Shadows Triangulation

    Science.gov (United States)

    Barba, J. Leiner; Vargas, Q. Lorena; Torres, M. Cesar; Mattos, V. Lorenzo

    2008-04-01

    In this work, a three-dimensional reconstruction system is developed using the Shades3D tool of the Matlab® programming language and low-cost materials, such as a webcam, a stick, a weak structured-lighting system consisting of a desk lamp, and an observation plane on which the object is located. The reconstruction is obtained through a triangulation process that is executed after acquiring a sequence of images of the scene with a shadow projected on the object; additionally, an image filtering step is performed so that only the part of the scene that will be reconstructed is retained. Beforehand, it is necessary to carry out a calibration process to determine the camera's internal geometric and optical characteristics (intrinsic parameters), and the 3D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters). The lamp and the stick are used to produce a shadow which scans the object; in this technique, it is not necessary to know the position of the light source; instead, the triangulation is obtained using the shadow plane produced by the intersection between the stick and the illumination pattern. The webcam captures all images with the shadow scanning the object, and the Shades3D tool processes all the information, taking into account the captured images and the calibration parameters. Likewise, this technique is evaluated in the reconstruction of parts of the human body and its application to the detection of external abnormalities and the elaboration of prostheses or implants.
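
    The core triangulation step, intersecting the viewing ray of a shadow-edge pixel with the estimated shadow plane, is compact. The sketch below takes the camera centre as the world origin and uses a made-up intrinsic matrix and shadow plane; it illustrates the geometry only, not the Shades3D implementation.

    ```python
    import numpy as np

    def triangulate_shadow_point(pixel, K, shadow_plane_point, shadow_plane_normal):
        """Back-project an image pixel on the shadow edge into a viewing ray through
        the camera centre and intersect it with the shadow plane for that frame.
        K is the 3x3 intrinsic matrix from calibration (illustrative values below)."""
        u, v = pixel
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray direction
        p0 = np.asarray(shadow_plane_point, float)
        n = np.asarray(shadow_plane_normal, float)
        t = (n @ p0) / (n @ ray)                         # ray: X = t * ray (camera at origin)
        return t * ray

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    # Hypothetical shadow plane through (0, 0, 1) tilted about the y axis.
    P = triangulate_shadow_point((400, 260), K,
                                 shadow_plane_point=[0.0, 0.0, 1.0],
                                 shadow_plane_normal=[0.6, 0.0, -0.8])
    print(P)
    ```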

  11. Identification of hadronic τ decays using the τ lepton flight path and reconstruction and identification of jets with a low transverse energy at intermediate luminosities with an application to the search for the Higgs boson in vector boson fusion with the ATLAS experiment at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Ruwiedel, Christoph

    2010-06-15

    Three studies of different components of the object reconstruction with the ATLAS experiment using simulated data are presented. In each study, a method for the improvement of the reconstruction is developed and the improvements are evaluated. The reconstruction of observables sensitive to the τ lepton lifetime and their use for the identification of hadronic τ decays are the subject of the first study. In the second study, a method for the identification of jets from the signal interaction in the presence of additional minimum bias interactions is developed and applied to the central jet veto in the vector boson fusion Higgs analysis. In the third study, the effects of additional minimum bias interactions close in time to the triggered event on the cluster formation and the determination of the jet energy are evaluated and the cluster formation is modified in a way such that the bias of the reconstructed jet energy is minimized. (orig.)

  12. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances in human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on an analysis of the current progress made in human tracking techniques over camera networks.

  13. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems into industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color tile data are acquired using the camera of interest, and a mapping to some predetermined reference image is developed using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data are adjusted for each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data have been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
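
    The empirical-mapping idea can be illustrated with a simple least-squares polynomial fit on colour-tile measurements, standing in for the neural-network mapping mentioned in the abstract; the tile data below are synthetic and all parameters are illustrative.

    ```python
    import numpy as np

    def fit_camera_mapping(rgb_src, rgb_ref, degree=2):
        """Least-squares fit of a polynomial mapping from one camera's RGB response
        to a reference response, using colour-tile measurements (toy stand-in for
        the neural-network mapping described above)."""
        def expand(rgb):
            r, g, b = rgb.T
            cols = [np.ones_like(r), r, g, b]
            if degree >= 2:
                cols += [r * r, g * g, b * b, r * g, r * b, g * b]
            return np.stack(cols, axis=1)
        M, *_ = np.linalg.lstsq(expand(rgb_src), rgb_ref, rcond=None)
        return lambda rgb: expand(np.atleast_2d(np.asarray(rgb, float))) @ M

    # Synthetic "colour tile" data: the reference camera differs by a gain and offset.
    rng = np.random.default_rng(1)
    tiles_src = rng.uniform(0, 1, (24, 3))
    tiles_ref = 0.9 * tiles_src + 0.05 + rng.normal(0, 0.005, (24, 3))
    to_ref = fit_camera_mapping(tiles_src, tiles_ref)
    print(to_ref([0.5, 0.2, 0.7]))
    ```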

  14. Kinect Fusion improvement using depth camera calibration

    Science.gov (United States)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    3D scene modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused a growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for Kinect in order to use it not only as a game device but also as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner and producing meshed polygonal models of a static scene just by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation in order to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.

  15. Kinect Fusion improvement using depth camera calibration

    Directory of Open Access Journals (Sweden)

    D. Pagliari

    2014-06-01

    Full Text Available 3D scene modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused a growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for Kinect in order to use it not only as a game device but also as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner and producing meshed polygonal models of a static scene just by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation in order to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.

  16. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  17. Hanford Environmental Dose Reconstruction Project monthly report

    International Nuclear Information System (INIS)

    Finch, S.M.

    1991-10-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; environmental pathways and dose estimates.

  18. Optimal Paths in Gliding Flight

    Science.gov (United States)

    Wolek, Artur

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  19. A METHOD FOR SELF-CALIBRATION IN SATELLITE WITH HIGH PRECISION OF SPACE LINEAR ARRAY CAMERA

    Directory of Open Access Journals (Sweden)

    W. Liu

    2016-06-01

    Full Text Available At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually performed with data from a ground calibration field after capturing the images. The entire process is very complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. On the basis of a large number of on-orbit calibrations, we found that, owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field. Thus, regular calibration using field data cannot be ensured. This article proposes a real-time self-calibration method for a space linear array camera on a satellite based on the optical autocollimation principle. A collimating light source and small matrix-array CCD devices are installed inside the load system of the satellite; these use the same light path as the linear array camera. We can extract the location changes of the cross marks on the matrix-array CCD to determine the real-time variations in the focal length and angle parameters of the linear array camera. The on-orbit status of the camera is rapidly obtained using this method. On the one hand, the camera's variations can be tracked accurately and its attitude adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be realized quickly, which improves the efficiency and reliability of photogrammetric processing.
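
    The autocollimation geometry implies that a lateral shift of the reflected cross mark corresponds to an angular change of roughly shift/(2 × focal length). The sketch below applies that standard relation with made-up numbers; the real on-board focal length and read-out are not given in the abstract.

    ```python
    def boresight_change_arcsec(dx_um, dy_um, collimator_focal_mm):
        """Standard autocollimation relation: a lateral shift (dx, dy) of the
        reflected cross mark on the small matrix CCD maps to an angular change of
        the optical axis of about shift / (2 * focal length). Values illustrative."""
        to_arcsec = 206265.0
        ax = dx_um * 1e-3 / (2.0 * collimator_focal_mm) * to_arcsec
        ay = dy_um * 1e-3 / (2.0 * collimator_focal_mm) * to_arcsec
        return ax, ay

    # A hypothetical 5 um mark shift seen through a 500 mm collimating path:
    print(boresight_change_arcsec(5.0, 0.0, 500.0))   # ~ (1.03, 0.0) arcsec
    ```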

  20. Adaptive algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Lu Wenkai; Yin Fangfang

    2004-01-01

    Algebraic reconstruction techniques (ART) are iterative procedures for reconstructing objects from their projections. It is proven that ART can be made computationally efficient by carefully arranging the order in which the collected data are accessed during the reconstruction procedure and by adaptively adjusting the relaxation parameters. In this paper, an adaptive algebraic reconstruction technique (AART), which adopts the same projection access scheme as the multilevel scheme algebraic reconstruction technique (MLS-ART), is proposed. By introducing adaptive adjustment of the relaxation parameters during the reconstruction procedure, one-iteration AART can produce reconstructions with better quality in comparison with one-iteration MLS-ART. Furthermore, AART outperforms MLS-ART with improved computational efficiency.
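
    For reference, a single row-action ART (Kaczmarz) sweep with a decaying relaxation parameter looks like the following; the decay schedule is only a crude stand-in for the adaptive adjustment proposed in the paper, and the toy system matrix is illustrative.

    ```python
    import numpy as np

    def art_reconstruct(A, p, n_iters=1, lam0=1.0, decay=0.99, order=None):
        """Row-action ART (Kaczmarz) with a simple relaxation schedule.
        A     : (M, N) system matrix, row i = weights of ray i over the N pixels
        p     : (M,) measured projections
        order : optional permutation of ray indices (e.g. a multilevel access scheme)."""
        M, N = A.shape
        x = np.zeros(N)
        lam = lam0
        rows = np.arange(M) if order is None else np.asarray(order)
        row_norm2 = np.einsum('ij,ij->i', A, A)              # ||a_i||^2 for each ray
        for _ in range(n_iters):
            for i in rows:
                if row_norm2[i] == 0:
                    continue
                residual = p[i] - A[i] @ x
                x += lam * residual / row_norm2[i] * A[i]    # project onto hyperplane i
                lam *= decay                                 # simplified adaptive relaxation
        return x

    # Toy example: 4-pixel image, 4 rays.
    A = np.array([[1., 1., 0., 0.],
                  [0., 0., 1., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.]])
    x_true = np.array([1., 2., 3., 4.])
    print(art_reconstruct(A, A @ x_true, n_iters=50))        # approaches x_true
    ```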

  1. Perfect discretization of path integrals

    International Nuclear Information System (INIS)

    Steinhaus, Sebastian

    2012-01-01

    In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discussed. Furthermore we show that a reparametrization invariant path integral implies discretization independence and acts as a projector onto physical states.

  2. Perfect discretization of path integrals

    Science.gov (United States)

    Steinhaus, Sebastian

    2012-05-01

    In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discussed. Furthermore we show that a reparametrization invariant path integral implies discretization independence and acts as a projector onto physical states.

  3. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  4. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  5. Performance analysis for gait in camera networks

    OpenAIRE

    Michela Goffredo; Imed Bouchrika; John Carter; Mark Nixon

    2008-01-01

    This paper deploys gait analysis for subject identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of directions of walking. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real...

  6. Explosive Transient Camera (ETC) Program

    Science.gov (United States)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  7. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of the materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive and do not offer a perfect correction. In this paper, we propose a new post-capture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.

  8. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve it.
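
    The difference between the two approximations is easy to demonstrate numerically: with matched variances, the Gaussian (SD-AWGN) model still produces negative values at low exposure, while the Poisson model cannot. The gain and variance parameters below are illustrative and unrelated to the paper's measured sensors.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def poisson_model(signal, gain=0.1):
        """Photon shot noise: discrete counts whose variance equals the mean count."""
        return gain * rng.poisson(signal / gain)

    def sd_awgn_model(signal, a=0.1, b=0.0):
        """Gaussian noise whose variance a*signal + b matches the Poisson variance
        to second order but not the shape of the distribution (values illustrative)."""
        return signal + rng.normal(0.0, np.sqrt(a * signal + b), size=np.shape(signal))

    # At low exposure the two approximations differ most.
    x = np.full(100_000, 0.2)
    p, g = poisson_model(x), sd_awgn_model(x)
    print(p.var(), g.var(), (p < 0).mean(), (g < 0).mean())
    ```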

  9. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and it was concluded that consumer grade digital cameras are expected to become a useful photogrammetric device for various close range application fields. On the other hand, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we are faced with the epoch-making question of whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, a comparative evaluation between mobile phone cameras and consumer grade digital cameras is presented in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer grade digital cameras and to develop the market in digital photogrammetric fields.

  10. Decision about buying a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ganatra, R D

    1993-12-31

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera. 1 tab., 1 fig.

  11. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  12. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera

  13. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  14. An Introduction to Path Analysis

    Science.gov (United States)

    Wolfe, Lee M.

    1977-01-01

    The analytical procedure of path analysis is described in terms of its use in nonexperimental settings in the social sciences. The description assumes a moderate statistical background on the part of the reader. (JKS)

  15. Probabilistic simulation of fermion paths

    International Nuclear Information System (INIS)

    Zhirov, O.V.

    1989-01-01

    Permutation symmetry of the fermion path integral allows (when spin degrees of freedom are ignored) any probabilistic algorithm, such as Metropolis, heat bath, etc., to be used in its simulation. 6 refs., 2 tabs.

  16. 3D Point Cloud Reconstruction from Single Plenoptic Image

    Directory of Open Access Journals (Sweden)

    F. Murgia

    2016-06-01

    Full Text Available Novel plenoptic cameras sample the light field crossing the main camera lens. The information available in a plenoptic image must be processed in order to create the depth map of the scene from a single camera shot. In this paper a novel algorithm for the reconstruction of the 3D point cloud of a scene from a single plenoptic image, taken with a consumer plenoptic camera, is proposed. Experimental analysis is conducted on several test images, and the results are compared with state-of-the-art methodologies. The results are very promising, as the quality of the 3D point cloud from the plenoptic image is comparable with the quality obtained with current non-plenoptic methodologies that require more than one image.

  17. Real-time pedestrian detection with the videos of car camera

    Directory of Open Access Journals (Sweden)

    Yunling Zhang

    2015-12-01

    Full Text Available Pedestrians in the vehicle path are in danger of being hit, causing severe injury to pedestrians and vehicle occupants. Therefore, real-time pedestrian detection with the video of a vehicle-mounted camera is of great significance for vehicle-pedestrian collision warning and the traffic safety of self-driving cars. In this article, a real-time scheme is proposed based on integral channel features and a graphics processing unit. The proposed method does not need to resize the input image. Moreover, the computationally expensive convolution of the detectors with the input image is converted into the dot product of two larger matrices, which can be computed efficiently using a graphics processing unit. The experiments showed that the proposed method can detect pedestrians in the video of a car camera at 20+ frames per second with acceptable error rates. Thus, it can be applied in real-time detection tasks with the videos of car cameras.
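
    The matrix-product trick mentioned above is essentially the familiar im2col rearrangement: every sliding window is unrolled into a row so that evaluating many detectors densely becomes one large matrix multiplication. The sketch below shows the idea on the CPU with NumPy; the GPU mapping and the integral-channel features themselves are omitted, and all shapes are illustrative.

    ```python
    import numpy as np

    def im2col(image, kh, kw):
        """Unroll every kh x kw window of `image` into a row, so a sliding-window
        detector evaluation becomes a single matrix product."""
        H, W = image.shape
        out_h, out_w = H - kh + 1, W - kw + 1
        cols = np.empty((out_h * out_w, kh * kw))
        for i in range(out_h):
            for j in range(out_w):
                cols[i * out_w + j] = image[i:i + kh, j:j + kw].ravel()
        return cols, (out_h, out_w)

    # One 3x3 "detector" applied densely to a 6x6 channel image.
    rng = np.random.default_rng(0)
    img = rng.uniform(size=(6, 6))
    w = rng.uniform(size=(3, 3))
    cols, shape = im2col(img, 3, 3)
    scores = (cols @ w.ravel()).reshape(shape)    # identical to a valid correlation
    print(scores.shape)                           # (4, 4)
    ```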

  18. Formal language constrained path problems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
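
    The polynomial-time construction for regular-language constraints can be sketched as Dijkstra on the product of the labelled graph and a DFA accepting the allowed label sequences. The tiny intermodal example below (walk-then-bus, i.e. the language w*b*) is illustrative and not taken from the paper.

    ```python
    import heapq

    def regular_constrained_shortest_path(graph, dfa, start, goal, q0, accepting):
        """Dijkstra on the product of a labelled graph and a DFA.
        graph: {u: [(v, label, weight), ...]},  dfa: {(state, label): next_state}."""
        dist = {(start, q0): 0.0}
        pq = [(0.0, start, q0)]
        while pq:
            d, u, q = heapq.heappop(pq)
            if u == goal and q in accepting:
                return d
            if d > dist.get((u, q), float("inf")):
                continue
            for v, label, w in graph.get(u, []):
                q2 = dfa.get((q, label))
                if q2 is None:
                    continue                      # edge label not allowed by the language
                if d + w < dist.get((v, q2), float("inf")):
                    dist[(v, q2)] = d + w
                    heapq.heappush(pq, (d + w, v, q2))
        return None

    # Hypothetical mode-choice example: walking (w) edges must precede bus (b) edges.
    graph = {"A": [("B", "w", 1.0), ("C", "b", 1.0)],
             "B": [("C", "b", 2.0), ("A", "w", 1.0)],
             "C": []}
    dfa = {(0, "w"): 0, (0, "b"): 1, (1, "b"): 1}
    print(regular_constrained_shortest_path(graph, dfa, "A", "C", 0, accepting={0, 1}))
    ```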

  19. Perfect discretization of path integrals

    OpenAIRE

    Steinhaus, Sebastian

    2011-01-01

    In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discu...

  20. Path integration in conical space

    International Nuclear Information System (INIS)

    Inomata, Akira; Junker, Georg

    2012-01-01

    Quantum mechanics in conical space is studied by the path integral method. It is shown that the curvature effect gives rise to an effective potential in the radial path integral. It is further shown that the radial path integral in conical space can be reduced to a form identical with that in flat space when the discrete angular momentum of each partial wave is replaced by a specific non-integral angular momentum. The effective potential is found proportional to the squared mean curvature of the conical surface embedded in Euclidean space. The path integral calculation is compatible with the Schrödinger equation modified with the Gaussian and the mean curvature. -- Highlights: ► We study quantum mechanics on a cone by the path integral approach. ► The path integral depends only on the metric and the curvature effect is built in. ► The approach is consistent with the Schrödinger equation modified by an effective potential. ► The effective potential is found to be of the “Jensen–Koppe” and “da Costa” type.
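    For orientation only, the "da Costa"-type effective potential cited in the highlights is usually quoted in terms of the mean curvature M and the Gaussian curvature K of the embedded surface; the expression below is the standard textbook form and is quoted here for context rather than reproduced from this paper. On a cone, K vanishes away from the apex, which is consistent with the statement that the effective potential is proportional to the squared mean curvature.

      % Standard da Costa-type surface potential (quoted for context; M = mean curvature, K = Gaussian curvature)
      V_{\mathrm{eff}} = -\frac{\hbar^{2}}{2m}\left(M^{2} - K\right)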

  1. Path integrals on curved manifolds

    International Nuclear Information System (INIS)

    Grosche, C.; Steiner, F.

    1987-01-01

    A general framework for treating path integrals on curved manifolds is presented. We also show how to perform general coordinate and space-time transformations in path integrals. The main result is that one has to subtract a quantum correction ΔV ∝ ℏ² from the classical Lagrangian L, i.e. the correct effective Lagrangian to be used in the path integral is L_eff = L − ΔV. A general prescription for calculating the quantum correction ΔV is given. It is based on a canonical approach using Weyl-ordering and the Hamiltonian path integral defined by the midpoint prescription. The general framework is illustrated by several examples: the d-dimensional rotator, i.e. the motion on the sphere S^(d−1), the path integral in d-dimensional polar coordinates, the exact treatment of the hydrogen atom in R² and R³ by performing a Kustaanheimo-Stiefel transformation, the Langer transformation and the path integral for the Morse potential. (orig.)

  2. Path-based Queries on Trajectory Data

    DEFF Research Database (Denmark)

    Krogh, Benjamin Bjerre; Pelekis, Nikos; Theodoridis, Yannis

    2014-01-01

    In traffic research, management, and planning, a number of path-based analyses are heavily used, e.g., for computing turn times, evaluating green waves, or studying traffic flow. These analyses require retrieving the trajectories that follow the full path being analyzed. Existing path queries cannot sufficiently support such path-based analyses because they retrieve all trajectories that touch any edge in the path. In this paper, we define and formalize the strict path query. This is a novel query type tailored to support path-based analysis, where trajectories must follow all edges in the path. The proposed index structure, NETTRA, can answer strict path queries for a specific path by only retrieving data from the first and last edge in the path. To correctly answer strict path queries, existing network-constrained trajectory indexes must retrieve data from all edges in the path. An extensive performance study of NETTRA using a very large real-world trajectory data set is also reported.

  3. Reconstruction Algorithms in Undersampled AFM Imaging

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen

    2016-01-01

    This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length and thereby the scanning time as well as the amount of interaction between the AFM probe and the specimen. It can easily be applied on conventional AFM hardware. Due to undersampling, it is then necessary to further process the acquired image in order to reconstruct an approximation of the image. Based on real AFM cell images, our simulations reveal that using a simple raster scanning pattern in combination with conventional image interpolation performs very well. Moreover, this combination enables a reduction of the scanning time by a factor of 10 while retaining an average reconstruction quality around 36 dB PSNR.
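    A toy illustration of such a pipeline (row-subsampled scan, interpolation-based reconstruction, PSNR score) is sketched below; the synthetic image and the sampling factor are assumptions, not the data or code used in the paper.

      # Minimal sketch: undersample an image along the slow scan axis, reconstruct by
      # linear interpolation, and report the PSNR of the reconstruction.
      import numpy as np

      def undersample_rows(img, keep_every):
          rows = np.arange(0, img.shape[0], keep_every)
          return rows, img[rows, :]

      def interpolate_rows(rows, samples, n_rows):
          full = np.empty((n_rows, samples.shape[1]))
          for col in range(samples.shape[1]):
              full[:, col] = np.interp(np.arange(n_rows), rows, samples[:, col])
          return full

      def psnr(ref, rec):
          mse = np.mean((ref - rec) ** 2)
          return 10 * np.log10(ref.max() ** 2 / mse)

      img = np.outer(np.hanning(256), np.hanning(256))   # stand-in for an AFM image
      rows, sub = undersample_rows(img, 4)               # roughly 4x shorter scan path
      rec = interpolate_rows(rows, sub, img.shape[0])
      print(f"PSNR: {psnr(img, rec):.1f} dB")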

  4. Observations of the Perseids 2013 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.

    2013-09-01

    Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle, having a mass of approximately 1 g or larger, with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with different comets gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and having their origin in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer observing the activity of the Perseids meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera is equipped with a highly sensitive back-illuminated CCD chip having a pixel resolution of 1024x1024. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal), making the monitoring of nearly the whole night sky possible (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites located at high altitudes in the Greek Peloponnese peninsula. The baseline of ∼50 km

  5. Precise shape reconstruction by active pattern in total-internal-reflection-based tactile sensor.

    Science.gov (United States)

    Saga, Satoshi; Taira, Ryosuke; Deguchi, Koichiro

    2014-03-01

    We are developing a total-internal-reflection-based tactile sensor in which the shape is reconstructed using an optical reflection. This sensor consists of silicone rubber, an image pattern, and a camera. It reconstructs the shape of the sensor surface from an image of a pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction by employing an optimization method. Furthermore, we propose to use active patterns. Deformation of the reflection image causes reconstruction errors. By controlling the image pattern, the sensor reconstructs the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor, and perform reconstruction experiments using the system. A precise deformation experiment confirms the linearity and precision of the reconstruction.

  6. SINGLE IMAGE CAMERA CALIBRATION IN CLOSE RANGE PHOTOGRAMMETRY FOR SOLDER JOINT ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. Heinemann

    2016-06-01

    Full Text Available Printed Circuit Boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct function of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close range photogrammetry allows for determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured, which allows for single image camera calibration.
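    As a point of reference (not the authors' method), single-image intrinsic calibration is possible when the target is non-coplanar; the OpenCV sketch below fakes such a 3D target with synthetic points and supplies the rough intrinsic guess that OpenCV requires for non-planar rigs.

      # Hedged sketch: single-view calibration from a synthetic non-coplanar target.
      import numpy as np
      import cv2

      # Synthetic 3D target: two parallel planes of points spaced 10 mm apart.
      xs, ys = np.meshgrid(np.arange(5), np.arange(4))
      plane = np.stack([xs.ravel() * 10.0, ys.ravel() * 10.0, np.zeros(xs.size)], axis=1)
      object_points = np.vstack([plane, plane + [0.0, 0.0, 10.0]]).astype(np.float32)

      # Project with a known camera to fake the "detected" image points.
      K_true = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 512.0], [0.0, 0.0, 1.0]])
      rvec, tvec = np.array([0.1, -0.2, 0.05]), np.array([0.0, 0.0, 300.0])
      image_points, _ = cv2.projectPoints(object_points, rvec, tvec, K_true, None)
      image_points = image_points.astype(np.float32)

      # Single-image calibration; a rough intrinsic guess is required for 3D targets.
      K_guess = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 512.0], [0.0, 0.0, 1.0]])
      rms, K, dist, _, _ = cv2.calibrateCamera(
          [object_points], [image_points], (1280, 1024),
          K_guess, np.zeros(5), flags=cv2.CALIB_USE_INTRINSIC_GUESS)
      print("reprojection RMS [px]:", rms)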

  7. Stereo reconstruction from multiperspective panoramas.

    Science.gov (United States)

    Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard

    2004-01-01

    A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, problems as in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to the first order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparable high quality depth maps which can be used for applications such as view interpolation.
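    Because the epipolar geometry of the resampled panoramas is, to first order, a set of horizontal lines, matching reduces to a 1D search along image rows, just as in rectified stereo. The brute-force block-matching sketch below illustrates that reduction on synthetic data; it is not the cylinder-sweep or tensor-voting pipeline described in the paper.

      # Toy 1D row matching between two "panoramas" with horizontal epipolar lines.
      import numpy as np

      def row_disparity(left, right, max_disp, win=3):
          """Brute-force SSD block matching along rows; returns integer disparities."""
          H, W = left.shape
          disp = np.zeros((H, W), dtype=np.int32)
          pad = win // 2
          L = np.pad(left, pad, mode="edge")
          R = np.pad(right, pad, mode="edge")
          for y in range(H):
              for x in range(W):
                  patch = L[y:y + win, x:x + win]
                  best, best_d = np.inf, 0
                  for d in range(min(max_disp, x) + 1):
                      cost = np.sum((patch - R[y:y + win, x - d:x - d + win]) ** 2)
                      if cost < best:
                          best, best_d = cost, d
                  disp[y, x] = best_d
          return disp

      pano_l = np.random.rand(32, 64)
      pano_r = np.roll(pano_l, -3, axis=1)                  # synthetic 3-pixel shift
      print(np.median(row_disparity(pano_l, pano_r, 8)))    # median disparity ~3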

  8. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  9. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  10. Compton camera study for high efficiency SPECT and benchmark with Anger system

    Science.gov (United States)

    Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.

    2017-12-01

    Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application
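    The LM-MLEM reconstruction used for the Compton data follows the standard list-mode update; the generic form is recalled below for context (the notation is assumed here and not taken from the paper).

      % Generic list-mode MLEM update: \lambda_j is the activity in voxel j, t_{ij} the
      % probability that detected event i originated from voxel j (the Compton-cone model),
      % and s_j the sensitivity of voxel j.
      \lambda_j^{(n+1)} = \frac{\lambda_j^{(n)}}{s_j} \sum_{i} \frac{t_{ij}}{\sum_k t_{ik}\,\lambda_k^{(n)}}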

  11. Radiometric calibration of wide-field camera system with an application in astronomy

    Science.gov (United States)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    Camera response function (CRF) is widely used for the description of the relationship between scene radiance and image brightness. Most common application of CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately quite useless with astronomical image data, mostly due to their nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods to use in an astronomical imaging application. Results are experimentally verified on the wide-field camera system using Digital Single Lens Reflex (DSLR) camera.
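    For reference, the Debevec-style CRF estimation that such comparisons typically start from is available directly in OpenCV; the sketch below assumes a hypothetical three-frame exposure stack on disk and is not the optimized astronomical variant proposed in the paper.

      # Sketch under assumptions (file names are placeholders): recover the camera
      # response function from an exposure stack, then merge the stack to an HDR
      # radiance map with OpenCV's Debevec implementation.
      import numpy as np
      import cv2

      files = ["exp_0.01s.png", "exp_0.1s.png", "exp_1s.png"]    # hypothetical stack
      times = np.array([0.01, 0.1, 1.0], dtype=np.float32)
      images = [cv2.imread(f) for f in files]

      calibrate = cv2.createCalibrateDebevec()
      response = calibrate.process(images, times)                # 256x1x3 CRF estimate

      merge = cv2.createMergeDebevec()
      hdr = merge.process(images, times, response)               # float32 radiance map
      cv2.imwrite("radiance.hdr", hdr)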

  12. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  13. Be Foil ''Filter Knee Imaging'' NSTX Plasma with Fast Soft X-ray Camera

    International Nuclear Information System (INIS)

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-01-01

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of an m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  14. D-SPECT, a semiconductor camera: Technical aspects and clinical applications

    International Nuclear Information System (INIS)

    Merlin, C.; Bertrand, S.; Kelly, A.; Veyre, A.; Mestas, D.; Cachin, F.; Motreff, P.; Levesque, S.; Cachin, F.; Askienazy, S.

    2010-01-01

    Clinical practice in nuclear medicine has changed considerably in the last decade, particularly with the arrival of PET/CT and SPECT/CT. New semiconductor cameras could represent the next evolution in nuclear medicine practice. Due to the improvement in resolution and sensitivity, this technology permits fast acquisitions and high-contrast, high-resolution images obtained with low injected activity. The dedicated cardiology D-SPECT camera (Spectrum Dynamics, Israel) is based on semiconductor technology and provides an original system for collimation and image reconstruction. We describe here our clinical experience in using the D-SPECT, with a preliminary study comparing the D-SPECT and a conventional gamma camera. (authors)

  15. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    Science.gov (United States)

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which is normally a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method can achieve satisfactory performance in a real-time system, and that its accuracy is higher than that of the manufacturer's calibration.
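    A conceptual sketch of such a joint, weighted optimization is given below using synthetic 3D correspondences and scipy; the pose parameterization, the weights and the data are illustrative assumptions, not the authors' cost function.

      # Conceptual sketch: jointly refine the poses of three external cameras relative
      # to a Kinect by minimizing one weighted cost over all correspondences.
      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def residuals(params, kinect_pts, cam_pts_list, weights):
          res = []
          for c, (pts, w) in enumerate(zip(cam_pts_list, weights)):
              rvec, t = params[6 * c:6 * c + 3], params[6 * c + 3:6 * c + 6]
              R = Rotation.from_rotvec(rvec).as_matrix()
              pred = kinect_pts @ R.T + t          # Kinect frame -> external camera frame
              res.append(w * (pred - pts).ravel())
          return np.concatenate(res)

      # Synthetic data: one point cloud seen by the Kinect and by 3 external cameras.
      rng = np.random.default_rng(0)
      kinect_pts = rng.uniform(-1, 1, (50, 3))
      true_poses = [(Rotation.from_rotvec([0, 0.3 * c, 0]), [0.5 * c, 0, 0]) for c in range(1, 4)]
      cam_pts_list = [R.apply(kinect_pts) + t + rng.normal(0, 1e-3, (50, 3)) for R, t in true_poses]
      weights = [1.0, 0.5, 0.5]                    # e.g. down-weight more distant cameras

      x0 = np.zeros(18)                            # 3 cameras x (3 rotvec + 3 translation)
      sol = least_squares(residuals, x0, args=(kinect_pts, cam_pts_list, weights))
      print("final weighted cost:", sol.cost)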

  16. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  17. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family which is available with either 6000, 8000 or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/s. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.

  18. Ultra fast x-ray streak camera

    International Nuclear Information System (INIS)

    Coleman, L.W.; McConaghy, C.F.

    1975-01-01

    A unique ultrafast x-ray sensitive streak camera, with a time resolution of 50 ps, has been built and operated. A 100 Å thick gold photocathode on a beryllium vacuum window is used in a modified commercial image converter tube. The x-ray streak camera has been used in experiments to observe time-resolved emission from laser-produced plasmas. (author)

  19. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

    Full Text Available Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. The data standard captures information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. This data standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central data repositories for camera trap data are emerging and becoming a critical tool. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.

  20. Single chip camera active pixel sensor

    Science.gov (United States)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  1. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  2. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    Mirkhodzhaev, A.Kh.; Kuznetsov, N.K.; Ostryj, Yu.E.

    1988-01-01

    A device for centering a γ-camera detector during radionuclide diagnostics is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering a detector (when it is fixed at the lower end of a γ-camera) on a required area of the patient's body.

  3. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovations in airborne cameras. This book is the first ever written on the topic and describes all components of a digital airborne camera, ranging from the object to be imaged to the mass memory device.

  4. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose...

  5. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  6. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  7. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    Energy Technology Data Exchange (ETDEWEB)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P

    2000-07-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10⁻² seems possible in the near future. (author)

  8. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    International Nuclear Information System (INIS)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P.

    2000-01-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10⁻² seems possible in the near future. (author)

  9. STREAK CAMERA MEASUREMENTS OF THE APS PC GUN DRIVE LASER

    Energy Technology Data Exchange (ETDEWEB)

    Dooling, J. C.; Lumpkin, A. H.

    2017-06-25

    We report recent pulse-duration measurements of the APS PC Gun drive laser at both second harmonic and fourth harmonic wavelengths. The drive laser is a Nd:Glass-based chirped pulse amplifier (CPA) operating at an IR wavelength of 1053 nm, twice frequency-doubled to obtain UV output for the gun. A Hamamatsu C5680 streak camera and an M5675 synchroscan unit are used for these measurements; the synchroscan unit is tuned to 119 MHz, the 24th subharmonic of the linac S-band operating frequency. Calibration is accomplished both electronically and optically. Electronic calibration utilizes a programmable delay line in the 119 MHz rf path. The optical delay uses an etalon with a known spacing between reflecting surfaces and is coated for the visible (SH) wavelength. The IR pulse duration is monitored with an autocorrelator. Fitting the projected profiles of the streak camera images with Gaussians, the UV rms pulse durations are found to vary from 2.1 ps to 3.5 ps as the IR varies from 2.2 ps to 5.2 ps.
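    A minimal sketch of the profile analysis mentioned above (fit a Gaussian to the projected streak image and scale the fitted sigma by the sweep calibration) is given below; the calibration constant and the synthetic profile are assumptions, not APS data.

      # Illustrative Gaussian fit of a projected streak-camera profile.
      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, amp, mu, sigma, offset):
          return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

      ps_per_pixel = 0.18                          # hypothetical sweep calibration
      x = np.arange(512)
      # Stand-in for the projected streak image (background plus a Gaussian pulse).
      profile = gaussian(x, 800.0, 260.0, 14.0, 20.0) + np.random.normal(0, 5, x.size)

      p0 = [profile.max(), x[np.argmax(profile)], 10.0, profile.min()]
      popt, _ = curve_fit(gaussian, x, profile, p0=p0)
      print(f"rms (sigma) pulse duration: {abs(popt[2]) * ps_per_pixel:.2f} ps")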

  10. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to provide further insight into plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the viewpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations, and the differences between the imaging analysis methods based on geometric optics and physical optics are also shown in the simulations. (paper)

  11. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...

  12. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single- and dual-headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from the site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  13. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently being pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on the current availability of cameras of this sort by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  14. A universal multiprocessor system for the fast acquisition and processing of positron camera data

    International Nuclear Information System (INIS)

    Deluigi, B.

    1982-01-01

    In this study the main components of a suitable detection system were worked out and their properties were examined. For the measurement of the three-dimensional distribution of radiopharmaceuticals labelled with positron emitters in animal experiments, a positron camera was first constructed. The annihilation quanta are detected by two opposing position-sensitive gamma detectors operated in coincidence. Two commercial camera heads working according to the Anger principle were modified for this purpose and coupled into the positron camera by a special interface. With this arrangement, a spatial resolution of 0.8 cm FWHM for a line source in the symmetry plane and a coincidence resolution time 2T of 16 ns FW0.1M were reached. For the three-dimensional image reconstruction from the data of a positron camera, a maximum-likelihood procedure was developed and tested with a Monte Carlo procedure. In view of this application, a highly flexible multi-microprocessor system was developed. A high computing capacity is reached by distributing several partial problems to different processors and processing them in parallel. The architecture was designed such that the system possesses a high fault tolerance and that the computing capacity can be extended without any fundamental limit. (orig./HSI) [de]

  15. Path integration on hyperbolic spaces

    Energy Technology Data Exchange (ETDEWEB)

    Grosche, C [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik

    1991-11-01

    Quantum mechanics on the hyperbolic spaces of rank one is discussed by the path integration technique. Hyperbolic spaces are multi-dimensional generalisations of the hyperbolic plane, i.e. the Poincare upper half-plane endowed with a hyperbolic geometry. We evaluate the path integral on S₁ ≅ SO(n,1)/SO(n) and S₂ ≅ SU(n,1)/S(U(1) × U(n)) in a particular coordinate system, yielding explicitly the wave functions and the energy spectrum. Furthermore we can exploit a general property of all these spaces, namely that they can be parametrized by a pseudopolar coordinate system. This allows a separation into path integration over spheres and an additional path integration over the remaining hyperbolic coordinate, yielding effectively a path integral for a modified Poeschl-Teller potential. Only continuous spectra can exist in all the cases. For all the hyperbolic spaces of rank one we find a general formula for the largest lower bound (zero-point energy) of the spectrum, which is given by E₀ = (ℏ²/8m)(m_α + 2m_{2α})², where m_α and m_{2α} denote the dimensions of the root subspaces corresponding to the roots α and 2α, respectively. I also discuss the case where a constant magnetic field on Hⁿ is incorporated. (orig.).

  16. Path integration on hyperbolic spaces

    International Nuclear Information System (INIS)

    Grosche, C.

    1991-11-01

    Quantum mechanics on the hyperbolic spaces of rank one is discussed by the path integration technique. Hyperbolic spaces are multi-dimensional generalisations of the hyperbolic plane, i.e. the Poincare upper half-plane endowed with a hyperbolic geometry. We evaluate the path integral on S₁ ≅ SO(n,1)/SO(n) and S₂ ≅ SU(n,1)/S[U(1) × U(n)] in a particular coordinate system, yielding explicitly the wave functions and the energy spectrum. Furthermore we can exploit a general property of all these spaces, namely that they can be parametrized by a pseudopolar coordinate system. This allows a separation into path integration over spheres and an additional path integration over the remaining hyperbolic coordinate, yielding effectively a path integral for a modified Poeschl-Teller potential. Only continuous spectra can exist in all the cases. For all the hyperbolic spaces of rank one we find a general formula for the largest lower bound (zero-point energy) of the spectrum, which is given by E₀ = (ℏ²/8m)(m_α + 2m_{2α})², where m_α and m_{2α} denote the dimensions of the root subspaces corresponding to the roots α and 2α, respectively. I also discuss the case where a constant magnetic field on Hⁿ is incorporated. (orig.)

  17. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    McMakin, A.H.; Cannon, S.D.; Finch, S.M.

    1992-07-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The Technical Steering Panel (TSP) consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon, Washington, and Idaho, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; and environmental pathways and dose estimates. Progress is discussed.

  18. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ∼2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12 magnitude range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ∼0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17 magnitude range from the all-sky APASS catalog. Such measurements provide good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.

  19. Design of Microwave Camera for Breast Cancer Detection

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy

    2008-01-01

    The measured field data are then used to reconstruct an image, which consists of a spatial distribution of the complex permittivity in the imaging domain. Using this image, the cancer tissue can be detected due to its dielectric property contrast compared to normal tissue. The instrument employs a multichannel, highly sensitive superheterodyne architecture, enabling parallel coherent measurements. In this way, mechanical scanning, which is commonly used in measurements of an electromagnetic field distribution, is avoided. The system presented is the first reported 3D microwave breast imaging camera with parallel signal detection. The hardware operates in the frequency range 0.3 – 3 GHz. The noise floor is below -140 dBm over the bandwidth of the system. The dynamic range depends on the available incident power range and is limited by the channel-to-channel isolation of 140 dB. The work presented in this thesis encompasses a wide range

  20. Recent developments in gamma camera technology for myocardial perfusion scintigraphy

    International Nuclear Information System (INIS)

    Bengel, Frank M.

    2010-01-01

    Economic pressure, competition from alternative modalities and an increasing awareness of patient radiation exposure have triggered a rapid development of novel technology for cardiac single-photon emission computed tomography (SPECT) in recent years. The trend clearly goes towards systems with higher sensitivity and resolution, and towards faster acquisition protocols. Those goals are achieved by various measures: On the one hand, several manufacturers have integrated novel semiconductor detector materials together with innovative collimators into dedicated cardiac scanners. On the other hand, new collimators and reconstruction algorithms have led to increased speed and accuracy of conventional gamma cameras. Imaging times can now be reduced to as little as 10% of those of previous standard protocols, and/or injected activity can be reduced. This is achieved without loss of diagnostic accuracy. These novel developments are still in early phases of clinical implementation. Their potential for a profound change of the clinical practice of myocardial perfusion scintigraphy, however, becomes increasingly obvious. (orig.)

  1. Imaging performance of a multiwire proportional-chamber positron camera

    International Nuclear Information System (INIS)

    Perez-Mandez, V.; Del Guerra, A.; Nelson, W.R.; Tam, K.C.

    1982-08-01

    A new, fully three-dimensional Positron Camera design is presented, made of six MultiWire Proportional Chamber modules arranged to form the lateral surface of a hexagonal prism. A true coincidence rate of 56000 c/s is expected, with an equal accidental rate, for a 400 μCi activity uniformly distributed in an approx. 3 l water phantom. A detailed Monte Carlo program has been used to investigate the dependence of the spatial resolution on the geometrical and physical parameters. A spatial resolution of 4.8 mm FWHM has been obtained for an ¹⁸F point-like source in a 10 cm radius water phantom. The main properties of the limited-angle reconstruction algorithms are described in relation to the proposed detector geometry.

  2. Optical design of the comet Shoemaker-Levy speckle camera

    Energy Technology Data Exchange (ETDEWEB)

    Bissinger, H. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    An optical design is presented in which the Lick 3 meter telescope and a bare CCD speckle camera system were used to image the collision sites of the Shoemaker-Levy 9 comet with the planet Jupiter. The brief overview includes the optical constraints and system layout. The choice of a Risley prism combination to compensate for the time-dependent atmospheric chromatic changes is described. Plate scale and signal-to-noise ratio curves resulting from imaging reference stars are compared with theory. Comparisons are made between uncorrected and reconstructed images of Jupiter's impact sites. The results confirm that speckle imaging techniques can be used over an extended time period to provide a method to image large extended objects.

  3. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, and we cluster gaze and camera information to identify camera behaviour profiles; the relevance of these profiles for adaptive camera control in games is discussed.

  4. Breast reconstruction - natural tissue

    Science.gov (United States)

    ... flap; TRAM; Latissimus muscle flap with a breast implant; DIEP flap; DIEAP flap; Gluteal free flap; Transverse upper gracilis flap; TUG; Mastectomy - breast reconstruction with natural tissue; Breast cancer - breast reconstruction with natural tissue

  5. Breast reconstruction after mastectomy

    Directory of Open Access Journals (Sweden)

    Daniel eSchmauss

    2016-01-01

    Full Text Available Breast cancer is the leading cause of cancer death in women worldwide. Its surgical approach has become less and less mutilating in the last decades. However, the overall number of breast reconstructions has significantly increased lately. Nowadays breast reconstruction should be individualized as far as possible, first of all taking into consideration oncological aspects of the tumor, neo-/adjuvant treatment and genetic predisposition, but also its timing (immediate versus delayed breast reconstruction), as well as the patient's condition and wish. This article gives an overview of the various possibilities of breast reconstruction, including implant- and expander-based reconstruction, flap-based reconstruction (vascularized autologous tissue), the combination of implant and flap, reconstruction using non-vascularized autologous fat, as well as refinement surgery after breast reconstruction.

  6. Near Real-Time Ground-to-Ground Infrared Remote-Sensing Combination and Inexpensive Visible Camera Observations Applied to Tomographic Stack Emission Measurements

    Directory of Open Access Journals (Sweden)

    Philippe de Donato

    2018-04-01

    Full Text Available Evaluation of the environmental impact of gas plumes from stack emissions at the local level requires precise knowledge of the spatial development of the cloud, its evolution over time, and quantitative analysis of each gaseous component. With extensive developments, remote-sensing ground-based technologies are becoming increasingly relevant to such an application. The difficulty of determining the exact 3-D thickness of the gas plume in real time has meant that the various gas components are mainly expressed using correlation coefficients of gas occurrences and path concentration (ppm.m). This paper focuses on a synchronous and inexpensive multi-angled approach combining three high-resolution visible cameras (GoPro-Hero3) and a scanning infrared (IR) gas system (SIGIS, Bruker). Measurements are performed at an NH3-emissive industrial site (NOVACARB Society, Laneuveville-devant-Nancy, France). Visible data images were processed by a first geometrical reconstruction gOcad® protocol to build a 3-D envelope of the gas plume, which allows estimation of the plume's thickness corresponding to the 2-D infrared grid measurements. NH3 concentration data could thereby be expressed in ppm and have been interpolated using a second gOcad® interpolation algorithm, allowing a precise volume visualization of the NH3 distribution in the flue gas stream.

  7. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows for the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow for the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test out several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.

  8. Application of 3D reconstruction system in diabetic foot ulcer injury assessment

    Science.gov (United States)

    Li, Jun; Jiang, Li; Li, Tianjian; Liang, Xiaoyao

    2018-04-01

    To deal with the considerable deviation of the transparency tracing and digital planimetry methods used in current clinical diabetic foot ulcer injury assessment, this paper proposes a 3D reconstruction system which can be used to obtain a foot model with good-quality texture; injury assessment is then done by measuring the reconstructed model. The system uses the Intel RealSense SR300 depth camera, which is based on infrared structured light, as the input device; the required data from different views are collected by moving the camera around the scanned object. The geometry model is reconstructed by fusing the collected data, then the mesh is subdivided to increase the number of mesh vertices and the color of each vertex is determined using a non-linear optimization; all colored vertices compose the surface texture of the reconstructed model. Experimental results indicate that the reconstructed model has millimeter-level geometric accuracy and a texture with few artifacts.

  9. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on a simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence identification of the intersection of three cones and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for the point source reconstructed with TCR was 20-30% higher than that obtained from the standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.

  10. Spiral scan long object reconstruction through PI line reconstruction

    International Nuclear Information System (INIS)

    Tam, K C; Hu, J; Sourbelle, K

    2004-01-01

    The response of a point object in a cone beam (CB) spiral scan is analysed. Based on the result, a reconstruction algorithm for long object imaging in spiral scan cone beam CT is developed. A region-of-interest (ROI) of the long object is scanned with a detector smaller than the ROI, and a portion of it can be reconstructed without contamination from overlaying materials. The top and bottom surfaces of the ROI are defined by two sets of PI lines near the two ends of the spiral path. With this novel definition of the top and bottom ROI surfaces and through the use of projective geometry, it is straightforward to partition the cone beam image into regions corresponding to projections of the ROI, the overlaying objects or both. This also simplifies computation at source positions near the spiral ends, and makes it possible to reduce radiation exposure near the spiral ends substantially through simple hardware collimation. Simulation results to validate the algorithm are presented

  11. Improved depth estimation with the light field camera

    Science.gov (United States)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro Inc. also provides a depth estimate from a single-shot capture with its light field cameras, such as the Lytro Illum; this Lytro depth estimate contains much correct depth information and can be used for a higher-quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimates by combining the defocus, correspondence and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to obtain defocus and correspondence depth maps: defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from the EPIs. The Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field displays.
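
    The two EPI cues above lend themselves to a compact prototype. The sketch below is a minimal illustration, not the authors' implementation: it assumes a stack of horizontal sub-aperture views stored as a NumPy array `lf[v, y, x]` (a hypothetical layout) and a small set of candidate disparities, shears the EPIs for each candidate, then scores defocus by the spatial gradient of the angular average and correspondence by the angular variance.

```python
import numpy as np

def epi_depth_cues(lf, disparities):
    """Score candidate disparities with defocus and correspondence cues.

    lf          : stack of horizontal sub-aperture views, shape (n_views, height, width).
    disparities : 1D array of candidate disparities (pixels per view step).
    Returns two arrays of shape (len(disparities), height, width).
    """
    n_views, height, width = lf.shape
    centre = (n_views - 1) / 2.0
    xs = np.arange(width)
    defocus = np.zeros((len(disparities), height, width))
    corresp = np.zeros_like(defocus)

    for d_idx, d in enumerate(disparities):
        sheared = np.empty_like(lf)
        for v in range(n_views):
            shift = (v - centre) * d
            for y in range(height):
                # Shear each EPI row by linear interpolation.
                sheared[v, y] = np.interp(xs + shift, xs, lf[v, y])
        mean_view = sheared.mean(axis=0)                 # angular integration
        # Defocus cue: strong spatial gradient after angular integration.
        defocus[d_idx] = np.abs(np.gradient(mean_view, axis=1))
        # Correspondence cue: low angular variance means good alignment.
        corresp[d_idx] = sheared.var(axis=0)
    return defocus, corresp

# Usage (toy data): best depth maximizes defocus and minimizes correspondence.
lf = np.random.rand(9, 32, 32)
defocus, corresp = epi_depth_cues(lf, np.linspace(-1, 1, 11))
depth_from_defocus = np.argmax(defocus, axis=0)
depth_from_corresp = np.argmin(corresp, axis=0)
```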

  12. Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras

    Science.gov (United States)

    Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich

    2018-01-01

    A superconducting Figure-8 stellarator-type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations of an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets is set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of measuring the beam path by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera, and in a next step two cameras arranged at 90 degrees to each other were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions were verified experimentally, and previous experimental results have been confirmed. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA is investigated successfully.

  13. Path Integrals in Quantum Mechanics

    International Nuclear Information System (INIS)

    Louko, J

    2005-01-01

    Jean Zinn-Justin's textbook Path Integrals in Quantum Mechanics aims to familiarize the reader with the path integral as a calculational tool in quantum mechanics and field theory. The emphasis is on quantum statistical mechanics, starting with the partition function Tr exp(-β H) and proceeding through the diffusion equation to barrier penetration problems and their semiclassical limit. The 'real time' path integral is defined via analytic continuation and used for the path-integral representation of the nonrelativistic S-matrix and its perturbative expansion. Holomorphic and Grassmannian path integrals are introduced and applied to nonrelativistic quantum field theory. There is also a brief discussion of path integrals in phase space. The introduction includes a brief historical review of path integrals, supported by a bibliography with some 40 entries. As emphasized in the introduction, mathematical rigour is not a central issue in the book. This allows the text to present the calculational techniques in a very readable manner: much of the text consists of worked-out examples, such as the quartic anharmonic oscillator in the barrier penetration chapter. At the end of each chapter there are exercises, some of which are of elementary coursework type, but the majority are more in the style of extended examples. Most of the exercises indeed include the solution or a sketch thereof. The book assumes minimal previous knowledge of quantum mechanics, and some basic quantum mechanical notation is collected in an appendix. The material has a large overlap with selected chapters in the author's thousand-page textbook Quantum Field Theory and Critical Phenomena (2002 Oxford: Clarendon). The stand-alone scope of the present work has, however, allowed a more focussed organization of this material, especially in the chapters on, respectively, holomorphic and Grassmannian path integrals. In my view the book accomplishes its aim admirably and is eminently usable as a textbook

  14. From path integrals to anyons

    International Nuclear Information System (INIS)

    Canright, G.S.

    1992-01-01

    I offer a pedagogical review of the homotopy arguments for fractional statistics in two dimensions. These arguments arise naturally in path-integral language since they necessarily consider the properties of paths rather than simply permutations. The braid group replaces the permutation group as the basic structure for quantum statistics; hence properties of the braid group on several surfaces are briefly discussed. Finally, the question of multiple (real-space) occupancy is addressed; I suggest that the ''traditional'' treatment of this question (i.e., an assumption that many-anyon wavefunctions necessarily vanish for multiple occupancy) needs reexamination

  15. Isomorphisms and traversability of directed path graphs

    NARCIS (Netherlands)

    Broersma, Haitze J.; Li, Xueliang; Li, X.

    1998-01-01

    The concept of a line digraph is generalized to that of a directed path graph. The directed path graph $\overrightarrow{P}_k(D)$ of a digraph $D$ is obtained by representing the directed paths on $k$ vertices of $D$ by vertices. Two vertices are joined by an arc whenever the corresponding directed paths in $D$

  16. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

    Science.gov (United States)

    Waag, Robert C; Astheimer, Jeffrey P

    2005-05-01

    Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.
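
    A generic sketch of the final step, recovering a phase profile from noisy adjacent-element phase differences by least squares, is given below; it illustrates the Laplacian-type system in one dimension only and is not the authors' recursion.

```python
import numpy as np

def phase_from_differences(dphi):
    """Least-squares phase profile from adjacent-element phase differences.

    dphi : array of length N-1 with measured differences phi[i+1] - phi[i].
    Returns phi of length N; the absolute offset is unobservable from
    differences alone, so phi[0] is softly anchored to zero.
    """
    n = len(dphi) + 1
    # Difference operator D (N-1 x N): (D @ phi)[i] = phi[i+1] - phi[i].
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    # Append one extra row that softly anchors phi[0] = 0, then solve the
    # augmented least-squares system (normal equations are Laplacian-like).
    A = np.vstack([D, np.eye(1, n)])
    b = np.concatenate([dphi, [0.0]])
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi

# Usage: noisy differences of a quadratic phase profile.
true_phi = 0.01 * np.arange(50) ** 2
noisy_diff = np.diff(true_phi) + 0.05 * np.random.randn(49)
est_phi = phase_from_differences(noisy_diff)
```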

  17. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
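
    Because the per-camera tracker is the well-known CamShift algorithm, a single-camera tracking loop can be sketched with OpenCV as follows; the video source, initial bounding box, and histogram settings are placeholders, and the mobile-agent handover between cameras is only indicated in a comment.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("camera0.avi")      # placeholder video source
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 80              # placeholder initial object box

# Build a hue histogram of the object region as the tracking model.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

track_window = (x, y, w, h)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift adapts the window size and orientation each frame.
    rot_box, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    # In the multicamera system, the track window leaving the image border
    # would trigger the mobile-agent handover to the adjacent camera.
```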

  18. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
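
    The kinematic idea (express the target position in the camera base frame through 4 x 4 homogeneous transforms, then solve for the PAN and TILT angles, holding position inside the deadband) can be sketched as follows with hypothetical transform values; this is an illustration of the geometry only, not the original PDP 11/34 implementation.

```python
import numpy as np

def pan_tilt_to_target(T_base_from_world, p_target_world, deadband_deg=2.0,
                       current_pan_deg=0.0, current_tilt_deg=0.0):
    """Compute PAN/TILT angles that point the camera at a world-frame target.

    T_base_from_world : 4x4 homogeneous transform, world -> camera base frame.
    p_target_world    : target position [x, y, z] from the manipulator sensors.
    """
    p = np.append(np.asarray(p_target_world, dtype=float), 1.0)
    x, y, z, _ = T_base_from_world @ p          # target in camera base frame
    pan = np.degrees(np.arctan2(y, x))          # rotation about the base z axis
    tilt = np.degrees(np.arctan2(z, np.hypot(x, y)))
    # Bang-bang behaviour: hold the current angles while inside the deadband.
    if abs(pan - current_pan_deg) < deadband_deg:
        pan = current_pan_deg
    if abs(tilt - current_tilt_deg) < deadband_deg:
        tilt = current_tilt_deg
    return pan, tilt

# Usage with a hypothetical identity mounting transform:
print(pan_tilt_to_target(np.eye(4), [2.0, 0.5, 0.3]))
```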

  19. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a +-2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  20. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New, enhanced and sophisticated technologies for fuel services are, for example, two shielded color camera systems for use under water and for close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (smaller than a tenth of a millimeter) or cracks and for analyzing surface appearances on irradiated fuel rod cladding or fuel assembly structural parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with this kind of movie camera. (orig.)

  1. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  2. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi; Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

    Working in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities and medical centers requires consideration of radiation exposure, and such jobs can be carried out by remote observation and operation. However, cameras used in general industry are vulnerable to radiation, so radiation-tolerant cameras are needed for radiation environments. Applications of radiation-tolerant camera systems include the nuclear industry, radioactive medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for the inspection of nuclear boilers, the exchange of pellets and the inspection of nuclear waste. Countries with developed nuclear industries have made efforts to develop radiation-tolerant cameras, and they now have many kinds of radiation-tolerant cameras which can tolerate total doses of 10^6-10^8 rad. In this report, we examine the state of the art in radiation-tolerant cameras and analyze these technologies. With this paper we want to raise interest in developing radiation-tolerant cameras and upgrade the level of domestic technology.

  3. Reconstructing building mass models from UAV images

    KAUST Repository

    Li, Minglei

    2015-07-26

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.

  4. Image reconstruction under non-Gaussian noise

    DEFF Research Database (Denmark)

    Sciacchitano, Federica

    During acquisition and transmission, images are often blurred and corrupted by noise. One of the fundamental tasks of image processing is to reconstruct the clean image from a degraded version. The process of recovering the original image from the data is an example of an inverse problem. Due to the ill-posedness of the problem, the simple inversion of the degradation model does not give any good reconstructions. Therefore, to deal with the ill-posedness it is necessary to use some prior information on the solution or the model and the Bayesian approach. Additive Gaussian noise has been ... This PhD thesis intends to solve some of the many open questions for image restoration under non-Gaussian noise. The two main kinds of noise studied in this PhD project are impulse noise and Cauchy noise. Impulse noise is due to, for instance, malfunctioning pixel elements in the camera sensors, errors ...

  5. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.

  6. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different ... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios ...

  7. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  8. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of photogrammetry systems for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.

  9. Optimisation of a dual head semiconductor Compton camera using Geant4

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool L697ZE (United Kingdom)], E-mail: ljh@ns.ph.liv.ac.uk; Boston, A.J.; Boston, H.C.; Cooper, R.J.; Cresswell, J.R.; Grint, A.N.; Nolan, P.J.; Oxley, D.C.; Scraggs, D.P. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool L697ZE (United Kingdom); Beveridge, T.; Gillam, J. [School of Physics and Materials Engineering, Monash University, Melbourne (Australia); Lazarus, I. [STFC Daresbury Laboratory, Warrington, Cheshire (United Kingdom)

    2009-06-01

    Conventional medical gamma-ray camera systems utilise mechanical collimation to provide information on the position of an incident gamma-ray photon. Systems that use electronic collimation utilising Compton image reconstruction techniques have the potential to offer huge improvements in sensitivity. Position sensitive high purity germanium (HPGe) detector systems are being evaluated as part of a single photon emission computed tomography (SPECT) Compton camera system. Data have been acquired from the orthogonally segmented planar SmartPET detectors, operated in Compton camera mode. The minimum gamma-ray energy which can be imaged by the current system in the Compton camera configuration is 244 keV, due to the 20 mm thickness of the first scatter detector, which causes large gamma-ray absorption. A simulation package for the optimisation of a new semiconductor Compton camera has been developed using the Geant4 toolkit. This paper shows results of a preliminary analysis of the validated Geant4 simulation at the SPECT gamma-ray energy of 141 keV.

  10. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  11. Using refraction in thick glass plates for optical path length modulation in low coherence interferometry.

    Science.gov (United States)

    Kröger, Niklas; Schlobohm, Jochen; Pösch, Andreas; Reithmeier, Eduard

    2017-09-01

    In Michelson interferometer setups the standard way to generate different optical path lengths between a measurement arm and a reference arm relies on expensive high precision linear stages such as piezo actuators. We present an alternative approach based on the refraction of light at optical interfaces using a cheap stepper motor with high gearing ratio to control the rotation of a glass plate. The beam path is examined and a relation between angle of rotation and change in optical path length is devised. As verification, an experimental setup is presented, and reconstruction results from a measurement standard are shown. The reconstructed step height from this setup lies within 1.25% of the expected value.
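
    The angle-to-path-length relation follows from Snell's law for a plane-parallel plate. The sketch below uses the standard textbook expression for the single-pass optical path added by a plate of thickness t and index n tilted by angle theta; this is an assumed generic relation with placeholder parameter values, not necessarily the exact expression derived in the paper.

```python
import numpy as np

def single_pass_opl_change(theta_deg, t_mm=10.0, n=1.5):
    """Optical path modulation of a tilted plane-parallel plate, relative to normal incidence.

    Standard relation for the path added by the plate (relative to air):
        delta(theta) = t * (sqrt(n^2 - sin^2 theta) - cos theta),
    with delta(0) = t * (n - 1). In a Michelson arm the beam traverses the
    plate twice, hence an extra factor of 2 for the interferometric signal.
    """
    theta = np.radians(theta_deg)
    delta = t_mm * (np.sqrt(n**2 - np.sin(theta)**2) - np.cos(theta))
    delta0 = t_mm * (n - 1.0)
    return delta - delta0          # single-pass modulation in mm

angles = np.linspace(0.0, 10.0, 6)
print(np.round(2.0 * single_pass_opl_change(angles) * 1e6, 1))  # double pass, in nm
```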

  12. COMPARISON BETWEEN RGB AND RGB-D CAMERAS FOR SUPPORTING LOW-COST GNSS URBAN NAVIGATION

    Directory of Open Access Journals (Sweden)

    L. Rossi

    2018-05-01

    Full Text Available A pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, thus preventing a correct reception of the satellite signal. The bridging between GNSS outages, as well as the vehicle attitude reconstruction, can be recovered by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D or RGB cameras. In particular, a Microsoft Kinect device (second generation and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low-cost, easiness of use and raw data accessibility. The latter has been selected for the high-quality of the acquired images and for the possibility of mounting fixed focal length lenses with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter is different because RGB-D cameras acquire both RGB and depth data, allowing to solve the scale problem, which is instead typical of image-only solutions. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that the use of a Kinect device for supporting a u-blox low-cost receiver led to a trajectory with a decimeter accuracy, that is 15 % better than the one obtained when using the Canon EOS M camera.

  13. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish monitoring system consists of two parts: a waterproof box for the computer with its charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We decided to use a tablet PC because it is quite small and cheap, relatively fast, and has low power consumption. On the computer we use software with advanced motion detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, as a backup, also to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating the fish species and how frequently they pass through the fish pass.
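
    A minimal version of the motion-triggered capture logic (background subtraction on the live stream, save a frame when enough pixels change) can be sketched with OpenCV; the camera index, thresholds, and file naming are placeholders and this is not the project's actual software.

```python
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)                       # placeholder camera index
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = np.ones((3, 3), np.uint8)
MIN_MOTION_PIXELS = 800                         # tune for fish-sized objects

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Drop shadow pixels (value 127) and clean small speckle before counting.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    if cv2.countNonZero(mask) > MIN_MOTION_PIXELS:
        # Save the detection locally; a cloud backup would be handled separately.
        cv2.imwrite("fish_%d.jpg" % int(time.time()), frame)
```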

  14. Application of Super-Resolution Image Reconstruction to Digital Holography

    Directory of Open Access Journals (Sweden)

    Zhang Shuqun

    2006-01-01

    Full Text Available We describe a new application of super-resolution image reconstruction to digital holography which is a technique for three-dimensional information recording and reconstruction. Digital holography has suffered from the low resolution of CCD sensors, which significantly limits the size of objects that can be recorded. The existing solution to this problem is to use optics to bandlimit the object to be recorded, which can cause the loss of details. Here super-resolution image reconstruction is proposed to be applied in enhancing the spatial resolution of digital holograms. By introducing a global camera translation before sampling, a high-resolution hologram can be reconstructed from a set of undersampled hologram images. This permits the recording of larger objects and reduces the distance between the object and the hologram. Practical results from real and simulated holograms are presented to demonstrate the feasibility of the proposed technique.

  15. Reconstruction dynamics of recorded holograms in photochromic glass

    Energy Technology Data Exchange (ETDEWEB)

    Mihailescu, Mona; Pavel, Eugen; Nicolae, Vasile B.

    2011-06-20

    We have investigated the dynamics of the record-erase process of holograms in photochromic glass using continuous Nd:YVO4 laser radiation (λ = 532 nm). A bidimensional microgrid pattern was formed and visualized in photochromic glass, and the decay of its diffraction efficiency versus time (during the reconstruction step) gave us information (D, Δn) about the diffusion process inside the material. The recording and reconstruction processes were carried out in an off-axis setup, and the images of the reconstructed object were recorded by a CCD camera. Measurements made on reconstructed object images, using holograms recorded at different incident laser powers, have shown a two-stage process involved in the silver atom kinetics.

  16. Digital reconstruction of Young's fringes using Fresnel transformation

    Science.gov (United States)

    Kulenovic, Rudi; Song, Yaozu; Renninger, P.; Groll, Manfred

    1997-11-01

    This paper deals with the digital numerical reconstruction of Young's fringes from laser speckle photography by means of the Fresnel transformation. The physical model of the optical reconstruction of a specklegram is a near-field Fresnel diffraction phenomenon which can be described mathematically by the Fresnel transformation. Therefore, the interference phenomena can be calculated directly by a microcomputer. If, in addition, a CCD camera is used for specklegram recording, the measurement procedure and the evaluation process can be carried out completely digitally. Compared with conventional laser speckle photography, no holographic plates, no wet development process and no optical specklegram reconstruction are needed. These advantages open up a wide range of future scientific and engineering applications. The basic principle of the numerical reconstruction is described, the effects of the experimental parameters on Young's fringes are analyzed, and representative results are presented.
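
    Numerically, the near-field reconstruction can be carried out with a single-FFT discrete Fresnel transform. The sketch below is generic Fresnel propagation with placeholder wavelength, pixel pitch, and distance, not the authors' code; constant phase and amplitude factors are omitted since only the fringe intensity is of interest.

```python
import numpy as np

def fresnel_transform(field, wavelength, pitch, z):
    """Single-FFT Fresnel propagation of a sampled complex field.

    field      : 2D (complex) array sampled on a grid with spacing `pitch` (m).
    wavelength : wavelength in metres.
    z          : propagation distance in metres.
    Returns the (unnormalized) complex field in the observation plane.
    """
    n, m = field.shape
    k = 2.0 * np.pi / wavelength
    y = (np.arange(n) - n / 2) * pitch
    x = (np.arange(m) - m / 2) * pitch
    X, Y = np.meshgrid(x, y)
    # Quadratic phase in the input plane, then a single FFT.
    chirp_in = np.exp(1j * k / (2.0 * z) * (X**2 + Y**2))
    spectrum = np.fft.fftshift(np.fft.fft2(field * chirp_in))
    # Output-plane coordinates and quadratic phase (constant factors omitted).
    fy = (np.arange(n) - n / 2) / (n * pitch)
    fx = (np.arange(m) - m / 2) / (m * pitch)
    Xo, Yo = np.meshgrid(wavelength * z * fx, wavelength * z * fy)
    chirp_out = np.exp(1j * k / (2.0 * z) * (Xo**2 + Yo**2))
    return chirp_out * spectrum

# Usage: the intensity of the reconstruction shows the Young's fringe pattern.
specklegram = np.random.rand(512, 512)          # placeholder recorded frame
recon = fresnel_transform(specklegram, 532e-9, 6.45e-6, 0.2)
intensity = np.abs(recon) ** 2
```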

  17. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  18. Gamma camera performance: technical assessment protocol

    International Nuclear Information System (INIS)

    Bolster, A.A.; Waddington, W.A.

    1996-01-01

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author)

  19. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to make them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, such as a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  20. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and the registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  1. Application of iterative reconstruction in dynamic studies

    International Nuclear Information System (INIS)

    Meikle, S.R.

    1998-01-01

    Full text: The conventional approach to analysing dynamic tomographic data (SPECT or PET) is to reconstruct projections corresponding to each time interval separately and then fit a suitable tracer kinetic model to the dynamic sequence (method 1). This approach assumes that the tracer distribution remains static during any given time interval and, for practical reasons, filtered back-projection (FBP) is the preferred reconstruction algorithm. However, alternative approaches exist which lend themselves to iterative algorithms, such as EM. One approach is to fit the model directly to the projection data, followed by EM reconstruction of the parameter estimates (method 2). This requires that the tracer model can be expressed as a linear function of the unknown model parameters. A third alternative is to incorporate the tracer model into the reconstruction algorithm (method 3). Such an extension was described during the early development of the EM algorithm and is referred to as the EM parametric image reconstruction algorithm (EM-PIRA). We have investigated these various strategies for analysing dynamic data and their relative pros and cons. Tracer modelling was performed using a general model, referred to as spectral analysis, which places no restriction on the number of physiological compartments and satisfies the linearity requirement of method 2. A kinetic software phantom was created and used to test the convergence and noise properties of the different approaches. In summary, method 2 is the most practical as it reduces the number of reconstructions by at least an order of magnitude and provides improved signal-to-noise ratios compared with method 1. EM-PIRA allows greater flexibility in the choice of parametric images and appears to have a regularising effect on convergence. Methods 2 and 3 are also better suited to dynamic scanning with a rotating camera, as they can potentially account for changes in tracer distribution between projections

  2. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce better images than the conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed with a plurality of discrete components to enable two-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  3. Semiotic Analysis of Canon Camera Advertisements

    OpenAIRE

    INDRAWATI, SUSAN

    2015-01-01

    Keywords: Semiotic Analysis, Canon Camera, Advertisement. An advertisement is a medium for delivering a message to people with the goal of influencing them to use certain products. Semiotics is applied to develop a correlation among the elements used in an advertisement. In this study, the writer chose the semiotic analysis of Canon camera advertisements as the subject to be analyzed, using a semiotic study based on Peirce's theory. The semiotic approach is employed in interpreting the sign, symbol, icon, and index ...

  4. Imaging camera with multiwire proportional chamber

    International Nuclear Information System (INIS)

    Votruba, J.

    1980-01-01

    The camera for imaging radioisotope distributions, for use in nuclear medicine or other applications, claimed in the patent is provided with two multiwire lattices for the x-coordinate connected to a first coincidence circuit, and two multiwire lattices for the y-coordinate connected to a second coincidence circuit. This arrangement eliminates the need for a collimator and increases camera sensitivity while reducing production cost. (Ha)

  5. Imaging capabilities of germanium gamma cameras

    International Nuclear Information System (INIS)

    Steidley, J.W.

    1977-01-01

    Quantitative methods of analysis based on the use of a computer simulation were developed and used to investigate the imaging capabilities of germanium gamma cameras. The main advantage of the computer simulation is that the inherent unknowns of clinical imaging procedures are removed from the investigation. The effects of patient scattered radiation were incorporated using a mathematical LSF model which was empirically developed and experimentally verified. Image modifying effects of patient motion, spatial distortions, and count rate capabilities were also included in the model. Spatial domain and frequency domain modeling techniques were developed and used in the simulation as required. The imaging capabilities of gamma cameras were assessed using low contrast lesion source distributions. The results showed that an improvement in energy resolution from 10% to 2% offers significant clinical advantages in terms of improved contrast, increased detectability, and reduced patient dose. The improvements are of greatest significance for small lesions at low contrast. The results of the computer simulation were also used to compare a design of a hypothetical germanium gamma camera with a state-of-the-art scintillation camera. The computer model performed a parametric analysis of the interrelated effects of inherent and technological limitations of gamma camera imaging. In particular, the trade-off between collimator resolution and collimator efficiency for detection of a given low contrast lesion was directly addressed. This trade-off is an inherent limitation of both gamma cameras. The image degrading effects of patient motion, camera spatial distortions, and low count rate were shown to modify the improvements due to better energy resolution. Thus, based on this research, the continued development of germanium cameras to the point of clinical demonstration is recommended

  6. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Curt Allen; Terence Davies; Frans Janson; Ronald Justin; Bruce Marshall; Oliver Sweningsen; Perry Bell; Roger Griffith; Karla Hagans; Richard Lerche

    2004-01-01

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations

  7. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    Full Text Available The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matched points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we propose a new method, called the Mask automatic detecting method, to improve the structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the "mask" in this paper, since the features on them should be masked out to avoid poor matches. After removing these feature points in our new method, the camera poses and sparse 3D points are reconstructed with the remaining matches. Our comparative experiments with typical structure from motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing features within the Mask also increased the accuracy of the point clouds by nearly 30%-40% and corrected the problem of the typical methods repeatedly reconstructing several buildings when there was only one target building.
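
    The core of the Mask idea, discarding keypoints that fall inside detected vehicle and guardrail regions before matching, can be illustrated with OpenCV as below; the detector choice (ORB), the hand-drawn mask, and the file name are placeholders, since the paper's own region detection step is not reproduced here.

```python
import cv2
import numpy as np

img = cv2.imread("frame_0001.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder frame

# Placeholder "Mask": 255 inside detected vehicle/guardrail regions.
# In the paper this comes from the automatic region detection step.
vehicle_mask = np.zeros_like(img)
vehicle_mask[300:480, 200:440] = 255

# OpenCV feature detectors accept a mask of pixels ALLOWED to hold features,
# so features on moving vehicles are simply never extracted.
static_mask = cv2.bitwise_not(vehicle_mask)
orb = cv2.ORB_create(nfeatures=4000)
keypoints, descriptors = orb.detectAndCompute(img, static_mask)
# keypoints/descriptors now feed the usual pairwise matching and SfM pipeline.
```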

  8. Ted Irving and the Arc of APW Paths

    Science.gov (United States)

    Kent, D. V.

    2014-12-01

    Ted Irving's last two published papers neatly encapsulate his seminal contributions to the delineation of ever-important apparent polar wander (APW) paths. His final (210th) paper [Creer & Irving, 2012 Earth Sciences History] describes in detail how Ken Creer and he when still graduate students at Cambridge started to generate and assemble paleomagnetic data for the first APW path, for then only the UK; the paper was published 60 years ago and happened to be Ted's first [Creer, Irving & Runcorn, 1954 JGE]. Only 10 years later, there was already a lengthy reference list of paleomagnetic results available from most continents that had been compiled in pole lists he published in GJRAS from 1960 to 1965 and included in an appendix in his landmark book "Paleomagnetism" [Irving, 1964 Wiley] in support of wide ranging discussions of continental drift and related topics in chapters like 'Paleolatitudes and paleomeridians.' A subsequent innovation was calculating running means of poles indexed to a numerical geologic time scale [Irving, 1977 Nature], which with independent tectonic reconstructions as already for Gondwana allowed constructions of more detailed composite APW paths. His 1977 paper also coined Pangea B for an earlier albeit contentious configuration for the supercontinent that refuses to go away. Gliding over much work on APW tracks and hairpins in the Precambrian, we come to Ted's penultimate (209th) paper [Kent & Irving, 2010 JGR] in which individual poles from short-lived large igneous provinces were grouped and most sedimentary poles, many rather venerable, excluded as likely to be biased by variable degrees of inclination error. The leaner composite APW path helped to resurrect the Baja BC scenario of Cordilleran terrane motions virtually stopped in the 1980s by APW path techniques that relied on a few key but alas often badly skewed poles. The new composite APW path also revealed several major features, such as a huge polar shift of 30° in 15 Myr in the

  9. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used for abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, in densely vegetated areas, or for nocturnal species. The main reason for using camera traps is that they eliminate economic, personnel and time losses by operating continuously at different points at the same time. Camera traps are motion- and heat-sensitive and can take photos or video depending on the model. Crossover points and feeding or mating areas of the focal species are prioritized as camera trap locations. The population size can be estimated from the images combined with capture-recapture methods, and the population density is obtained by dividing the population size by the effective sampling area. The mating and breeding season, habitat choice, group structure and survival rates of the focal species can also be derived from the images. Camera traps are thus very useful for obtaining, economically, the necessary data about particularly elusive species for planning and conservation efforts.
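
    The abundance and density arithmetic mentioned above can be made concrete with the classic Lincoln-Petersen capture-recapture estimator; the numbers below are made up for illustration, and the formula is the generic textbook one rather than anything taken from this article.

```python
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen population estimate."""
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1

# Hypothetical camera-trap survey: 40 individuals identified in session 1,
# 35 detected in session 2, of which 14 were already known from session 1.
population = lincoln_petersen(40, 35, 14)
effective_area_km2 = 25.0                      # effective sampling area
density = population / effective_area_km2      # individuals per square km
print(round(population, 1), round(density, 2))
```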

  10. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (robotic inspection and assembly systems.

  11. DiversePathsJ: diverse shortest paths for bioimage analysis.

    Science.gov (United States)

    Uhlmann, Virginie; Haubold, Carsten; Hamprecht, Fred A; Unser, Michael

    2018-02-01

    We introduce a formulation for the general task of finding diverse shortest paths between two end-points. Our approach is not linked to a specific biological problem and can be applied to a large variety of images thanks to its generic implementation as a user-friendly ImageJ/Fiji plugin. It relies on the introduction of additional layers in a Viterbi path graph, which requires slight modifications to the standard Viterbi algorithm rules. This layered graph construction allows for the specification of various constraints imposing diversity between solutions. The software allows obtaining a collection of diverse shortest paths under some user-defined constraints through a convenient and user-friendly interface. It can be used alone or be integrated into larger image analysis pipelines. http://bigwww.epfl.ch/algorithms/diversepathsj. michael.unser@epfl.ch or fred.hamprecht@iwr.uni-heidelberg.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
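
    The underlying machinery, a Viterbi-style dynamic program over image columns, can be sketched for a single shortest path as follows; the extra layers that enforce diversity between successive solutions are omitted, and the cost image and step constraint are illustrative only.

```python
import numpy as np

def viterbi_path(cost, max_step=1):
    """Minimum-cost left-to-right path through a 2D cost image.

    cost[row, col] is the per-pixel cost; from one column to the next the
    path may move at most `max_step` rows. Returns one row index per column.
    """
    rows, cols = cost.shape
    acc = cost[:, 0].astype(float)
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        new_acc = np.full(rows, np.inf)
        for r in range(rows):
            lo, hi = max(0, r - max_step), min(rows, r + max_step + 1)
            prev = np.argmin(acc[lo:hi]) + lo
            new_acc[r] = acc[prev] + cost[r, c]
            back[r, c] = prev
        acc = new_acc
    # Trace back from the cheapest end point.
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# Usage: trace a low-intensity filament across a noisy image.
img = np.random.rand(64, 128)
img[30, :] -= 0.8                               # synthetic low-cost ridge
print(viterbi_path(img)[:10])
```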

  12. Three-dimensional Reconstruction of Dust Particle Trajectories in the NSTX

    International Nuclear Information System (INIS)

    Boeglin, W.U.; Roquemore, A.L.; Maqueda, R.

    2009-01-01

    Highly mobile incandescent dust particles are routinely observed on NSTX using two fast cameras operating in the visible region. An analysis method to reconstruct dust particle trajectories in space using two fast cameras is presented in this paper. Position accuracies of a few millimeters depending on the particle's location have been achieved and particle velocities between 10 and 200 m/s have been observed

  13. Spreading paths in partially observed social networks

    Science.gov (United States)

    Onnela, Jukka-Pekka; Christakis, Nicholas A.

    2012-03-01

    Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.
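
    The comparison between a stochastic spreading path and the topological shortest path is easy to reproduce on a small synthetic network. The sketch below uses networkx with an arbitrary small-world graph standing in for the realistic social networks of the paper; it runs one SI-style spread from a source and compares the depth at which the target is infected with the shortest-path length.

```python
import random
import networkx as nx

def spreading_path_length(G, source, target, beta=0.3, seed=0):
    """Length of the path taken by a discrete-time SI spread from source to
    target: each round, every infected node infects each susceptible
    neighbour with probability beta; the tree depth at infection is recorded."""
    rng = random.Random(seed)
    depth = {source: 0}
    while target not in depth:
        newly = {}
        reachable = False
        for node, d in depth.items():
            for nbr in G.neighbors(node):
                if nbr not in depth:
                    reachable = True
                    if nbr not in newly and rng.random() < beta:
                        newly[nbr] = d + 1
        if not reachable:          # target lies in another component
            return None
        depth.update(newly)
    return depth[target]

# A small-world graph stands in for the realistic social networks of the paper.
G = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=1)
src, dst = 0, 250
print("topological shortest path:", nx.shortest_path_length(G, src, dst))
print("stochastic spreading path:", spreading_path_length(G, src, dst, beta=0.5))
```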

  14. Spreading paths in partially observed social networks.

    Science.gov (United States)

    Onnela, Jukka-Pekka; Christakis, Nicholas A

    2012-03-01

    Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.

  15. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T [Teikyo University, Itabashi-ku, Tokyo (Japan); Haga, A; Saotome, N [University of Tokyo Hospital, Bunkyo-ku, Tokyo (Japan); Arai, N [Teikyo University Hospital, Itabashi-ku, Tokyo (Japan)

    2014-06-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, which is known from its use in 'photo tourism' on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)
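
    The pair-matching step lends itself to a short illustration. The sketch below, assuming OpenCV with the SIFT module available and two hypothetical image files, detects SIFT keypoints, matches them with a ratio test, and estimates the fundamental matrix with RANSAC; it is a generic reconstruction of the step described, not the authors' code.

```python
# Sketch of the pair-matching step: SIFT keypoints, ratio-test matching, and a
# fundamental matrix estimated with RANSAC. File names and thresholds are illustrative.
import cv2
import numpy as np

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Fundamental matrix via RANSAC; the mask flags inlier correspondences.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print("inlier correspondences:", int(mask.sum()), "of", len(good))
```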

  16. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    International Nuclear Information System (INIS)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N

    2014-01-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, which is known from its use in 'photo tourism' on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)

  17. Calibration Procedures in Mid Format Camera Setups

    Science.gov (United States)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts of the setup is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted in a solid and reliable way to the camera. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with millimetre accuracy. Important are the lever arm from the GPS antenna to the IMU's calibrated centre and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used. For such aircraft, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is then floating. In practice an additional data stream, the movement of the stabilizer, has to be taken into account to correct the floating lever arm distances. If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied.
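
    As a rough illustration of why the rotation matrices matter, the sketch below (NumPy, with invented lever arms, attitude angles, and a simple roll-pitch-yaw convention; none of these values come from the paper) rotates the body-frame antenna-to-IMU and IMU-to-camera offsets into the navigation frame to obtain the camera projection centre from the GPS antenna position.

```python
# Sketch of a body-frame lever-arm correction: the camera projection centre is derived
# from the GPS antenna position using the IMU attitude and the two measured lever arms.
# All values and the roll-pitch-yaw convention are invented for illustration.
import numpy as np

def body_to_nav(roll, pitch, yaw):
    """Body-to-navigation rotation matrix from roll/pitch/yaw in radians (z-y-x order)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

antenna_pos = np.array([1000.0, 2000.0, -500.0])    # GPS antenna, navigation frame [m]
arm_antenna_to_imu = np.array([0.12, -0.05, 0.80])  # measured in the body frame [m]
arm_imu_to_camera = np.array([0.00, 0.25, 0.10])    # measured in the body frame [m]

R = body_to_nav(np.radians(2.0), np.radians(-1.5), np.radians(45.0))

# Rotate the body-frame lever arms into the navigation frame and add them up.
camera_centre = antenna_pos + R @ (arm_antenna_to_imu + arm_imu_to_camera)
print(camera_centre)
```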

  18. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

    Full Text Available A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts of the setup is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted in a solid and reliable way to the camera. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with millimetre accuracy. Important are the lever arm from the GPS antenna to the IMU's calibrated centre and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used. For such aircraft, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is then floating. In practice an additional data stream, the movement of the stabilizer, has to be taken into account to correct the floating lever arm distances. If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied.

  19. Development of a time-of-flight Compton camera prototype for online control of ion therapy and medical imaging

    International Nuclear Information System (INIS)

    Ley, Jean-Luc

    2015-01-01

    Hadron-therapy is one of the modalities available for treating cancer. This modality uses light ions (protons, carbon ions) to destroy cancer cells. Such particles offer ballistic accuracy thanks to their quasi-rectilinear trajectory, their finite range, and the maximum of the dose deposition profile at the end of their path. Compared to conventional radiotherapy, this makes it possible to spare the healthy tissue located upstream and downstream of the tumor. One of this modality's quality assurance challenges is to control the positioning of the dose deposited by the ions in the patient. One possibility to perform this control is to detect the prompt gammas emitted during nuclear reactions induced along the ion path in the patient. A Compton camera prototype, theoretically allowing the detection efficiency for the prompt gammas to be maximized, is being developed within a regional collaboration. This camera was the main focus of my thesis, in particular the following points: i) studying, through Monte Carlo simulations, the operation of the prototype under construction, particularly with respect to the expected counting rates on the different types of accelerators used in hadron-therapy; ii) conducting simulation studies on the use of this camera in clinical imaging; iii) characterising the silicon detectors (scatterer); iv) confronting Geant4 simulations of the camera's response with measurements on the beam with the help of a demonstrator. As a result, the Compton camera prototype developed makes a control of the localization of the dose deposition in proton therapy at the scale of a single spot possible, provided that the intensity of the clinical proton beam is reduced by a factor of 200 (intensity of 10⁸ protons/s). An application of the Compton camera in nuclear medicine seems attainable with the use of radioisotopes of an energy greater than 300 keV. These initial results must be confirmed by more realistic simulations (homogeneous and heterogeneous PMMA targets). Tests with the progressive

  20. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
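
    A compact sketch of one frame-to-frame step of such a tracker, assuming OpenCV, an invented intrinsic matrix, and previously detected feature points; this is a generic two-view reconstruction of the kind described, not the authors' algorithm.

```python
# Sketch of one frame-to-frame tracking step: optical-flow point tracking, relative
# pose from the essential matrix, and triangulation of the surrounding structure.
# The intrinsic matrix K and the tracked points are illustrative placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # assumed endoscope intrinsics

def track_step(prev_gray, cur_gray, prev_pts):
    """prev_pts: float32 array of shape (N, 1, 2) with features from the previous frame."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None)
    ok = status.ravel() == 1
    p1, p2 = prev_pts[ok], cur_pts[ok]

    # Relative rotation R and (unit-scale) translation t between the two frames.
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)

    # Triangulate the correspondences (homogeneous -> Euclidean points).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, p1.reshape(-1, 2).T, p2.reshape(-1, 2).T)
    return R, t, (X_h[:3] / X_h[3]).T
```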

  1. Stochastic control with rough paths

    International Nuclear Information System (INIS)

    Diehl, Joscha; Friz, Peter K.; Gassiat, Paul

    2017-01-01

    We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of Lyons. It is shown that the value function satisfies an HJB type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically involve anticipating stochastic analysis. We make the link to old work of Davis and Burstein (Stoch Stoch Rep 40:203–256, 1992) and then prove a continuous-time generalization of Rogers' duality formula [SIAM J Control Optim 46:1116–1132, 2007]. The generic case of controlled volatility is seen to give trivial duality bounds, and explains the focus in Burstein–Davis' (and this) work on controlled drift. Our study of controlled rough differential equations also relates to work of Mazliak and Nourdin (Stoch Dyn 08:23, 2008).

  2. Stochastic control with rough paths

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Joscha [University of California San Diego (United States); Friz, Peter K., E-mail: friz@math.tu-berlin.de [TU & WIAS Berlin (Germany); Gassiat, Paul [CEREMADE, Université Paris-Dauphine, PSL Research University (France)

    2017-04-15

    We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of Lyons. It is shown that the value function satisfies an HJB type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically involve anticipating stochastic analysis. We make the link to old work of Davis and Burstein (Stoch Stoch Rep 40:203–256, 1992) and then prove a continuous-time generalization of Rogers' duality formula [SIAM J Control Optim 46:1116–1132, 2007]. The generic case of controlled volatility is seen to give trivial duality bounds, and explains the focus in Burstein–Davis' (and this) work on controlled drift. Our study of controlled rough differential equations also relates to work of Mazliak and Nourdin (Stoch Dyn 08:23, 2008).

  3. Path modeling and process control

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Rodionova, O.; Pomerantsev, A.

    2007-01-01

    In this paper the basic principles of path modeling are presented. The mathematics is presented for processes having only one stage, having two stages, and having three or more stages. The methods are applied to process control of a multi-stage production process having 25 variables and one output variable. When moving along the process, variables change their roles. It is shown how the methods of path modeling can be applied to estimate variables of the next stage with the purpose of obtaining optimal or almost optimal quality of the output variable. Corrections can be performed regarding the foreseeable output property y, and with respect to an admissible range of correcting actions for the parameters of the next stage. An important aspect of the methods presented is the possibility of extensive graphic analysis of data that can provide the engineer with a detailed view of the multi-variate variation in data.

  4. Factorization-algebraization-path integration

    International Nuclear Information System (INIS)

    Inomata, A.; Wilson, R.

    1986-01-01

    The authors review the method of factorization, proposed by Schroedinger, of a quantum-mechanical second-order linear differential equation into a product of two first-order differential operators, often referred to as ladder operators, as well as the modifications made to Schroedinger's method by Infeld and Hull. They then review the group-theoretical treatments proposed by Miller of the Schroedinger-Infeld-Hull factorizations and go on to demonstrate the application of dynamical symmetry to path integral calculations. 30 references

  5. The path of code linting

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload the cognitive strain of preserving style standards in your code base, as well as to avoid error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters into the programming tools you already use with very little effort.

  6. Career path for operations personnel

    International Nuclear Information System (INIS)

    Asher, J.A.

    1985-01-01

    This paper explains how selected personnel can now obtain a Bachelor of Science degree in Physics with a Nuclear Power Operations option. The program went into effect in the Fall of 1984. Another program was worked out in 1982 whereby students attending the Nuclear Operators Training Program could obtain an Associate of Science degree in Mechanical Engineering Technology at the end of two years of study. This paper presents tables and charts which describe these programs and outline the career path for operators.

  7. Conditionally solvable path integral problems

    International Nuclear Information System (INIS)

    Grosche, C.

    1995-05-01

    Some specific conditionally exactly solvable potentials are discussed within the path integral formalism. They generalize the commonly known potentials by the incorporation of fractional-power behaviour and strongly anharmonic terms. We find four different kinds of such potentials; the first is related to the Coulomb potential, the second is an anharmonic confinement potential, and the third and the fourth are related to the Manning-Rosen potential. (orig.)

  8. Path integrals in curvilinear coordinates

    International Nuclear Information System (INIS)

    Prokhorov, L.V.

    1984-01-01

    Integration limits are studied for presenting the path integral in curvilinear coordinates. For spherical (and topologically equivalent) coordinates it is shown that, in formulas involving the classical action in the exponent, integration over all variables should be carried out within infinite limits. Another peculiarity is associated with the appearance of the operator q, which provides a complete definition of the wave functions outside the physical region. Arguments are given supporting the validity of the cited statement in the general case.

  9. Polygonal-path approximations on the path spaces of quantum-mechanical systems: properties of the polygonal paths

    International Nuclear Information System (INIS)

    Exner, P.; Kolerov, G.I.

    1981-01-01

    Properties of the subset of polygonal paths in the Hilbert space H of paths referring to a d-dimensional quantum-mechanical system are examined. Using the reproducing-kernel technique we prove that each element of H is approximated by polygonal paths uniformly with respect to the "norm" of time-interval partitions. This result will be applied in the second part of the present paper to prove consistency of the uniform polygonal-path extension of the Feynman maps.

  10. Path Integrals in Quantum Mechanics

    International Nuclear Information System (INIS)

    Chetouani, L

    2005-01-01

    By treating path integrals the author, in this book, places at the disposal of the reader a modern tool for the comprehension of standard quantum mechanics. Thus the most important applications, such as the tunnel effect, the diffusion matrix, etc., are presented from an original point of view on the action S of classical mechanics, which is given a central role in quantum mechanics. What also emerges is that the path integral describes these applications more richly than they are traditionally described by differential equations, and consequently explains them more fully. The book is certainly of high quality in all aspects: original in presentation, rigorous in the demonstrations, judicious in the choice of exercises and, finally, modern, for example in the treatment of the tunnel effect by the method of instantons. Moreover, the correspondence that exists between classical and quantum mechanics is well underlined. I thus highly recommend this book (the French version being already available) to those who wish to familiarize themselves with the formulation by path integrals. They will find, in addition, interesting topics suitable for exploring further. (book review)

  11. Nonperturbative path integral expansion II

    International Nuclear Information System (INIS)

    Kaiser, H.J.

    1976-05-01

    The Feynman path integral representation of the 2-point function for a self-interacting Bose field is investigated using an expansion ('Path Integral Expansion', PIE) of the exponential of the kinetic term of the Lagrangian. This leads to a series - illustrated by a graph scheme - involving successively a coupling of more and more points of the lattice space commonly employed in the evaluation of path integrals. The values of the individual PIE graphs depend of course on the lattice constant. Two methods - Pade approximation and Borel-type extrapolation - are proposed to extract information about the continuum limit from a finite-order PIE. A more flexible PIE is possible by expanding besides the kinetic term a suitably chosen part of the interaction term too. In particular, if the co-expanded part is a mass term the calculation becomes only slightly more complicated than in the original formulation and the appearance of the graph scheme is unchanged. A significant reduction of the number of graphs and an improvement of the convergence of the PIE can be achieved by performing certain sums over an infinity of graph elements. (author)

  12. Distribution definition of path integrals

    International Nuclear Information System (INIS)

    Kerler, W.

    1979-01-01

    By starting from quantum mechanics it turns out that a rather general definition of quantum functional integrals can be given which is based on distribution theory. It applies also to curved space and provides clear rules for non-linear transformations. The refinements necessary in usual definitions of path integrals are pointed out. Since the quantum nature requires special care with time sequences, it is not the classical phase space which occurs in the phase-space form of the path integral. Feynman's configuration-space form only applies to a highly specialized situation, and therefore is not a very advantageous starting point for general investigations. It is shown that the commonly used substitutions of variables do not properly account for quantum effects. The relation to the traditional ordering problem is clarified. The distribution formulation has allowed to treat constrained systems directly at the quantum level, to complete the path integral formulation of the equivalence theorem, and to define functional integrals also for space translation after the transition to fields. (orig.)

  13. Reconstructing the 1935 Columbia River Gorge: A Topographic and Orthophoto Experiment

    Science.gov (United States)

    Fonstad, M. A.; Major, J. H.; O'Connor, J. E.; Dietrich, J. T.

    2017-12-01

    The last decade has seen a revolution in the mapping of rivers and near-river environments. Much of this has been associated with a new type of photogrammetry: structure from motion (SfM) and multi-view stereo techniques. Through SfM, 3D surfaces are reconstructed from unstructured image groups with poorly calibrated cameras whose locations need not be known. Modern SfM imaging is greatly improved by careful flight planning, well-planned ground control or high-precision direct georeferencing, and well-understood camera optics. The ease of SfM, however, begs the question: how well does it work on archival photos taken without the foreknowledge of SfM techniques? In 1935, the Army Corps of Engineers took over 800 vertical aerial photos of a 160-km-long stretch of the Columbia River Gorge and adjacent areas in Oregon and Washington. These photos pre-date completion of three hydroelectric dams and reservoirs in this reach, and thus provide rich information on the historic pre-dam riverine, geologic, and cultural environments. These photos have little to no metadata associated with them, such as camera calibration reports, so traditional photogrammetry techniques are exceedingly difficult to apply. Instead, we apply SfM to these archival photos, and test the resulting digital elevation model (DEM) against lidar data for features inferred to be unchanged in the past 80 years. Few, if any, of the quality controls recommended for SfM are available for these 1935 photos; they are scanned paper positives with little overlap, taken with an unknown optical system in high-altitude flight paths. Nevertheless, in almost all areas, the SfM analysis produced a high-quality orthophoto of the gorge with low horizontal errors - most better than a few meters. The DEM created looks highly realistic, and in many areas has a vertical error of a few meters. However, the vertical errors are spatially inconsistent, with some wildly large, likely because of the many poorly constrained links in
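
    The stable-feature accuracy check described above boils down to simple raster statistics. A minimal sketch, assuming hypothetical co-registered NumPy elevation grids and a mask of areas inferred to be unchanged (file names and arrays are placeholders, not the authors' data):

```python
# Sketch of the stable-feature accuracy check: compare the SfM DEM against lidar over
# cells flagged as unchanged. The file names, grids, and mask are hypothetical.
import numpy as np

sfm_dem = np.load("sfm_dem_1935.npy")               # co-registered elevation grids [m]
lidar_dem = np.load("lidar_dem.npy")
stable = np.load("stable_mask.npy").astype(bool)    # cells inferred unchanged since 1935

diff = (sfm_dem - lidar_dem)[stable]
print(f"mean error {diff.mean():+.2f} m, RMSE {np.sqrt((diff ** 2).mean()):.2f} m")
```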

  14. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

    In this book, the design of two new planar patterns for camera calibration of intrinsic parameters is addressed and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated, and 3D Euclidean reconstruction by using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homographic matrix and fundamental matrix are extensively studied. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can be easily incorporated to improve the reliability of the estimations. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision, and related subjects.

  15. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers.

  16. Hanford Environmental Dose Reconstruction Project monthly report

    International Nuclear Information System (INIS)

    Finch, S.M.

    1990-12-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The project is divided into the following technical tasks, which correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; and environmental pathways and dose estimates. 3 figs., 3 tabs

  17. On the use of holographic interferometry in deformation analysis with additional degrees of freedom during the reconstruction

    International Nuclear Information System (INIS)

    Schumann, W.; Dubas, M.

    1978-01-01

    In holographic interferometry with two reconstructed wavefronts, additional flexibility is obtained when the fringes may be modified. This is achieved during the reconstruction by moving part of the optical arrangement. First, the equation relating the optical path difference to the displacement and the fringe control parameters is worked out; secondly, the equations relating the derivative of the optical path difference to the strain and the rotation are derived, and their practical use is discussed. (orig.)

  18. Construct and face validity of a virtual reality-based camera navigation curriculum.

    Science.gov (United States)

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.
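
    For readers unfamiliar with the analysis, a one-way ANOVA across the three experience groups looks like the sketch below; the per-subject repetition counts are invented for illustration, and only the group means quoted in the abstract come from the study.

```python
# Sketch of the group comparison: one-way ANOVA on repetitions to proficiency.
# The per-subject counts are invented; only the group means above are from the study.
from scipy import stats

novice       = [44, 39, 41, 45, 40]
intermediate = [22, 20, 23, 19, 22]
advanced     = [12, 11, 13, 10, 12]

f_stat, p_value = stats.f_oneway(novice, intermediate, advanced)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```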

  19. Cooperative path planning of unmanned aerial vehicles

    CERN Document Server

    Tsourdos, Antonios; Shanmugavel, Madhavan

    2010-01-01

    An invaluable addition to the literature on UAV guidance and cooperative control, Cooperative Path Planning of Unmanned Aerial Vehicles is a dedicated, practical guide to computational path planning for UAVs. One of the key issues facing future development of UAVs is path planning: it is vital that swarm UAVs/ MAVs can cooperate together in a coordinated manner, obeying a pre-planned course but able to react to their environment by communicating and cooperating. An optimized path is necessary in order to ensure a UAV completes its mission efficiently, safely, and successfully. Focussing on the path planning of multiple UAVs for simultaneous arrival on target, Cooperative Path Planning of Unmanned Aerial Vehicles also offers coverage of path planners that are applicable to land, sea, or space-borne vehicles. Cooperative Path Planning of Unmanned Aerial Vehicles is authored by leading researchers from Cranfield University and provides an authoritative resource for researchers, academics and engineers working in...

  20. Electron Inelastic-Mean-Free-Path Database

    Science.gov (United States)

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  1. Time optimal paths for high speed maneuvering

    Energy Technology Data Exchange (ETDEWEB)

    Reister, D.B.; Lenhart, S.M.

    1993-01-01

    Recent theoretical results have completely solved the problem of determining the minimum length path for a vehicle with a minimum turning radius moving from an initial configuration to a final configuration. Time optimal paths for a constant speed vehicle are a subset of the minimum length paths. This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed vehicle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduces concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. We explore the properties of the optimal paths and present some experimental results for a mobile robot following an optimal path.
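
    The arc/straight-line structure of these paths is easy to illustrate. The sketch below computes the length of one candidate family, the LSL (left arc, straight segment, left arc) path, for a vehicle with minimum turning radius rho; it follows the standard Dubins-path parametrization rather than the authors' maximum-principle derivation, and a complete planner would evaluate all six families and keep the shortest.

```python
# Sketch of one Dubins-path family: length of the LSL (left arc - straight - left arc)
# path for minimum turning radius rho. Configurations and rho are illustrative.
import numpy as np

def lsl_length(start, goal, rho):
    """start, goal = (x, y, heading in radians); returns the LSL path length or None."""
    dx, dy = (goal[0] - start[0]) / rho, (goal[1] - start[1]) / rho
    d = np.hypot(dx, dy)                      # normalised distance between configurations
    theta = np.arctan2(dy, dx)
    alpha = (start[2] - theta) % (2 * np.pi)
    beta = (goal[2] - theta) % (2 * np.pi)

    p_sq = 2 + d ** 2 - 2 * np.cos(alpha - beta) + 2 * d * (np.sin(alpha) - np.sin(beta))
    if p_sq < 0:
        return None                           # no LSL path for this configuration pair
    tmp = np.arctan2(np.cos(beta) - np.cos(alpha), d + np.sin(alpha) - np.sin(beta))
    t = (-alpha + tmp) % (2 * np.pi)          # first left arc
    q = (beta - tmp) % (2 * np.pi)            # second left arc
    return rho * (t + np.sqrt(p_sq) + q)      # arc + straight + arc

print(lsl_length((0.0, 0.0, 0.0), (10.0, 5.0, np.pi / 2), rho=2.0))
```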

  2. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level or weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of the MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, since photometric units are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g. an incandescent lamp) is necessary, and the irradiance has to be measured in W/m2 instead of lux (lumen/m2). Third, the contrast values of the targets have to be newly calibrated for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera

  3. Classical and quantum dynamics from classical paths to path integrals

    CERN Document Server

    Dittrich, Walter

    2016-01-01

    Graduate students who want to become familiar with advanced computational strategies in classical and quantum dynamics will find here both the fundamentals of a standard course and a detailed treatment of the time-dependent oscillator, Chern-Simons mechanics, the Maslov anomaly and the Berry phase, to name a few. Well-chosen and detailed examples illustrate the perturbation theory, canonical transformations, the action principle and demonstrate the usage of path integrals. This new edition has been revised and enlarged with chapters on quantum electrodynamics, high energy physics, Green’s functions and strong interaction.

  4. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed and the overall results of the analysis are summarized for the prototype imaging platform.
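
    A minimal sketch of the silhouette (visual hull) idea behind such reconstructions, assuming precomputed 3x4 projection matrices and binary silhouette masks (all placeholders, not the authors' pipeline): voxel centres are kept only if they project inside every silhouette.

```python
# Sketch of silhouette-based carving: keep only the voxels whose projections fall
# inside every silhouette. Projection matrices and masks are placeholders.
import numpy as np

def carve(voxels, projections, silhouettes):
    """voxels: (N, 3); projections: list of 3x4 matrices; silhouettes: list of 2-D bool masks."""
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])        # homogeneous coordinates
    for P, sil in zip(projections, silhouettes):
        uvw = hom @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]                 # lands on the silhouette?
        keep &= hit
    return voxels[keep]
```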

  5. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, there are different worldwide companies that produce medical equipment engaged in partial or total gamma camera modernization. The present work has demonstrated the possibility of substituting almost the entire signal processing electronics placed inside a gamma camera detector head with a digitizer PCI card. This card includes four 12-bit analog-to-digital converters with 50 MHz sampling speed. It has been installed in a PC and is controlled through software developed in LabVIEW. Besides, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). A new electronic design was also added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for the individual photomultiplier tubes. The images, obtained by measurement of a 99mTc point radioactive source using the modernized camera head, demonstrate its overall performance. The system was developed and tested on an old ORBITER II SIEMENS GAMMASONIC gamma camera at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project supported by the National Program PNOULU and the IAEA. (Author)

  6. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper we propose a new endoscopic capsule system with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). For covering more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption, and so prolong the MCEC's working life, a low-complexity image compressor with PSNR 40.7 dB and compression rate 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can reach 98% and its power consumption is only about 7.1 mW.

  7. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Microcomputer-integrated systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam, in a semiautomatic procedure, and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2% and developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain optimal clinical quality nuclear medicine images, to increase the acquisition and processing efficiency, and to reduce the steps involved in each exam.

  8. A wide field X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.; Turner, M.J.L.; Willingale, R.

    1980-01-01

    A wide field of view X-ray camera based on the Dicke or Coded Mask principle is described. It is shown that this type of instrument is more sensitive than a pin-hole camera, or than a scanning survey of a given region of sky for all wide field conditions. The design of a practical camera is discussed and the sensitivity and performance of the chosen design are evaluated by means of computer simulations. The Wiener Filter and Maximum Entropy methods of deconvolution are described and these methods are compared with each other and cross-correlation using data from the computer simulations. It is shown that the analytic expressions for sensitivity used by other workers are confirmed by the simulations, and that ghost images caused by incomplete coding can be substantially eliminated by the use of the Wiener Filter and the Maximum Entropy Method, with some penalty in computer time for the latter. The cyclic mask configuration is compared with the simple mask camera. It is shown that when the diffuse X-ray background dominates, the simple system is more sensitive and has the better angular resolution. When sources dominate the simple system is less sensitive. It is concluded that the simple coded mask camera is the best instrument for wide field imaging of the X-ray sky. (orig.)

  9. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
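
    A generic OpenCV self-calibration sketch of the kind such software builds on, assuming a set of chessboard still frames from the action camera (file names, board size, and square size are illustrative; the authors' actual calibration target and code are not reproduced here):

```python
# Sketch of chessboard-based calibration and undistortion with OpenCV.
# Board size, square size, and file names are illustrative.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner chessboard corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)   # unit squares

obj_points, img_points, size = [], [], None
for name in glob.glob("gopro_calib_*.jpg"):
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error:", rms)

# Undistort one scene frame with the estimated intrinsics and distortion coefficients.
frame = cv2.imread("gopro_scene.jpg")
cv2.imwrite("gopro_scene_undistorted.jpg", cv2.undistort(frame, K, dist))
```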

  10. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  11. Wired and Wireless Camera Triggering with Arduino

    Science.gov (United States)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  12. Gamma cameras - a method of evaluation

    International Nuclear Information System (INIS)

    Oates, L.; Bibbo, G.

    2000-01-01

    Full text: With the sophistication and longevity of the modern gamma camera it is not often that the need arises to evaluate a gamma camera for purchase. We have recently been placed in the position of retiring our two single headed cameras of some vintage and replacing them with a state of the art dual head variable angle gamma camera. The process used for the evaluation consisted of five parts: (1) Evaluation of the technical specification as expressed in the tender document; (2) A questionnaire adapted from the British Society of Nuclear Medicine; (3) Site visits to assess gantry configuration, movement, patient access and occupational health, welfare and safety considerations; (4) Evaluation of the processing systems offered; (5) Whole of life costing based on equally configured systems. The results of each part of the evaluation were expressed using a weighted matrix analysis with each of the criteria assessed being weighted in accordance with their importance to the provision of an effective nuclear medicine service for our centre and the particular importance to paediatric nuclear medicine. This analysis provided an objective assessment of each gamma camera system from which a purchase recommendation was made. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc
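
    The weighted matrix analysis can be illustrated with a small sketch; the criteria weights and camera scores below are invented placeholders, not the values used in the actual evaluation.

```python
# Sketch of a weighted matrix analysis: criterion scores multiplied by importance
# weights and summed per camera. Criteria, weights, and scores are invented.
import numpy as np

weights = np.array([0.30, 0.20, 0.15, 0.15, 0.20])   # tender spec, questionnaire, site visit,
scores = np.array([                                  # processing, whole-of-life cost
    [8, 7, 9, 6, 7],                                 # camera A (scores out of 10)
    [7, 8, 8, 8, 6],                                 # camera B
    [9, 6, 7, 7, 8],                                 # camera C
])

for cam, total in zip("ABC", scores @ weights):
    print(f"camera {cam}: weighted score {total:.2f}")
```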

  13. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
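
    As a software reference for the algorithm mapped onto the FPGA, a sketch of multi-scale Laplacian-of-Gaussian edge detection using SciPy; the scales and the zero-crossing test are illustrative choices, not the hardware implementation described in the paper.

```python
# Software reference: multi-scale Laplacian-of-Gaussian edge detection via zero
# crossings of the filter response. Scales are illustrative.
import numpy as np
from scipy import ndimage

def log_edges(image, sigmas=(1.0, 2.0, 4.0)):
    """Return one boolean edge map per scale (zero crossings of the LoG response)."""
    edges = []
    for sigma in sigmas:
        response = ndimage.gaussian_laplace(image.astype(float), sigma)
        s = np.sign(response)
        horiz = s[:-1, :-1] != s[:-1, 1:]      # sign change against the right neighbour
        vert = s[:-1, :-1] != s[1:, :-1]       # sign change against the lower neighbour
        edges.append(horiz | vert)
    return edges
```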

  14. The effect of truncation on very small cardiac SPECT camera systems

    International Nuclear Information System (INIS)

    Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.

    2006-01-01

    Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has on increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is not worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in
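
    The basic effect can be reproduced with a software phantom. The sketch below, assuming scikit-image and using the Shepp-Logan phantom in place of the MCAT phantom, zeroes the outer detector bins of a sinogram and compares filtered back-projection reconstructions of the full and truncated data.

```python
# Sketch of the truncation effect with a software phantom (scikit-image): zero the
# outer detector bins of a sinogram and compare FBP reconstructions. The phantom and
# FOV width are illustrative stand-ins for the MCAT simulation used in the paper.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)

# Simulate a small transaxial FOV: keep only the central half of the detector bins.
truncated = np.zeros_like(sinogram)
n = sinogram.shape[0]
truncated[n // 4: 3 * n // 4, :] = sinogram[n // 4: 3 * n // 4, :]

full_rec = iradon(sinogram, theta=theta)
trunc_rec = iradon(truncated, theta=theta)

m = full_rec.shape[0]
centre = (slice(m // 2 - 20, m // 2 + 20),) * 2
print("mean centre value, full vs truncated:",
      full_rec[centre].mean(), trunc_rec[centre].mean())
```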

  15. Path Integral Formulation of Anomalous Diffusion Processes

    OpenAIRE

    Friedrich, Rudolf; Eule, Stephan

    2011-01-01

    We present the path integral formulation of a broad class of generalized diffusion processes. Employing the path integral we derive exact expressions for the path probability densities and joint probability distributions for the class of processes under consideration. We show that Continuous Time Random Walks (CTRWs) are included in our framework. A closed expression for the path probability distribution of CTRWs is found in terms of their waiting time distribution as the solution of a Dyson ...

  16. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  17. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from the standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes a set of combined benchmarking metrics, which includes both quality and speed parameters.

  18. The eye of the camera: effects of security cameras on pro-social behavior

    NARCIS (Netherlands)

    van Rompay, T.J.L.; Vonk, D.J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  19. Development of the geoCamera, a System for Mapping Ice from a Ship

    Science.gov (United States)

    Arsenault, R.; Clemente-Colon, P.

    2012-12-01

    The geoCamera produces maps of the ice surrounding an ice-capable ship by combining images from one or more digital cameras with the ship's position and attitude data. Maps are produced along the ship's path with the achievable width and resolution depending on camera mounting height as well as camera resolution and lens parameters. Our system has produced maps up to 2000 m wide at 1 m resolution. Once installed and calibrated, the system is designed to operate automatically, producing maps in near real-time and making them available to on-board users via existing information systems. The resulting small-scale maps complement existing satellite-based products as well as on-board observations. Development versions were temporarily deployed in Antarctica on the RV Nathaniel B. Palmer in 2010 and in the Arctic on the USCGC Healy in 2011. A permanent system was deployed during the summer of 2012 on the USCGC Healy. To make the system attractive to other ships of opportunity, design goals include using existing ship systems when practical, using low-cost commercial off-the-shelf components if additional hardware is necessary, automating the process to virtually eliminate adding to the workload of ship's technicians, and making the software components modular and flexible enough to allow more seamless integration with a ship's particular IT system.
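
    A minimal sketch of the core georeferencing step is given below: a pixel from a mast-mounted camera is projected onto a locally flat ice surface using the camera height, downward tilt and ship heading. The pinhole model, flat-surface assumption and all numerical parameters are illustrative; a deployed system would additionally handle roll, lens distortion and precise mounting offsets.

```python
# Minimal georeferencing sketch: map one pixel of a mast-mounted camera onto a
# flat ice surface using camera height, downward pitch and ship heading.
# Real systems also use roll, lens distortion and precise mounting offsets.
import numpy as np

def pixel_to_ground(u, v, width, height, fov_deg, cam_height_m, pitch_deg, heading_deg):
    f = (width / 2) / np.tan(np.radians(fov_deg) / 2)     # focal length in pixels
    ray = np.array([u - width / 2, v - height / 2, f])    # camera frame: x right, y down, z forward
    ray /= np.linalg.norm(ray)
    c, s = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
    # World frame: x starboard, y down (toward the ice), z toward the bow.
    # Tilting the camera down by `pitch` maps its optical axis (0,0,1) to (0, s, c).
    rot_x = np.array([[1, 0, 0],
                      [0, c, s],
                      [0, -s, c]])
    ray_w = rot_x @ ray
    if ray_w[1] <= 0:
        return None                                        # ray never reaches the surface
    t = cam_height_m / ray_w[1]                            # scale ray to hit the surface
    forward, right = t * ray_w[2], t * ray_w[0]
    h = np.radians(heading_deg)                            # rotate into north/east
    north = forward * np.cos(h) - right * np.sin(h)
    east = forward * np.sin(h) + right * np.cos(h)
    return north, east                                     # metres from the ship

print(pixel_to_ground(u=1600, v=900, width=1920, height=1080,
                      fov_deg=60, cam_height_m=20, pitch_deg=25, heading_deg=45))
```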

  20. Reflector construction by sound path curves - A method of manual reflector evaluation in the field

    International Nuclear Information System (INIS)

    Siciliano, F.; Heumuller, R.

    1985-01-01

    In order to describe the time-of-flight behavior of various reflectors we have set up models and derived from them analytical and graphic approaches to reflector reconstruction. In the course of this work, the maximum achievable accuracy and possible simplifications were investigated. The aim of the time-of-flight reconstruction method is to determine the points of a reflector on the basis of a sound path function (sound path as a function of the probe index position). This method can only be used on materials which are isotropic in terms of sound velocity, since the method relies on time of flight being converted into sound path. This paper deals only with two-dimensional reconstruction; in other words, all statements relate to the plane of incidence. The method is based on the fact that the geometrical locus of the points equidistant from a certain probe index position is a circle. If circles with radii equal to the associated sound paths are drawn for various search unit positions, the points of intersection of the circles are the desired reflector points.
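
    A small sketch of this construction for two probe index positions follows: each measured sound path defines a circle about its probe position on the scanning surface, and the circle intersection on the material side is the candidate reflector point. The probe positions and sound paths in the example are illustrative values.

```python
# Sound-path construction: two probe index positions on the scanning surface
# (y = 0), each with a measured sound path; the reflector point lies at the
# intersection of the two circles, taken on the material side (y < 0 here).
import math

def reflector_point(x1, r1, x2, r2):
    """Intersect circles centred at (x1, 0) and (x2, 0) with radii r1 and r2."""
    d = abs(x2 - x1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None                        # circles do not intersect usably
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from (x1, 0) along the baseline
    depth = math.sqrt(max(r1**2 - a**2, 0.0))
    x = x1 + a * (x2 - x1) / d
    return (x, -depth)                     # reflector below the scanning surface

# Example: probe index positions 10 mm and 18 mm, sound paths 25 mm and 21.4 mm.
print(reflector_point(10.0, 25.0, 18.0, 21.4))
```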

  1. Partial Path Column Generation for the ESPPRC

    DEFF Research Database (Denmark)

    Jepsen, Mads Kehlet; Petersen, Bjørn

    This talk introduces a decomposition of the Elementary Shortest Path Problem with Resource Constraints(ESPPRC), where the path is combined by smaller sub paths. We show computational result by comparing different approaches for the decomposition and compare the best of these with existing algorit...

  2. Strain path dependency in metal plasticity

    NARCIS (Netherlands)

    Viatkina, E.M.; Brekelmans, W.A.M.; Geers, M.G.D.

    2003-01-01

    A change in strain path has a significant effect on the mechanical response of metals. Strain path change effects physically originate from a complex microstructure evolution. This paper deals with the contribution of cell structure evolution to the strain path change effect. The material with cells

  3. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment and applications. If such a system is tested using separate traditional infrared camera and visible CCD test systems, installation and alignment must be performed twice in the test procedure. The large-aperture test system for infrared and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position with changing environmental temperature, and the image quality of the large-field-of-view collimator and the test accuracy are also improved. Its performance matches that of comparable foreign systems at a much lower cost, and it is expected to find a good market.
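
    The multiple-frame averaging step can be illustrated with a short synthetic example: averaging N frames with uncorrelated random noise reduces the residual noise roughly by a factor of the square root of N. The frame size and noise level below are arbitrary illustration values.

```python
# Multiple-frame averaging: for uncorrelated noise the standard deviation of
# the averaged frame drops roughly as 1/sqrt(N). Synthetic frames, arbitrary values.
import numpy as np

rng = np.random.default_rng(1)
truth = np.full((64, 64), 100.0)                             # ideal target signal
frames = truth + rng.normal(0.0, 5.0, size=(32, 64, 64))     # 32 noisy frames

for n in (1, 4, 16, 32):
    avg = frames[:n].mean(axis=0)
    print(f"N = {n:2d}   residual noise = {np.std(avg - truth):.2f}")
```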

  4. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20 °C, the CLASP cameras exceeded the low-noise performance requirements set for the UV, EUV and X-ray science cameras at MSFC.

  5. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify the safety interlock, which shuts down the camera and pan-and-tilt unit inside the tank vapor space on loss of purge pressure, and to confirm that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  6. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    Science.gov (United States)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10⁻⁵ s is used with a pixelated phase-mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface, the reference and object beams with orthogonal polarization states can be made to coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.
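
    Pixelated phase-mask sensors commonly interleave four polarizer orientations so that the pixels of each 2 × 2 super-pixel record interferograms shifted by 0°, 90°, 180° and 270°; the wrapped phase then follows from the standard four-bucket arctangent. The sketch below demonstrates that generic calculation on a synthetic frame and is not the authors' processing chain; the super-pixel layout is an assumption.

```python
# Generic four-step phase retrieval for a pixelated phase-mask sensor: the four
# pixels of each 2x2 super-pixel see the interferogram shifted by 0, 90, 180 and
# 270 degrees, and the wrapped phase follows from the four-bucket arctangent.
import numpy as np

def phase_from_superpixels(raw):
    """raw: frame whose 2x2 super-pixels hold the 0/90/180/270 degree samples."""
    i0   = raw[0::2, 0::2].astype(float)
    i90  = raw[0::2, 1::2].astype(float)
    i180 = raw[1::2, 1::2].astype(float)
    i270 = raw[1::2, 0::2].astype(float)
    return np.arctan2(i270 - i90, i0 - i180)         # wrapped phase in (-pi, pi]

# Synthetic test: build a frame from a known phase map and check the retrieval.
yy, xx = np.mgrid[0:256, 0:256]
true_phase = 2 * np.pi * xx / 256.0
frame = np.zeros((512, 512))
for (dy, dx), shift in {(0, 0): 0.0, (0, 1): np.pi / 2,
                        (1, 1): np.pi, (1, 0): 3 * np.pi / 2}.items():
    frame[dy::2, dx::2] = 100 + 50 * np.cos(true_phase + shift)

recovered = phase_from_superpixels(frame)
wrap_error = np.angle(np.exp(1j * (recovered - true_phase)))
print("max abs phase error (rad):", np.abs(wrap_error).max())
```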

  7. Update on orbital reconstruction.

    Science.gov (United States)

    Chen, Chien-Tzung; Chen, Yu-Ray

    2010-08-01

    Orbital trauma is common and frequently complicated by ocular injuries. The recent literature on orbital fracture is analyzed with emphasis on epidemiological data assessment, surgical timing, method of approach and reconstruction materials. Computed tomographic (CT) scan has become a routine evaluation tool for orbital trauma, and mobile CT can be applied intraoperatively if necessary. Concomitant serious ocular injury should be carefully evaluated preoperatively. Patients presenting with nonresolving oculocardiac reflex, 'white-eyed' blowout fracture, or diplopia with a positive forced duction test and CT evidence of orbital tissue entrapment require early surgical repair. Otherwise, enophthalmos can be corrected by late surgery with a similar outcome to early surgery. The use of an endoscope-assisted approach for orbital reconstruction continues to grow, offering an alternative method. Advances in alloplastic materials have improved surgical outcome and shortened operating time. In this review of modern orbital reconstruction, several controversial issues such as surgical indication, surgical timing, method of approach and choice of reconstruction material are discussed. Preoperative fine-cut CT image and thorough ophthalmologic examination are key elements to determine surgical indications. The choice of surgical approach and reconstruction materials much depends on the surgeon's experience and the reconstruction area. Prefabricated alloplastic implants together with image software and stereolithographic models are significant advances that help to more accurately reconstruct the traumatized orbit. The recent evolution of orbit reconstruction improves functional and aesthetic results and minimizes surgical complications.

  8. Multi-view collimators for scintillation cameras

    International Nuclear Information System (INIS)

    Hatton, J.; Grenier, R.P.

    1982-01-01

    This patent specification describes a collimator for obtaining multiple images of a portion of a body with a scintillation camera. The collimator comprises a body of radiation-impervious material defining two or more groups of channels, each group comprising a plurality of parallel channels having axes intersecting the portion of the body being viewed on one side of the collimator and intersecting the input surface of the camera on the other side of the collimator to produce a single view of said body, a number of different such views of said body being provided by said groups of channels. Each axis of each channel lies in a plane approximately perpendicular to the plane of the input surface of the camera, and all such planes containing said axes are approximately parallel to each other. (author)

  9. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    1980-01-01

    The objects of this invention are, first, to reduce the time required to obtain statistically significant data in trans-axial tomographic radioisotope scanning using a scintillation camera; secondly, to provide a scintillation camera system that increases the rate of acceptance of radioactive events contributing to the positional information obtainable from a known radiation source without sacrificing spatial resolution; and thirdly, to reduce the scanning time without loss of image clarity. The system described comprises a scintillation camera detector, means for moving this in orbit about a cranial-caudal axis relative to a patient, and a collimator having septa defining apertures such that gamma rays perpendicular to the axis are admitted with high spatial resolution, and those parallel to the axis with low resolution. The septa may be made of strips of lead. Detailed descriptions are given. (U.K.)

  10. Gate Simulation of a Gamma Camera

    International Nuclear Information System (INIS)

    Abidi, Sana; Mlaouhi, Zohra

    2008-01-01

    Medical imaging is a very important diagnostic tool because it allows exploration of the internal human body. Nuclear imaging is an imaging technique used in nuclear medicine. It determines the distribution of a radiotracer in the body by detecting the radiation it emits with a detection device. Two methods are commonly used: Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). In this work we are interested in the modelling of a gamma camera. The simulation is based on the Monte Carlo method, in particular the GATE simulator (Geant4 Application for Tomographic Emission). We have simulated a clinical gamma camera called GAEDE (GKS-1) and then validated these simulations by experiments. The purpose of this work is to monitor the performance of this gamma camera, optimize the detector performance and improve image quality. (Author)
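
    As a rough, hedged illustration of the kind of physics such a simulation models (this is a toy Monte Carlo in Python, not a GATE macro), the sketch below emits photons isotropically from a point source, accepts only those within the acceptance cone of a parallel-hole collimator, and histograms the hits on the detector plane. All geometry values are illustrative.

```python
# Toy Monte Carlo of gamma-camera image formation (a Python sketch, not a GATE
# macro): photons emitted isotropically from a point source reach the detector
# plane only if their direction lies within the acceptance cone of a
# parallel-hole collimator.
import numpy as np

rng = np.random.default_rng(2)
n_photons = 500_000
source = np.array([0.0, 10.0, 100.0])           # mm, source above the detector (z = 0)
hole_diameter, hole_length = 1.5, 35.0           # mm, collimator geometry
acceptance = hole_diameter / hole_length         # max tangent of the accepted angle

# Isotropic emission restricted to the lower hemisphere (toward the detector).
cos_t = -rng.uniform(0.0, 1.0, n_photons)
phi = rng.uniform(0.0, 2 * np.pi, n_photons)
sin_t = np.sqrt(1.0 - cos_t**2)
dirs = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

# Keep photons nearly parallel to the collimator axis (z), project them onto
# the detector plane and histogram the hit positions into an image.
accepted = np.abs(dirs[:, 2]) > 1.0 / np.sqrt(1.0 + acceptance**2)
t = -source[2] / dirs[accepted, 2]
hits = source[:2] + t[:, None] * dirs[accepted, :2]
image, _, _ = np.histogram2d(hits[:, 0], hits[:, 1], bins=64,
                             range=[[-20, 20], [-10, 30]])
print("accepted fraction:", accepted.mean(), "  peak bin counts:", int(image.max()))
```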

  11. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  12. Camera systems for crash and hyge testing

    Science.gov (United States)

    Schreppers, Frederik

    1995-05-01

    Since the beginning of the use of high speed cameras for crash and hyge-testing, substantial changes have taken place. Both the high speed cameras and the electronic control equipment are more sophisticated nowadays. With regard to high speed equipment, a short historical retrospective will show that for high speed cameras the improvements are mainly concentrated in design details, whereas the electronic control equipment has taken full advantage of the rapid progress in electronic and computer technology over the last decades. Nowadays many companies and institutes involved in crash and hyge-testing wish to perform this testing, as far as possible, as an automatic computer-controlled routine in order to maintain and improve security and quality. By means of several solutions realized in practice, it will be shown how these requirements can be met.

  13. Temperature measurement with industrial color camera devices

    Science.gov (United States)

    Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen

    1999-05-01

    This paper discusses color camera based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We will show that a well-selected color camera device can be a cheaper, more robust and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element will be discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies for infrared camera devices. With AVL-List, our industry partner, we successfully used the proposed sensor to perform temperature measurement of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.
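
    One common route from a color sensor to temperature is two-color (ratio) pyrometry: in the Wien regime the ratio of the signals in two spectral bands depends only on temperature, provided the emissivity is similar in both bands. The sketch below shows that relation with illustrative band wavelengths and a unit calibration constant; it is not the authors' or AVL's actual method.

```python
# Two-colour (ratio) pyrometry sketch: in the Wien regime the ratio of the
# signals in two narrow bands (e.g. the red and green channels of a colour
# sensor) yields temperature independently of a band-constant emissivity.
# Band wavelengths and the calibration constant are illustrative assumptions.
import math

C2 = 1.4388e-2          # second radiation constant, m*K

def wien_signal(lam, T):
    """Relative band signal under Wien's approximation, S ~ lam^-5 exp(-C2/(lam T))."""
    return lam**-5 * math.exp(-C2 / (lam * T))

def ratio_temperature(s_red, s_green, lam_red=620e-9, lam_green=540e-9, k_cal=1.0):
    """Temperature in kelvin from the red/green signal ratio (Wien regime)."""
    ratio = (s_red / s_green) / k_cal
    inv_lam = 1.0 / lam_green - 1.0 / lam_red
    return C2 * inv_lam / (math.log(ratio) + 5.0 * math.log(lam_red / lam_green))

# Synthetic check: generate band signals for a 2200 K flame and recover T.
T_true = 2200.0
print(ratio_temperature(wien_signal(620e-9, T_true), wien_signal(540e-9, T_true)))
```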

  14. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    1975-01-01

    A scintillation camera is described for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area in which means is provided for second-order positional resolution. The phototubes which normally provide only a single order of resolution, are modified to provide second-order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  15. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of matching images acquired from different cameras. The work is applied in an environment monitored by cameras. This application is important to modern security systems, in which identifying a target's presence in the environment expands security agents' capacity for action in real time and provides important parameters such as the localization of each target. We used the target's interest points and color as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.
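
    A minimal sketch of this style of matching is given below, combining a colour-histogram similarity with the number of good ORB keypoint matches between two person crops using OpenCV. The specific features, thresholds and weighting are assumptions for illustration, not the paper's exact method.

```python
# Illustrative re-identification matcher: combine a colour-histogram similarity
# with the number of good ORB keypoint matches between two person crops.
# Feature choice, thresholds and weighting are assumptions, not the paper's method.
import cv2
import numpy as np

def colour_similarity(img_a, img_b, bins=32):
    sims = []
    for ch in range(3):                                   # per-channel histograms
        ha = cv2.calcHist([img_a], [ch], None, [bins], [0, 256])
        hb = cv2.calcHist([img_b], [ch], None, [bins], [0, 256])
        cv2.normalize(ha, ha)
        cv2.normalize(hb, hb)
        sims.append(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))
    return float(np.mean(sims))                           # 1.0 = identical histograms

def keypoint_matches(img_a, img_b, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=500)
    _, da = orb.detectAndCompute(img_a, None)
    _, db = orb.detectAndCompute(img_b, None)
    if da is None or db is None:
        return 0
    pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(da, db, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]           # Lowe's ratio test
    return len(good)

def reid_score(img_a, img_b, w_colour=0.6, w_kp=0.4, kp_norm=50.0):
    return (w_colour * colour_similarity(img_a, img_b)
            + w_kp * min(keypoint_matches(img_a, img_b) / kp_norm, 1.0))

# Usage (hypothetical file names): pick the gallery crop with the highest score.
# query = cv2.imread("query_person.png")
# best = max((cv2.imread(p) for p in gallery_paths), key=lambda g: reid_score(query, g))
```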

  16. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on WiFi, which consists of a camera, a mobile phone and a PC server. The platform receives the wireless signal from the camera and displays on the mobile phone the live video captured by the camera. In addition, it is able to send commands to the camera and rotate the camera's holder. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  17. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT), a high-performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle, at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that it is possible with such an optimized color camera to achieve such a high colorimetric performance that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
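
    The optimization step can be sketched as follows: given pairs of raw camera signals and reference XYZ values for a set of test colors, find the 3 × 3 matrix that minimizes the mean color difference in CIELUV. The sketch below uses synthetic test data and a general-purpose optimizer; the real design used measured test colors and a full camera simulation, so this is only an illustration of the idea.

```python
# Sketch of the matrixing optimization: find the 3x3 matrix that maps raw camera
# signals to XYZ so that the mean colour error in CIELUV over a set of test
# colours is minimized. Test data here are synthetic; the real design used
# measured test colours and perceptual weighting.
import numpy as np
from scipy.optimize import minimize

WHITE = np.array([95.047, 100.0, 108.883])        # D65 reference white (XYZ)

def xyz_to_luv(xyz):
    X, Y, Z = xyz
    Xn, Yn, Zn = WHITE
    def uv(x, y, z):
        d = max(x + 15 * y + 3 * z, 1e-9)
        return 4 * x / d, 9 * y / d
    u, v = uv(X, Y, Z)
    un, vn = uv(Xn, Yn, Zn)
    yr = max(Y / Yn, 1e-9)                          # clamp keeps the optimizer stable
    L = 116 * yr ** (1 / 3) - 16 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
    return np.array([L, 13 * L * (u - un), 13 * L * (v - vn)])

def mean_delta_e(matrix_flat, camera_rgb, reference_xyz):
    M = matrix_flat.reshape(3, 3)
    errs = [np.linalg.norm(xyz_to_luv(M @ rgb) - xyz_to_luv(xyz))
            for rgb, xyz in zip(camera_rgb, reference_xyz)]
    return float(np.mean(errs))

# Synthetic "measurements": a known transform plus a little multiplicative noise.
rng = np.random.default_rng(0)
true_M = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.05, 0.1, 0.85]])
reference_xyz = rng.uniform(5.0, 90.0, size=(40, 3))
camera_rgb = (np.linalg.inv(true_M) @ reference_xyz.T).T * rng.normal(1.0, 0.01, (40, 3))

res = minimize(mean_delta_e, x0=np.eye(3).ravel(), args=(camera_rgb, reference_xyz),
               method="Nelder-Mead", options={"maxiter": 5000})
print("mean colour error after optimization (approx. CIELUV units):", round(res.fun, 2))
```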

  18. Phase camera experiment for Advanced Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Agatsuma, Kazuhiro, E-mail: agatsuma@nikhef.nl [National Institute for Subatomic Physics, Amsterdam (Netherlands); Beuzekom, Martin van; Schaaf, Laura van der [National Institute for Subatomic Physics, Amsterdam (Netherlands); Brand, Jo van den [National Institute for Subatomic Physics, Amsterdam (Netherlands); VU University, Amsterdam (Netherlands)

    2016-07-11

    We report on a study of the phase camera, which is a frequency selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position control. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which has a great benefit for the manipulation of the delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Especially low-frequency sidebands can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After the installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  19. Phase camera experiment for Advanced Virgo

    International Nuclear Information System (INIS)

    Agatsuma, Kazuhiro; Beuzekom, Martin van; Schaaf, Laura van der; Brand, Jo van den

    2016-01-01

    We report on a study of the phase camera, which is a frequency selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position control. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which has a great benefit for the manipulation of the delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Especially low-frequency sidebands can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After the installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  20. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-Orbit Small Debris Tracking and Characterization is a technical gap in current National Space Situational Awareness, necessary to safeguard orbital assets and crew. This poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, knowledge of the physical threat to vehicle and crew is needed in order to design the proper level of MOD impact shielding and appropriate mission design restrictions. The debris flux and size population need to be verified against ground RADAR tracking. Use of ISS for in-situ orbital debris tracking development provides attitude, power, data and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles. It could enhance safety on and around ISS, and some technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in situ) optical tracking of variously sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. By using twin cameras, stereo images can be provided for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
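
    The basic ranging relation behind a twin-camera arrangement can be sketched briefly: two parallel cameras separated by a baseline B see an object with a pixel disparity d, and the range follows as Z ≈ f·B/d with f the focal length in pixels. The numbers below are purely illustrative and are not flight parameters.

```python
# Stereo ranging sketch: two parallel cameras separated by baseline B see an
# object with pixel disparity d at range Z ~ f * B / d, with f the focal length
# in pixels. All values below are purely illustrative.
import math

def focal_length_px(sensor_width_px, horizontal_fov_deg):
    return (sensor_width_px / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)

def stereo_range_m(disparity_px, baseline_m, f_px):
    if disparity_px <= 0:
        return float("inf")                   # at or beyond the measurable range
    return f_px * baseline_m / disparity_px

f_px = focal_length_px(sensor_width_px=1024, horizontal_fov_deg=2.0)   # telephoto lens
for d in (20.0, 5.0, 1.0):
    print(f"disparity {d:5.1f} px  ->  range {stereo_range_m(d, 1.0, f_px):9.1f} m")
```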